5. Created the Intranet and My Site web applications on Port 80
For our test environment I specified these settings for the Default zone: host headers that include the Active Directory FQDN, classic mode authentication, Negotiate (Kerberos), No for allow anonymous access, No for SSL, and of course, different database names for the content databases.
A note on SSL: TechNet: Configure Project Server 2010 to work with Exchange Server 2010 notes that Project Server 2010 uses SSL “to access Exchange Server and must trust the SSL certificate that was used by the Exchange farm.” With certificates issued by a trusted authority (e.g. VeriSign, Thawte, etc.) this is automatic, but otherwise there is a certificate export/import step required. Additionally, Exchange-side configuration may be required in an Exchange Organization with more than one Exchange 2010 server. See Configure Exchange Server 2010 Impersonation. History here: MSFT Senior Support Escalation Engineer Brian Smith’s blog post Project Server 2010 and Exchange Integration – a couple of early issues resolved, which links to http://blogs.msdn.com/b/mohits/archive/2010/05/29/integration-of-project-server-2010-and-exchange-2010-2007.aspx and http://www.tincupsandstring.com/2010/05/12/exchange-2010-and-project-server-2010-integration/.
Host headers and Kerberos
I wondered about configuring the host headers given that SharePoint is configured to use Kerberos. The initial desire was to use a host header that matches our external domain name (testnet.com), for which we also have an internal DNS zone, but which does not match our AD domain name (us.testnet.com). We do this for some other applications; users access the apps internally and externally from the same URL. Outlook Web Access is a prime example. But with SharePoint configured to use Kerberos, does a web application’s host header need to match the host: port section of the application pool service account’s SPN in order for Kerberos to work? For example, for our Intranet web application this would require the host header be intranet or intranet.us.testnet.com rather than the desired intranet.testnet.com. So I asked the question in the SharePoint 2010 - Setup, Upgrade, Administration and Operation TechNet forum and an MSFT moderator replied that he believed the host header FQDN must be consistent with the AD domain name. Ergo, the host headers use the AD FQDN.
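To make the Kerberos piece concrete, here is a sketch of the SPN registrations this implies. The domain and account names (TESTNET\wssapppool) are hypothetical placeholders for the app pool service account; run as a domain admin:

```powershell
# List SPNs already registered for the app pool service account
setspn -L TESTNET\wssapppool

# Register HTTP SPNs matching the host headers (the -S switch
# checks for duplicate SPNs before adding, on Server 2008+)
setspn -S HTTP/intranet TESTNET\wssapppool
setspn -S HTTP/intranet.us.testnet.com TESTNET\wssapppool
```

The SPN host portion must match the host header the browser requests, which is why the AD-FQDN host header (intranet.us.testnet.com) lines up cleanly and intranet.testnet.com would not.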
6. Created the Intranet site collection
I chose the Publishing Portal template as opposed to the Enterprise Wiki template because the top-level site will mostly be read-only. To give all employees Read access I added our employees-only AD group to the Intranet Visitors group. The Authenticated Users group membership includes accounts we want excluded so I did not use that group.
7. Created the Usage and health data collection service application using the Farm Wizard
As explained in Prep Work Part 1: (1) I made sure to specify the service account I created to use with the Wizard because the account used when the Wizard is first run will be hard-coded into the Wizard. I also made sure to uncheck all other service applications so that they would not also be created. (2) The Wizard created the service application as “Usage and Health data collection” rather than “WSS_UsageApplication”.
The Farm Wizard is no longer necessary now that I know how to fix a stopped proxy. Instead, I can use PowerShell to create the service application and then provision the stopped proxy to start it.
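A minimal sketch of that PowerShell approach, assuming the service application name shown (any name works) and that the proxy filter matches only the usage proxy:

```powershell
# Create the Usage and Health Data Collection service application
New-SPUsageApplication -Name "Usage and Health Data Collection"

# The proxy is created in a Stopped state; provision it to start it
$proxy = Get-SPServiceApplicationProxy |
    Where-Object { $_.TypeName -like "Usage*" }
$proxy.Provision()
```

This avoids the Farm Wizard entirely, along with its hard-coded account behavior noted in step 7.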
8. Configured Usage and Health data collection and Diagnostic Logging
Usage and Health data collection
To configure Usage & Health data collection I drilled down from Monitoring --> Reporting --> Configure web analytics and health data collection. I left the Usage data collection log file location at the default (the SharePoint Root aka “14 Hive” LOGS subfolder) and the maximum log file size at 1 GB. Clicking on the “Health Logging Schedule” link brought me to a filtered list of timer job definitions, with the View filtered on “Service” (the other choices are “All” and “Web Applications”) and the Service filtered on “Microsoft SharePoint Foundation Timer”. I counted 26 job definitions in the filtered list. Incidentally, from that filtered job definitions page I jumped directly back to the Central Admin home page using the top breadcrumb, then navigated back down (Monitoring --> Timer Jobs --> Review Timer Definitions) to the timer job definitions page and found the list still filtered. I thought this odd, as I would have expected the list to default to the “All” view. I had to manually reset the View to “All” to again see all timer job definitions.
Clicking on the “Log Collection Schedule” brought me to a list of job definitions with the View filtered on “Service” and the Service filtered on “Microsoft SharePoint Foundation Usage”. There were 2 jobs: Usage Data Import and Usage Data Processing. That filtered view also “stuck” regardless of how I re-navigated to the timer job definitions page. So if this happens to you – you navigate down to the timer job definitions page and don’t see all 157 jobs over 2 pages – look to see if the View is filtered.
Diagnostic Logging
To configure Diagnostic Logging, I drilled down from Monitoring --> Reporting --> Configure Diagnostic Logging. I set the Event Log level to “Error” and the Trace Log level to “High” for all categories. These settings are sufficient for our purposes. When troubleshooting an issue I set the related categories to Verbose, the maximum possible output, and when done I return the values to our desired defaults. I also enabled Event Log Flood Protection and set the number of days to store Trace Log files to 7. One week seems reasonable, and we can always set up an archiving schedule to retain a history. [“Archiving” can be as simple as XCopy to another storage location.] I checked the option to restrict Trace Log disk storage and set the maximum disk space to 5 GB. This is reasonable and affordable based on the overall disk capacity allotted to the .vhd (virtual hard disk), which I configured as a fixed-size disk.
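The same set-and-restore cycle can be scripted, which is handy when troubleshooting. A sketch, assuming the category identity string shown (the exact “Area:Category” name for Taxonomy is an assumption; run Get-SPLogLevel to see the real names in your farm):

```powershell
# Set our desired defaults: Event Log = Error, Trace Log = High, all categories
Set-SPLogLevel -EventSeverity Error -TraceSeverity High

# While troubleshooting, bump a single category to Verbose
# (category name is illustrative; verify with Get-SPLogLevel)
Set-SPLogLevel -TraceSeverity Verbose -Identity "Taxonomy"

# When done, reset every category to the out-of-the-box defaults,
# then reapply our desired baseline
Clear-SPLogLevel
Set-SPLogLevel -EventSeverity Error -TraceSeverity High
```

Note that Clear-SPLogLevel returns categories to Microsoft's defaults, not to whatever baseline you chose, hence the reapply at the end.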
9. Created the State Service Application using PowerShell
You must have a State Service Application. Take a look at the first paragraph in TechNet: Manage the State Service (SharePoint 2010). It is one of the first service applications I create, the others being Search, Usage and Health data collection, and the Secure Store. [Note: the TechNet article mentions that the State Service is “automatically configured as part of the Basic installation of SharePoint 2010.” I assume that “Basic” refers to the “Standalone” option that no one chooses unless they are creating a demo or development box, and that “Advanced installation” means the Server Farm, Complete option.]
The State Service Application can be created using the Farm Wizard or PowerShell. It is not available in the Manage Service Applications, New menu. PowerShell is three easy steps: (1) Create the service application. (2) Create the database and associate it to the newly created service application. (3) Create a service application proxy, associate it to the service application and add it to the default proxy group. The necessary commands are spelled out in the TechNet link above, under “To Configure the State Service by using Windows PowerShell”. A complete list of cmdlets for the State Service is here: TechNet: State Service and session state cmdlets (SharePoint 2010).
Here’s a screenshot of the cmdlets being run in the order cited above and the resultant output (the words wrap): Note that no service account is specified.
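For reference, the three steps can be sketched as follows; the application, database, and proxy names are placeholders (the cmdlets and parameters follow the TechNet article cited above):

```powershell
# (1) Create the State Service service application
$serviceApp = New-SPStateServiceApplication -Name "State Service"

# (2) Create the database and associate it with the service application
New-SPStateServiceDatabase -Name "StateServiceDB" -ServiceApplication $serviceApp

# (3) Create the proxy, associate it, and add it to the default proxy group
New-SPStateServiceApplicationProxy -Name "State Service Proxy" `
    -ServiceApplication $serviceApp -DefaultProxyGroup
```

As noted, no service account appears anywhere in the sequence; the State Service does not run in its own application pool.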
10. Created the Secure Store Service Application
I did this using the Manage Service Applications, New menu, creating a new application pool (SecureStoreAppPool) and specifying the designated managed account (wsssecurestore).
Side rant 1: I have been using underscores in database names, only because this seems to be a common practice. But going forward underscores are banished in favor of names with each word capitalized, e.g., SecureStoreServiceDB. Think of an underscore in a URL; a hyperlink is usually underlined, making the underscore hard to see. Personally, I think it interferes with Accessibility.
Side rant 2: Spaces in URLs are another sore point because of the %20 padding. “This is a document” ends up as “This%20is%20a%20document”. Try to read that. Ugh. Plus the extra characters add to the path length. We have had issues in WSS 3.0 because of path length, both with SharePoint-stored documents and with links to file server files. I stopped using spaces in favor of dashes almost immediately after installing WSS 3.0, but getting users to stop is <place your favorite rant here>. One dash per one or any number of spaces between words.
After creating the Secure Store service application, I generated and then refreshed the Secure Store key. [TechNet: Configure the Secure Store Service (SharePoint 2010)] And backed up the encryption key, which going forward should be done each time the key is refreshed. [TechNet: Plan the Secure Store Service (SharePoint 2010)]
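I did the generate/refresh through Central Admin, but the same operations can be scripted. A sketch, assuming the proxy filter matches only the Secure Store proxy and with an obviously placeholder passphrase:

```powershell
# Locate the Secure Store service application proxy
$proxy = Get-SPServiceApplicationProxy |
    Where-Object { $_.TypeName -like "Secure Store*" }

# Generate a new master key (passphrase is a placeholder; record it securely)
Update-SPSecureStoreMasterKey -ServiceApplicationProxy $proxy `
    -Passphrase "P@ssphrase-Placeholder1"

# Refresh the key so the application servers re-encrypt with it
Update-SPSecureStoreApplicationServerKey -ServiceApplicationProxy $proxy `
    -Passphrase "P@ssphrase-Placeholder1"
```

Either way, back up the key after every refresh, per the TechNet planning article.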
11. Created a Content Type Hub site collection
This was done now in order to have a URL to specify when creating the Managed Metadata Service Application. The MMS will publish the Hub site collection’s content types to the web applications it services.
Following SharePoint MVP Wictor Wilen's advice in his blog post Plan your SharePoint 2010 Content Type Hub carefully, I created a separate site collection for the Hub. I used the /sites managed path and the Team Site template. I then enabled the Content Type Hub Syndication feature for the site collection. There is no “Content Type Hub” template. What makes the site collection a Content Type Hub site collection is enabling the “Content Type Syndication Hub” feature.
(1) Deciding the paths for certain elemental site collections, such as the BI Center, Enterprise Search Center, Content Type Hub, has been an ongoing thought process. For production, we will use an explicit path directly under the Intranet root to simplify the URL.
(2) Since the hub site has one purpose, a simple site template is sufficient. Additional lists, libraries, and features can always be added later if desired. But then, why not the Blank Site template? Mr. Wilen pointed out that the Blank Site template “does not have the Taxonomy feature stapled upon it” and I knew I wanted Taxonomy even if I was clueless that features got stapled. When I had a bit of time to look into the TaxonomyFeatureStapler feature Mr. Wilen mentioned, I came across Using Managed Metadata in a Blank Site by SharePoint MVP Paul Papanek Stork, who discovered that the Taxonomy feature is not present in a top-level site created with the Blank Site template (though it is present in a subsite created with the template as long as the site collection’s top-level site was created using a template that does have the Taxonomy feature stapled to it). Mr. Stork did a bit of investigating to find out what and why, which he explains in his blog post.
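The create-and-enable steps above can also be scripted. A sketch, assuming the hub URL from our test farm, a placeholder owner account, and the “ContentTypeHub” feature name (verify the exact feature name in your farm with Get-SPFeature):

```powershell
# Create the hub site collection on the /sites managed path, Team Site template
New-SPSite -Url "http://intranet.us.testnet.com/sites/cthub" `
    -OwnerAlias "TESTNET\spadmin" -Template "STS#0" -Name "Content Type Hub"

# Enable the Content Type Syndication Hub feature on the site collection
Enable-SPFeature -Identity "ContentTypeHub" `
    -Url "http://intranet.us.testnet.com/sites/cthub"
```

This underscores the point above: the template is incidental; the feature is what makes the site collection a hub.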
Some time later, after discovering the ULS Log Viewer, I noticed an error in the logs that repeated every 15 minutes:
Troubleshooting took quite a long time because I was clueless about changeTokens and change logs. The word “token” conjures up the long gone but will-never-be-forgotten NYC subway tokens and an old Dilbert comic. [In case the Dilbert link doesn’t work, search http://www.dilbert.com/. The comic is dated May 2, 1996 and is about token ring and ethernet.]
A search turned up some information about the error with regard to MOSS and full vs. incremental content deployment, which I did not think applied in this case. Not that I was familiar with Content Deployment (more on this later) but hey, the site collection had no content. “Failed to process hub site” sounds fatal (doesn’t it?), and since the site was empty - I had not had time to work on it - I deleted the site collection and recreated it at the same path. I deleted/recreated because I thought that once the Hub path is specified in an MMS service application’s settings it cannot be changed for that particular MMS service application. I subsequently found out it can, using PowerShell. See http://www.sharepointanalysthq.com/2010/11/how-to-change-the-content-type-hub-url/. If you do change the path using PowerShell there are some cleanup steps, such as republishing all content types out again and updating the service connection as described in the post.
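For the record, the PowerShell change described in that post boils down to one cmdlet. A sketch, with an assumed service application name and our test hub URL:

```powershell
# Point an existing MMS service application at a (new) hub URL
Set-SPMetadataServiceApplication -Identity "Managed Metadata Service" `
    -HubUri "http://intranet.us.testnet.com/sites/cthub"
```

The cleanup steps (republishing content types, updating the service connection) still apply after the change.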
After recreating the site collection the error went away. And eventually came back. Focusing on the error's message I did some more research and found this (don’t know how I missed it first go-round): MOSS - Common Issue - Incremental deployment fails with "The changeToken refers to a time before the start of the current change log" by MSFT SharePoint and MCMS Senior Escalation Engineer Stefan Goßner. Cause # C is “No changes have happened on the source server for a long time.” Remove the phrase “on the source server” and you are left with “no changes have happened for a long time.” That fit. Mr. Goßner cited two possible solutions: “Increase the timespan the Change Log should be preserved” or “Ensure that at least one item is modified within the configured timespan.” I added a site column; that’s a change, isn’t it? I spot-checked the ULS logs for about a month but did not see the error. However, it eventually came back.
Sometime later I decided to reinvestigate. I found the Change Log settings under Monitoring --> Timer Jobs --> Review Job Definitions. A Change Log timer job exists for each web application, and the job is used to delete old entries in the Change Log. The schedule choices are Minutes, Hourly, Daily, Weekly, and Monthly. I found ours set to Weekly, every Saturday at 11 PM, even though TechNet: Timer Job Reference says the default is Daily.
I also found another, recent article by Mr. Goßner, again for MOSS: Interesting ChangeToken problem when mixing complete and selective deployment. And this: MSDN: Overview of the Change Log, which says “By default, entries in the change log expire after 60 days.” I also learned that “change tokens are specific to a list, web site, site collection, or content database.” [MSDN: How to: Save or Restore a Change Token]
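Mr. Goßner's first suggested fix, increasing the retention timespan, maps to a web application property and a timer job. A sketch, assuming our test web application URL and that the job name filter matches the Change Log expiration job (verify the exact job name in your farm):

```powershell
# Inspect and extend the change log retention for a web application
$wa = Get-SPWebApplication "http://intranet.us.testnet.com"
$wa.ChangeLogRetentionPeriod                       # 60 days by default
$wa.ChangeLogRetentionPeriod = New-TimeSpan -Days 90
$wa.Update()

# Inspect the Change Log expiration timer job's schedule
Get-SPTimerJob | Where-Object { $_.Name -like "*change-log*" } |
    Select-Object Name, Schedule
```

The second suggested fix, touching an item within the timespan, is what the site-column change above attempted.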
And I learned that “Content Deployment” is a feature to copy content from a source site collection to a destination site collection. Blog posts discussing this issue in MOSS refer to cross-farm deployment, e.g. moving content from a test farm to production farm or old farm to new farm. For SP2010 I found this information: MSDN: Plan Content Deployment (SharePoint 2010), MSFT: Content Deployment poster, and TechNet: Content Deployment Overview. Content Deployment settings in Central Admin are under General Application Settings.
However, I was no closer to finding a solution to make the error go away. If the cause was indeed indicative of an inability to do an incremental content deployment job, others have resolved the issue by redoing a full content deployment job, after which they could then resume incremental jobs. But *I* did not do a full content deployment, ever, that I remember. So if I did not do it, and SharePoint did it, where is the technical information on “SharePoint does full and incremental content deployment jobs for Content Type Hubs”? And what timer job is it? And how do I kick off a full deployment, e.g. like I can do for User Profile synchronization? I see the “Content Type Hub” job; this must be the job that generates the error because it runs every 15 minutes and the description is “tracks content type log maintenance”. (Genius deduction) I see “Content Type Subscriber” jobs, one for each web application, which run every hour; the description is “Retrieves content type packages from the hub and applies them to the local content type gallery.” I don’t see a “kick off a full content deployment” option.
Approaching the error from a different angle, I thought, “Maybe the issue is related to the category, which is Taxonomy”. So I upped the logging on Taxonomy to verbose, fired up the ULS Log Viewer, and discovered that the error is related to the “Metadata Hub timer job”. And that immediately after the “Failed to process hub site http://intranet.testnet.com/sites/cthub” error is a “Processed hub site http://intranet.testnet.com/sites/cthub” information message. Here is the whole sequence:
I had posted the error to the MSFT SharePoint 2010 Setup, Upgrade, and Administration forum and a reply from one of the moderators was that perhaps the failed job was left from the deleted hub site. But I could not find any related failed jobs in the timer job history or a job definition listed for the deleted hub site. Did I miss looking somewhere?
The error continues, but it is always immediately followed by a success event, and publishing a content type has succeeded all the way down to the point where it is available for consumption in the Intranet site collection.
I’ll end Part 2 here and pick up in the next article with creating the Managed Metadata Service Application.