SharePoint 2013 Enterprise Social Applications: Part 2 (The Suitebar Control)

This is Part 2 of the Enterprise Social Applications series. Here is a link to Part 1.

The Suitebar Control: The suitebar control is the core of Enterprise Social. The control pushes information to the user from disparate data sources. It engages users and keeps them informed about social activity, tasks that need to be completed, new ideas, and unanswered questions. You can control and customize the links and notifications that go into the suitebar control. Add the suitebar to the master page of your SharePoint or ASP.NET website so that it "follows" the user. SharePoint 2013 already has a suitebar control, so you can simply add your links and JavaScript controls to the existing one.


Pic1: Suitebar control: The left side of the suitebar is a news ticker driven by an RSS feed from cnn.com; you can customize it to use any other RSS feed or control. The right side of the suitebar is a set of links with dynamic notifications.

1. Tasks: All active tasks from your SharePoint site. The red notification tells the user the number of active tasks, and clicking the link takes them to the My Sites Tasks page, where they can complete the tasks. The tasks are aggregated automatically by the SharePoint Work Management Service. If you do not know how to configure the Work Management Service, visit this page: http://blogs.technet.com/b/praveenh/archive/2013/08/22/work-management-service-application.aspx

2. Knowledge Base: I will be covering the knowledge base in a separate article, but the link here takes you to your KB home page. You can show/hide the data notifications.

3. Blogs: Part 4 of this series will show you the blogs page solution. The notification shows the top 10 blogs posted over the last 5 days. Clicking the Blogs link takes you to the Blogs home page.

4. Ideas: If you have an idea management solution, you can surface the top ideas in the suitebar; if you have a voting mechanism, you can add a data notification for the ideas that need your vote. This will be covered in a separate article as well.

5. Company Feed: The company feed is your main public newsfeed. Clicking it opens a popup with the newsfeed web part, where a user can add or reply to posts. The data notification is a count of the logged-in user's newsfeed posts from the last 5 days.

6. OneDrive and Sites: These are the out-of-the-box links that come with the SharePoint 2013 newsfeed. The Sites data notification shows all the sites the user is following.
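The left-side news ticker described in the caption can be sketched with a small helper. This is illustrative code, not the original: the CSS class, container id, separator, and feed URL are all assumptions.

```javascript
// Build the scrolling-ticker markup for the left side of the suitebar.
// The "sb-ticker-item" class is illustrative; style it to match your bar.
function tickerHtml(headlines) {
  return headlines
    .map(function (h) {
      return '<span class="sb-ticker-item">' + h + "</span>";
    })
    .join(" \u2022 "); // bullet separator between headlines
}

// In the browser, jQuery can fetch and parse the feed (URL is an example):
// jQuery.get("http://rss.cnn.com/rss/cnn_topstories.rss", function (xml) {
//   var headlines = jQuery(xml).find("item > title").map(function () {
//     return jQuery(this).text();
//   }).get();
//   jQuery("#sb-ticker").html(tickerHtml(headlines));
// });
```

Note that a cross-domain RSS fetch will run into the CORS considerations covered in Part 4.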

If your application doesn't contain a suitebar, you will have to add a div control at the top of your page and then call the REST API using jQuery.

To get the count of newsfeed posts, for example, you can use something like this:


var unreadNewsfeedCount = 0; // badge count consumed by the suitebar

function getNewsfeedCount() {
    // Ask the social REST API for feed threads newer than yesterday.
    var todaysDate = new Date();
    todaysDate.setDate(todaysDate.getDate() - 1);
    var isodate = todaysDate.toISOString();
    var url = "https://www.contoso.com/_api/social.feed/my/timelinefeed(MaxThreadCount=10,NewerThan=@v)?@v='" + isodate + "'";
    jQuery.ajax({
        url: url,
        method: "GET",
        headers: { "Accept": "application/json; odata=verbose" },
        success: function (data) {
            // Number of feed threads posted since yesterday.
            unreadNewsfeedCount = data.d.SocialFeed.Threads.results.length;
        },
        error: function (data) {
            failure(data); // replace with your own error handler
        }
    });
}

Once you get the count, you can prepend it to the newsfeed link. Do the same by calling the other Social and Search REST APIs and appending the counts to the existing suitebar links.


For example (the badge markup and CSS class here are illustrative; substitute your own notification styling):

$('a#ctl00_ctl59_ShellNewsfeed').prepend('<span class="sb-badge">' + unreadNewsfeedCount + '</span>');

Adding this type of notification ensures that the badges show up each time the user loads your intranet site or works inside a document library. For more information on the CSS of the data notification, refer to this link.

You can further improve this functionality by integrating third-party applications or by using a jQuery popup that shows the newsfeed results when the user clicks the newsfeed link. If you are using SharePoint, you can use the SP.UI.ModalDialog control to show the newsfeed in a dialog window.
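For the SharePoint case, here is a minimal sketch of wiring the newsfeed into SP.UI.ModalDialog. The "/newsfeed.aspx" path and the dialog title are assumptions, and SP.js must already be loaded on the page.

```javascript
// Build the options object for SP.UI.ModalDialog.showModalDialog.
// The "/newsfeed.aspx" path is an assumed location for your newsfeed page.
function newsfeedDialogOptions(siteUrl) {
  return {
    url: siteUrl + "/newsfeed.aspx",
    title: "Company Feed",
    allowMaximize: true,
    showClose: true
  };
}

// In the browser, open the dialog once SP.js is available:
if (typeof SP !== "undefined" && SP.UI && SP.UI.ModalDialog) {
  SP.UI.ModalDialog.showModalDialog(
    newsfeedDialogOptions(_spPageContextInfo.webAbsoluteUrl)
  );
}
```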

Potential Issues: If you are using this control on a single SharePoint farm, everything will be dandy; however, if you are trying to do a cross-domain pull, look for Part 4 of this series, where I explain how to enable CORS preflight, plus a GPO setting for older browsers (IE7, IE8), so the suitebar control works.

After you implement the above solution successfully, you should see a suitebar control at the top of the website. I have implemented this solution on a single SharePoint 2013 farm and on SharePoint 2010/2007 farms pulling data from the SharePoint 2013 REST API. It works very well.

Performance impact on page load: The suitebar control itself has hardly any impact on page load time; however, make sure to run the warm-up script for the SharePoint 2013 farm where the REST API exists. This ensures the red notifications show up instantly. I am currently working on a custom cache solution for the suitebar control to further improve performance.


SharePoint 2013 Enterprise Social Applications: Part 1 (Introduction)

Introduction

The goal of this post is to showcase the possibilities of the Social and FAST Search capabilities in enabling Enterprise Social across your company. The SharePoint 2013 REST API enables us to develop dynamic websites using jQuery. It also enables other applications to talk to and use the social features, using OData + jQuery to GET or POST information to the social newsfeed. Implementing enterprise social has many hurdles, the most prominent being the lack of relevant data in the newsfeed. Another issue most companies face when deciding to implement social is showing the ROI to leadership and convincing them that the goal is not to implement Facebook or Instagram for the employees but to leverage social features to get their work done. So in the following posts I will not talk about the benefits of social; if you still want to see those benefits, click here 🙂. Part 1 of this series is a lot of generic information; if you want to get straight to the applications, jump to Part 2.

So you had your warning, here we go!!!

SharePoint Social vs Yammer

In this solution I have used SharePoint Social instead of Yammer for the following reasons.

1. A true enterprise social system should do more than newsfeed posts or communities; it should be able to integrate with other systems to get data and display it to the users. In about 80% of cases, applications reside inside your data centers, information can be accessed securely using SOAP APIs (when connecting to MS Project, Outlook, Dynamics, Documentum, or other C# or Java solutions developed internally), and you can customize it using an on-prem SharePoint solution.

2. If you use SharePoint for document collaboration, there are more integration features between SharePoint and SharePoint Social (like following sites, documents, and people).

3. The most obvious reason: if you do not have Yammer in your environment, then you are bound to use SharePoint Social.

This post is my version of how a social intranet and collaboration site could look and work, so use your discretion and experience to implement the entire solution or parts of it.

Let's look at the current issues that we face with Enterprise Social.

Issue 1. Navigating users from the intranet to a social website. Most users will be willing to participate and reap the benefits if you have a website that integrates social features. So just adding a newsfeed web part to your home page, or setting up another web application for social, will NOT work.

Solution: Microsoft kinda solved this issue with the SuiteBar control; however, it is not enough, as it just acts as a navigation bar with links. The suitebar that we will develop will be a dynamic bar with notifications. You can add this suitebar to any of your SharePoint web applications or .NET websites. The suitebar will use jQuery and the REST API to GET and display data.

Issue 2. As you enable the social newsfeed, tasks, blogs, communities, knowledge base, and idea management solutions in your farm, you expect users to travel to all these sites to get information, and you will see a steady drop in usage, as users do not appreciate going to different websites only to find no new updates.

Solution: Consider a push architecture rather than pull. If you enable all the features described above, you should push information to the users rather than make them pull it by visiting each of these websites. So how do we solve this? Add an iPhone-style notification on the suitebar for the latest updates from all these different sites. Secondly, aggregate as much information as possible using jQuery or a custom web part (e.g., an aggregation of blogs, or of unanswered questions in communities).
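The push model can be sketched as a tiny aggregator: gather a count per source (each from its own REST call, like the timeline query in Part 2), then render badges only for the non-zero sources. The counts map, badge ids, and CSS class below are illustrative.

```javascript
// Turn a map of per-source counts into the badges worth rendering.
// Example: { tasks: 3, blogs: 0 } -> [["tasks", 3]]
function badgesToShow(counts) {
  return Object.keys(counts)
    .filter(function (key) { return counts[key] > 0; })
    .map(function (key) { return [key, counts[key]]; });
}

// Browser usage (illustrative): after the REST calls complete,
// prepend one badge per non-zero source to its suitebar link.
// badgesToShow({ tasks: taskCount, blogs: blogCount }).forEach(function (b) {
//   jQuery("#sb-" + b[0]).prepend('<span class="sb-badge">' + b[1] + "</span>");
// });
```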

Issue 3. Generic news blasts to employees vs. subscription-based news updates. Most companies prefer to send a weekly blast of all the updates to the users, irrespective of whether the news applies to them or not.

Solution: We will create a subscription-based framework with SharePoint hashtags and event receivers that integrates into your intranet and gives users the ability to subscribe to the news channels of their interest; when news is posted, the subscribed users get an update in their newsfeed/iPhone newsfeed app.
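As a preview of the moving parts, here is a sketch of pushing a post into the current user's feed with the social REST API. The payload follows the documented SP.Social.SocialRestPostCreationData shape; the hashtag-to-subscriber resolution belongs to the framework covered later, and the site URL is illustrative.

```javascript
// Build the body for POST {site}/_api/social.feed/my/Feed/Post.
function feedPostPayload(text) {
  return {
    restCreationData: {
      __metadata: { type: "SP.Social.SocialRestPostCreationData" },
      ID: null,
      creationData: {
        __metadata: { type: "SP.Social.SocialPostCreationData" },
        ContentText: text,
        UpdateStatusText: false
      }
    }
  };
}

// Browser usage (illustrative; a POST needs the request digest):
// jQuery.ajax({
//   url: "https://www.contoso.com/_api/social.feed/my/Feed/Post",
//   method: "POST",
//   data: JSON.stringify(feedPostPayload("New post in #hrnews")),
//   headers: {
//     "Accept": "application/json; odata=verbose",
//     "Content-Type": "application/json; odata=verbose",
//     "X-RequestDigest": jQuery("#__REQUESTDIGEST").val()
//   }
// });
```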

Issue 4. Rewards and recognition, or badges. Currently there is very little out-of-the-box support for gamification inside SharePoint 2013; it is limited to communities. However, this feature can and should be used outside of communities, e.g., for task completion, metadata usage across sites, idea generation, contribution to the knowledge base, or peer-to-peer recognition.

I will try to cover these in the following series.

  1. Create a suitebar control with dynamic notification using JQuery and REST API
  2. Create a framework through which users can subscribe/unsubscribe to news channels (using #tags)
  3. Security and CORS for Cross Domain.
  4. Badges framework beyond communities
  5. Idea Management and Knowledge base solution.

-Kartik


SharePoint 2013 and List View Threshold

If you run a GB- or TB-sized SharePoint farm on 2010 or 2013, you have probably encountered the famous SharePoint List View Threshold problem. It is actually sad: in an era with multiple database options (SQL, NoSQL, S3, Hadoop, memcached) capable of tackling much more complex problems, we still face this issue with SharePoint (more particularly with SQL Server in this case). And it is not a new issue; it was there in 2010 and is still an issue in 2013. Unfortunately, there is no perfect solution, only some workarounds.

Problem: I am not going to go into detail about the issue at hand; if you have found this link, you already know what the issue is, or you have seen the error.


In short, if you have a list with a view showing more than 5,000 items with filters or sorts, it will throw the threshold error. There are other causes as well, related to the number of lookup fields in the list, or a list doing a lookup to another list that has exceeded the threshold.

The point below is something that most people propose, including Microsoft; if you are looking for something more meaningful, skip ahead to the solutions.

The useless workaround. This workaround is Microsoft's way of teasing you and saying, "we give you an option to remediate the issue but strongly advise against using it, because there will be severe performance issues." If you have a site with more than 5,000 to 10,000 users actively using SharePoint, this is not a solution for you. If you have a global SharePoint presence with users connecting and collaborating on large lists, this is not an option for you either. So I would say this is an option only for a small farm, more like a sandbox farm.

If you still go ahead and use this workaround, I would advise you to understand what you are doing to your farm before switching the values. As you keep increasing the view threshold, requests per second decrease dramatically, and SQL table locks and timeouts start appearing for long-running operations. One specific case I have seen: workflow pause and resume will almost certainly fail, and Content Query web parts bringing in large content will fail.


If you still want to go ahead and do it: most of the throttles and limits covered in this section can be configured in Central Administration by going to Manage Web Applications, selecting a Web application, and choosing General Settings – Resource Throttling from the ribbon.

Workarounds: Let's spend more time on the actual workaround(s). If you are looking for a script or a setting to modify that would fix this issue while keeping performance intact, then I am afraid the answer is no; AFAIK there is no such solution, only workarounds at the time of writing.

1a. Modifying the view filters and indexing. IMO this is the best available solution. It's a bit tedious and will definitely receive a lot of backlash from your users, but you will have to pick the lesser of the two evils.

Firstly, create an All Items view with no filters or sorts. This ensures users are at least able to look at the data and use the search box to find items.

Secondly, on the view, make a list of all the columns being used as filters, then log in as an admin and add these as indexed columns. Make sure not to index every column, only the required ones.

Now, in the view, select an indexed column as the filter. This should do it; the view should now be working. If the view still doesn't work, go to 1b.

1b. Modifying the view filters and the row limit for the first filter. If you have followed 1a and your view still doesn't work, then your filters are still bringing in more than 5,000 results. Check the first filter: if it returns more than 5,000 items and you have AND filters, the view won't work (I know it's frustrating). So make sure your first filter returns fewer than 5,000 results and the remaining filters further reduce the result set.

2. Search-driven content. In SharePoint 2013, with FAST search and the content search web parts, we are able to get users the information they need. Understand the requirements: is the user using the view just to look at historical information, or are they actively adding information based on the results from the view?

In search you have a few options

1. Content Search. Under Content Rollup, you can bring in the data from your list (or multiple lists and libraries) and display it using property mappings and appropriate display templates. This web part will get you what you are looking for in terms of showing data.

2. Search-driven content: This section has a few web parts that display things like popular items, recommended items, and items matching a tag; use these to your advantage to display the data to the users.

3. Object model to get items in the view. You can also use the SharePoint client object model to read the data in the list. It is explained very well in this post: http://sharepoint.stackexchange.com/questions/28446/how-to-get-all-items-in-a-view-using-client-object-model-javascript
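If REST is an option, another way to read a large list without tripping the threshold on a single query is to page through it, since each page's request stays under the limit. A sketch (the list title and page size are illustrative):

```javascript
// Build the first page request; subsequent pages come from data.d.__next,
// which carries the server-generated skip token.
function firstPageUrl(siteUrl, listTitle, pageSize) {
  return siteUrl + "/_api/web/lists/getbytitle('" + listTitle +
    "')/items?$top=" + pageSize;
}

// Browser usage (illustrative): follow data.d.__next until it is absent.
// function fetchAllItems(url, items, done) {
//   jQuery.ajax({
//     url: url,
//     method: "GET",
//     headers: { "Accept": "application/json; odata=verbose" },
//     success: function (data) {
//       items = items.concat(data.d.results);
//       if (data.d.__next) { fetchAllItems(data.d.__next, items, done); }
//       else { done(items); }
//     }
//   });
// }
// fetchAllItems(firstPageUrl("https://www.contoso.com", "LargeList", 1000), [], render);
```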

These are a few techniques I have used over the years, apart from using the JSOM to manipulate list data for KPIs and other dashboards.

If you have any other ways to tackle this issue, post them in the comments.

SharePoint 2013: Newsfeed not Showing Video preview thumbnails

As most of you know, the SP2013 newsfeed provides a cool feature wherein you get a video thumbnail and an embedded player for links that you post from YouTube. In most cases it works, but in some environments, where you use a proxy server, this functionality doesn't work. I tried to find documentation on TechNet but, as usual, nothing.

Solution:

There are two stages to resolving this issue.

1. Application Pool account needs access to the Internet

Log in to the WFE servers on your SP2013 farm and put in the proxy information so that the account is able to access the Internet. Test it by visiting https://youtube.com. Repeat this step on all WFEs. This resolves part of the error: you are now able to connect to gdata.youtube.com to get the thumbnail information, and the SP Attachment class can build the complete JSON object. However, the video thumbnail will still not work, because the SP farm needs to trust the website using the *.google.com certificate.

2. Enable Trust between SPFarm and Youtube.

On one of the WFE servers, visit https://youtube.com in IE, click the lock icon in the address bar, and download the certificate (Details –> Copy to File; accept the defaults and store it on the local computer of your WFE server as yourfilename.cer).

Once you have this cert, go to Central Administration–>Security–>Manage Trust, click New, give it a name, browse to the .cer file you just downloaded, and click OK.

Once this is done, go to your .cer file on the local computer, double-click it, choose Install Certificate, and follow the default prompts; the import will add the certificate to your Trusted Root store. Make sure to use Local Computer and not the local user.

Repeat steps 1 and 2 on all of the WFE servers in the SP farm.

Go back to your site, and the embedded video URLs should now be working.

-K

SharePoint 2013: Search Results Last Modified/Changed Showing Wrong Dates (Regional Settings)

One of the few challenges working on a central SharePoint farm with global users accessing and creating SharePoint sites is the localization of content and settings.

You are probably already aware of the language translation limitations when it comes to SharePoint (in plain words, it sucks). However, I noticed a weird issue regarding date formats.

Take an instance where you have a central SharePoint farm with US regional settings (general settings, site regional settings, and server region settings), but someone changes the regional settings for a single site collection from US to, say, UK. A document's last modified/changed property is then in ddmmyyyy format, while SharePoint expects mmddyyyy. Luckily, the Modified date (the SharePoint column) is able to understand this correctly and show the appropriate date format. Search, however, goes crazy and starts showing the incorrect date format.

For example, if you have the modified date (in the document properties) in ddmmyyyy format as 11/05/2014, when search crawls the document it converts this to mmddyyyy, which would be November 5, 2014. That is incorrect; it should be May 11, 2014. Thereby all of your recently-changed web parts, the refinement panel date filter, and your date sorts on content search web parts get thrown off. Now you might say, "Duh! Just change the regional settings on the enterprise search server and the site collection to fix this issue," but that is not the solution, because other site collections in the farm might still be using the mmddyyyy format.

 

So what is the freaking solution already, you ask!!

Option1. 

1. Well, here it is. Go to SharePoint CA–>Search Service Application–>Search Schema–>Managed Properties, and search for the LastModifiedTime property.

2. Click it to modify, scroll down to the Mapping to Crawled Properties section, select the "Include content from the first crawled property that is not empty, based on the specified order" radio button, and move the ows_modified property to the top. This ensures search always gets the Modified Date column information instead of the document's modified-date property. Click OK and then run a full crawl.

Now, hopefully, you should see the correct dates appearing in the search results and web parts, the refinement panel dates should be working, etc.

 

Well, if it still doesn't work, there is still hope.

Option2.

There is a managed property called "ModifiedOWSDATE" with a crawled property called ows_q_DATE_Modified. If you are modifying the search XSLT, you can use this property; however, keep type conversion in mind, since this managed property is of field type TEXT.

However, in most cases the option1 should work without any issues.

If you want to check, before making these changes, whether the managed properties are returning the right dates, you can add the search content web parts and pick the diagnostic display for the site that is having the issue, to make sure it is returning the right value.

 

Hope it helps !!

-K

 

SharePoint 2013 Distributed Cache: Boon or Bane.

SharePoint 2013's Distributed Cache is one of my favorite features of SharePoint 2013. Finally, SP2013 is moving into the realm of memory-based storage for faster access (along the lines of memcached). Many SharePoint features use the Distributed Cache, like:

  • All of the social features
  • Authentication
  • OneNote client access
  • Security trimming and page-load performance

By default, the Distributed Cache is assigned 10% of the total memory of the server (so do not run Excel Services or Search Services on the same server). If you need to assign additional memory:


Update-SPDistributedCacheSize -CacheSizeInMB <cacheSizeInMB>

If a cache host's server utilization reaches 95%, it throttles itself, no longer accepting new requests until utilization falls below 70%.
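For a dedicated cache server, the TechNet sizing guidance (reserve 2 GB for the OS and other processes, give half of the remainder to the cache, never exceed 16 GB) works out as a quick calculation. This is a sketch of the arithmetic, not farm-specific advice:

```javascript
// Recommended cache size for a dedicated Distributed Cache server,
// per the TechNet guidance: (total RAM - 2 GB) / 2, capped at 16 GB.
function cacheSizeInMB(totalMemoryGB) {
  var half = (totalMemoryGB - 2) / 2; // GB left over for the cache
  return Math.min(half, 16) * 1024;   // cap at 16 GB, expressed in MB
}
// A 16 GB server: (16 - 2) / 2 = 7 GB, so -CacheSizeInMB 7168
```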

Despite all these features and benefits, it comes with many issues for SharePoint administrators. I will point out some of the issues I have faced over time.

Issue 1. “CacheHostinfo is null”.

This occurs when you run the Stop-SPDistributedCacheServiceInstance -Graceful or Remove-SPDistributedCacheServiceInstance commands.

Resolution: the script below usually fixes the issue.

$SPFarm = Get-SPFarm

$cacheClusterName = "SPDistributedCacheCluster_" + $SPFarm.Id.ToString()

$cacheClusterManager = [Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheClusterInfoManager]::Local

$cacheClusterInfo = $cacheClusterManager.GetSPDistributedCacheClusterInfo($cacheClusterName);

$instanceName = "SPDistributedCacheService Name=AppFabricCachingService"

$serviceInstance = Get-SPServiceInstance | ? {($_.Service.Tostring()) -eq $instanceName -and ($_.Server.Name) -eq $env:computername}

$serviceInstance.Delete()

Remove-SPDistributedCacheServiceInstance

Add-SPDistributedCacheServiceInstance (run this command after 1 or 2 minutes).

The above usually fixes the issue with the cache node that is down. If not, move to the next step.

Issue 2. When you run “Use-CacheCluster” you get an error
“ErrorCode<ERRPS002>:SubStatus<ES0001>:Invalid provider and connection string read. Please provide the values manually.”

Well, you might be thinking "what a clusterF*#(", but all this means is that the cluster is unable to find the connection string information stored in the config database.

So where does it read this information from? Well, two places (actually only one place, but according to MSFT, it looks in two places).

a. HKLM\Software\Microsoft\AppFabric\V1.0\Configuration

Connection String:Data Source=<<SQL Server Name or Alias>>;Initial Catalog=<<SPConfigDatabase>>;Integrated Security=True;Enlist=False

Provider: SPDistributedCacheClusterProvider.

Typically you should not add this information manually; Add-SPDistributedCacheServiceInstance adds it to the registry. But if it doesn't, and you need to unregister a cache host, go ahead and add this information.

Once you add this, you can start using Use-CacheCluster.

b. The second place is the DistributedCacheService.exe.config file in C:\Program Files\AppFabric 1.1 for Windows Server.

<dataCacheConfig cacheHostName="AppFabricCachingService">
<log location="C:\ProgramData\Microsoft\AppFabric\Runtime" logLevel="-1" />
<clusterConfig provider="SPDistributedCacheClusterProvider" connectionString="Data Source=<<SQL Server Name or Alias>>;Initial Catalog=<<SPConfigDatabase>>;Integrated Security=True;Enlist=False" />
</dataCacheConfig>

However, it doesn't matter; Use-CacheCluster always looks for the information in the registry key above.

Issue 3. Alright, you tried all of the above, but you still get the CacheHostInfo error and the Distributed Cache service is not working. Well, we have to get the big guns out.

Let's unregister and re-register the cache host.

1. This assumes you have at least one cache host running in the cluster apart from the troubled cache host.

We will unregister the troubled cache host and re-add it to the cluster.


Use-CacheCluster
Get-CacheHost
Unregister-CacheHost -HostName [machine] -ProviderType SPDistributedCacheClusterProvider -ConnectionString 

Once this is done, run the following command:

Add-SPDistributedCacheServiceInstance

This should eventually fix all the issues. You can verify with:

Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status

Now you should see the Distributed Cache service Online on your cache hosts. If it shows as Disabled, run the commands in Issue 1 to bring the service online.

Issue 4. Get-CacheHost shows one of the cache hosts as down.

When you run Get-CacheHost, you might see one of the nodes being down. The reason is that the AppFabric service (services.msc) is not running. You may be tempted to just start the service; Get-CacheHost will then say the node is UP, but the Distributed Cache service will be stopped or will error on starting. Follow the steps in Issue 1 to fix this.

Issue 5: Data loss; I cannot see my newsfeed posts on the Following or Everyone tab. What happened?

So there are 3 scenarios.

1. You restarted your server without doing a Stop-SPDistributedCacheServiceInstance -Graceful. When this happens, the feed information is not transferred to the other cache host, and since newsfeeds are stored in memory on the server, the feeds are lost on restart.

The best practice is to restart your hosts one at a time, and before you restart a server, run the following commands:

Stop-SPDistributedCacheServiceInstance -Graceful

Remove-SPDistributedCacheServiceInstance

and upon startup, run the command below to bring the instance back online without losing your newsfeed information:

Add-SPDistributedCacheServiceInstance

2. The amount of memory on the Distributed Cache server (or the WFE running the Distributed Cache service) has hit the high-watermark level; when this happens, the Distributed Cache starts evicting posts from memory.

3. The distributed cache service crashed.

Issue 6. When you run Add-SPDistributedCacheServiceInstance, you get an error that port 22233, 22234, 22236, etc. is in use.

Run the commands in Issue 1, up to the Remove-SPDistributedCacheServiceInstance command. Then type:

netstat -a -b and check the ports above; make sure no service is using them. The Distributed Cache service should also no longer be running; if it still is, stop the service in CA or stop the AppFabric service, then run netstat -a -b again to ensure no services are using those ports. Now run Add-SPDistributedCacheServiceInstance and confirm the Distributed Cache service is running in Central Administration–>Manage Services on Server. Also make sure your firewall is not blocking these ports.

SharePoint 2013: AlternateCssUrl breaks the Site Settings link when creating a site from a site template

Ran into this issue today: we created a new SharePoint 2013 site template based on a previous site that used a modified version of seattle.master and an alternate CSS. After creating a subsite from this site template, everything works fine, except that Site Settings doesn't work anymore, and the ULS logs throw an error.


To work around this issue by removing the alternate CSS URL using PowerShell, we can use Pete's post here.

Or we can reapply the alternate CSS URL to all subsites from the site collection. However, none of these actually resolve the issue.

The only resolution is to add the CSS to the master page using the CssRegistration tag and publish the master page. This resolved my issue, and I am able to create sites using the site templates without any problems.

<SharePoint:CssRegistration
name="<% $SPUrl:<< add the relative url of your css file>> %>"
After="corev4.css"
runat="server"
/>

Hope it helps

Kartik

SharePoint 2013: Service Applications, Configuring Farm Trust between publishing and consuming service applications.

Architecturally, service applications haven't changed much from SharePoint 2010. There have, however, been numerous enhancements at the platform level. The key differences that you will immediately notice are:

1. Office Web Apps is no longer a service application

2. Web Analytics is no longer available as a service application

3. FAST is now included in the search service application.

4. New service applications: App Management Service, Machine Translation Service, Access Services.

5. Cross-farm capable: Machine Translation Service, Business Data Connectivity Service, Managed Metadata Service, Search Service, Secure Store Service, User Profile Service.


 

While installing/configuring service applications, the current best practice is to keep all of your service applications in one application pool.

Each service application deserves its own post, so I will cover these in later posts:

1. Search Service Application

2. User Profile Service Application

3. Managed Metadata Service Application

4. Work Management Service Application

One key thing I would like to cover in this post is publishing and consuming service applications across farms. Although most of this can be done through the UI, we need to run some scripts to set up the farm trust and to grant access to the Application Discovery and Load Balancer Service Application.

How to setup the Farm Trust.

1. On the publishing server (PubServer1), create a folder C:\PublishingCertificates

2. On PubServer1, in an elevated Management Shell, type:

$rootCert = (Get-SPCertificateAuthority).RootCertificate

$rootCert.Export("Cert") | Set-Content C:\PublishingCertificates\PubRoot.cer -Encoding byte

3. Copy the C:\PublishingCertificates folder onto the Consuming Server(ConServer1).

4. On the ConServer1 server, create a folder C:\ConsumingCertificates

5. Repeat step 2 on the consuming server, changing the folder location to the one from step 4:

$rootCert = (Get-SPCertificateAuthority).RootCertificate

$rootCert.Export("Cert") | Set-Content C:\ConsumingCertificates\ConRoot.cer -Encoding byte

6. Get the STS certificate (still on ConServer1):

$securetokencert = (Get-SPSecurityTokenServiceConfig).LocalLoginProvider.SigningCertificate

$securetokencert.Export("Cert") | Set-Content "C:\ConsumingCertificates\ConsumingSTS.cer" -Encoding byte

7. Copy the C:\ConsumingCertificates folder to the PubServer1

8. On the ConServer1, load the publishing server’s certificate

$trustCertificate = Get-PfxCertificate "C:\PublishingCertificates\PubRoot.cer"

9. Now let's set up the trust:

New-SPTrustedRootAuthority PublishingFarm -Certificate $trustCertificate

10. Now on the PubServer1.

$trustCertificate = Get-PfxCertificate "C:\ConsumingCertificates\ConRoot.cer"

$securetokencert = Get-PfxCertificate "C:\ConsumingCertificates\ConsumingSTS.cer"

New-SPTrustedRootAuthority ConsumingFarm -Certificate $trustCertificate

New-SPTrustedServiceTokenIssuer Collaboration -Certificate $securetokencert

11. I know you are tired of switching back and forth, but it's almost over. Now let's go back to ConServer1 and type:

Get-SPFarm | Select ID

Copy the GUID.

12. Return to PubServer1 and grant the consuming farm rights to the Application Discovery and Load Balancer Service:

$security = Get-SPTopologyServiceApplication | Get-SPServiceApplicationSecurity

$claimProvider = (Get-SPClaimProvider System).ClaimProvider

$principal = New-SPClaimsPrincipal -ClaimType "http://schemas.microsoft.com/sharepoint/2009/08/claims/farmid" -ClaimProvider $claimProvider -ClaimValue << paste the GUID from step 11 here>>

Grant-SPObjectSecurity -Identity $security -Principal $principal -Rights "Full Control"

Get-SPTopologyServiceApplication | Set-SPServiceApplicationSecurity -ObjectSecurity $security

This should set up the trust between the consuming and publishing farms. If you wish to remove the trust, go to Central Administration –> Security –> Manage trust.
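To confirm the trust took hold, you can list the trusted root authorities and token issuers on each farm. This is just a sanity check; the names shown will be whatever you passed to the cmdlets above:

```powershell
# On ConServer1: the publishing farm's root authority should appear
Get-SPTrustedRootAuthority | Select-Object Name

# On PubServer1: both the consuming root authority and the STS token issuer should appear
Get-SPTrustedRootAuthority | Select-Object Name
Get-SPTrustedServiceTokenIssuer | Select-Object Name
```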

Now you should be able to publish and consume service applications through Central Administration.
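If you prefer to script this step as well, the equivalent cmdlets look roughly like the sketch below. The service application name "Managed Metadata Service" and the farm URLs are placeholders for your own values:

```powershell
# On the publishing farm: publish an existing service application
$sa = Get-SPServiceApplication | Where-Object { $_.Name -eq "Managed Metadata Service" }
Publish-SPServiceApplication -Identity $sa

# On the consuming farm: discover what the publishing farm offers
# (32844 is the default SharePoint topology service port)
Receive-SPServiceApplicationConnectionInfo -FarmUrl "https://pubserver1:32844/Topology/topology.svc"

# Then create a proxy pointing at the published service URI returned above,
# e.g. for Managed Metadata
New-SPMetadataServiceApplicationProxy -Name "Remote MMS Proxy" -Uri "<published service URI from the previous command>"
```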

 

Hope this post helped you.

-Kartik

 

SharePoint 2013: Architecture, Design, Streamline Topology

As you compare SharePoint 2013 with previous versions (2003, 2007 or 2010), you will see a lot of functional changes and improvements from one version to the next. Architecturally, however, much didn't change from 2003 to 2007; SharePoint 2010 introduced the service application framework, which let us scale out the farm and was, in my opinion, a great improvement when you work on global environments. SharePoint 2013 adds a few new features that greatly improve performance and UI responsiveness. Understanding the concepts below is key to architecting your SharePoint farm. Each topic links to the SharePoint TechNet article that gives a detailed explanation.

  • Distributed Cache Service: a customized AppFabric cache for SharePoint 2013 that handles authentication, newsfeeds, security trimming and page-load caching, thereby reducing the load on the SQL Server.
  • Request Management: a rule-based approach that helps distribute specific workloads to specific servers.
  • Office Web Apps: Office Web Apps needs to be installed and running on a Windows Server without any SharePoint DLLs on it. This lets you use the server for other applications like Lync and Exchange; however, if you have more than 20,000 users, have a dedicated OWA server.
  • Minimal Download Strategy: You might have heard about the new Minimal Download Strategy (MDS) feature. I am not going to go into detail about MDS in this article, but there are two key things you need to know. First, the key concept of MDS is to reload only those elements on a page that have changed from the previously loaded page. Second, as of this writing, MDS is not compatible with sites that have the publishing feature; in fact, pages perform worse with both publishing and MDS turned on (check my load/web performance testing article for conclusive results). I have included this in the architecture section to ensure you take it into consideration while developing solutions for your new SharePoint 2013 environment or migrating SharePoint 2010 master pages. The benefits of MDS are pretty evident if you have a global collaboration environment with high WAN latency and slow page load times.
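MDS is a web-scoped feature, so you can toggle it per site with PowerShell. A quick sketch, assuming the out-of-the-box feature name MDSFeature and example site URLs you would replace with your own:

```powershell
# Check whether MDS is active on a site
Get-SPFeature -Web "http://sharepoint/sites/team" | Where-Object { $_.DisplayName -eq "MDSFeature" }

# Enable MDS on a collaboration site
Enable-SPFeature -Identity "MDSFeature" -Url "http://sharepoint/sites/team"

# Disable MDS on a publishing site, where it hurts performance
Disable-SPFeature -Identity "MDSFeature" -Url "http://sharepoint/sites/publishing" -Confirm:$false
```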

Hardware and Software Requirements

I am not going to cover this section, and I don't think anything more can be said about this topic beyond what is listed in the Microsoft documentation. Each company's infrastructure is different, and one cannot give a recommendation without running stats on WAN traffic, latency, connectivity, load balancers (software or hardware), client OS, etc.

 

Architectural Designs. 

Alright, now let's get to the main part of this article: architecting a SharePoint farm. So what is the best SharePoint farm architecture model? Well, the simplest one.

The best performing SharePoint farms are the ones with the simplest design, built not around the number of servers or the amount of RAM but around user behavior and how end users interact with SharePoint. So make sure to run some stats on your old SharePoint or content management system against criteria such as these:

1. Total reads vs. total writes: are you building a SharePoint farm that is read-heavy, write-heavy or both? If it is an intranet farm with static pages, consider adding load-balanced WFEs with sticky sessions and a WAN accelerator to improve performance.

2. Average size of a document in your CMS: if your average size is 1 MB, make sure you have enough RAM and CPU for users to upload and download these documents in less than 3 seconds, and ensure you use MDS. Use Office Web Apps so that users open documents in the browser on the server rather than downloading them over the wire.

3. Total number of active users: determine at what time most of your users are active and schedule your long-running processes outside this time frame. Also, configure your app pool recycles for when there is little or no user activity.
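Scheduling an app pool recycle for a quiet window can be scripted against IIS. A sketch using the WebAdministration module that ships with IIS; the app pool name "SharePoint - 80" and the 02:00 window are example values you would adjust:

```powershell
Import-Module WebAdministration

# Clear any existing recycle schedule, then recycle nightly at 02:00,
# a low-activity window in this example
Clear-ItemProperty "IIS:\AppPools\SharePoint - 80" -Name recycling.periodicRestart.schedule
New-ItemProperty "IIS:\AppPools\SharePoint - 80" -Name recycling.periodicRestart.schedule -Value @{value="02:00:00"}
```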

From a conceptual standpoint, one key architectural design that I feel has been performing well for user bases ranging from 20,000 to 100,000 end users is the new Streamline topology. The concept of the Streamline topology has been around for a while, but its performance is greatly improved by the new Distributed Cache and Request Management features in SharePoint 2013.

Front End servers: Conceptually, the idea is to have all the services and service applications that end users interact with running on the WFEs. This ensures GET and POST requests are served immediately and considerably improves the user experience. These servers are optimized for fast performance.

Application Server or Batch Processing Server: In this tier the servers are designed to take greater load and usually run most of the batch-processing tasks, like workflow, User Profile synchronization, etc. The idea is to ensure that, irrespective of the load on these servers, end users are not directly impacted.

Database Server: This tier remains the same as in previous versions; if you have a higher load, use database clustering.

Enterprise Search Server: Since SharePoint 2013 and FAST are integrated into a single product, I would highly recommend a dedicated search server, especially if you want to use continuous crawl (which should be used in all environments). I would also recommend a dedicated search database server. With SharePoint 2013, you will eventually fall in love with the capabilities the new Search provides: content search web parts, search analytics, search connectors to third-party applications, etc.
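Continuous crawl is switched on per content source. A quick way to enable it on the default content source, assuming the out-of-the-box name "Local SharePoint sites" (adjust for your farm):

```powershell
# Get the search service application and its default content source
$ssa = Get-SPEnterpriseSearchServiceApplication
$cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"

# Turn on continuous crawls (the content source must be a SharePoint content source)
$cs.EnableContinuousCrawls = $true
$cs.Update()
```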

Distributed Cache and Request Management servers: If you plan to use SharePoint social and have more than 15,000 users, I would highly recommend dedicated Distributed Cache and Request Management servers; if not, you can run them on the web front-end servers.
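Dedicating servers to Distributed Cache means removing the cache instance from the servers that shouldn't run it and, optionally, resizing the cache on the dedicated boxes. The cmdlets below are the standard ones; the 2048 MB size is only an example value:

```powershell
# On a server that should NOT run Distributed Cache (e.g. a batch-processing server)
Remove-SPDistributedCacheServiceInstance

# On a dedicated Distributed Cache server
Add-SPDistributedCacheServiceInstance

# Optionally resize the cache; the service must be stopped first,
# and the same size must be set on every cache host
Update-SPDistributedCacheSize -CacheSizeInMB 2048
```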

Office Web Apps Servers: I do like the decoupling of Office Web Apps from SharePoint; it makes sense that one server, or an Office Web Apps farm, can serve many applications like SharePoint, Lync and Exchange. If you have a global environment with 80,000 employees, make sure to create an Office Web Apps farm dedicated to SharePoint, especially if you decide to have all SharePoint sites use Office Web Apps to open documents. If you have 20,000 users, you can get away with one dedicated SharePoint Office Web Apps server. However, if your company plans to use Office Web Apps for Lync, Exchange and SharePoint, create a load-balanced Office Web Apps farm.

I am not going to go into detail about SSRS integration, Excel Calculation servers or PowerPivot servers in this article, but I will try to cover them in my upcoming BI article.

The Streamline topology PDF document also provides a use case from Microsoft's Office division, which I think is a good place to start; however, it doesn't provide much detail about where the users are located, WAN connection type, storage optimization, workflow, SharePoint social, etc.

If you want to use the Streamline topology, follow my article about SharePoint installation, which is designed around the Streamline topology.

I hope this article at least guides you in the right direction when architecting your SharePoint farm. Make sure to start simple and then add servers as you see the need.

-Kartik

 
