Friday, 7 August 2015

Enabling Yammer Enterprise with SSO in dev/test Office 365 environments

This is the 3rd article in my Challenges in Office 365 development – and how to address them series. Today we’ll focus on Yammer – for my team, it’s increasingly common to do small bits of development around Yammer, and so having the right set-up in dev/test environments starts to become important. This could involve non-coded approaches such as Yammer Embed – for example, showing the feed for the “Development” group in Yammer on a team site home page, or using Yammer for comments on an intranet page. But we also find ourselves implementing simple things with the Yammer API, such as automatically adding users to a Yammer group when they are granted access to a team site. With these cases, you really want the behaviour in test environments to be exactly the same as in production. But before we go any further, let’s put this in context of the overall series content:

Why Yammer Enterprise is important in dev/test environments

Let’s be clear, it *is* possible to develop for Yammer without having Yammer Enterprise – and the two scenarios I mentioned above can indeed be accomplished with the free version of Yammer. But it’s also fair to say you’ll hit some trade-offs around user identities and single sign-on (SSO) between Office 365/Yammer. If you have multiple Office 365 environments (e.g. dev/test/production), you’ll most likely be in a situation where only production has the true user experience that end users will see. This might be acceptable to you, but equally it can make things challenging for testing (“OK, but just imagine you don’t have to sign in here!”).
Specifically, the things you’ll miss without Yammer Enterprise are:
  • Users have to manually register with the Yammer network (or be invited) using their e-mail address – this step will create their Yammer profile
  • Single sign-on will not work – any Yammer functionality on your pages (e.g. a feed, or a “Like” button) will show a link to sign-in to Yammer, and navigating to the main Yammer site will also require sign-in, rather than just automatically passing you through
  • All of the Yammer features which come with Yammer Enterprise, such as the reporting and administration tools
On a related note, developers using Yammer Embed will notice the “use_sso” flag on the JavaScript object in the Embed code. From memory, even with Yammer Enterprise enabled this has to be set to “true” to avoid the behaviour listed in the 2nd bullet point above.
It’s probably also worth mentioning that there are different flavors of SSO with Yammer. Here I’m referring to “Office 365 SSO with Yammer”, but note it was always possible to implement a form of Yammer SSO before Office 365 came along – this was typically done using a SAML-based IdP (such as ADFS) for both Yammer and your other corporate systems. Depending on your organization’s situation, even if you use Office 365 you may find that you can only use “plain” Yammer SSO, rather than Office 365 SSO (e.g. you have users without e-mail addresses, or multiple Yammer networks).
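To make the “use_sso” point concrete, here’s a minimal sketch of a Yammer Embed group feed. The container id, network name and feedId below are illustrative placeholders (not values from this article), and in a real page the `yam` object is provided by Yammer’s platform_embed.js script:

```javascript
// Yammer Embed options for a group feed – container, network and feedId
// are placeholder assumptions, not real values.
var embedOptions = {
  container: "#embedded-feed",   // a DIV on your page
  network: "yourdomain.com",     // your Yammer network
  feedType: "group",
  feedId: "1234567",             // e.g. the "Development" group's id
  config: {
    use_sso: true,               // needed even with Yammer Enterprise enabled
    header: true,
    footer: true
  }
};

// In the browser, yam is provided by Yammer's platform embed script;
// guard the call so the snippet is inert elsewhere.
if (typeof yam !== "undefined") {
  yam.connect.embedFeed(embedOptions);
}
```

Without `use_sso: true`, the embedded feed shows a “sign in to Yammer” link rather than passing the user straight through.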

A note on licensing and other pre-requisites

So, we’d like to use Yammer Enterprise even in dev/test environments if possible. However, this does require licenses for any user who will access Yammer since we’re going beyond the free version. Consider the following:
  • If your dev/test Office 365 tenant is on an Enterprise plan (e.g. E1, E3), your users will automatically have a Yammer license – these do not need to be assigned in the Office 365 tenant admin screens
  • If you’re on some other kind of plan (e.g. a SharePoint-only plan such as SharePoint P2), then you can “additively” pay for individual Yammer licenses as a bolt-on – these do need to be assigned to individual users
In addition to licensing, another key pre-requisite is that Yammer Enterprise can only be activated on an Office 365 tenant which is integrated with a custom domain – in other words, your user accounts will be on your own domain rather than the default “onmicrosoft.com” domain. This would either be done in the normal way during the initial set-up process, or for dev and test environments you could use the approach I discuss in Implementing AD integration with Office 365 using a sub-domain (for dev/test).
But assuming licensing and domain integration are in place, you’re good to go with Yammer Enterprise.

HOW-TO: configure Yammer Enterprise for an Office 365 environment

Go to the Office 365 tenant admin dashboard, click “Included Services”, and then on the right of the page, click “Yes, activate Yammer Enterprise for my network”:
You’ll then be asked which domain you want to use for Yammer – assuming you only have one verified/integrated domain, you’re simply confirming you want to proceed:
At this point, the Yammer network provisioning process starts:
Once complete and you see the above message, you should be able to click the “log in to Yammer” link and sign in. On the next screen, some home realm discovery kicks in and detects that you can be signed in with your current identity without providing a password:
From there you can complete the sign-up wizard – adding any Yammer colleagues, joining/starting groups, and adding a profile photo:
And voila – you now have Yammer Enterprise!
If you haven’t done so already, you should switch the Office 365 tenancy to use Yammer for social:
Once all these steps are complete, you’ll know the Yammer network is a Yammer Enterprise one because you should see the additional administration tools:


So that’s how to enable Yammer Enterprise in an Office 365 tenancy being used as a dev or test environment (or production for that matter – the process is the same). If you go through this process, your dev/test Office 365 environments should behave just like your production environment. This can often be an important point for testing, especially around mobile devices and other non-domain joined devices, and especially where Yammer Embed or any custom Yammer functionality has been implemented.
This series will continue to discuss options and techniques for improving Office 365 development. Other posts:

Wednesday, 15 July 2015

Debugging errors in SharePoint add-in (app) development

If I was to summarize this post in one sentence, I’d say “if your SharePoint app/add-in is giving a generic error such as ‘An error occurred while processing your request’, then you’ve probably got the add-in registration/authentication stuff wrong!”. This applies to provider-hosted SharePoint add-ins rather than SharePoint-hosted add-ins, and also doesn’t apply to apps which use the Office 365 APIs as an alternative to the SharePoint add-in model. But for anyone developing/deploying SharePoint add-ins, this is a really common experience for the modern SharePoint/Office 365 developer – everybody screws this up at some point :) There are numerous forum posts and probably some existing articles about this, but I always seem to fall into this trap (usually with a typo or something) and then spend 1-2 hours debugging and searching the internet until I remember what specifically the problem is. So, I just wanted to take a short diversion from my Challenges in Office 365 development – and ways to address them series to write about this; partly just to have a quick reference for the next time I run into the problem, because I know I definitely will :)

Symptoms of the problem

When you click on your add-in, you are taken to the remote website (e.g. on localhost, or in Azure, IIS, or wherever you published it to) but instead of seeing the default page you simply see a white page with a simple message:
This happens because of this boilerplate code which is found in most SharePoint add-ins – see the 'CanNotRedirect' case in particular:

Uri redirectUrl;

switch (SharePointContextProvider.CheckRedirectionStatus(Context, out redirectUrl))
{
    case RedirectionStatus.Ok:
        // Context is valid - continue processing the request
        break;

    case RedirectionStatus.ShouldRedirect:
        Response.Redirect(redirectUrl.AbsoluteUri, endResponse: true);
        break;

    case RedirectionStatus.CanNotRedirect:
        Response.Write("An error occurred while processing your request.");
        Response.End();
        break;
}

If you perform any debugging and step through the code, you’ll probably end up in the TokenHelper class, looking at this method:

public static string GetContextTokenFromRequest(HttpRequestBase request)
{
    string[] paramNames = { "AppContext", "AppContextToken", "AccessToken", "SPAppToken" };
    foreach (string paramName in paramNames)
    {
        if (!string.IsNullOrEmpty(request.Form[paramName]))
        {
            return request.Form[paramName];
        }
        if (!string.IsNullOrEmpty(request.QueryString[paramName]))
        {
            return request.QueryString[paramName];
        }
    }

    return null;
}

In many cases, what you’ll find is that null is being returned from this method – effectively no value can be found for any of the specified parameters, either in the URL or the post body. Lots of earlier code has executed successfully, but no valid context token has been passed from SharePoint/Office 365 into the add-in, and so we fail at this point.

Digging deeper

When you click on an add-in (e.g. in the SharePoint Site Contents page), you are taken to AppRedirect.aspx – this page eventually does a redirect to the redirect URL you should have specified in the app registration. This is a form POST, and in the body of the request is the token needed by your remote code. Or at least, it should be. In the case where things aren’t working, you’ll notice that a token is NOT passed (e.g. in Fiddler):
Add-in redirect - Fiddler 1
Add-in redirect - Fiddler 2
We can see two things about the body of the form post from the image above:
  • The SPAppToken parameter is empty
  • The SPRedirectMessage parameter gives us a clue about my specific problem, with a value of “EndpointAuthorityDoesNotMatch”
To be clear, the EndpointAuthorityDoesNotMatch message is just one flavor of this problem – you might see something else here depending on precisely how you’ve messed up your app registration/authentication setup :) For example, you might have the add-in registration all correct, but you’re trying to run the application on HTTP rather than HTTPS – in that case, you’ll see:
  • An empty SPAppToken
  • The SPErrorInfo parameter containing something like “The requested operation requires an HTTPS (SSL) channel. Ensure that the target endpoint address supports SSL and try again.”
The message will be URL-encoded, so it will look more like this (for anyone pasting this into an internet search):
If you’re developing a high-trust add-in (for on-premises SharePoint), another flavor of the problem is that you haven’t lined up the authentication requirements – in particular the token-signing via a certificate aspects. See Package and publish high-trust apps for SharePoint 2013 for more info if this is your scenario.
In my case, EndpointAuthorityDoesNotMatch is telling me that the URL of the remote application does not match what is expected from the app registration for this client ID. I can use the /_layouts/15/AppInv.aspx page to look up the details I specified when I registered my app with /_layouts/15/AppRegNew.aspx. IMPORTANT: if that last sentence doesn’t mean much to you and/or you’re starting out in add-in development, then your mistake is probably that you just haven’t registered the add-in at all! The next section might help:

Background - key things to know/remember when developing add-ins

Add-in registration

Add-ins need to be registered with SharePoint/Office 365 with a set of required permissions requested (which the person installing the add-in needs to consent to) – without this, you effectively just have some random code outside of SharePoint which is trying to read/write data, and so the security mechanisms kick in and your code is blocked. This can be done via the Seller Dashboard for add-ins that will be widely distributed (even if just amongst your own environments), or using the AppRegNew.aspx page for other types. I suggest reading the following if you need some background on this - Guidelines for registering apps for SharePoint 2013
“Development mode” vs. “packaging for deployment mode”
Visual Studio really tries to help you out when developing add-ins. When I hit F5 to launch and debug my add-in, quite a few things are taken care of so I can get running quickly and see my code - I think of this as “development mode”. Let’s talk about what happens here, and also what you need to think of later on when packaging for deployment to another environment.
Development mode:
Upon pressing F5, the add-in is packaged up and deployed to the SharePoint site specified - this can be local or in SharePoint Online, and any previous installations there are removed. Additionally, the add-in is automatically registered with this SharePoint environment. Assuming the website for the add-in (i.e. the remote ASP.NET site) can indeed be browsed on the URL specified, then the F5 process opens up a browser window where the app is being installed/trusted, and you can then click on the icon and enter the app. Some other points to note:
  • The URL for the remote ASP.NET website is specified in the properties for the add-in web project – by default, a “localhost” URL such as https://localhost:44311/ is used against your local IIS Express instance. It’s possible to change this in the project properties, for example to a named virtual directory URL for use with a full IIS instance.
  • If using IIS Express, you should consider whether localhost can be used with HTTPS on your machine. I had it in my head that it was possible somewhere to specify that SSL should NOT be used for the remote website – but frankly I can’t now remember if this is true or how it’s done, so that could be nonsense :) I always run on the full local IIS instance and have SSL running on localhost, and that works for me.
  • Authentication also needs to be considered. Often you’ll need to ensure “Windows Authentication” is enabled on the web project, so that the current user can be identified and authentication back to SharePoint can take place. If, however, you’re developing an Office 365 add-in which will authenticate to Azure AD, this isn’t the case.
  • In development mode, your app manifest is expected to have a ClientID value of “*”. Visual Studio will replace this at packaging/debug time with the correct value.
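For illustration, the relevant part of appmanifest.xml in “development mode” looks something like this – the Name, ProductID, Title and StartPage values below are placeholder assumptions, not values from this article:

```xml
<App xmlns="http://schemas.microsoft.com/sharepoint/2012/app/manifest"
     Name="MySPAddIn"
     ProductID="{a1b2c3d4-0000-0000-0000-000000000000}"
     Version="1.0.0.0"
     SharePointMinVersion="15.0.0.0">
  <Properties>
    <Title>My SP add-in</Title>
    <StartPage>~remoteAppUrl/Default.aspx?{StandardTokens}</StartPage>
  </Properties>
  <AppPrincipal>
    <!-- "*" is replaced by Visual Studio at packaging/debug time -->
    <RemoteWebApplication ClientId="*" />
  </AppPrincipal>
</App>
```

The `ClientId="*"` and `~remoteAppUrl` tokens are the two values Visual Studio substitutes for you, which is exactly what the deployment steps below have to deal with manually.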
Packaging for deployment mode:
When you’re ready to think about deploying the add-in where other people can see it (e.g. Azure, some IIS servers, or somewhere else that isn’t your local dev box), you need to change a couple of things in your files before deploying. For an end-to-end walk through of the deployment process (using Azure Web Apps as the hosting platform), see my Deploy SP2013 provider-hosted apps/Remote Event Receivers to Azure Websites (for Office 365 apps) post – it was written a while back but I’ve updated it since to account for changes in Azure and Visual Studio. But here are the key changes to make from “development mode”:
  • Deal with the add-in registration – go to AppRegNew.aspx in the SharePoint environment the add-in will be used in, and register the add-in (reminder – either see my last link or Guidelines for registering apps for SharePoint 2013 if you’re not clear on this step!) For the app/add-in ID, either generate a new GUID on the page or use the existing ClientId value in your project files – the key thing is that they match, that’s all.
    • Make a note of the ClientId and ClientSecret used/generated on the AppRegNew.aspx page – you’ll need these!
  • Ensure the AppSettings section of your web.config contains the ClientId and ClientSecret from the add-in registration (previous step) – overwrite the existing values if needed. The authentication code baked into every add-in by Visual Studio tools (TokenHelper.cs) will read the values from here.
  • Optionally, find your appmanifest.xml file and replace the “*” in the ClientId value with the add-in ID from the registration – effectively hard-coding the value in there. This step is technically optional, because the process of packaging the app in Visual Studio (next step) will actually replace the “*” with a GUID value you specify anyway, but it can make sense to not rely on this and explicitly specify the GUID in appmanifest.xml instead - for example, if you’re distributing the VS project files, or perhaps aren’t expecting to do much more local development/testing. Otherwise, we’ll deal with things in the next step.
  • Optionally, replace the “~remoteAppUrl” value in appmanifest.xml also – it’s the exact same deal /consideration as the previous point.
  • Publish both elements of the add-in (the add-in remote website and the add-in package) using the Visual Studio “Publish..” mechanism. Right-click on the add-in project and click “Publish..”, and you should see the dialog helping you publish the outputs of both VS projects:
    VS SP add-in publish dialog 1
    You’ll need to press both of these buttons :) The first one will help you publish your web project – if you want to publish to Azure easily, ensure you have the Azure SDK installed. This will allow you to “discover” and publish to Azure Web Apps (websites) within your subscription, without having to separately download the publishing profile used for the connection.

    The second button will package the app – you’ll be asked for some key details which will be baked into the package:

    VS add-in publish dialog 2

    During the packaging process, a couple of things will be baked into the appmanifest.xml in the .app file which is generated (and this is why some of the previous steps are optional):
    • The ~remoteAppUrl token will be replaced with the remote website URL you specify
    • The “*” in the ClientId value will be replaced with the value from the Client ID textbox above

      NOTE - you won’t see any changes in your source appmanifest.xml file. It’s only if you crack open the add-in (by renaming from .app to .zip) and examine the files inside that you’ll see where this has happened.
  • Take the add-in package and deploy it to the SharePoint add-in catalog for your environment (be it Office 365 or on-premises SharePoint). Drag the .app file into the catalog, and add any additional details (e.g. add-in description, screenshots etc.) to the list item for the file.
The add-in is now available to be installed to SharePoint sites.
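To recap the registration wiring, the ClientId/ClientSecret values from AppRegNew.aspx end up in the remote web application’s web.config, where TokenHelper.cs reads them. A sketch with placeholder values (the GUID and secret below are illustrative, not real):

```xml
<appSettings>
  <!-- Values generated/entered on the AppRegNew.aspx page - placeholders shown here -->
  <add key="ClientId" value="a1b2c3d4-0000-0000-0000-000000000000" />
  <add key="ClientSecret" value="your-client-secret-from-appregnew" />
</appSettings>
```

If these values don’t match the registration in the target SharePoint environment, you’ll get exactly the symptoms described at the top of this post.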

Back to my problem

Anyway, AppInv.aspx allows me to enter a client ID/app ID (which identifies an add-in/some remote code) and see the details of the registration:
When I click the “Lookup” button, the other textboxes get populated with the details of the registration, as shown above. In my case, I can now see the mistake I made – it’s the “www.” in the app domain/URL. When I consider the URL my remote site is running on, it’s actually just https://cob-[foo] without the “www.” – as you can see if you look back at the first image in this article.

So, the solution in my case is to re-register the add-in with the correct URL.
It’s not possible to modify an existing add-in registration, so you need to do a new registration with a new App ID (and update the Client ID value in your app manifest etc.) You can tidy up the old app registration by going to AppPrincipals.aspx and deleting the original app principal there.


There are a few common pitfalls around developing SharePoint add-ins, and even with a reasonable level of experience it’s easy to fall into one of these! Key things to look for include mistakes in the add-in registration, understanding the difference between “development mode” and “packaging for deployment mode”, and the need for HTTPS on the remote web application. If everything is lined up, however, your add-in should work fine.
Steve Peschka has some more good tips in this area on his blog.

In the next post, I’ll resume my series on Challenges in Office 365 development – and how to address them.

Wednesday, 17 June 2015

Implementing AD integration with Office 365 using a sub-domain (for dev/test)

As discussed in Challenges in Office 365 development – and how to address them, it’s fairly common to create multiple Office 365 tenancies to get around the fact that there’s currently no such thing as a “test environment” in Office 365. However, as the previous post discusses in detail, it’s certainly true to say that some trade-offs come with this method, generally related to environmental differences, code issues, testing, Application Lifecycle Management and so on. This article series attempts to provide some options and techniques which might be useful:

Summarizing the problem

Non-production tenants used for dev and test typically lack some of the aspects of the production environment – things like:

  • Directory integration (i.e. users sign on with their “real” Windows account, rather than a cloud-only account on the default “onmicrosoft.com” domain)
  • Synchronization of user profile data between AD and Office 365
  • Yammer SSO (or Yammer in general!)

And this leads to a long list of compromises – from not being able to test the real user experience properly, to sometimes having to write code to use a different “mode” in dev/test compared to production (perhaps because user account names work in different ways, or profile data isn’t populated).

Background – using one AD and multiple subdomains for Office 365 integration

So, we’ll look at one option for mitigating this issue – a technique which allows configuring non-production Office 365 tenancies with full directory integration and synchronization of user account data, but WITHOUT the need for a separate custom domain and Active Directory for each (and all that entails). This post is effectively a big HOW-TO article which describes the configuration process. In a team working on perhaps 15+ Office 365 implementations at any one time, you can probably see the attraction of this technique for us! Frankly, it can be challenging in the enterprise to get a test AD and domain set up which can be used with test Office 365 environments – with servers, networks, AD, security and operations all involved, quite a few people across I.T. might be needed there. So, the idea of having ONE Active Directory which could be used for directory integration with MULTIPLE Office 365 environments can be useful for lots of different teams in many different contexts.

What we're doing here
Ultimately the way we achieve this is to perform the regular “custom domain integration with Office 365” config steps, but with a couple of tweaks. Office 365 administrators and platform people will be very familiar with the standard process, and it’s a key element of hybrid SharePoint/Office 365 configuration. However, most developers will be less familiar with this stuff – but I personally think it’s hugely beneficial for any technical person working with Office 365 to have been through this process at least once.

At this point, I need to make clear that a lot of the smart thinking behind this approach actually comes from one of my colleagues at Content and Code – I’m really just being the mouth-piece here ;) Tristan Watkins is Head of Research and Innovation at Content and Code, and Tristan did the initial work of proving the approach – awesome work as usual.

What we actually do is use a sub-domain for each different Office 365 tenancy we want to integrate with the Active Directory/domain. So this could be:


Of course, you might substitute “Client” with “Project”, or anything else that you’re using different tenancies for.

Or perhaps:


..and so on.

Ultimately where we’re going with this is that we’ll have one AD which can provide accounts to multiple tenancies. Whilst it may not be appropriate for production, this can really help us with our mission to make dev/test environments more like production. User accounts in Office 365 don’t *have* to come from AD when cloud identities are used (more on this later), but it can be useful to stay close to the model that you’ll use in production and implement sync between AD and O365.

Things you’ll need

We need to be ready with each item in the list below – if not, you’ll need to purchase/obtain the following:

  • An Office 365 tenant ready and provisioned
  • The top-level domain – this needs to be something you/your organization already owns and has registered
  • Ability to manage DNS for the domain (usually at the ISP hosting the domain)
  • Ability to create user accounts in the AD you’ll use

At a high level, we go through the usual steps to add a custom domain to an Office 365 tenancy – I’m using GoDaddy to host my domain/DNS, and Office 365 offers some useful integration/simplified admin with this and some other hosters. However, with the sub-domain approach, even if we ARE using GoDaddy to host our domains, we’ll need to manually configure our DNS records rather than clicking any of the “magic buttons” to buy or configure the domain (like this one):

GoDaddy shortcut

So, just be aware that although Office 365 has numerous “simplified admin” options if you’re using GoDaddy, for the most part we’ll be ignoring these and doing some extra configuration ourselves.

HOW-TO: configure Office 365 domain integration with a sub-domain

For reference, in this walkthrough I’m using the following:

  • Office 365 domain –
  • Custom domain (top-level) –
  • Subdomain used for this tenancy – cobsp – this gives:
      - URL =
      - Usernames =

So here’s the process. Firstly we go to the Office 365 admin center, and then into the “Domains” area. Once there, we click the “Add domain” button on the “Manage domains” screen:

Add domain

We then start to step through the configuration of adding a custom domain to Office 365:


Hit the “Let’s get started” link on this screen. On the next screen (below), we should enter the subdomain/domain we are intending to use for this Office 365 tenant – as mentioned previously, we’re going to need to add DNS records there so we should be ready for that.


Once the domain is entered, hit “Next” and see the screen below. Here, you should NOT sign in to GoDaddy and have Office 365 attempt the config for you. Instead, the DNS records need to be created manually, so click the “use a TXT record” link as shown below:


This next screen gives details of the TXT record you’ll need to add at your hoster (GoDaddy in my case):


Doing this helps authenticate the fact you do indeed own and control the domain, and aren’t trying to pull some sneaky internet trick. So, we now need to go to the control panel of the ISP hosting our domain, and specifically the DNS area. At GoDaddy, it looks like this:


I use the “Add Record” link to add a TXT record with the details Office 365 gave me. Once done, I wait a while and then go back to Office 365 and confirm I’ve now done this step. If I added the TXT record as specified and DNS has propagated, I should see something like the following:


I click the “Next” button here. This will update users’ e-mail addresses from the default domain to the new custom domain in my case:


You can then optionally add some additional users (manually). If you plan to sync users from an on-premises AD using AAD Sync you probably won’t use this option, but for dev/test environments you might want to add a couple more users at this stage (assuming you have licenses for them):


Now we get to the important part – adding the main set of DNS records to support the integration with your custom domain:


Office 365 asks you which services you plan to use/integrate, so that you only need to worry about the relevant DNS records:


On the screen which follows, there’s another one of those super-handy shortcut links that unfortunately we can’t use (because we’re doing the non-standard thing of using a sub-domain) :) Skip the “Add records” easy button, and instead click the “add these records yourself” link:


The screen which follows lists all the DNS records you need to add at your hoster. You now need to cross-reference this screen with the DNS panel there – for each item listed, configure the DNS record in the admin panel. The key difference compared to the standard process is that the sub-domain is specified in each DNS record, rather than just the top-level domain:
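As an illustrative sketch of what that list typically contains for a sub-domain (the authoritative hosts and values come from the Office 365 screen – those below are placeholder assumptions, using “cobsp” as the sub-domain):

```text
Type   Host                      Points to / value
MX     cobsp                     cobsp-yourdomain-com.mail.protection.outlook.com (priority 0)
TXT    cobsp                     v=spf1 include:spf.protection.outlook.com -all
CNAME  autodiscover.cobsp        autodiscover.outlook.com
CNAME  lyncdiscover.cobsp        webdir.online.lync.com
CNAME  sip.cobsp                 sipdir.online.lync.com
```

Note how every record carries the sub-domain as part of the host, which is the key difference from the standard top-level-domain process.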


Here’s an example of adding a record in the GoDaddy DNS admin panel:



Once you’ve added all the records, go back to Office 365 and click that “Okay, I’ve added the records” link. Office 365 will now check the records you added, and if successful you should see a confirmation message as shown below:


Your custom domain is now integrated, and the status should be reflected on the “Manage domains” admin page:


Review - so what do we have now?
At this point, we now have our custom sub-domain integrated with our Office 365 tenancy. However, we don’t have user accounts actually existing in Office 365 yet – accounts can be created there in different ways, and we need to use one of them to actually get identities up into Office 365 for testing. This applies regardless of whether we’ll actually log in with these accounts, or simply use them as “passive” users. Ultimately these are cloud identities, so if these users will log in, our model is to use cloud authentication rather than the other option (federated authentication, e.g. with ADFS or similar).

By the way, if some of this is confusing and you need a good background on the different options for identities and authentication in Office 365, I’d suggest starting with User Account Management on TechNet.

Actually creating users in Office 365 – AAD Sync, manual, or bulk update

With or without a custom domain (or subdomain), users can be created in the following ways:

  • Manually in the Office 365 administration UI
  • Using PowerShell
  • Using bulk upload from a CSV file
  • Using AAD Sync
  • Using an Exchange mailbox migration

In other words, it’s worth remembering that even when you’ve integrated a custom domain with an Office 365 tenancy, users do not *have* to come from AD. However, this is definitely possible if you want to stay close to production and perhaps have synchronization occurring from AD to Office 365.
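To illustrate the PowerShell and CSV-upload options above, here’s a minimal sketch using the MSOnline module (you’d connect with Connect-MsolService first; the UPNs, names, CSV columns and license SKU below are placeholder assumptions):

```powershell
# Create a single cloud user - domain, names and license SKU are illustrative placeholders
New-MsolUser -UserPrincipalName "testuser1@cobsp.yourdomain.com" `
             -DisplayName "Test User 1" `
             -UsageLocation "GB" `
             -LicenseAssignment "tenantname:ENTERPRISEPACK"

# Bulk-create from a CSV file with UPN and DisplayName columns
Import-Csv .\users.csv | ForEach-Object {
    New-MsolUser -UserPrincipalName $_.UPN -DisplayName $_.DisplayName -UsageLocation "GB"
}
```

For a handful of dev/test users this is often quicker than setting up sync at all.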

Using AAD Sync to sync users from AD to Office 365

Implementing sync from AD with this sub-domain approach is made possible by adding an alternative UPN suffix to the AD domain (as detailed in the “Assign a UPN domain suffix” section of Configure Office 365 for SharePoint hybrid on TechNet) – this effectively allows us to match different users in the directory to different Office 365 tenancies:


Once the UPN suffix has been added in AD, it’s then possible to assign this suffix to individual user accounts – thus “matching” them to a particular Office 365 tenancy and URL domain:


I also tend to put each set of accounts in a different OU so I can easily see accounts for each tenancy.
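The UPN suffix and per-user assignment steps can also be scripted – a sketch using the Active Directory PowerShell module, where the suffix and OU path are placeholder assumptions:

```powershell
# Add the alternative UPN suffix to the forest (run once)
Get-ADForest | Set-ADForest -UPNSuffixes @{Add="cobsp.yourdomain.com"}

# Re-suffix all accounts in the OU used for this tenancy
Get-ADUser -SearchBase "OU=COBSP,DC=yourdomain,DC=com" -Filter * | ForEach-Object {
    $newUpn = $_.SamAccountName + "@cobsp.yourdomain.com"
    Set-ADUser $_ -UserPrincipalName $newUpn
}
```

Scripting this makes it easy to repeat the process per tenancy/OU as you spin up new dev/test environments.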

At this point, you could now implement AAD Sync to synchronize accounts – you would download and install/configure the AAD Sync tool, which is a step that Office 365 administrators will be familiar with:



This will provision FIM and the scheduled task for ongoing sync from AD. In FIM, you could then restrict the sync for a particular tenancy to the users you earlier put in a specific OU:


If you want to do this against multiple Office 365 tenancies, one thing to be aware of is that you’ll need multiple instances of FIM and the sync tool/task (as far as I’m aware). Since none of this dev/test sync is mission-critical, it can be managed pretty easily by deploying AAD Sync in a couple of different small VMs (i.e. one per tenancy).

At this point, you’d now have users being sync’d from your AD instance to your dev/test Office 365 tenancy. Cool!

And finally, don’t forget that if you want these test users from AD to be able to login and consume Office 365 services, you’ll need to assign them a license in the tenancy.


Implementing directory integration is required in some Office 365 contexts (e.g. hybrid, and where Yammer Enterprise is used), but is useful in many others too. If you’re using multiple Office 365 tenancies to represent non-production environments, it’s a pain when dev/test don’t match the production set-up, especially if you find yourself implementing any custom functionality around user profiles. The approach detailed here provides one option for facilitating directory integration for dev/test Office 365 environments, without some of the infrastructure requirements and headaches that would usually come with this.

This series will continue to discuss options and techniques for improving Office 365 development.

Monday, 15 June 2015

Challenges in Office 365 development - and ways to address them

Over the last 2 years, I've spent quite a lot of time thinking about "cloud-friendly" SharePoint development - approaches and techniques which will work for Office 365, but also on-premises SharePoint deployments which need to be designed with the same principles. With my team having now done at least 20 or so Office 365/SharePoint Online implementations (and counting), we’ve established certain ways of working with Office 365 in this time. A good example is creating multiple tenancies to represent dev/test/production to help with our ALM processes, since Office 365 doesn't have the concept of a test environment. Sure, on the SharePoint side (my focus here) you could just create a special site collection - but that doesn’t give you the isolation needed across any global elements. Examples include things like user profiles, taxonomy and search. And since our clients often ask for minor customizations which interact with these areas, the “test site collection” idea just doesn’t cut it for us. Instead, we go with the multiple tenancy approach, and I’ve advocated this for a while for anyone with similar needs.
As I’ve previously discussed at conferences, our choice for most clients/projects is to create separate dev and test tenancies, and it generally looks something like this:
It’s not the most important point here, but we tend to use different Office 365 plan levels for these environments - most of our clients use the “Office 365 E3” plan in production, and we’ll ensure TEST is on the same plan, but reduce costs by having DEV use “SharePoint P2” (so no Exchange, Lync or Yammer Enterprise). This generally works fine, because most of our development work centers on the SharePoint side. But regardless of what you do with plan levels, it’s also true that some trade-offs come with using multiple Office 365 tenancies (of any kind) – recently I’ve been thinking about options to mitigate this, and these can broadly be categorised as:

  • Making dev/test Office 365 tenancies more like production
  • Finding ways to test safely against the production Office 365 environment
The next few blog posts will detail some techniques which can help here. In this first post I’ll discuss the problem space - some problems I see with current Office 365 development which might lead you to consider these approaches. But in the article series I’ll be discussing things like:
  • Implementing AD integration with Office 365 using a sub-domain (for dev/test)
    • N.B. Directory integration is mandatory for true hybrid deployments or where Yammer Enterprise is used, but frankly is also useful to avoid other issues in dev/test
  • Enabling Yammer Enterprise with SSO in dev/test environments
  • Using Azure Deployment Slots to implement dev/test/production for apps hosted in Azure
For now, let’s talk about some of the issues you might hit in Office 365 development.

Challenges which come with multiple Office 365 tenancies

When we talk about dev and test environments, implementation teams always have a need to make these as similar to production as possible. The more differences, the more likely you’re going to have a problem - usually related to invalid testing or defects which only become apparent in production. Unfortunately, I notice our Office 365 projects do have certain trade-offs here. We really do want the multiple Office 365 environments for dev/test/prod (with the way Office 365 dev currently works at least), but it can be hard to make those other environments closely reflect production. Here’s a list of things which might be different:
  • Typically, directory integration is configured in production, but NOT for other environments
    • In other words, users sign in with their normal corporate identity (e.g. “user@company.com”) in production, but “user@tenant.onmicrosoft.com” accounts are used in dev/test
    • [N.B. You might know this step as “implementing Azure AD Sync”, or “implementing DirSync” to use its previous name]
  • Lack of SSO to Office 365 for users logged on to the company network (which relates to the point above)
  • Lack of a full directory of users
  • User profiles are not synchronized from AD in dev/test environments
  • Lack of Yammer Enterprise
  • Lack of Yammer SSO
  • Different license types (e.g. E3/E4 in production, but something else in dev/test)
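When these differences can’t be removed, one way to contain them in code is a small per-environment mapping rather than scattering checks through the codebase. The sketch below is purely illustrative – the tenancy suffixes and the `resolveAccount` helper are made-up names, not part of any Office 365 API – but it shows the idea of translating a production-style UPN to the equivalent dev/test account:

```javascript
// Hypothetical per-environment settings - the suffixes and flags here are
// illustrative placeholders, not real tenancy names.
const environments = {
  prod: { upnSuffix: "company.com",                 yammerSso: true  },
  test: { upnSuffix: "companytest.onmicrosoft.com", yammerSso: false },
  dev:  { upnSuffix: "companydev.onmicrosoft.com",  yammerSso: false }
};

// Translate a production-style UPN to the account used in a given environment,
// so user name/e-mail/manager lookups behave consistently everywhere.
function resolveAccount(upn, envName) {
  const env = environments[envName];
  const alias = upn.split("@")[0];
  return alias + "@" + env.upnSuffix;
}

console.log(resolveAccount("jane.doe@company.com", "test"));
// e.g. "jane.doe@companytest.onmicrosoft.com"
```

Centralising the differences this way means that when dev/test later gain proper directory integration, the mapping collapses to a no-op rather than requiring code changes throughout.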

OK, but why should I care about these things?

Depending on what you’re developing, some of these things can definitely cause problems! Some tangible examples from our experience are:
  • It’s not possible to do end-to-end testing – we can’t see the “real” user experience, especially across connected services e.g. Office 365 and Yammer
  • The experience on mobile devices is different
  • Code sometimes has to be written a different way, or use a “mapping” in dev/test - especially anything around user profiles, such as user name, e-mail or manager lookups
  • Any integrations with 3rd party/external systems might not work properly if they use the current user’s details in some way (because a different identity is used)
  • Yammer – the lack of SSO means a couple of things:
    • Any standard usage e.g. Yammer web parts or Yammer Embed won’t “just work” - a login button is displayed, and the user has to supply secondary credentials to authenticate/get the cookie
    • Any Yammer API code might need a special “mode” – because you probably have Yammer SSO in production, but not elsewhere
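To make the Yammer point concrete, here’s a sketch of what that “mode” can look like with Yammer Embed. The `yam.connect.embedFeed` call and its `config.use_sso` option are part of Yammer’s Embed API, but the `buildFeedOptions` helper, the network name and the feed ID below are illustrative placeholders:

```javascript
// Build Yammer Embed options, varying the SSO behaviour per environment.
// Helper name, network and feedId are hypothetical examples.
function buildFeedOptions(feedId, yammerSsoEnabled) {
  return {
    container: "#embedded-feed",   // element on the page to render the feed into
    network: "company.com",        // your Yammer network name (placeholder here)
    feedType: "group",
    feedId: feedId,
    config: {
      // With Yammer Enterprise + SSO configured, use_sso suppresses the
      // separate Yammer login prompt; without it, users see a login button.
      use_sso: yammerSsoEnabled
    }
  };
}

// In the browser, Yammer's platform_embed.js script defines the global "yam":
if (typeof yam !== "undefined") {
  yam.connect.embedFeed(buildFeedOptions("1234567", true));
}
```

In production you’d pass `true` for the SSO flag; in a dev/test tenancy without Yammer Enterprise you’d pass `false` and accept the login button.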

What can we do about these challenges?

So, it would be nice if we could make this situation better. Many of the issues stem from the fact that dev/test environments don’t have identity integration and AAD Sync configured, and what’s generally getting in the way there is that standing up a dedicated URL domain and Active Directory often isn’t trivial. The good news is that the sub-domain/UPN suffix approach I’ll talk about in the next post allows you to get past this – regardless of how many Office 365 environments you have, all you need is one URL domain and one Active Directory. In dev/test for us, this means ONE URL domain, registered at GoDaddy, for all our clients/projects. We simply run the on-premises AD in a small VM on developer machines.
Once the domain integration is implemented, the next step is to implement AAD Sync to actually get some users from AD to Office 365.  It will run off a set of users you choose (perhaps you would group some users for each dev/test tenancy into different OUs), and will perform the step of actually creating the users in Office 365. You could then optionally assign them a license if you want this test user to be able to login and use Office 365 functionality, and actual authentication will happen in the cloud if/when the user logs-in. If you want to implement Yammer Enterprise and Yammer SSO, you can now do that too. Directory integration is a pre-requisite for both of these things, but having solved that problem without major platform/infrastructure headaches, these possibilities open up for our dev/test environments.


So that’s a little on the problem space. Ultimately, developing for Office 365 at enterprise level does have some challenges, but many can be overcome and we can still strive for robust engineering practices. The next few blog posts will cover some of this ground – I’ll add links here as the articles get published.
Thanks for reading!

Tuesday, 19 May 2015

Presentation deck/videos – Comparing SharePoint add-ins (apps) with Office 365 apps

If you’re implementing customizations in Office 365, it’s fair to say that numerous approaches exist now, and that the service has certainly been enhanced as a development platform in recent times. However, one area of potential confusion is around how apps are developed. Developers and other technical folks became used to “the app model” over the past 2 years or so, as a way of implementing custom functionality which ran on non-SharePoint servers (perhaps Azure) and called into SharePoint using remote APIs such as CSOM or REST. But since then, Microsoft have introduced new APIs and authentication models for Office 365 – these sit above the underlying SharePoint, Exchange, Lync/Skype for Business and Yammer services in Office 365, and come with some differences in how the solution must be developed.

Notably, there are also differences in how end-users access the app, and in how administrators can make the app available (e.g. to specific users) in the first place. In all this, the original app model is not going away – SharePoint apps have now been renamed to “SharePoint add-ins”, but they are still a very valid implementation choice. So technical teams working with Office 365 often have decisions to make: should a customization be implemented as a SharePoint add-in or an Office 365 app?

For me, the key is understanding the impact on the different stakeholders – end-users especially, but also administrators and developers. The last group in particular need to understand the capabilities of Office 365 apps (e.g. what the APIs can do), to decide whether the functionality needed by the business can indeed be implemented with this approach.

My presentation

I presented on this topic at the SharePoint Evolutions 2015 Conference, and have now published the presentation deck and demo videos. The deck is shown below, and can also be downloaded from this link: SlideShare - Comparing SharePoint add-ins (apps) with Office 365 apps. There are 3 demo videos, and I’ve added captions to these and published them to YouTube – you can see these if you use the presentation below or obtain it from SlideShare. 


Hope you find it useful!

Tuesday, 12 May 2015

Info and some thoughts on Office 365 “NextGen” portals – Microsites, Boards and the Knowledge Management portal

Microsoft made some pretty big announcements at the Ignite conference I spoke at last week, which I think may change how intranets will be built in Office 365 (and maybe on-premises SharePoint too in the future?). I don’t normally try to be a “news blogger”, but for things that have a big impact on me/my team I like to make the occasional exception. Microsoft’s moves on “NextGen portals” and in particular the new Knowledge Management portal (codename “Infopedia”) are definitely in that category - there are several new CMS-like concepts here for Office 365/SharePoint. Some new “ready-to-go portals” will be made available to Office 365 users, and these come with a philosophy of “use more, build less”. As well as the KM portal, we can expect to see some kind of blogging capability soon, which may or may not be known as “stories”. These tools should work well for organizations who aren’t strongly tied to their own (often carefully-crafted!) list of requirements. If you’re already wondering about what happens when this isn’t the case, Microsoft *are* thinking about the customization strategy for NextGen portals. In the future it sounds like it will be possible to use various bits of what I’ll describe here as building blocks for custom portals – but that’s further down the line, and the “ready-to-go” offerings are likely to be around for a while first.

The impact

Immediately, a couple of things jump out at me:

  • If using the new portals, organizations will need to decide where they fit in the user experience with any existing intranet sites (just as they already do with Delve, Yammer and so on)
    • Notably, the NextGen portals come with their own look and feel. It’s not yet clear whether some level of branding can be inherited – but in the end, you’re kinda getting something for free here, so.. :)
  • The Knowledge Management portal could be seen as an “intranet-in-a-box”. There’s been a certain trend towards 3rd party SharePoint products of this type in recent years, but thinking as a client/user, I’m not sure why you wouldn’t now just use Microsoft’s. After all:
    • It’s free (i.e. part of your existing Office 365 licensing)
    • It’s got the development muscle of Microsoft behind it
    • It will be enhanced continually
    • It will be 100% supported by Microsoft
    • Whilst it might not match the precise functionality of a particular example product, you can probably get close enough to what you were trying to achieve. And it’s free ;)

Understanding how Boards, Microsites and portals relate to each other

The Knowledge Management portal is an example of a “ready-to-go” portal – just like Delve and the existing Office 365 video portal. You can immediately start creating sites and pages, but you won’t have much control over the functionality or look and feel. You get what you’re given, and if it works, great. If it doesn’t, well you don’t have to use it – ultimately it’s just another tool in the toolbox. I see it almost as a cross between a team site and a publishing site. Before we go into detail on different aspects, to help you frame things here’s what a page looks like:

Article page

The image shown above is an article page. As you’d expect, other page types exist such as a landing/front page – but implementers do not create custom page templates or page layouts.

The Knowledge Management portal isn’t the only thing to consider here though - Boards are another concept (which we see already in Delve) which can be used to present information, and Microsites provide another layer. I think of it like this:



  • Board – Some similarities to a board in Pinterest. Add links to documents and pages.
    • Good for: lightweight/informal knowledge management – a relatively small amount of information around a topic.
  • Microsite – A simple website, with a landing page and some navigation. Can contain boards, pages and documents.
    • Good for: more structured/larger amounts of information. Something more organised and perhaps with a more formal approach to contributions.
  • Knowledge Management portal – Contains microsites, SharePoint sites and boards. Adds other tools such as personalisation, simple analytics and use of Office Graph for a level of “intelligence” (e.g. content you might like).
    • Good for: a central area to sit above microsites – likely to become a big part of your intranet (maybe used as the home page, maybe not).

So, the KM portal contains microsites (amongst other containers, such as “regular” SharePoint sites), which in turn contain boards (and pages and documents):

Relationships between NextGen concepts

I’d imagine boards might be able to exist outside of a microsite too.

Boards

Boards can be used right now to collect information together – they are surfaced already in Delve, but will be used in other places such as microsites in the future. They are intended to be fairly simple to use, and have the following characteristics:

  • Boards show a collection of cards
  • Cards are currently *always* shown in date descending order (of when they were added to the Board) – there is no other control over which content is shown where
  • Anyone can create a new board
  • Anyone can add an item to a board
  • Boards are always public

For example, here’s a board I recently created here at Content and Code:

Our Development board

In the future, we can probably expect to see other hooks into boards – the image below was badged as “Ideas in the pipeline”, but I’d be surprised if something like it didn’t make it into the product:

Add to board in doc lib

Microsites

Microsites are pretty much what you’d expect – relatively simple websites, where lots of things are taken care of for you. Whilst it’s really the parent portal (e.g. Knowledge Management) that’s providing the “intranet-in-a-box” capability, some of the aspects are effectively provided by microsites:

  • More functionality and control than a board – i.e. not just cards, but pages too
  • Landing page
  • Article pages
  • Auto-generated navigation
  • Simple permissions model
  • Some social features
  • Responsive design – good mobile experience
  • Easy to create/contribute to

Here’s what the experience might look like down at the article page:

Microsite article page

The Knowledge Management portal (“InfoPedia”)

Whilst you’ll create many microsites, there is only one KM portal in the Office 365 tenant. It is effectively a container for microsites, but it’s more than that too:

  • The ability to bring in existing (or new) SharePoint sites and boards too
  • Some extra top-level constructs to help users find what is valuable – navigation to different microsites, search tools and so on
  • An enhanced article page – with some additions focused on presenting knowledge
  • Tagging – to group together content in different sites
  • Personalised recommendations via Delve – both for consuming (“see related content”) and creating (“suggested content you might want to link to from this page”)
  • Analytics
  • A great mobile experience – potentially even for light content creation

Here are some screenshots:

The landing page:

Landing page

Showing sections within a microsite:


An article page within the KM portal:

Notably, this isn’t quite the same as an article within a regular microsite – there’s a greater focus on design (with the banner image), and some automatic in-page navigation (TOC style):

KM portal article

Some of the authoring experience:

Microsite authoring

A particularly rich article page (long vertical scroll in this case):

Microsite article - long

Responsive design/adaptive design: 

Providing a great mobile experience is a fundamental pillar to the NextGen portals vision. As you’d expect, pages scale down well and adapt well to the screen size of the device:


As you can see, it’s something like a publishing site with a specific look and feel. I think it could probably be used for any intranet site which focuses somewhat on presenting information – potentially even team sites which don’t feature heavy collaboration. Like other ready-to-go portals, it’s a “railed experience” – meaning authors focus on the content, and how it gets presented to consumers is largely taken care of and not super-customisable.

Page creation/editing

Whether in the KM portal or any microsite, there is a streamlined experience to creating pages – this is great news because SharePoint was never the best or simplest CMS in this respect. Authors often found the ribbon complex, and the whole “navigate to a specific place, use the Site Actions menu, select a page layout” experience left a lot to be desired. Here are some notes I made on the new page editing experience:

  • No selection of a page template – just start authoring content
  • Simple formatting controls – bold, italics, font colour and size etc.
  • Easy to add text/images/documents/video
  • Auto-save
  • Auto table of contents style navigation for long pages
  • Simple picker for choosing images
  • Documents can be embedded in pages – this uses the Office Online rendering, just like the hover panel in search results or a document library. This means a PowerPoint deck can be browsed within the page, a Word document can be paged through and read within the page, and so on

The authoring experience:

Authoring in KM portal

How portals are implemented

There’s a whole layer of technical implementation detail to understand around all these concepts, and I look forward to digging around when they become available. Here are some of the notes I made:

  • Portals are effectively built as an extra layer above SharePoint. This layer encompasses a couple of things, including:
    • A Single Page App which supports portals and the specific things they do – this has various page controls (e.g. table of contents control, page rollup control etc.) and calls into the REST APIs underneath
    • The page rendering layer – all implemented as client-side code
  • They use site collection storage, but in a specific way – for example:
    • Pages are stored as BLOBS of JSON in the Pages library
    • Images are stored in the Assets library
    • Permissions are managed across the page and related assets (they are treated as one unit effectively)
    • Some other libraries are used e.g. a new “Settings” library
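None of this storage format is documented yet, so the fragment below is purely a guess at what one of those JSON blobs in the Pages library might look like – every property name here is invented for illustration, based only on the notes above (a single JSON structure carrying the “fields” and page content rather than separate content type columns):

```json
{
  "title": "Onboarding guide",
  "layout": "article",
  "sections": [
    { "type": "banner", "imageUrl": "/Assets/banner.jpg" },
    { "type": "text", "html": "<p>Welcome to the team.</p>" },
    { "type": "rollup", "source": "related-pages" }
  ],
  "tags": ["hr", "onboarding"]
}
```

If the real storage is anything like this, it would explain the “railed experience” – the rendering layer only needs to understand a fixed set of section types, which is much easier to evolve than arbitrary page layouts.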

Here’s an image which depicts the railed experience (of the page template):

Article - railed 1

Article - railed 2

There’s a difference in how page data is stored too. Whilst we’re used to content types with various fields storing different data types, here it looks like the data will be stored in a larger JSON structure (still within the Pages library, but perhaps “fields” will only be represented in the JSON rather than physically exist on the content type):


Other Office 365 platform services such as Azure (for Video) and Office Graph (for suggestions) are used in the implementation too.

What customization might look like

As I mentioned earlier, it feels like Microsoft are (understandably) a lap ahead in terms of the end-user functionality compared to the extensibility story. However, the kind of things that were suggested in the Ignite talks were:

  • A future ability to build custom portals using the same building blocks used in the ready-to-go portals (I’m guessing the page rendering, data storage, page controls for roll-ups etc.)
  • Potentially some form of extensibility of the ready-to-go portals
  • A framework (e.g. APIs) for re-using JavaScript and CSS used in Microsoft’s portals
  • It should be possible to host custom portals in SharePoint Online – you wouldn’t need to provide your own hosting

I was interested to hear Dan Kogan (long-time Microsoft product manager) also hint they might even look at open-sourcing the entire front-end layer of NextGen portals.


It feels to me like this is a fairly big evolution of Office 365/SharePoint as a tool for managing information. The past few SharePoint releases have seen incremental improvements to publishing sites and team sites, but they still required a reasonable amount of experience for business users to get the most out of them. By providing tools at a higher level, it seems Microsoft are making a departure here – and technical implementers will need to “understand product” well if they are to make the right decisions and provide the right guidance. This is something I’ve been telling my team, and I think it applies whether you’re a developer, IT Pro, consultant or just about any other implementer.

As I mentioned earlier, it will also be interesting to see the impact on parts of the product industry which surrounds SharePoint and Office 365. As any product company surely knows, building around a stack from a company with deep pockets like Microsoft can be risky – as we saw with Microsoft’s backing of Yammer and the corresponding impact on other companies selling social products.

But personally I’m looking forward to getting deep with the new stuff when it arrives, and the challenge of continuing to provide expert guidance to clients as changes in the landscape continue to accelerate.

Sources (talks at the Ignite conference):