Tuesday, 3 March 2020

AI in Office 365 apps – choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate: Part 2

In the last article we talked about some examples of using AI in Office 365, and looked in detail at the idea of building an incident reporting app which combines common Office 365 building blocks with AI. Whilst Power Apps and SharePoint underpin our solution, we use AI to triage the incident by understanding what is happening in the image. Is this a serious emergency? Are there casualties or emergency services involved? Our image processing AI can interpret the picture, and add tags and a description to where the file is stored in Office 365 - this can drive automated actions, such as alerting particular teams or having a human review the incident. We also looked at the results of feeding a series of images to the AI across different scenarios.

This article is part two of a series:

  1. AI in Office 365 apps - a scenario, some AI examples and a sample Power App 
  2. AI in Office 365 apps - choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate (this article)
  3. AI in Office 365 apps - pricing and conclusions
In this article, we'll focus on the how - in particular, comparing three approaches we could use in Office 365 to build our incident reporting app and make use of AI. Which is easier? What skills are required? How long might we expect it to take for each approach?

Choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate

As mentioned in the last article, there are a number of ways we could build this app:
  • Use of Power Apps AI Builder
  • A Power App which talks directly to Azure Cognitive Services (via a Custom Connector)
  • A Power App which uses a Power Automate Flow to consume AI services
For each approach, we’re looking at how the app would be built, pricing, and any constraints and considerations which come with this option.

Option 1 - Power Apps AI Builder

AI Builder is still in preview at the time of writing (February 2020 - release is expected in April 2020), with four models offered: prediction, text classification, form processing and object detection.

As you might expect, the idea is to make AI relatively easy to use and to focus on common scenarios. No coding is required, and anyone with Power Apps experience can now take advantage of what would previously have been highly complex application capabilities.
How the app would be built
In our scenario, it’s the “Object Detection” model that is relevant. This can detect specific objects in images that are supplied to it, as well as count the number of times the recognized object is found in the image. The first step is to define a model and supply some sample images:



You'll need an entity pre-defined in CDS to use AI Builder - I'm skipping through a few screens here, but ultimately you select a CDS entity representing the object you are detecting:


In my case, I was building an app related to transport and my entity is "Bus". The next step is to start to train the AI model by providing some images containing the entity:



We then tag the object in each image with the entity:



Once all this is done, you can use the Object Detector control in your Power App and configure it to use this model:



Since the whole topic of how to build apps with AI Builder is interesting, I'll most likely go through this process in more detail in a future article - but hopefully you get a feel for what the process looks like.

In the case of our scenario, we said that we wanted the images to be tagged in SharePoint - and here's where we run into a consideration with AI Builder:

Power Apps AI Builder - great for SOME forms of image detection

The Object Detector capability allows us to detect whether a certain object is in an image or not, and how many times it appears. However, our scenario demanded the capability to recognize *what* was happening in an image, not simply whether a predefined object is present or not! And that's all AI Builder provides - a percentage certainty of whether your chosen object is present. This is much less flexible than other forms of AI image processing, and we'd need to somehow supplement this to achieve the goals of our application. After all, we can't provide an AI model with every known object in the universe...

Option 2 - Azure Cognitive Services

Another way of bringing AI into an app is to plug directly into Azure Cognitive Services. As you might expect, this is a developer-centric approach which is more low-level - we're not in the Power Platform or another low-code framework here. The big advantage is that there's a wider array of capabilities to use - compared to the other approaches discussed here, we're not restricted to whatever Microsoft have integrated into the Power Platform. The high-level areas of Cognitive Services currently extend to:
  • Decision - detect anomalies, do content moderation etc.
  • Language - services such as LUIS, text analytics (e.g. sentiment analysis, extract key phrases and entities), translation between 60+ languages
  • Speech - convert between text and speech (both directions), real-time speech translation from audio, speaker recognition etc.
  • Vision - Computer Vision (e.g. tag and describe images, recognize objects, celebrities, landmarks, brands, perform OCR, generate thumbnails etc.), form data extraction, ink/handwriting processing, video indexing, face recognition and more
    • NOTE - this is the service that's relevant to the scenario in this article (in particular, the Computer Vision API's ability to tag and describe images)
  • Web search - Bing autosuggest, Bing entity/image/news/visual/video search and more
In terms of what this looks like for our scenario, let's take the following image:


How the app would be built
If I can write some code to consume the Computer Vision API and send the above image to it, I get a response that looks like this (notice the tags such as "person", "indoor", "ceiling", "event", "crowd" and so on):

The code to do this is a straightforward call to the Computer Vision REST API. A minimal sketch of that kind of call (in TypeScript here, with placeholder endpoint and key values - not my exact code) looks like this:
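const endpoint = "https://<your-resource>.cognitiveservices.azure.com"; // placeholder - your Cognitive Services endpoint
const apiKey = "<your-computer-vision-key>";                            // placeholder - your key

// Ask Computer Vision for tags and a description of an image at a given URL
async function analyzeImage(imageUrl: string): Promise<void> {
  const response = await fetch(`${endpoint}/vision/v2.0/analyze?visualFeatures=Tags,Description`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": apiKey,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ url: imageUrl })
  });
  const result = await response.json();

  // The response includes a generated caption and an array of { name, confidence } tags
  console.log(result.description.captions[0].text);
  for (const tag of result.tags) {
    console.log(`${tag.name} (${tag.confidence})`);
  }
}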

The relationship between Azure Cognitive Services and other options

Whilst we're talking about Cognitive Services, it's worth recognising of course that all of the options listed in this article use these services underneath. Power Apps AI Builder, the Power Automate actions discussed here, and many other facilities in Microsoft cloud technologies are all built on Azure Cognitive Services. When you're thinking about technology options, it's worth considering that the more direct your approach to Azure is, the cheaper it is likely to be.

Option 3 - Using Power Automate (Flow) to consume AI

The final option presented here is to create a Flow which will do the work of tagging and describing the incident report images. I think this is by far the easiest way, and perhaps an overlooked approach for building AI into your apps - I recommend it highly; Power Automate is your friend. Note, however, that these are premium Flow actions - we'll cover licensing and pricing more in the next post, but for now understand that bringing in AI capabilities this way does incur additional cost (as it does with the other two approaches).

In the scenario of our Power App for incident reporting, the simplest implementation is probably this:
  1. Power App uploads image to SharePoint document library
  2. Flow runs using the "SharePoint - when a file is created in a folder" trigger
  3. The Flow calls Azure Cognitive Services (using the native Flow actions for this)
  4. Once the tags and image descriptions have been obtained, they are written back to the file in SharePoint as metadata
The beauty is really in step 3. Since Microsoft provide hooks into many AI services as Flow actions, infusing AI into your app this way is extremely simple - no code is required, and it's just a question of lining up some actions in the Flow. Some skills and experience in Power Automate are needed, but the bar is certainly much lower than with the other options.

Here's what my end-to-end Flow looks like:

In more specific terms, the trigger is on a new file being created in the SharePoint site and list where my Power App pushes the image:

For each file that has been added, I get the file and then call:
  • Describe Image Content 
  • Tag Image
The Flow section below shows how we get the details of the file added to SharePoint to be able to pass the contents to the Azure actions for processing:
On the Power App side, of course, I need something to use the camera on the device to take the photo and upload the file to SharePoint - but that's not too complex, and it's just a question of adding the Power Apps camera control to my app to facilitate part of that. A major capability of Power Apps is being able to plug into device facilities such as the camera, GPS and display functions, so it should be no surprise that this part is simple. If you remember, I showed my sample app briefly in the last post:



However, I do need to do some work to get my image into SharePoint once it has been captured - in my case I use the integration between Power Apps and Power Automate to do this. I create a Flow which uses the Power Apps trigger and ultimately uses the SharePoint "Create File" action. The important part though, is the Compose action in the middle which uses the "DataUriToBinary" function to translate the image data from how Power Apps captures it to how SharePoint needs it:
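The expression inside that Compose action is along these lines (a sketch - the exact parameter name is an assumption, since it depends on what you called the 'Ask in Power Apps' input in your Flow):

dataUriToBinary(triggerBody()['CreateFile_FileContent'])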

I then link the Flow to the Power App:

I can then use this in my formulas, as:

UpdateContext( { PictureFilename: "Incident_" & Text( Now(), "[$-en-US]yyyy-mm-dd-hh-mm-ss" ) & ".jpg" } );

IncidentPhotoProcessing.Run(PictureFilename, First(Photos).Url);
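For completeness, the Photos collection referenced above is populated by the camera control - typically via something like this on the control's OnSelect (a sketch; 'Camera1' is an assumed control name):

Collect( Photos, Camera1.Photo );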

...and there we go, a fairly quick and easy way to get the photo for my incident into SharePoint so that the AI can do its processing.

Summary


In this post we've looked at three possible approaches to building an Office 365 application which uses AI - Power Apps AI Builder, use of Azure Cognitive Services from code, and use of actions in Power Automate which relate to AI. The findings can be summarized as:
  • Different skills are needed for each approach:-
    • Power Automate is the simplest to use because it provides actions which plug into AI easily - just build a Flow which can receive the image, and then use the Computer Vision actions shown above
    • Direct use of Azure Cognitive Services APIs requires coding skills (either use provided SDKs for .NET and JavaScript etc. or make your own REST requests to the Azure endpoints), but is a powerful approach since the full set of Microsoft AI capabilities are exposed
  • Capabilities are different across the options:-
    • Power Apps AI Builder has some constraints regarding our image processing scenario. The "object detection" model is great for identifying if a known object is present in the image, but can't help with identifying any arbitrary objects or concepts in the image
    •  Azure Cognitive Services underpins all of the AI capabilities in Office 365, and offers many services not exposed in Power Apps or Power Automate. As a result, it offers the most flexibility and power, at the cost of more effort and different skills to implement
  • Requirements and context are important:-
    • In our scenario we're talking about a Power App which captures incident data and stores it in SharePoint - in other words, we've already specified that the front-end should be Power Apps. In this context, integrating Azure Cognitive Services directly would be a bit more challenging than the other two approaches (but is relatively simple from a coded application). In Power Apps, we'd need a custom connector to bring the code in (probably in the form of an Azure Function), and that's certainly more complex than staying purely in the Power Platform 
    • In the design process, other requirements could lead to one technical approach being much more appropriate than another. As another example, if the app needed a rich user interface that was difficult to build in Power Apps, the front-end may well be custom code. At this point, there's an argument for saying that using code for the content services back-end of the application makes sense too
As usual, having a team or partner who understands this landscape well and has capability across all options can lead to the best result. In this case, all options can be on the table rather than being limited to one or two because of skills. The architecture decision can be merit-based, considering the use cases and scenarios for the app, the AI capabilities needed, the user experience, the range of devices used, speed to implement and cost.

And cost, of course, is the element that we haven't covered yet! Let's discuss that in the next post:

Next article - AI in Office 365 apps - pricing of different options and conclusions (link coming in a few days!)


Friday, 28 February 2020

Infuse AI into your Office 365 apps – three approaches and pricing: Part 1

We’ve all heard how Microsoft are on a mission to democratize AI and bring it to the masses. The last Ignite conference (fall 2019) continued to bang this drum heavily, with several keynote demos featuring AI in interesting scenarios. In fact, Microsoft’s AI democratization journey started back in early 2015 with Project Oxford, a set of APIs which developers could use to recognize faces, perform speech-to-text processing, categorize images and more. Looking back, I remember presenting at Microsoft's Tech Ed 2014 conference showing an extension I created for Word which would use non-Microsoft AI to find similar documents based on the content - so AI has been around for some time. Also making its debut around this time was LUIS (Language Understanding Intelligent Service), the Microsoft service that a bot developer would typically use to infer intent from some words. All this was great for developers, but true democratization would mean lowering the barrier to entry much further than that.

Fast forward to early 2020, and I think we’re in a different situation altogether. Microsoft’s AI capabilities have moved into well-defined Azure offerings as part of Azure Cognitive Services, and there will have been significant investment in performance, scaling, reliability and accuracy as individual APIs transition into service across the Azure datacenter infrastructure. Along with AWS, Google, IBM and other major cloud providers, Microsoft are out to land-grab as much of the global workload as possible, in order to recoup their infrastructure investments and make their margins work. Competition is fierce, and even if your organization isn't using AI significantly at the moment, it's you they are targeting. 

Alongside this, the Power Platform has emerged as Microsoft’s model for business applications which can be built without professional developer skills. So how easy is it now for an organization using Microsoft cloud tech to use AI, and what profile of person will be able to build such an app? This series of articles looks at several approaches, and also analyzes the pricing to consume AI in each case. Does easier to use AI come with an added cost? Which option is best if the organization *does* have developers or a partner?

This article is part one of a series:

  1. AI in Office 365 apps - a scenario, some AI examples and a sample Power App (this article)
  2. AI in Office 365 apps - choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate
  3. AI in Office 365 apps - pricing and conclusions
As we think about different approaches, a scenario is useful to base our analysis on - so let's think about that first.

The scenario

Some form of incident or situation reporting is a requirement across many sectors, ranging from monitoring hazards on a construction site, health and safety monitoring in a hospital, or even a store manager submitting evidence of merchandising to head office. So let’s say we’re building an incident reporting app which mobile workers will use in the field on their phones - I use this example frequently with clients, as it combines several ingredients for intelligent data capture apps. We’ve decided that the app itself will be a Power App to avoid costly native app development and for easy distribution to mobile devices, and that the photo and details of the image will be stored in SharePoint Online. AI will be used to detect what’s happening in the image – specifically, we’ll add some metadata to the image in the form of a description and keywords. The keywords will describe objects detected in the image and the overall setting. This is useful, because it could be used to automatically categorize the incident and/or alert different teams – using the health and safety example, if the keywords contained “casualty”, “injury” or “blood”, an alert could be raised immediately to a certain team. Other processing of the incident could also be built in, depending on what other rules or workflows might be appropriate.

There are a number of ways we could build this app:
  • Use of Power Apps AI Builder
  • A Power App which talks directly to Azure Cognitive Services (via a Custom Connector)
  • A Power App which uses a Power Automate Flow to consume AI services
For each approach, we’ll look at how the app would be built, pricing, and any constraints and considerations which come with this option.

AI image processing - looking at examples

Before we go on to look at implementation approaches and costs, exactly how can AI help in an application like this? Whilst the range of AI tools includes text-to-speech (and vice versa), language translation, pattern matching, data extraction, text processing and various data science related tools, in our scenario it's a form of image processing that is most relevant. Potential benefits we can unlock here include:
  • Images (and the associated incidents in our case) become searchable when we have some textual data for them. Unless some interpretation has been performed, any search capability (including Office 365) is unable to determine what the various pixels and colours represent
  • Images/incidents can be categorised once the application knows what they relate to
  • Some automated 'triage' is possible once we've turned the image into information. Using the example described earlier, if the AI does identify concepts such as "casualty" or "injury" our system would take specific action - even if the process was simply to route these incidents for urgent human processing and/or we accepted that some would be false positives, there could be huge benefits across a busy system
  • AI can easily process large amounts of historic data. So if I already have a repository of existing files where I want to perform some automated image processing (or in the case of other files, pattern identification/translation/text to speech generation etc.), I can do that easily even if I didn't have the capabilities back when the app was first introduced
Overall, there's an almost unlimited array of possibilities here.

Anyway, back to the image processing. Here are some examples - using a couple of images captured with a test Power App I created to perform the above functions, and one from the internet. For each one, consider the image and what the AI detected:

I think there are some pretty amazing results there. Consider some of the objects and concepts being recognized - public transport, subway, conference room, emergency services, police - that's quite a lot of intelligence to be able to successfully detect those contexts, and to have this power available to you within easy reach opens the door to lots of innovative solutions. How could you use it in your organization?

A Power App for incident reporting

So that's what the back-end can do. But on the front-end, our scenario would need a means of capturing the image and reporting the incident. Here's a quick Power App I created to do this - it uses the camera control with Power Apps and allows me to plug in any of the three architectures we're looking at in this series:

In the maker view, it looks like this (featuring me hard at work, because the front-facing camera on my Surface Pro is activated):

Summary

So that lays the groundwork for this series. AI is one of those topics that gets a lot of coverage, but I see lots of organizations struggling to make practical use of it. In these articles my aim is to show some approachable methods which can add real value to a common business scenario, and between the options there are several variables to consider - capabilities, cost profiles and the skillset required to implement, to name a few.

Having a good understanding of the options provided to you in the Microsoft stack can help you bring real innovation to your organization or clients. In the next article we'll go through our 3 implementation options in close detail, so you can see what's needed to tap into AI in these ways.

Next article - AI in Office 365 apps - choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate

Friday, 31 January 2020

SharePoint Conference 2020, Las Vegas - my thoughts (and session details)

I’m excited to be speaking again at this year’s official SharePoint Conference, held by Microsoft in Las Vegas in May. SharePoint is still a huge focal point of Office 365, but as with other SharePoint events, whilst “SharePoint” is in the name, the truth is that the content extends to most of Office 365 and many aspects of Azure too. I think a few reasons combine to make this an *extremely* interesting time to be talking about SharePoint in the context of other elements of Office 365. For most of us of course, this is against a backdrop of how to provide the best tools to an organisation and what a digital workplace should look like in these times, and specific areas for me include:

  • The relationship between Teams and SharePoint, and how the two can be combined to provide amazing experiences
  • Project Cortex, the forthcoming toolset that we can expect to be a step-change to how knowledge is generated, discovered, managed and evolved within an organisation
  • How to exploit the Power Platform whilst staying in control of how data is used, and being able to provide effective support for end-user developed apps
  • How to provide a world-class modern workplace, based on an approach that is more than just technology. What specifically are the practices that work in achieving business change and great adoption of the tools, and allow the business to hit the objectives of the program?
  • The forthcoming “new Yammer”, with the move towards community-oriented groups, a different feature set and interesting mobile capabilities
  • Changes in the development landscape, including greater capabilities in the SPFx platform and the broadening out into other areas of Microsoft 365 development. The ability to think beyond Teams and SharePoint, and to understand what kind of experiences can be provided across Office apps is a huge opportunity for most organisations
  • The need for a considered, appropriate security posture
  • Moving forward with AI, so that it becomes something that is weaved into your applications rather than just something discussed aspirationally. High-end data science tools in Azure are one thing, but what about the easier to achieve possibilities across SharePoint, Power Apps and Power Automate? What are they, and how can they be used without developer skills?
I keep saying that we’re in a “what a time to be alive!” period with Microsoft technologies. At an event like this, being able to hear about key developments from Microsoft execs and program managers, as well as some of the best practitioners in the field (and me!), is a great way to accelerate learning and position yourself to drive your organisation or clients forward.

Speakers and sessions

The conference will host over 200 sessions and 20 workshops, with 100+ exhibitors on the show floor. Speakers from Microsoft include Jeff Teper, Jared Spataro, Dan Holme, Bill Baer, Mark Kashman, Vesa Juvonen, Naomi Moneypenny, Murali Sitaram, Navjot Virk, Karuana Gatimu and many more. There’s a long list of very talented speakers from the industry too, including Andrew Connell, Susan Hanley, Benjamin Niaulin, Eric Shupps, Erwin van Hunen, Paolo Pialorsi, Sebastien Levert, Vlad Catrinescu, John White and more.

The conversations will be great, and I know the people above are always willing to talk in corridors and around the conference.

My session

I’ll be delivering my popular “Office 365 dev hitlist” session – here’s the blurb:

Top Office 365 Development techniques to master - the hitlist:
Things move fast in Office 365 development, and as Microsoft evolve the platform and APIs, new techniques and approaches become available all the time. As the head of a talented dev team, I regularly update my list of techniques that I believe are essential to have good capability building Office 365 solutions. Between SPFx, the Graph, Teams development, coding in Azure Functions and building solutions in PowerApps and Flow, let's walk through high-value scenarios in mid-2020 that should be in every experienced coder's toolbox, with a demo and code sample for each. This session will help you close any skills gaps, and should be a great conversation with some bright minds in the room.

Use code “OBRIEN” for a $50 discount

As usual with this event, if you sign up and use my surname as the discount code, you’ll get $50 off the ticket price - and of course, the organizers get to know which speakers attendees are interested in. Since this is Las Vegas we’re talking about, I’ll be amazed if you can’t find a good use for that $50 😉

Even simpler than typing my name into the box on the form is to use this link which will do it for you http://obrien.spc.ms. Clicking on the image below will take you there too:

More details on the conference

The conference website is at https://sharepointna.com, and has all you need to know about the event, location, pricing, hotels and more. You can also tap into "SharePointTV", which has some great content streamed most Wednesdays going forward.

Hopefully see you at the event!

Wednesday, 8 January 2020

Improving Power Apps governance and analytics

Some of the work I’ve been doing (alongside some talented colleagues) recently is around improving governance of Power Platform use within an organization – in particular Power Apps, since a lot of the risk tends to be centred there. It’s becoming increasingly common for the Power Platform to be widely adopted within a company (or at least, for adoption to be growing), but a whole new set of problems becomes apparent in terms of exactly who is doing what with which apps. In some ways, ungoverned use of Power Apps and Power Automate can become a free-for-all in an organization – and this can cause serious operational problems if a critical app created by someone in the business has problems, or that person changes role or leaves the company. In many cases, people 'out there in the business' can create apps which become critical and users expect I.T. to provide support – but I.T. have a blind spot to what is happening in this area and don’t know the first thing about the application.

Common questions are:

  • How do I.T. get on top of this? How do we become aware of which apps exist already, and which are being created?
  • How do we discover whether apps are connecting to Azure, SQL, SharePoint, or perhaps even SAP, Workday, ServiceNow or ungoverned cloud services such as Dropbox?
  • Which accounts are used for connections? Are they appropriate?
  • Are we protected if that person who made the app leaves the organisation or moves to another role?
We’ve done some work around this with some organizations, partly based on the Power Apps COE starter kit. However..

The Power Apps COE starter kit is not a turnkey solution

Whilst the COE starter kit is a great baseline, as the name suggests this isn’t a turnkey solution. For one thing, unfortunately it’s based on CDS, which immediately means that every maker of a significant app in your organization may need additional licensing just to use the governance solution! This seemed crazy to us, since many makers within our clients are using Power Apps extensively but focusing on apps which only talk to Office 365 sources – and so these users would not otherwise need a ‘per app’ or ‘per user’ plan license.

So, we decided to create a fork of the COE kit which is re-engineered to store data in Office 365. This side-steps the licensing issue and provides some additional benefits over the baseline, and is the solution we’re using with our clients.

A peek at what the solution provides

The solution provides quite a few things around analytics, doing a lot of data collection in the background (including app launches from the Office 365 audit logs – another area we had to tweak) to provide a Power BI dashboard which provides some very useful insights. Here's just ONE screen from it, but the tabs across the bottom give you an idea of what else is in there:

The governance framework also introduces the idea of ‘compliant’ and ‘non-compliant’ Power Apps to your organization. This consists of a few things, including requests to app makers to provide a mitigation plan for their app. Exactly what should happen if the app becomes unusable or an update breaks something? Since I.T. aren’t necessarily in a good place to provide SLA support, having these things in place can de-risk the situation significantly across the enterprise. Administrators get to see a traffic light rating of each app, as well as building a good understanding of each app - including its connections and data sources, the environment used, the usage patterns, the maker(s) and users and so on.

We also do a lot more to facilitate a well-functioning Power Apps maker community – including creating a Power Apps Knowledge Center site with some policy content we wrote, and, using the data we collected about makers, creating a Yammer group or Microsoft team where all the top makers are invited and introduced to each other. From there, they have a place to share experiences, ask for help and so on.

A podcast interview about this

If you want to hear more, I was interviewed recently by Jeremy Thake for the Microsoft 365 Developer Podcast on this. In fact, we only decided that this would be the topic at the last minute, but I think it came out OK! If the embedded version below doesn’t work, the direct link is:

https://www.podbean.com/eu/pb-s8iq2-cab99d

Tuesday, 26 November 2019

My sessions at ESPC19

The 2019 edition of the European SharePoint, Office 365 and Azure Conference takes place in Prague next week, and I'm looking forward to delivering talks there. As usual, the event is easily the most important in the Microsoft cloud space on this side of the world, and there'll be great representation from Microsoft starting with keynotes from Jeff Teper, Dan Holme, Scott Hanselman, Alex Simons, Vesa Juvonen and others. In addition, other Microsoft leaders and Program Managers will deliver sessions on Project Cortex, the Microsoft Graph, Content Services, Teams, Yammer and much more - the Microsoft at ESPC19 page has a long list of big name speakers making the journey from Redmond! Over 2500 attendees are expected, and the conference should be a great mix of sessions and networking with many experts, partners and vendors in the Microsoft cloud space.

My sessions

I'm delivering the following two talks:

Using Azure Dev Ops for development or support teams (Thursday 05 December, 14:00)

In many organisations, the workload of a technical team is still managed in SharePoint lists, Excel, Planner or Microsoft Project. However, Azure Dev Ops Boards offer a much better, more visual way for many groups – and it isn’t just for highly-technical dev teams on agile projects. Various types of backlogs and boards exist – these include a Kanban board to suit a team working on a regular pipeline of requests (e.g. a support or help desk team) and a scrum board to support agile development projects (e.g. using sprints and a sprint planning process). There’s something in there for most types of teams and work.

This session will start at the beginning by showing the different types of boards and backlog, and move on to customising things to really suit how your team works. Over the course of several demos we’ll cover various hints and tips, and also consider how to take things to the next level with automated builds (with DevOps Pipelines) that tie into work item tracking.


Top Office 365 Development Techniques to Master - The Hitlist (Tuesday 03 December, 11:45)

Things move fast in Office 365 development, and as Microsoft evolve the platform and APIs, new techniques and approaches become available all the time. As the head of a talented dev team, Chris regularly updates his list of techniques that he believes are essential to have good capability building Office 365 solutions. Between SPFx, the Graph, Teams development, coding in Azure Functions and building solutions in PowerApps and Flow, let’s walk through high-value scenarios at the end of 2019 that should be in every experienced coder’s toolbox, with a demo and code sample for each. This session will help you close any skills gaps and should be a great conversation with some bright minds in the room.

There's still time to register!

The conference will be a great opportunity to get the latest on Microsoft cloud strategy, roadmaps and tips from the field. If you're there please come and say hello, and if not you can still register at the following link!

https://www.sharepointeurope.com/pricing/

Sunday, 17 November 2019

My Ignite 2019 announcement slides (plus selected roadmaps)

I recently created some "one page summary" slides of key announcements from Microsoft's recent Ignite 2019 conference and published them to Twitter - but I thought it could also be useful to compile them into one post here. Feel free to use them if they're useful to you - of course, they're my personal take on what I think is important and you might have different priorities, but given the firehose of announcements I always think consolidated summaries can help. Certainly I find the process of parsing the announcements to create the slides helps me to frame Microsoft's overall positioning and progress.

In this post I'm providing three things:
  • My one page summary images
  • Some assorted Microsoft roadmap slides I collected (images)
  • A PowerPoint deck containing all of the above (hosted on SlideShare)

My Ignite summaries  


I collated announcements for Teams, Power Platform, SharePoint and Azure:






Roadmap slides

These are some selected roadmap slides I collected for topics relevant to me:







SlideShare deck

You can download the original slides for all of the above in one PowerPoint deck below:

Tuesday, 15 October 2019

Quick tip - search across sites under a SharePoint hub site

SharePoint Hub Sites are an important aspect of building a modern digital workplace on Office 365. They provide a way of relating sites to each other, and since a modern SharePoint intranet is built from a flat list of sibling site collections, such mechanisms are useful to impose a logical structure and coherent user experience. Of course, the idea is that this model deals very well with organizational change, since the association between a site and a hub can be easily updated - if restructuring becomes necessary, it's trivial to update a site to belong to a new hub (something which could not be said of the subsite model heavily used in earlier SharePoint implementations).

As a reminder, hub sites provide the following:

  • Shared navigation
  • Shared branding 
  • Scoped search (i.e. the ability to search across all sites associated with the hub)
  • A 'top of the tree' destination for the hub

Search and hub sites

There are a few scenarios you might be interested in when it comes to search and hub sites. Everybody's favorite technical SharePoint search person Mikael Svenson (now a part of the search team at Microsoft) has some useful samples at Working with Hub Sites and the search API, including how to use search to list out all your hub sites and how to filter to return only those with at least one associated site.

However, one thing that's not mentioned there is the fact that you can perform scoped searches for items within all sites associated with a hub. For example, if I have a set of project sites that live under a projects hub, perhaps I'd want to show search results (or provide a custom search page) which restricts results to pages and documents within this project site 'structure'.

This works because Office 365 does something useful here, in that each piece of content that is indexed picks up a tag to indicate which hub it is associated with - in other words, this is propagated down to the item level in SharePoint. The Managed Property 'DepartmentId' is used for this - it contains the ID of the parent hub site.

This is shown in the search query tool below:

This makes sense when you think about it, since Microsoft provide a search across all sites belonging to a hub when you use the search bar in a Hub:


...and the DepartmentId managed property is the mechanism used there.

Sample search queries

Some examples of how you might use this (once you know the ID of the hub you want to scope searches to) are as follows. NOTE:- in these examples, the GUID specified is the Site ID of the Hub Site in my tenant I want to use. You'd need to drop the appropriate value for your scenario in there:

Show me all pages or documents from any site under this hub:

IsDocument:True AND (DepartmentId:{545621ea-2334-45c2-903b-3b9b93be38ee} OR DepartmentId:545621ea-2334-45c2-903b-3b9b93be38ee)

Show me news from any site under this hub:

IsDocument:True AND PromotedState:2 AND (DepartmentId:{545621ea-2334-45c2-903b-3b9b93be38ee} OR DepartmentId:545621ea-2334-45c2-903b-3b9b93be38ee)

It seems that sometimes the value can contain braces and sometimes it may not (given how Microsoft themselves search using this property) - so, it may be safer to provide both formats as shown in the queries above.
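As a sidenote, if you wanted to issue one of these queries from code (say, within an SPFx web part), a quick sketch with PnPJS might look like this - the GUID is the same sample hub site ID used above:

import { sp } from "@pnp/sp";

// Search for documents under the hub, using both DepartmentId formats for safety
async function getHubDocuments(): Promise<void> {
  const results = await sp.search({
    Querytext: "IsDocument:True AND (DepartmentId:{545621ea-2334-45c2-903b-3b9b93be38ee} OR DepartmentId:545621ea-2334-45c2-903b-3b9b93be38ee)",
    RowLimit: 50
  });
  console.log(results.PrimarySearchResults);
}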

Search queries used by Microsoft in hub searches

Let's take a closer look at the search terms/clauses used by Microsoft in their native functionality for searching across a hub. Taking a look in browser dev tools shows queries like those shown below (with URLs and IDs from my dev environment shown here).

Full query ("All" tab):

QueryModification: "(marketing) (-ContentClass:ExternalLink AND -FileExtension:vtt AND -Title:OneNote_DeletedPages AND -Title:OneNote_RecycleBin AND -SecondaryFileExtension:onetoc2 AND -ContentClass:STS_List_544 AND -ContentClass:STS_ListItem_544 AND -WebTemplate:SPSPERS AND NOT (ContentClass:STS_Site AND SiteTemplateId:21) AND NOT (ContentClass:STS_Site AND SiteTemplateId:22) AND NOT (ContentClass:STS_List_DocumentLibrary AND SiteTemplateId:21) AND NOT (ContentClass:STS_List_DocumentLibrary AND Author:"system account") AND NOT IndexDocId=17592721738413) AND NOT Path:"https://chrisobriensp-my.sharepoint.com/personal/cob_chrisobrien_com/" AND NOT Path:"https://chrisobriensp-my.sharepoint-df.com/personal/cob_chrisobrien_com/" AND (DepartmentId:{545621ea-2334-45c2-903b-3b9b93be38ee} OR DepartmentId:545621ea-2334-45c2-903b-3b9b93be38ee) -ContentClass=urn:content-class:SPSPeople"

 Full query ("Files" tab):

QueryModification: "(marketing) (DepartmentId:{545621ea-2334-45c2-903b-3b9b93be38ee} OR DepartmentId:545621ea-2334-45c2-903b-3b9b93be38ee) AND (NOT (Title:OneNote_DeletedPages OR Title:OneNote_RecycleBin) AND NOT SecondaryFileExtension:onetoc2 AND NOT FileExtension:vtt AND NOT ContentClass:ExternalLink AND NOT (ContentClass:STS_List_DocumentLibrary AND SiteTemplateId:21) AND NOT ((filetype:aspx OR filetype:htm OR filetype:html OR filetype:mhtml)) AND isDocument:1 OR ((ContentTypeId:0x010100F3754F12A9B6490D9622A01FE9D8F012 OR ContentTypeId:0x0120D520A808*) OR (SecondaryFileExtension:wmv OR SecondaryFileExtension:avi OR SecondaryFileExtension:mpg OR SecondaryFileExtension:asf OR SecondaryFileExtension:mp4 OR SecondaryFileExtension:ogg OR SecondaryFileExtension:ogv OR SecondaryFileExtension:webm OR SecondaryFileExtension:mov)) OR (FileType:ai OR FileType:bmp OR FileType:dib OR FileType:eps OR FileType:gif OR FileType:ico OR FileType:jpeg OR FileType:jpg OR FileType:odg OR FileType:png OR FileType:rle OR FileType:svg OR FileType:tiff OR FileType:webp OR FileType:wmf OR FileType:wpd) OR (ContentTypeId:0x012000*)) AND NOT (ContentTypeId:0x0101009D1CB255DA76424F860D91F20E6C4118* AND PromotedState:2) AND NOT Path:"https://chrisobriensp-my.sharepoint.com/personal/cob_chrisobrien_com/" AND NOT Path:"https://chrisobriensp-my.sharepoint-df.com/personal/cob_chrisobrien_com/" -ContentClass=urn:content-class:SPSPeople"

As you can see, that's a lot of exclusions needed to filter the results down to just the useful ones!

Summary

Hub sites provide a useful construct in Office 365/SharePoint Online, and understanding how to search across all sites within a hub can be a valuable technique when building solutions. The key is to take advantage of the fact that Office 365 puts the parent hub site ID into the 'DepartmentId' managed property on each item in the search index. Knowing that, you can build all sorts of custom solutions around hub sites!

Tuesday, 10 September 2019

How does it affect my tenant? Questions to ask for Office 365/SharePoint solutions

I’m doing some Office 365 architecture work at the moment to ensure our clients have a clear view of our solutions and how they work. Any organisation working with a partner or deploying products based on Office 365 needs to ask “how does it affect my tenant?” – but of course, that’s just the high-level question. Underneath are a whole series of security, compliance, operational, cost and other governance considerations that should be explored for robust long-term successful use of Office 365. However, I find that many organisations don’t yet know how to ask the right questions or dig into the right areas – and this can lead to risks of compliance issues, data leakage, impact to organisational productivity or any number of other problems down the line.

So, I thought it would be good to share a starter checklist of topics and questions to ask related to solutions which extend Office 365 in some way. The focus is on SharePoint-based customisations, and it’s not a comprehensive list – you could certainly broaden the scope, or go deeper in specific areas by asking more questions. But hopefully my list gives a sense of the type of questions you should ask before accepting a solution into your environment, or on the other side, the kind of thing you should be providing as a trustworthy vendor/partner/internal development team in your documentation.

For larger enterprises, my work often involves more than documentation. Commonly, I’m walking various technical teams and service owners through the low-level detail of our solution, and each needs to grant approval before our solution can pass. I’m currently working with one of the world’s largest companies, and we need around 10-12 teams to approve their respective bits, and then to also pass a “unity review” which considers the overall picture. I’m using a combination of PowerPoint and live demos on the conference calls with the respective teams, and it’s all about socialising the concepts and ensuring the low-level detail is covered and all questions are answered. I do think the onus is on the implementation team to proactively “push” the information – as noted earlier, in-house teams don’t always know the right questions to ask. If you have your client’s best interests at heart though, you’ll want them to be well-informed.

My suggested checklist

What does the overall architecture look like?

  • Which Office 365 services are used?
  • What does the architecture diagram(s) of your solution look like?

User data/user profiles

  • Are any custom fields added to AAD profiles?
  • Are any custom fields added to SPO profiles?
  • How does data get into the fields? Is there a sync solution, or do users edit their values through native Office 365 or custom interfaces?

Security

  • Are any changes to the “Custom script” settings proposed (i.e. is there any ad-hoc JavaScript running outside of modern pages and the SPFx security model)? HINT – you really want both of these set to “prevent” if possible:



  • What SPFx permissions are required (as shown on the Web API permissions page)?



  • Are any AAD app registrations created (related to use of the Microsoft Graph API)?





  • What 3rd party libraries are used by the solution? Can you show us a current npm audit report?
  • What certificates are used by the solution?
    • How are they managed and where are they deployed to?
    • What is the expiry date of any certificates and how is rollover handled?
  • If remote code elements are deployed (Functions or other web APIs), what authentication model is used?
  • Are any code elements or artifacts *not* hosted in our tenants/subscriptions? If so, what purpose do they serve? How can we be sure our data isn’t passed to them?

Taxonomy

  • What term groups and term sets are provisioned?
  • How are they used/what purpose do they serve?
  • Are they open or closed term sets?
  • How are they identified? Will my users confuse them for something else?
  • Are they a duplicate of something already in place?

Search configuration

  • What new managed properties are provisioned?
  • More importantly, are any mappings of existing out-of-the-box properties provisioned?
    • How can I be sure any use of RefinableStringXX, RefinableDateXX, RefinableIntXX and so on don’t clash with existing mappings in use in my tenant?
  • What result sources are provisioned?
  • What other search configuration is provisioned?

Teams

  • What aspects of the Teams dev platform are used (e.g. tabs, connectors, bots, messaging extensions)?
  • What permissions/device permissions are used?
  • What remote domains are used? What for?

Office Add-ins

  • If Office add-ins are used, what are their permission requirements (e.g. “ReadWriteMailbox”)?
  • Is Centralized Deployment used to deploy the add-ins? If not, why not?
  • Which Office applications do the add-ins surface in?

Azure

  • What is deployed to Azure? Is it just PaaS elements or are there any IaaS elements?
  • If Azure Functions are used, are they on the Consumption Plan or App Service Plan?
  • What are the forecast costs? What could vary this?

SharePoint content (see note below)

  • What site types are used (e.g. communication sites, hub sites, team sites etc.)?
  • Are any classic SharePoint elements used (e.g. classic publishing sites)?

SPFx

  • What SPFx packages are used?
  • Which are deployed tenant-wide, and which are per site collection?
  • [Also see Security section for considerations on SPFx security]

Notice that the list does not focus too much on SharePoint content. If the solution gets deployed into an existing SharePoint site collection (or several), you might ask questions about the content types and site columns and so on which are provisioned. In most cases however, what goes on within individual SharePoint sites is a lower-level concern where the risks are much lower (at least in SharePoint Online).

Summary

Successful use of Office 365 in the enterprise relies on balancing productivity and agility with security and governance, and part of this involves knowing exactly what’s in your tenant and how it works.

Just because you no longer own and manage the physical servers doesn’t mean that a breach isn’t possible or your data cannot be stolen. Office 365 is designed with customisation in mind (at least, it is these days) and there are many, many security features which are designed to ensure that the platform can be extended in safe, governable ways. However, you do need to make sure that you use them - and I think it’s fair to say that some older vendor/product Office 365 solutions that haven’t been updated do not.

No doubt there are some useful products which can help with solution governance in this way, but I’m not sure any of them cover all of the aspects listed here. Hopefully this checklist can help!

Thursday, 18 July 2019

Office 365 dev - tips for building a custom SPFx form

One project I worked on recently involved building a custom form as an SPFx web part – and in general I think this is a great option for “high-end” forms which need to have quite specific behaviour and/or look and feel. Also, the ability to designate the page as an SPFx application page means some of the unnecessary page furniture is removed, which is helpful. When thinking about forms in Office 365, I see a spectrum of options which looks something like (in order of increasing complexity/effort):

  1. Out-of-the-box SharePoint list form 
  2. PowerApps customised SharePoint list form 
  3. Canvas PowerApp, embedded in a SharePoint page with the PowerApps web part 
  4. Custom-developed form with SPFx and React 
So, we’re talking about the last option here.

The good news is that for an adequately-skilled SPFx developer, building one of these forms can be pretty quick - there are some very powerful building blocks to help. I see the overall recipe as something like: an SPFx web part built with React, Office UI Fabric components for the standard controls, the PnP reusable React controls for the more advanced pickers, and PnPJS for reading and writing the data.
By combining use of these ingredients with some extra steps to help users get to your form (see later sections on this), you can get to a rich solution with less effort than you might think.

Form controls used with SPFx, and dealing with data

Thinking about the list of ingredients above, you'll find you can go a long way by bringing in those controls and integrating them with your data. I think the following arrangement might be common:

  • Standard textboxes, buttons etc. - Office UI Fabric React
  • Dropdowns, including advanced formatting/behaviour - Office UI Fabric React
  • Taxonomy picker - PnP React taxonomy picker
  • People picker - PnP React people picker
  • File upload control (stored as list item attachments) - PnP React file upload control
  • File upload control (stored in another way) - other 3rd party control (you handle the upload to SharePoint)

Most likely you'll need to allow existing items to be edited with your form, as well as new items to be created. PnPJS is perfect for simplifying the operations to fetch an existing item (typically from an item ID passed to the page), and also saving back to SharePoint. In your SPFx web part, you'll use React state as the go-between in the middle of your controls on the front-end, and your service code for the data layer.
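As a sketch of what that service code might look like with PnPJS (the list name and the shape of the values object here are illustrative, not from my actual solution):

import { sp } from "@pnp/sp";

// Fetch an existing item so the form can be pre-populated for editing
async function getIncident(itemId: number): Promise<any> {
  return sp.web.lists.getByTitle("Incidents").items.getById(itemId).get();
}

// Write the form values back to the list item on save
async function saveIncident(itemId: number, values: {}): Promise<void> {
  await sp.web.lists.getByTitle("Incidents").items.getById(itemId).update(values);
}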

In the end, my form looked something like this (shown here zoomed-out). You might notice a series of buttons at the top - these surface functionality such as "Save as draft", and although I think use of something like Office UI Fabric's CommandBar or ContextualMenu would be nice, the client preferred straightforward buttons in this case. Otherwise it's just use of the controls described above:


Providing the edit item experience


Assuming your custom form stores items in a SharePoint list, you'll probably want to take some steps to integrate the default list experience with your form. A good example is ensuring that when a user edits an item, they are taken to your form rather than the out-of-the-box SharePoint list edit form. This can be accomplished by adding a column to your list with some JSON formatting defined to provide the link. Simply take the URL for your page (i.e. a page that you created and added your web part to), and use it in the JSON for the link. You should end up with something like this:

I used JSON along these lines (a sketch rather than my exact markup - the URL is a placeholder for wherever your own form page lives):
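{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/column-formatting.schema.json",
  "elmType": "a",
  "txtContent": "Edit incident",
  "attributes": {
    "target": "_blank",
    "href": "='https://yourtenant.sharepoint.com/sites/incidents/SitePages/Incident-Form.aspx?itemId=' + [$ID]"
  }
}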

Notice that I'm passing the current list item ID to my form. Within my SPFx web part, I have code which is looking for a value being passed in the 'itemId' URL parameter - if one is found, as described above I use PnPJS to:
  • Fetch data for this item
  • Set React state with these values, so that my form controls get set to these values
So that takes care of the edit experience.

Providing the 'new item' experience or customising new/edit/disp forms in modern lists


In my case, the new item experience can be provided simply by a big visual representation on the home page. Regular end-users do not use the list, and won't be pressing the 'new' button. In this case, things are simple. Chatting to colleagues (thanks Leo!), it *is* possible to override the new/edit/disp forms of a modern list, but currently there are issues if a modern page is the target and you want to pass any parameters in the URL (e.g. the item ID for the edit experience) - apparently something breaks completely and your form is unlikely to load. One approach which can work is to create a classic page, set the new/edit/disp form URLs of your list accordingly, and add JavaScript on the classic page to redirect on to the modern page hosting your SPFx form. You may need to hash/further encode any ID you're passing, and consider adding CSS to hide some page elements in case the user briefly sees them during the redirect process. That's about as good as it gets in summer 2019 apparently, and there's a UserVoice entry Allow us to develop custom modern forms with custom edit experience to request better integration - the good news is the status of this has recently changed to "Thinking about it". So, hopefully this story will be improved soon for those needing tight integration between your custom form and the out-of-the-box list UI.

Setting the page to be an SPFx app page


Once you've developed your web part, you'll want to make sure it works well on the page - and typically you'll want to remove some of the "page furniture", so that your form is the focus and there are fewer distractions. On a modern page, you can do certain things like set the title area to the "Plain" layout - this will reduce the header area, but if your form has some kind of header/title you'll still get duplication between this and the page title.

And what about the problem that a page author could accidentally edit the page and remove the web part?

Both of these problems can be solved by converting the page to be an SPFx app page. On one of these, only a single web part can be used, its properties cannot be edited and the title area is removed. The two images below compare my form before being converted to an app page (left) and afterwards (right):

As you can see, the title area is removed. Also, if I flick the page into edit mode, the web part cannot be removed and the only options I see relate to the page title:

So, less chance of an authoring 'accident' happening to your form now. If the page wasn't originally created as an app page, converting can be done by PowerShell or even by some quick JavaScript in the browser console as a one-off. See here for more details - https://docs.microsoft.com/en-us/sharepoint/dev/spfx/web-parts/single-part-app-pages. Of course, you could also remove the left navigation if you like too.
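If you go the PowerShell route, the conversion should be a one-liner once connected to the site - a sketch, assuming a version of PnP PowerShell where Set-PnPClientSidePage supports the SingleWebPartAppPage layout type (see the docs link above for the current guidance):

Connect-PnPOnline -Url https://yourtenant.sharepoint.com/sites/incidents
Set-PnPClientSidePage -Identity "Incident-Form.aspx" -LayoutType SingleWebPartAppPage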

Using the PnP Taxonomy Picker control in such a form


One area I wanted to talk about here was a quirk in the PnP Taxonomy Picker control (a common control for these forms). Like many others in this toolkit, the control itself is excellent and would be a significant task on its own if you had to write it. Let’s all buy Elio a(nother) beer next time we see him 😉

The taxonomy picker is easy to add to your web part, and it takes care of populating itself with terms from a term set that you provide (by name or ID). A typical declaration looks something like this:

<TaxonomyPicker allowMultipleSelections={false} termsetNameOrID="b2a79b4b-9d64-46d9-8a94-38f8809f8d12"
  panelTitle="Select Region"
  label=""
  initialValues={this.state.selectedRegion}
  context={this.props.context}
  onChange={this._onRegionTaxPickerChange}
  isTermSetSelectable={false} />

That would give something on your page like this:

A couple of other notes would be:
  • initialValues – use this property to set the value of the control. So if your form is being used to edit an existing list item, fetch the value from data and set this property – since I’m using React, this simply comes from a value in my web part state
  • onChange – what should happen when a term is selected. In React, you’d usually store the value in state at this point, ready for later insert/update to SharePoint
So far, so straightforward. However, something to be aware of is that the onChange() method provides the item in the form of an IPickerTerm object – something provided by the PnP code. Unfortunately, such an object cannot be passed to SharePoint using PnPJS or REST, since the REST API expects a different format. IPickerTerm has the following properties:
  • name
  • path
  • key
  • termSet
  • termSetName
PnPJS and the SharePoint REST API, on the other hand, expect an object with the following properties (this is for a single-value taxonomy field):
  • TermGuid
  • Label
  • WssId
Sidenote - things are a bit more complex if your field stores multiple values. Alex Terentiev has an article on this that may help you out.

So, you’ll need to do some mapping between the two object types if you’re reading/writing data to SharePoint from the PnP taxonomy control (including setting the selected value of the control to the current value stored in a SharePoint list item).

Some important things to know here are:
  • A value of -1 can be used for the WssId value – you don’t need to do anything more, SharePoint will sort this out
  • The TermGuid property maps to the IPickerTerm.key property
  • The Label property maps to the IPickerTerm.name property
With that in mind, we just need some code to map between the two object types - I found I needed two methods, one to convert in each direction:
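Here's the gist - a minimal sketch using the property mappings just listed. The ISPTaxonomyFieldValue interface and the helper names are mine (the real IPickerTerm interface ships with @pnp/spfx-controls-react); adjust to your own conventions:

// Local stand-in for the control's IPickerTerm (the real interface ships with the PnP package)
interface IPickerTerm {
  name: string;
  path: string;
  key: string;
  termSet: string;
  termSetName?: string;
}

// Shape expected by PnPJS/REST for a single-value taxonomy field (see list above)
interface ISPTaxonomyFieldValue {
  Label: string;
  TermGuid: string;
  WssId: number;
}

// Control -> SharePoint: for writing the picked term to a list item
function pickerTermToSPValue(term: IPickerTerm): ISPTaxonomyFieldValue {
  return {
    Label: term.name,   // Label maps to IPickerTerm.name
    TermGuid: term.key, // TermGuid maps to IPickerTerm.key
    WssId: -1           // -1 is fine - SharePoint resolves the real WssId
  };
}

// SharePoint -> control: for setting initialValues when editing an existing item
function spValueToPickerTerm(value: ISPTaxonomyFieldValue, termSetId: string): IPickerTerm {
  return {
    name: value.Label,
    key: value.TermGuid,
    path: "",
    termSet: termSetId
  };
}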

Summary


Hopefully this post has been a useful round-up of considerations when building custom SPFx forms. I think this approach works great for more complex forms, and the building blocks listed here really do help reduce the amount of code required. Setting the page to be an app page to eliminate unnecessary page furniture helps, as does integrating with the SharePoint list UI for the new/edit/display experience. In addition to Office UI Fabric, the PnP React controls are all extremely useful in SPFx forms and the TaxonomyPicker is no exception. If you use that one, you'll probably find you need some code like my sample above to help you map between the format used by the control and that used by the SharePoint REST API or PnPJS, but that's not too complex. Happy coding!

Wednesday, 19 June 2019

Office 365 dev tips – working effectively with the Microsoft Graph and other APIs

I’ve been speaking at various conferences recently about key skills for extending Office 365 – things that an effective architect or developer needs to have in the toolbox these days. Clearly the ability to work with the Microsoft Graph needs to be high on this list, given that it is, after all, “the API for Office 365”. It’s great to have one (mainly!) consistent way to work with Teams, SharePoint, mail, calendar, Planner, OneDrive files and many other functions too, but between the various authentication options and coding styles, there’s quite a bit to know.

One thing I find is that developers are perhaps not yet as effective with the Graph as they could be – partly because some things have changed recently, and partly because some devs are only now coming to the Graph from, say, the SharePoint REST/CSOM APIs and other service-specific APIs.

So my key messages here are:

  • The Postman collections provided by Microsoft can help answer your questions about the Graph and integrate calls into your code
  • The SDKs help more than you’d expect in your code – having types in TypeScript being one example
Why are these things important?

Tip 1 - Using Postman collections instead of Graph Explorer


Graph Explorer (GE) is great for developers to start seeing some calls that can be made to the Graph and what kind of data comes back. If you’re new to the Graph, check it out at https://developer.microsoft.com/en-us/graph/graph-explorer. However, for intermediate and advanced developers, I recommend using Postman over Graph Explorer. Here’s why:

Downsides of Graph Explorer
One drawback of GE is that even though you can sign in with your account and therefore work with your own Office 365 tenant, this is still not the same thing that your code will be doing. An Azure AD app registration is always needed for code to call the Graph, and GE does NOT use your app registration – meaning different permissions are used. So, it’s common to run into issues when you move from Graph Explorer to your code, usually related to not having the correct permission scopes allowed on the app registration. Postman, by contrast, lets you use your own app registration, so requests behave exactly as they will in your code.

Something that makes Postman very powerful with the Graph is that Microsoft have released Postman collections for all the current Graph methods – including those in beta. You can import these into Postman and you’ll then see an item for every method, broken into two folders for app-only calls and on-behalf-of-user calls:

This is great for discoverability! Now, every time I wonder if the Graph allows me to get something, instead of clicking through pages of documentation I just come here and start expanding folders:
I get to see all of the Graph workloads and what data can be retrieved very quickly indeed.

The process of getting started with Postman and the Graph collections is a little bit involved - it's outlined on the GitHub repo where you can obtain Microsoft's Postman Graph collections, but I think it can be helpful to watch someone go through the process, so I made a video with some captions:


The process starts by obtaining the details of an AAD app registration which I'll use, so you should create one (with the appropriate permission scopes defined) if you don't have one already. Overall, the process is:
  • Import the collections
  • Configure various Postman variables
  • Obtain access tokens for app-only and "on behalf of user" calls, and store them in other Postman variables - the sketch after this list shows what the app-only token request looks like under the hood
  • Enjoy your new quick access to the Graph! You can start executing the calls and seeing data coming back at this point
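For context, the app-only token comes from a standard Azure AD v2.0 client credentials request - this is what Postman does for you behind the scenes. Here's a minimal TypeScript sketch (server-side/Node only - never put a client secret in browser code; all values are placeholders for your own app registration):

// Client credentials flow against the Azure AD v2.0 endpoint - placeholders throughout
const tenantId = "<your-tenant-id>";
const clientId = "<your-app-client-id>";
const clientSecret = "<your-app-client-secret>";

async function getAppOnlyToken(): Promise<string> {
  const body = new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    scope: "https://graph.microsoft.com/.default", // uses the application permissions granted to the app
    grant_type: "client_credentials"
  });
  const response = await fetch(`https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`, {
    method: "POST",
    body
  });
  const json = await response.json();
  return json.access_token; // send as "Authorization: Bearer <token>" on Graph calls
}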

It's fairly straightforward to integrate the call into your code, and you won't have any security/permission issues because you've already tested the same thing your code will do.

UPDATE - actually, I realise now that Jeremy also has a video. See https://www.youtube.com/watch?v=4tg-OBdv_8o for that one.

So that's great - we can now see all the URL endpoints in the Graph, what data they expect and what data they return. But how do we ensure we're working effectively with the Graph once we're in code?

Tip 2 - Ensure you're using TypeScript types


For many of us, the place that we'll be coding against the Graph will be SPFx or other TypeScript code - perhaps even an Azure Function based on node.js. The key thing to avoid here is use of “any” in TypeScript – we can do better than that, whether it’s another 3rd party API or the Graph itself. It’s common for APIs to return fairly complex JSON structures, and we can avoid lots of coding errors by ensuring we get type-checking through the use of types (interfaces or classes) representing the data. Additionally, having auto-complete against these data structures makes the coding process much faster and more accurate.

Using the Graph in TypeScript

When using the Graph in SPFx, the first thing to note is that your project will not have the Graph types added automatically (e.g. when the Yeoman generator creates the files) – you have to install a separate npm package. To do this, run one of the following:
  • npm install @microsoft/microsoft-graph-types --save-dev
  • npm install @microsoft/microsoft-graph-types-beta --save-dev
Yes, Microsoft kindly provide types even if you’re working against the beta Graph endpoints. For most production scenarios though, you’ll be using the first option (and sticking to the release endpoints). Once you have the types installed, you can import them into your SPFx classes with:

import * as MicrosoftGraph from '@microsoft/microsoft-graph-types';

It’s a good idea to import only the specific entities you’re actually using rather than *, but that needs some digging (to find the module name) - you’ll be rewarded with a smaller bundle size, however. Either way, once you’ve got the types imported you can start using them in your code – this changes things from having no type support at all:
..to having full auto-complete as you type:
Hooray! Things are now much easier.
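To illustrate, here's a minimal sketch of a typed Graph call from an SPFx web part using MSGraphClient (assuming a recent SPFx version where that client is available - the wiring is simplified, but note the typed Message array):

import { MSGraphClient } from '@microsoft/sp-http';
import * as MicrosoftGraph from '@microsoft/microsoft-graph-types';

// ...inside the web part class, where this.context is the web part context:
this.context.msGraphClientFactory
  .getClient()
  .then((client: MSGraphClient): void => {
    client
      .api('/me/messages')
      .top(5)
      .get((error: any, response: { value: MicrosoftGraph.Message[] }) => {
        if (error) {
          console.error(error);
          return;
        }
        // Full auto-complete on each Message - subject, receivedDateTime and so on
        response.value.forEach(message => console.log(message.subject));
      });
  });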

Wherever possible, don't settle for coding without this kind of support! But what if you're working with an API that isn't the Graph?

Using 3rd party APIs in TypeScript

The first thing to do in this case is to establish whether the vendor supplies TypeScript types (usually in the form of an npm package or similar). If so, just install that and you should get the support. If not, I like to use Postman to obtain the JSON returned, and then use a Visual Studio Code extension such as QuickType to auto-generate types for me based on that JSON. So you’d make the call to the API in Postman, and then copy the returned JSON to your clipboard.

In the example below, I'm using the Here Maps API. I set up the call in Postman by pasting in the URL endpoint to use and setting up any authentication. I hit the send button, and then copy the returned JSON (in the lower pane) to my clipboard:
In Visual Studio code, I create a file to hold the interface types from the API, and then find the QuickType "paste JSON as code" option in my command palette:
I supply the name of the top-level type:
..and then QuickType generates the type structure which maps to the JSON. I might want to rename some of the types, but essentially I get interfaces for everything in the structure:

So that's great - this saves me a lot of cross-referencing and typing, and I can now use these interfaces in my calling code and get type-checking and auto-complete.
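To give a flavour of the output, suppose the API returned JSON along the lines of the comment below (a payload invented for illustration - the real Here Maps response is much larger). QuickType would generate interfaces something like this:

// Hypothetical API response, for illustration only:
// { "results": [ { "title": "Head office", "position": { "lat": 51.5, "lng": -0.1 } } ] }

export interface ApiResponse {
  results: SearchResult[];
}

export interface SearchResult {
  title: string;
  position: Position;
}

export interface Position {
  lat: number;
  lng: number;
}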

Summary


Whether it's the Graph or a 3rd party API, you should always code against defined TypeScript interfaces where possible. For the Graph, use Microsoft's supplied types by installing their '@microsoft/microsoft-graph-types' npm package. For a 3rd party API, if no types are supplied then you can generate your own easily with help from something like QuickType.

Whatever you're doing, Postman is a hugely useful tool. In terms of the Graph, its use gets you around the problem of Graph Explorer not executing with the same permissions as your code, and so it's as valuable there as it is with 3rd party APIs.

Happy coding!