Monday, 16 November 2020

Automating location check-ins with Power Automate

Geofencing is the idea of a location service triggering an action whenever a device (e.g. a user's mobile device) moves into or out of a defined area, and there are lots of great use cases. On the personal/home automation side, you might want to automatically switch the house lights off and activate the alarm when you leave home (think IFTTT or Zapier if you're familiar with those). On the work side, your employer might provide an app which automatically checks you in or out of an office depending on your location - as just two examples. I was looking at the latter scenario for our internal desk booking app as part of our "controlled use of offices during Covid" plan. 

You can imagine lots of cases where knowing the user's location and whether it has changed can be powerful.

The Power Platform has a couple of ways to tap into this. I'd been researching the location capabilities in Power Automate, which include a Flow trigger named "When I enter or exit an area":

As is normal with geofencing, you specify a location anywhere in the world and define a radius around it. The location is approximate since it comes from GPS on the user's mobile device, but it works great for most purposes: 
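As a rough sketch of the concept (not how Power Automate implements it internally), a geofence check reduces to a great-circle distance comparison between the device and the centre of the fence. All coordinates and the radius below are illustrative:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(device, centre, radius_m):
    """True if the device's (lat, lon) falls within the fence radius."""
    return haversine_m(*device, *centre) <= radius_m

office = (51.5034, -0.1195)  # example fence centre (central London)
print(inside_geofence((51.5036, -0.1190), office, 200))  # a nearby point
print(inside_geofence((51.6000, -0.1195), office, 200))  # ~10 km away
```

A real implementation would also debounce brief exits (the "popping out for lunch" case mentioned later) before firing any action.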



Things were working great for a while, but then I ran into this:

The Location trigger has been removed from Power Automate!

The trigger was only ever in preview, and sadly in the few weeks since I had this article in draft (and had taken the above screenshots) it was removed. At this stage it's unclear whether it will return, and whilst there are traces of it on the internet there are no mentions of it within Microsoft documentation.

So that's a shame! To be quite honest, there was a challenge with the trigger anyway - it could only ever be used for personal Flows, meaning each user who wanted the capability would need to create their own Flow. Clearly this doesn't work for any kind of business solution provided by an organization, but it could still be useful for personal automation.

How else can we automate based on user location?

The good news is that Power Automate is still able to understand the user's device location. A fully automated solution which is triggered simply by moving in or out of a defined region is no longer possible, but if you're happy to have users manually click a button on their device, similar automations are still possible. Indeed, in some scenarios this approach may be preferred so that there's a level of human control and opt-in - allowing the user to avoid triggering a process if the circumstances don't warrant it (e.g. leaving the area radius briefly to pick up lunch). 

So, let's take a look at how we can build Power Platform apps which account for the user's location.

Using Flow buttons to log a location visit

Flow buttons provide a great way to build a mobile app with a super-simple user interface - without having to delve into any kind of coding or the world of native iOS or Android development. In my example below, I'm using a simple button and a very simple form. But first, the prerequisites.

The first requirement for a solution like this is for users to have the Power Automate app installed on their mobile device. Your organization could push it out using an MDM or MAM solution, or it's available in both the Apple and Google app stores:

The user will need to sign in to the app with their Microsoft 365 identity. The other important thing is that location services are enabled on the device for the Power Automate app - an obvious necessity if we are collecting and logging the location. 

Once in the app, the user would go to the "Buttons" area using the navigation bar along the bottom.

Any Flows which have been created using the "Manually trigger a Flow" trigger will appear in here: 



In my solution, I have a Flow to log details of a location visit - this is named "Report location status" in the image above, and you can also see some other Flows which use the button trigger. As you've probably gathered, these are known as "Flow buttons" and provide a really quick and easy way of manually triggering a process. No need to create and deploy a custom app - instead we can piggyback on what the Power Platform provides. 

When the button is clicked, some information can be selected to feed into the process. In my example of logging a location visit, the Flow asks for a "status" to be collected:


In the case of my solution, when the user submits this "location report" I store the details in a SharePoint list. Power Automate has done the hard work of automatically deriving the user's location at the time of pressing the button, and with a little column formatting magic I can display a small map for the location instead of just the address text:
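The exact formatting JSON I used isn't reproduced here, but a sketch of the idea - column formatting that renders a static map image from latitude/longitude columns stored alongside the address - might look like the output of the snippet below. The column names, the Bing Maps static imagery endpoint and the key are assumptions to adapt to your own list:

```python
import json

def map_column_format(zoom=15, width=200, height=120):
    """Build SharePoint column-formatting JSON that shows each row's
    location as a small static map. '[$Latitude]' and '[$Longitude]'
    are assumed internal column names; the Bing Maps key is a placeholder."""
    src_expr = (
        "=concat('https://dev.virtualearth.net/REST/v1/Imagery/Map/Road/', "
        "[$Latitude], ',', [$Longitude], "
        f"'/{zoom}?mapSize={width},{height}&key=YOUR_BING_MAPS_KEY')"
    )
    return {
        "$schema": ("https://developer.microsoft.com/json-schemas/"
                    "sp/v2/column-formatting.schema.json"),
        "elmType": "img",
        "attributes": {"src": src_expr},
        "style": {"width": f"{width}px", "height": f"{height}px"},
    }

fmt = map_column_format()
print(json.dumps(fmt, indent=2))
```

The generated JSON would be pasted into the column's "Format this column" panel in the list settings.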

 
 
So that's it! With fairly minimal effort in the Power Platform we have a mobile app which can collect the user's location, collect additional information and log it into a central store such as a SharePoint list in Microsoft 365.
 

How do we build that?

We've covered what the user would see, but what is needed in Power Automate to create this? We start by creating a Flow with the "Manually trigger a flow" trigger. Notice in my case I've added a single input named "Status", and supplied some help text:


You could actually stack several of these inputs and essentially create a mini-form which is presented to the user when they hit the button - which becomes quite powerful when you consider that no coding is required and we don't even need a Power App. 

The next step in the Flow is simply to log the item to SharePoint. I have a list ready to go with appropriate columns, and I just need to configure the Flow action to store the data in each:

 
The important thing is that several tokens are available from the trigger, including:
  • User name
  • User email
  • Timestamp
  • Date
  • fullAddress - this is the full address of the auto-derived location of your user
  • Many address sub-components:
    • Street
    • City
    • State
    • Postal code
    • Country
    • Latitude
    • Longitude
  • Any inputs you have added (e.g. "Status" in my case)
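To make the mapping concrete, here's a hypothetical sketch of how those trigger tokens might be shaped into a SharePoint list item - the field and token names below are illustrative rather than the exact internal names:

```python
def build_list_item(trigger):
    """Map flow-button trigger outputs onto SharePoint list columns.
    All names here are illustrative - adjust to your own list schema."""
    return {
        "Title": f"Location report from {trigger['userName']}",
        "ReportedBy": trigger["userEmail"],
        "ReportedAt": trigger["timestamp"],
        "Address": trigger["fullAddress"],
        "City": trigger["city"],
        "Latitude": trigger["latitude"],
        "Longitude": trigger["longitude"],
        "Status": trigger["status"],  # the manual input added to the trigger
    }

# Example trigger payload (entirely made-up values)
sample = {
    "userName": "Sam Jones", "userEmail": "sam@contoso.com",
    "timestamp": "2020-11-16T09:30:00Z",
    "fullAddress": "1 Example Street, London",
    "city": "London", "latitude": 51.5034, "longitude": -0.1195,
    "status": "On site - desk 42",
}
item = build_list_item(sample)
print(item["Title"])
```

In the Flow itself this mapping is done visually in the "Create item" action, of course - no code required.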
The final step in my Flow is to send a confirmation to the user that the report was logged successfully:


Which results in this appearing on the device:


So we've managed to capture the user's location and status report of the location, and confirmed back to them that the data was saved. 

Summary


The Power Platform has many amazing facilities for building applications, and that's especially the case for simple mobile applications. The ability to tap into device features such as location and the camera mean that you can build very capable apps quickly, without code - and certainly without all the hassles of native app development and distribution. In this post we looked at how Flow buttons can be used to quickly trigger a process from the mobile app, and how to capture the user's location at the time. 

Unfortunately the "When I enter or exit an area" Power Automate trigger that was in preview hasn't made it to release - but we hope it comes back because that would unlock some great scenarios around automation and the user's location. Come on Microsoft!

Monday, 21 September 2020

5 ways to use AI to supercharge your content in Microsoft 365

We’re all familiar with increasing rates of data growth, and practically every organization using Microsoft 365 has an ever-growing tenant which is accumulating Teams, sites, documents and other files by the day. The majority of organizations aren’t making the most of this data though – in the worst cases it could just be another digital landfill, but even in the best case the content grows but doesn’t have much around it in terms of intelligent processing or content services. As a result, huge swathes of content go unnoticed, search gives a poor experience and employees struggle to find what they’re looking for. This is significant given that McKinsey believe the average knowledge worker spends nearly 20% of their time looking for internal information or tracking down colleagues who can help with specific tasks. At the macro level, this can accumulate to a big drag on organizational productivity – valuable insights are missed, time is lost to searching, and opportunities pass by untapped.

In this post, I propose five ways AI can help you get more value from your data.

1. Use AI to add tags and descriptions to your images so they can be searched

Most organizations have a lot of images - perhaps related to products, events, marketing assets, content for intranet news articles, or perhaps captured by a mobile app or Power Apps solution. The problem with images, of course, is that they cannot be searched easily - if you go looking for a particular image within your intranet or digital workplace, chances are you'll be opening a lot of them to see if it's what you want. Images are rarely tagged, and most often are stored in standard document libraries in Microsoft 365 which don't provide a gallery view.

A significant step forward is to use image recognition to automatically add tags and descriptions to your pictures in Microsoft 365. Now, the search engine can do a far better job of returning images - users can enter search terms and perform textual searches rather than relying on your eyeballs and lots of clicks. Certainly, the AI may not autotag your images perfectly - but the chances are that some tagging is better than none. Here are some examples I've used in previous articles to illustrate what you can expect from the Vision API in Azure Cognitive Services:
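As a rough illustration of how you might consume the Vision API's output before writing tags back to Microsoft 365 - the response below is a trimmed-down example of the shape the Analyze Image operation returns:

```python
def extract_tags(analysis, min_confidence=0.7):
    """Pull usable tags and a caption out of a Vision API 'analyze'
    response, dropping low-confidence tags."""
    tags = [t["name"] for t in analysis.get("tags", [])
            if t["confidence"] >= min_confidence]
    captions = analysis.get("description", {}).get("captions", [])
    caption = captions[0]["text"] if captions else None
    return tags, caption

# Simplified example response (values are illustrative)
sample = {
    "tags": [{"name": "outdoor", "confidence": 0.99},
             {"name": "building", "confidence": 0.95},
             {"name": "fog", "confidence": 0.35}],
    "description": {"captions": [
        {"text": "a large building in a city", "confidence": 0.87}]},
}
tags, caption = extract_tags(sample)
print(tags, "-", caption)
```

The returned tags and caption can then be stored as metadata on the image file so that search can find it by text.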

 

[Table: three example images alongside the tags and descriptions returned by the Vision API]

2. Use AI to extract entities, key phrases and sentiment from your documents

We live in a world of documents, and in the Microsoft 365 world this generally means many Teams and SharePoint sites full of documents, usually with minimal tagging or metadata. Nobody wants the friction of manually tagging every document they save, and even a well-managed DMS may only provide this in targeted areas. Auto-tagging products have existed for a while but have historically provided poor ROI due to expensive price tags and ineffective algorithms. As a result, the act of searching for information typically involves opening several documents and skimming before finding the details you're looking for.

What if we could extract key phrases from documents and known entities (organizations, products, people, concepts etc.) and highlight them next to the title so that the contents are clearer before opening? Technology has moved on, and Azure's Text Analytics API is far superior to the products of the past. In my simple implementation below, I'm sending each document in a SharePoint library to the API and storing the resulting key phrases and entities as metadata. I also grab the sentiment score of the document as a bonus:  
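A minimal sketch of turning the API's output into metadata values might look like this - the input shapes are simplified examples rather than raw API responses, and the Wikipedia link detection uses the `dataSource` field the entity-linking operation returns:

```python
def summarise_document(key_phrases, entities, max_phrases=5):
    """Condense Text Analytics output into values suitable for
    SharePoint metadata columns (illustrative, based on the v3 API)."""
    phrases = "; ".join(key_phrases[:max_phrases])
    linked = {e["name"]: e["url"] for e in entities
              if e.get("dataSource") == "Wikipedia"}
    return phrases, linked

# Simplified example outputs (values are illustrative)
key_phrases = ["digital workplace", "Microsoft 365", "content services"]
entities = [
    {"name": "Microsoft", "url": "https://en.wikipedia.org/wiki/Microsoft",
     "dataSource": "Wikipedia"},
    {"name": "Q3 revenue", "dataSource": None},
]
phrases, linked = summarise_document(key_phrases, entities)
print(phrases)
print(linked)
```

The `phrases` string goes into a text column; the `linked` dictionary is what you'd use to render clickable entity links next to the document title.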

A more advanced implementation might provide links to more information on entities that have been recognized in the document. The Text Analytics API has a very nice capability here - if an entity is recognized that has a page on Wikipedia (e.g. an organization, location, concept, well-known person etc.), the service will detect this and the response data for the item will include a link to the Wikipedia page:

There are lots of possibilities there of course!


3. Create searchable transcripts for old calls, meetings and webinars with speech-to-text AI

If your company uses Microsoft 365, then Stream is already capable of advanced speech-to-text processing - specifically, the capability which automatically generates a transcript of the spoken audio within a video. This is extremely powerful for recording that important demo or Teams call for others to view later of course. However, not every organization is using Stream - or perhaps there are other reasons why some existing audio or video files shouldn't be published there. 

In any case, many organizations do have lots of this type of content lying around, perhaps from webinars, meetings or old Skype calls. Needless to say, all this voice content is not searchable in any way - so any valuable discussion won't be surfaced when others look for answers with the search engine. That's a huge shame because the spoken insights might be every bit as valuable as those recorded in documents. 

A note on Microsoft Stream transcripts

Whilst Stream is bringing incredible capabilities around organizational video, it's worth noting that transcripts are not searched by Microsoft 365 search - only by 'Deep Search' in Stream. So if you've homed in on a particular video and want to search within it, Deep Search is effective - but if you're at step one of trying to find content on a particular topic, videos are not currently searched at the global level in this way.

Speech-only content comes with other baggage too. As just one example, it may be difficult to consume and understand for anyone whose first language is different from the speaker(s). 

The Azure Speech Service allows us to perform many operations such as:

  • Speech-to-text
  • Text-to-speech
  • Speech translation
  • Intent recognition

More advanced scenarios also include call recording, full conversation transcription, real-time translation and more. In the call center world, products such as AudioCodes and Genesys have been popular and are increasingly integrated with Azure's advanced speech capabilities - indeed, Azure has dedicated real-time call center capabilities these days.

At the simpler end though, if your company does have lots of spoken content which would benefit from transcription you can do this without too much effort. I wrote some sample code against the API and tested a short recording I made with the PC microphone - I don't need to tell you what I said because the API picked it up almost verbatim: 

If we're splitting hairs, really I said this as one sentence so the first full stop (period) should arguably be a comma. Certainly this was a short and simple recording, but as you can see the recognition level is extremely high - and astonishingly, the API even managed to spell O'Brien correctly! 

Here's the code required to call the API, largely as described in the docs:
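A minimal Python sketch along the lines of the documented quickstart would be something like the following - the key and region are placeholders, and this is a one-shot recognition from the default microphone rather than production code:

```python
def transcribe_once(key, region="westeurope"):
    """One-shot microphone transcription with the Azure Speech SDK
    (pip install azure-cognitiveservices-speech). Returns the recognized
    text, or None if nothing was recognized."""
    import azure.cognitiveservices.speech as speechsdk  # lazy import

    config = speechsdk.SpeechConfig(subscription=key, region=region)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)
    result = recognizer.recognize_once()  # listens on the default microphone
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        return result.text
    return None  # no match, or recognition was canceled

# Usage (requires a real key): text = transcribe_once("YOUR_SPEECH_KEY")
```

For batches of existing audio files you'd point the SDK at files (or use the batch transcription REST API) rather than the microphone, but the shape of the call is similar.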


Enabling technology - Azure Cognitive Services - Speech API  (speech-to-text in this case) 

4. Use AI to translate documents 

The case for this one is simple to understand - an organization can have a variety of reasons for translating documents, and AI-based machine translation has advanced sufficiently that it's precise enough for many use cases. Working with international suppliers or clients could be one example, or perhaps it's because search isn't effective enough in a global organization - users are searching in their language, but key content is only available in another language. 

Azure allows you to translate documents individually or at scale through APIs or scripting, all in a very cost-effective way. I didn't need to build anything to tap into it, since a ready-made front-end exists in the form of the Document Translator app on GitHub - once hooked up to my Azure subscription, I'm ready to go. In this tool, if you supply a document you get a completed document back - in other words, pass in a PowerPoint deck and get a file with each slide translated in return - no need to paste anything back together. The Translator capability in Azure Cognitive Services allows you to tap into the same translation engine behind Teams, Word, PowerPoint, Bing and many other Microsoft products, but also to build your own custom models to understand language and terminology specific to your case. 
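Under the hood, the Translator Text API v3 takes a simple JSON body and returns a translation per input string. A sketch of building the request and parsing the response - endpoint and payload shapes per the v3 documentation, sample text illustrative:

```python
import json

TRANSLATOR_ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_request(texts, to_lang="fr"):
    """Request URL and JSON body for the Translator Text API v3.
    (The real call also needs an Ocp-Apim-Subscription-Key header.)"""
    url = f"{TRANSLATOR_ENDPOINT}?api-version=3.0&to={to_lang}"
    body = json.dumps([{"Text": t} for t in texts])
    return url, body

def parse_response(response):
    """Pull the translated strings out of the v3 response payload."""
    return [item["translations"][0]["text"] for item in response]

url, body = build_request(["Hello, world"])
# A response of the documented shape (content illustrative):
sample = [{"translations": [{"text": "Bonjour le monde", "to": "fr"}]}]
print(parse_response(sample))
```

The Document Translator app wraps this same engine but handles the file formats for you, which is why whole PowerPoint decks come back intact.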

My French is a bit rusty, but these look fairly good to me: 


Translation of documents you already have offers lots of possibilities, with improved search being just one example. But there are many other high-value translation scenarios too, such as real-time speech translation - something that's now possible in Teams Live Events. With Azure Cognitive Services, it's also possible to build the capability into your own apps without needing to use Teams, and you're tapping into the same back-end underneath.


5. Extract information from documents such as invoices, contracts and more

In one of the earlier examples we talked about extracting key phrases, entities and sentiment. In some cases though, the valuable content within a document is found within a certain section of the document - perhaps a table, a set of line items or a grand total. Every organization in the world has loosely-structured documents such as invoices, contracts, expense receipts and order forms - but often the valuable content is deeply embedded and each document needs to be opened to get to it. With the Form Recognizer capability in Azure, you can use either pre-built models for common scenarios or train a custom model yourself, allowing the AI to learn your very specific document structure. This is the kind of capability that is coming in Project Cortex (essentially a version of this tightly-integrated with SharePoint document libraries), but it may be more cost-effective to plug into the Azure service yourself. 

Some examples are:
  • Forms - extract table data or key/value pairs by training with your forms
  • Receipts and business cards - use pre-built models from Microsoft for these
  • Extract known locations from a document layout - extract text or tables from specific locations in a document (including handwriting), achieved by highlighting the target areas when training the model 
So if you have documents like this:


...you could extract the key data and make better use of it (e.g. store as searchable SharePoint metadata or extract into a database to move from unstructured to structured data).  
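As an illustration of consuming the extraction results, here's a sketch that flattens key/value pairs out of an analyze response - the shape below is a simplified example of the v2 output, with made-up values:

```python
def extract_fields(analyze_result, min_confidence=0.5):
    """Flatten Form Recognizer key/value pairs into a simple dict,
    dropping low-confidence extractions (shape based on the v2 API)."""
    fields = {}
    for page in analyze_result.get("pageResults", []):
        for pair in page.get("keyValuePairs", []):
            if pair.get("confidence", 0) >= min_confidence:
                fields[pair["key"]["text"]] = pair["value"]["text"]
    return fields

# Simplified example of an analyze result (values illustrative)
sample = {"pageResults": [{"keyValuePairs": [
    {"key": {"text": "Invoice No:"}, "value": {"text": "INV-1042"},
     "confidence": 0.96},
    {"key": {"text": "Total:"}, "value": {"text": "£1,250.00"},
     "confidence": 0.91},
    {"key": {"text": "Notes:"}, "value": {"text": "???"},
     "confidence": 0.2},
]}]}
fields = extract_fields(sample)
print(fields)
```

From there, writing the values into SharePoint columns or a database table is the easy part.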

Enabling technology - Azure Cognitive Services - Vision API (Form Recognizer in this case) 

Conclusion

AI is within reach now, and many of the above scenarios can be achieved without coding or complex implementation work. There needs to be a person or team somewhere who knows how to stitch together Azure AI building blocks and Microsoft 365 of course, but the complexity and cost barriers are melting away. 

Beyond the scenarios I present here, lots of value can be found in use cases which combine some of the above capabilities with other actions. Some examples where you could potentially roll your own solution rather than invest in an expensive platform could be:
  • Analyze call transcripts for sentiment (run a speech-to-text translation and then derive sentiment) and provide a Power BI report
  • Perform image recognition from security cameras and send a push notification or post to a Microsoft Team if something specific is detected
  • Auto-translate the transcript of a CEO speech or townhall event and publish on a regional intranet
The AI element of all of these scenarios and more is now easily achieved with Azure. What a time to be in technology!


Thursday, 30 July 2020

Using Azure DevOps to run agile development sprints

In the last post we talked about some fundamentals around using Azure DevOps for teamwork - in particular, the idea that it doesn't need to be development work that's the focus. As I mentioned last time, here at Content and Code we have all sorts of teams using the Boards capability within Azure DevOps - in addition to our development teams, architect and platform/infrastructure teams also manage their project work and week-to-week tasks in the service.

It's certainly true that DevOps is primarily targeted at developers though, and there are some great features for teams performing agile development using sprints or iterations. We'll dig into tools such as the sprint board, burndown chart and cumulative flow diagram in this article - all hugely valuable tools to help a team measure their work and continuously improve from sprint to sprint, at least once team members are in the swing of using them. 

First though, let's think about motivations.

Why do this? We've survived this long without something like Azure DevOps! 

Many organizations have development teams who are getting by with approaches they've used for a long time. There are a series of inefficiencies and guesswork that are somewhat invisible, and just accepted as "this is as good as it gets". Symptoms of this include:
  • Project managers constantly asking for an update 
  • Regular interruptions from colleagues, even other developers on the team
  • Difficulties interpreting why changes were made
  • Lack of clarity of progress against estimates
  • No understanding of whether team productivity is improving or declining
Azure DevOps isn't the only way to solve these problems, but it does provide a platform to address these challenges and it certainly does a lot of the heavy lifting for you. Once the team are working in a certain way, things flow around the work items in the system - the need for conversations and interruptions to keep people's work in sync is much reduced, because many of the answers are right there in the system. This even extends to trying to avoid pinging a colleague in Teams (or Slack or other chat tool) and instead having more of the conversation in comments of an Azure DevOps work item with @mentions - this dialogue is probably useful later, either to the participants or others. Stand-ups become much more efficient, and the remaining conversation is high value rather than transactional "when will you be done with X?" type questions. 

Step 1 - create a project and choose your process (e.g. agile or scrum)

The first thing you'll need of course, is an Azure DevOps project created using the methodology you prefer. DevOps gives you four main types, but you can customize these to add your own (notice the last two items):

If you're new to these types, I would suggest:
  • Basic is great for non-development or very simple work with minimal categorization of tasks (still in preview at time of writing)
  • Agile and Scrum are the most common 
  • CMMI is appropriate for "high-formality" development only 
The main differences are the workflow states (e.g. New, Active, Resolved, Closed, Removed for Agile, but use of Approved and Committed for Scrum, where the dev team has to "commit" to doing those tasks in this sprint) and whether user stories are used in planning. The Choose a process page in the DevOps documentation is your guide here.

The next step is to enter your work items into the system. Usually you'll break these down into user stories or features, with tasks underneath. I'm going to skip over this bit because it's relatively simple once you've created the first one, but the Add and update a work item page provides guidance. One of the key tenets of agile is to create an overall backlog of work (which is continually evolving), and then allocate this into iterations/sprints as you go along. You use a product backlog for the overall programme or project, and a sprint backlog for each iteration. 

The whole world of backlogs and boards in Azure DevOps is quite deep - see Three classes of backlogs, two types of boards as a good primer, but remember that you don't have to use every feature. 

Step 2 - set team capacity and set-up the sprint

Once you have an overall product backlog, we're ready to start thinking about chunking the work into individual sprints. I show some of the most important next steps in the video below. The first minute of the video in particular shows:
  •  Defining team capacity - telling the system how many hours each team member has available per day
    • This is critical for DevOps' ability to calculate how much work the team is likely to get done in a certain period. If you haven't told it how many hours are available, how could forecasting ever be possible? You can also set amounts per activity type e.g. design, development, deployment, documentation, testing etc.
  • Use forecasting tools to allocate work to the next sprint - the backlog view helps you carve up the work into iterations/sprints by forecasting how much work will fit into the time available. Items are dragged into the sprint to confirm.
    • Imagine a scenario where you have 2 x 20 hour tasks, compared to one where you have 10 x 4 hour tasks. Clearly there's a big difference in the number of tasks which would get completed - the forecasting tools help you see this by drawing a line in the task list at the point where the sprint would end, helping the team make decisions about what to bring into a sprint.
    • In the video I show how adjusting the "velocity" parameter (how fast the team are working) changes how much gets done - you see the line changing position in the list of tasks. The velocity number is something you tweak over time using data from Azure DevOps.
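The forecasting idea above can be sketched in a few lines - given a capacity in hours and an estimated backlog in priority order, find where the "line" would fall. All numbers below are illustrative:

```python
def forecast_cutoff(task_hours, capacity_hours):
    """Index of the first backlog task that will NOT fit into the sprint,
    mirroring the forecast line the backlog view draws."""
    remaining = capacity_hours
    for i, hours in enumerate(task_hours):
        if hours > remaining:
            return i
        remaining -= hours
    return len(task_hours)  # everything fits

# 3 people x 6 productive hours/day x 10 working days = 180 hours capacity
capacity = 3 * 6 * 10
backlog = [20, 20, 40, 40, 40, 30]   # estimated hours, in priority order
cut = forecast_cutoff(backlog, capacity)
print(f"{cut} of {len(backlog)} tasks fit this sprint")
```

This is also why the 2 x 20 hour vs 10 x 4 hour comparison matters - the same capacity slices very differently depending on task granularity.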
From there I show various other features around running a sprint in DevOps - use of the board, updating tasks and so on:


The video actually covers many different elements of working in sprints, not just the initial setup. Here's a summary of what we just saw in the video:


So, hopefully the video and commentary conveys some key principles of using Azure DevOps for development sprints. What about some of the tools we can use?

The burndown chart

More formally known as the "Burndown Trend report" in Azure DevOps, the burndown chart is the classic tool to help manage your sprint. It provides the team with a view of progress against the current sprint plan, indicating whether things are ahead of or behind schedule. Here's an example of a real burndown from one of our projects/sprints with a note on some of the indicators:












The blue area represents work still to be completed, and the grey line indicates linear progress from start to end of the sprint. Comparing the two helps show where you are against plan. You can use either remaining hours or number of work items to represent the work remaining, with the former usually making more sense. 

The sprint above doesn't have non-working days configured, so that would be a nice improvement to show a more accurate picture. 
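The trend line itself is simple arithmetic. As a sketch, here's how the ideal line is computed and how you might flag the days a sprint sits behind it - the hours and daily figures are illustrative:

```python
def ideal_burndown(total_hours, working_days):
    """Points on the grey trend line: a linear burn from the sprint's
    total estimated hours down to zero across its working days."""
    step = total_hours / working_days
    return [round(total_hours - step * day, 1)
            for day in range(working_days + 1)]

def days_behind(actual_remaining, ideal):
    """Day indexes where remaining work (the blue area) sits above the line."""
    return [d for d, (actual, trend) in enumerate(zip(actual_remaining, ideal))
            if actual > trend]

trend = ideal_burndown(120, 10)                      # 120h over 10 days
actual = [120, 120, 105, 95, 80, 70, 52, 38, 20, 8, 0]  # hours logged remaining
print(trend)
print("behind on days:", days_behind(actual, trend))
```

Configuring non-working days effectively flattens the trend line over weekends instead of burning through them, which is the improvement mentioned above.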

For more information on the burndown chart, see Configure and monitor sprint burndown.


The Cumulative Flow Diagram

The CFD is a fantastic tool once your team has been working for a period. It helps you understand how work flows through the process, in particular:
  • Cycle time - how long an item is active for on average (i.e. from the time someone starts work on the item, to time of completion)
  • Lead time - average end-to-end time for an item (i.e. from time of item creation to time of completion)
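Both metrics are easy to compute from work item timestamps. A sketch with illustrative dates:

```python
from datetime import datetime

def _days(start, end):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

def average_times(items):
    """Average cycle time (first active -> done) and lead time
    (created -> done) across completed work items, in days."""
    cycle = sum(_days(i["activated"], i["closed"]) for i in items) / len(items)
    lead = sum(_days(i["created"], i["closed"]) for i in items) / len(items)
    return cycle, lead

# Two completed items (dates are made-up)
items = [
    {"created": "2020-07-01", "activated": "2020-07-06", "closed": "2020-07-10"},
    {"created": "2020-07-02", "activated": "2020-07-03", "closed": "2020-07-09"},
]
cycle, lead = average_times(items)
print(f"cycle time: {cycle} days, lead time: {lead} days")
```

The gap between lead time and cycle time is the time items spend waiting in the backlog - often the most revealing number on a CFD.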
I use the slide below as an example of a CFD from one of our projects:

Here's a diagram which helps explain some of the various elements:

 

For more information on the Cumulative Flow Diagram, see Cumulative flow, lead time, and cycle time guidance.

Summary

Azure DevOps can really propel a development team to achieve more. Notably, some of the benefits come from centralizing updates, discussion and questions around the work items in the system - as opposed to having discussions separately and somewhat out of context (e.g. pinging a colleague in Teams). This doesn't mean that all communication happens this way, but a subtle move in this direction can help enormously. 

Aside from this, it's the following prerequisites which unlock the benefits:
  • Ensuring project tasks (work items) are entered into the system, with effort estimated for each one
  • Defining a series of iterations (sprints) to break the project up into time periods
  • Regularly updating the "effort remaining" on tasks
  • Use of the forecasting and analysis tools to continuously improve
Hopefully this has been useful. If you didn't catch the first article on DevOps Boards fundamentals, it can be found at:

Tuesday, 30 June 2020

Using Azure DevOps to manage teamwork

It's been interesting observing the rise of Azure DevOps in the last couple of years at Content and Code, and in particular, seeing the usage expand beyond the development team. We now have several teams which are not dev-focused using the platform to manage their work - including architecture and platform/infrastructure teams whose work is oriented towards envisioning, design, configuration, security or testing.

In many cases, team leaders have been looking for a task management tool somewhere within the Microsoft stack with deeper capabilities than Planner or To Do, and without the heavy project management emphasis of Project Online. In some cases their work involves files (e.g. PowerShell scripts or JSON snippets) but often does not. The common thing is that a team's work needs to be managed - and once introduced to Azure DevOps Boards, everyone seems to love the feature set offered by Boards and the clarity it brings to the work.

Being able to understand priorities, who is working on what and how various tasks relate to each other are perhaps fundamental to teamwork but are done very well in DevOps Boards. Being able to stay up-to-date on specific items without chasing the owner starts to change the team dynamic in significant ways - it's amazing how many interruptions can be avoided and how efficient a team can become. 

Over the next couple of articles, I want to talk about how Azure DevOps can help in many scenarios - not just for advanced development teams. 

I use the slide below to summarise some of the benefits - this was in a presentation aimed at developers, but it's interesting how only a couple of the points are specific to dev work:

By the way, the slide shows a screenshot of the "Work Details" pane in Azure DevOps. This alone is hugely powerful, providing a view of the volume of work assigned to each team member and how this relates to pre-defined limits. The limits come into play by defining the capacity that each person has available in a particular period, meaning that vacations, work on other projects and anything else that takes away from time available for this work can be accounted for - thus keeping forecasts and time management accurate:
In addition to the examples in my company of architecture and platform/infrastructure teams, I think there's even a case for small support teams outside of a full ITSM process to use Azure DevOps over other tools - more on that later. Overall, I see a scale of different levels of Azure DevOps usage where only development teams running agile sprints are likely to move to level 2 or 3, but level 1 can work for so many different teams:

To unpack that a little, here's an overview of what I mean by those different usage levels:


Some of that might not mean too much at the moment, but we'll explore the different areas further in these articles.

A closer look at Azure DevOps Boards

If you're not familiar with the Boards capability, part of the premise is that the team can see a visual board of a task backlog where an item can simply be dragged to another column to update its status, similar to Post-it notes on a physical board (Kanban). This is similar to Trello, Planner or some other task management tools of course. Personally, I love the following features:
  • Being able to define your own columns - often these will map to an individual item status such as New, Approved or In Progress, but you can also map multiple states to a column such as "Build and Test"
  • Being able to define swimlanes to go across the board horizontally - to help categorise items regardless of their state
  • Styling rules - some examples I like to implement include:
    • Changing the card color to red for blocked or overdue items
    • Highlighting the tag displayed on the card for certain tags - the image above shows any item tagged with "Data" in a yellow highlight so the team can identify those tasks quickly for example
  • Live updates - meaning that another team member's changes are immediately reflected without a page refresh
Here's an example of a board showing some of those features:


Setting up an Azure DevOps project

In the video below, I demonstrate starting from nothing - creating a new DevOps project, and then adding some issues and tasks to assign to team members. I think it gives a good feel for the user experience of working with items and using some of the collaborative features:


Quite a lot of the basic features are shown in this video - defining tasks, tagging, using drag and drop to update an item's status on the board, using collaborative features such as @mentions and "following" to receive updates when an item is changed, and more. The elements shown can be summarized into this list, broken down into three areas:


Summary

Getting started with Azure DevOps is quick and easy. Licensing is generally considered a good deal for what you get, with the first 5 users free and then $6 per user per month after that. In this post we explored some of the capabilities to support modern teamwork - and my experience is that they really can work amazingly well for a variety of teams. Whilst Azure DevOps has many benefits for developers, we've certainly seen it work well in other contexts too at Content and Code. 

Coming back to the different levels of usage, this article and video focuses on the first level ("Simple"):   

In the next article, I'll cover some of the agile features which help a team run sprints, including use of tools such as the burndown chart and Cumulative Flow Diagram - tools which help a team understand their progress through a series of tasks and overall velocity over time: 

Monday, 18 May 2020

Teams and SharePoint provisioning solutions - new Group.Create permission

It's common for organizations using Office 365 to provide a custom experience for requesting and creating Microsoft Teams and/or SharePoint sites - often giving the ability to select from pre-defined templates, provide metadata and important settings, check for a possible duplicate, implement an approval process and so on. For many, this is a key part of governance and provides a more controlled approach than "allow everyone to create and perform some clean-up later". We've been building solutions like this for years, and as one example, the workspace request and provisioning capability in our Fresh digital workplace solution has increased in richness and control as Teams has come to prominence and APIs have gotten better.

You want WHAT permission in our tenant??
Until recently, there was a big problem with these solutions. Since both Microsoft Teams and SharePoint team sites are underpinned by Office 365 Groups (though not SharePoint communication sites, of course), creating either involves creating an Office 365 Group underneath. Whether a logged-in user, application or tool performs this step, a highly-privileged permission is needed - Group.ReadWrite.All. This essentially allows the app to read and write any Microsoft Team, SharePoint site or Office 365 Group within your entire tenant - think about that for a moment!

As you might expect, many security-conscious organizations were reluctant to grant this permission to something running in their tenant. Practically every business on the planet using Office 365 to its full extent has Teams and SharePoint sites which contain sensitive information, and the whole idea of a single application being able to change or delete data across the entire estate is very far away from the principle of least privilege. If an attacker or some malicious code was to impersonate this identity, the impact to the business could be huge. 

With one global organization, I spent around 8 weeks convincing the Digital Security (Info Sec) group and other stakeholders that the mitigations our team and other internal teams had in place were enough to offset this risk. No less than 7 distinct sign-offs were required.

So, something had to change. I knew others would be feeling this pain and I'd been speaking to folks at Microsoft like Jeremy Thake about it for some time. I finally got round to raising a request on UserVoice, but also mentioned it on Twitter. I was pleased to see this response from Bill Bliss, Microsoft's Platform Architect for Teams:


So, that gave me hope. Fast forward a few months, and the good news now of course is that a solution landed some time ago - making the lives of anyone building on Office 365 that bit easier.

The new Group.Create permission in the Microsoft Graph

Solutions which create Microsoft Teams or Group-connected SharePoint sites can now use a new permission scope in the Graph authorization model named Group.Create. This arrived without much noise or fanfare, and I suspect many people to whom this is relevant may still not be aware. 

As you might expect, this allows the user or application to create new Teams, Groups and sites, without having access to all existing instances. As the documentation says:

Allows the calling app to create groups without a signed-in user. Does not allow read, update, or deletion of any groups.

This is hugely important for any organization using Office 365 that cares about its security posture and wants to make use of provisioning solutions. Here's what it looks like in AAD:


Hoorah! So now we have something that fits both requirements:
  • Allows new Teams and sites to be created
  • Minimises the surface area so that other actions cannot be taken
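As an illustration, a provisioning app granted only Group.Create makes the same standard Microsoft Graph "create group" request it always did - here's a minimal Python sketch (the display name and mail nickname are made-up values, and token acquisition is omitted):

```python
import json

# Microsoft Graph "create group" endpoint (v1.0)
GRAPH_GROUPS_URL = "https://graph.microsoft.com/v1.0/groups"

def build_group_payload(display_name, mail_nickname, description=""):
    """Build the minimal request body for a Microsoft 365 (unified) group."""
    return {
        "displayName": display_name,
        "mailNickname": mail_nickname,
        "description": description,
        "mailEnabled": True,
        "securityEnabled": False,
        "groupTypes": ["Unified"],  # "Unified" = Office 365 Group
    }

payload = build_group_payload("Project Falcon", "project-falcon")
print(json.dumps(payload, indent=2))

# The actual call would then be a POST with a bearer token from an app
# registration that has been granted Group.Create, e.g.:
# requests.post(GRAPH_GROUPS_URL,
#               headers={"Authorization": f"Bearer {token}"}, json=payload)
```

The point is that nothing about the request changes - only the scope behind the token does, so the app can create this group but can't read or modify any other.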

Call to action - is this relevant to your environment?

If you're using any kind of tailored approach to creating Teams and SharePoint sites, whether it's a solution developed in-house or by a partner, a 3rd party product or one of the starter solutions found on GitHub, you should ensure the permission used is Group.Create. If you find there's a change to be made, it should of course be carefully managed with testing in your non-production tenants before going ahead with the switch in production - but the process shouldn't be too complex.  

I previously mentioned this new option on Twitter a while ago, and quite a few people responded in some way:


However, I wanted to also post about this here too. The pace of change in Office 365 and underlying platform elements such as the Microsoft Graph is relentless, and it's easy to miss things. And whilst there will always be imperfections, it's good to know that things like this are being smoothed out.

Hopefully this is useful to some folks!


Wednesday, 22 April 2020

Infuse AI into your Office 365 apps – three approaches and pricing: Part 3

This is the 3rd article in my series on AI in Office 365 apps. Obviously the world has turned upside down since the last post a few weeks ago - firstly, I hope you and the people you care about are well and coping. We're learning more about the new world each day, and it's clear some things might not be quite the same again. I think it's fair to say we'll see greater use of the technologies discussed in this series - the Power Platform, Azure and AI. Many organizations now need to transform very rapidly - and that means new forms of I.T., new processes, and in many cases more automation. You've probably seen Power Apps such as the Crisis Communications app and Emergency Response app emerge from Microsoft, and the fact that these apps were produced so quickly highlights how effective these technologies can be. Being able to plug AI into such apps provides a strong foundation for a wide range of scenarios, as you've hopefully seen in the "Incident Reporting" sample app I built for this series.

As a reminder, the articles in this group are:

  1. AI in Office 365 apps - a scenario, some AI examples and a sample Power App 
  2. AI in Office 365 apps - choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate 
  3. AI in Office 365 apps - pricing and conclusions (this article)
As you might expect, pricing is another area where things get interesting as we compare the three approaches. We looked at capabilities last time (and some factors which can guide you to which form of AI is right for a given situation) - but pricing surely has to be part of the overall decision making process, so let's dive into that now.

In writing this article I compiled costs for the three approaches based on the following parameters for the Incident Reporting app used in the previous articles:
  • 200 users
  • Varying transactions (incidents being reported with an image to tag/describe) - 1000, 100,000, 200,000 and 1 million
  • Calculate app running cost per month

I ended up with a spreadsheet that looks like this - but the costs on their own don't mean much without commentary so I've blurred out the numbers for now, but we'll come back to this later. I'm just providing this here so you can understand the process I went through:

 
So let's dive into the different approaches, starting with Power Apps AI Builder.

Power Apps AI Builder - pricing

Use of AI Builder in Power Apps is enabled by purchasing "AI Builder service credits". Prices can be obtained on the Power Apps pricing page, and the idea is that these are purchased in blocks of 1 million service credits. Prices at the time of writing are $500/£377/€421 per block per month. So the immediate question of course is "how many service credits will I consume?". In the case of the scenario discussed in this series, the specific question would be "how many service credits will my use of Object Detection consume?", since that's the particular type of AI Builder functionality which is appropriate.
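Since credits are sold in blocks of 1 million, the monthly cost is a rounding-up exercise once you have an estimate of credit consumption - a quick sketch of that arithmetic (Python; the credit estimate itself has to come from Microsoft's calculator, covered below):

```python
import math

BLOCK_CREDITS = 1_000_000
BLOCK_PRICE_USD = 500  # per block per month, at the time of writing

def monthly_ai_builder_cost(estimated_credits):
    """Credits are purchased in whole blocks, so round up (minimum one block)."""
    blocks = max(1, math.ceil(estimated_credits / BLOCK_CREDITS))
    return blocks * BLOCK_PRICE_USD

print(monthly_ai_builder_cost(800_000))    # → 500
print(monthly_ai_builder_cost(2_300_000))  # → 1500
```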

Microsoft provide the Power Apps AI Builder pricing calculator to help with this kind of cost forecasting. There, you can enter the number of transactions you'd like to price for - so I entered my range of numbers (1000, 100,000, 200,000 and 1 million) and assumed my model would be trained once per month:


This gives me an estimate of how many AI Builder add-on units are required for that many transactions, and the cost in USD:


So, we get a monthly cost of $500 for 1000 transactions - and we can use the calculator to see costs for varying usage levels. The caveat is that these are only estimates rather than a guaranteed price, but they should be reasonably accurate.

However, the big thing to also consider for this approach is that we also need Power Apps per user/per app licensing to cover our 200 users. As the Power Platform licensing FAQ states, "AI Builder is licensed as an add-on to standalone Power Apps and Power Automate licensing as well as Dynamics 365 licenses." What that means of course, is that if you're using Power Apps through Office 365 licensing, you cannot use AI Builder - Power Apps licensing itself is required too. As we'll see, this makes a HUGE difference to the pricing - something that won't be a surprise to anyone who has previously looked at Power Platform licensing.

So, pricing for this approach ends up like this - these numbers are PER MONTH for our varying amounts of usage:



So, things feel expensive when using Power Apps AI Builder - but the costs are mainly coming from the core Power Apps licensing requirements for our 200 users, rather than the AI Builder costs.  

Azure Cognitive Services - pricing

So, does sidestepping Power Apps AI Builder and plugging directly into AI services help in any way? Well, as you might expect, the Computer Vision API in Azure (which again, is the particular AI service relevant to the needs of our scenario) is priced by number of API transactions i.e. it is a consumption-based Azure service. There are tiers of 0-1 million, 1-5M, 5M-10M, 10M-100M and so on, with pricing getting cheaper the more you use. Pricing will vary per region, and there's a different pricing rate for each capability.

For our scenario, we would use the following capabilities of the Computer Vision API:
  • Tag
  • Detect
  • Describe
Pricing looks like this:

Item       Pricing (current, UK, lowest tiers)                1,000     100k       200k       1M
Tag        0-1M transactions: £0.746 per 1,000                £0.75     £74.60     £149.20    £597.00
           1M-5M transactions: £0.597 per 1,000
Detect     0-1M transactions: £1.118 per 1,000                £1.12     £111.80    £223.60    £746.00
           1M-5M transactions: £0.746 per 1,000
Describe   £1.864 per 1,000 transactions                      £1.86     £186.40    £372.80    £1,864.00
TOTAL                                                         £3.73     £372.80    £745.60    £3,207.00

So if our app had around 1000 incidents reported per month, we'd be looking at a total of £3.73 per month. 2000 incidents would be £7.46, and so on.
As you can see, this is pretty cheap unless you're running at scale. However, the gotcha is that we would *still* need Power Platform per app/per user licenses to tap into this - because we need to use the HTTP connector to call into Azure, which puts us in "pay for" Power Platform territory. Assuming we use the per-user licensing, current UK pricing is £7.50 per user per month. 
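To make that arithmetic concrete, here's a small sketch using the lowest-tier rates from the table above (each incident image is assumed to go through Tag, Detect and Describe once):

```python
# Lowest-tier UK rates, in GBP per 1,000 transactions
RATES_GBP_PER_1000 = {"Tag": 0.746, "Detect": 1.118, "Describe": 1.864}

def monthly_vision_cost(transactions):
    """Total Azure Computer Vision cost for a month's incident images."""
    return sum(rate * transactions / 1000 for rate in RATES_GBP_PER_1000.values())

print(round(monthly_vision_cost(1000), 2))  # → 3.73
print(round(monthly_vision_cost(2000), 2))  # → 7.46
```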

The impact of Power Apps licensing for HTTP calls

The issue here for our incident reporting app and any similar scenario, is that the Power App can be used by many users. Since our app needs to call into Azure via the HTTP connector, *each* user needs a per app/per user license. Even just 100 users would mean £750 per month, and that's before the Azure costs or costs for any other aspect. So, just when you think you have an extremely cost-effective way of building AI into your Office 365 application, if you're using a Power App as the front-end, you'll find that the Power Platform licensing can get in the way if you envision large numbers of users.
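In other words, licensing scales with users while the AI cost scales with transactions - a quick sketch, with the current UK per-user price hard-coded:

```python
PER_USER_GBP = 7.50  # Power Apps per-user plan, current UK pricing

def monthly_total(users, azure_ai_cost_gbp):
    """Total monthly cost: a license per app user, plus the
    transaction-based Azure AI cost (which is tiny by comparison)."""
    return users * PER_USER_GBP + azure_ai_cost_gbp

print(round(monthly_total(100, 3.73), 2))    # → 753.73
print(round(monthly_total(200, 372.80), 2))  # → 1872.8
```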

So, pricing for this approach ends up like this - again, PER MONTH and for our different usage levels:


Again, it's the need for Power Apps licensing that makes this expensive, rather than the costs for Azure AI services.

Power Automate (Flow) for AI - pricing

The final option we looked at was using Power Automate to plug into AI. As we detailed last time, this is essentially a wrapper around Azure's Vision API anyway - making the services easily available to a Flow you might create. Is pricing any better here? Well, yes - depending on how your Flow is called. If your Flow is triggered by an item being added to SharePoint, rather than being directly triggered by the user - which is the case in my Incident Reporting app - then only a single user license is required for Flow. Clearly this is very different to requiring each user of the Power App to have a license - the difference is substantial if you have a large number of users, and this is completely valid according to Microsoft's licensing guidelines.

So, pricing for this final approach ends up like this - again, PER MONTH and for our different usage levels:



As you can see, the costs are dramatically different! Removing the need for Power Apps per user/per app licensing for our fictional 200 users is by far the biggest factor, but it also helps that our AI costs are massively cheaper with Azure Cognitive Services compared to Power Apps AI Builder. It seems the conclusion here is that if your solution can be built in certain ways, then you can save substantially on license costs. Some of these aspects aren't specific to the use of AI in our scenario, but are common considerations:
  • Ensuring AI is used via the Azure Computer Vision actions in Power Automate as an alternative approach to Power Apps AI Builder 
  • Ensure your Power Automate Flow is triggered "indirectly" 
    • In other words, the trigger used is either "When an item is created" or "When an item is created or modified" provided by the SharePoint connector. This way the metadata is added to the file in SharePoint after it lands in the document library, and the process isn't triggered by the user directly (e.g. from a button in the Power App)
Of course, these choices only stack up if you or your team are happy to build in this way. If, for example, you don't have skills in Power Automate, this specific approach may not be an option for you.

Summary

There are numerous approaches to AI in Office 365, and they come with a big variance in operational costs. The decisions your implementation team or partner make during the design of the solution will have a significant impact on the TCO of your application! Understanding the techniques to use which can reduce costs but not contravene any Microsoft licensing guidelines is the key to success here - so having the right expertise is vital.

Two principles which can help overall are:
  • Store data in Office 365 where possible - avoiding data sources which require a premium connector (e.g. Azure, SQL, CDS, other SaaS applications) or custom connector means you stay within the bounds of "Power Platform use rights for Office 365" described in the licensing guidelines
  • Use Office 365 triggers for Power Automate - avoiding implementations where the Flow is triggered directly by the user, but instead uses an Office 365 trigger (e.g. SharePoint's "When an item is created", or OneDrive's "When a file is created") will again keep you within the bounds of Office 365 licensing
None of this will be a surprise to people who know Power Platform licensing well, but it's interesting to see a quantified comparison of costs for a detailed scenario. Of course, in many cases storing data in Office 365 will not be the right solution design - perhaps you need true database features such as relationships (one-to-many or many-to-many), cascading integrity, or business rules/triggers related to your data. Maybe you need better support for role-based access. In cases like these, choosing CDS or SQL is likely to provide a better platform and stakeholders need to accept that costs come along with this - but that should be reflected in the value provided by the application.

In any case, these decisions should be made mindfully. Hopefully the information in this article series has been useful!

Tuesday, 3 March 2020

AI in Office 365 apps – choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate: Part 2

In the last article we talked about some examples of using AI in Office 365, and looked in detail at the idea of building an incident reporting app which combines common Office 365 building blocks with AI. Whilst Power Apps and SharePoint underpin our solution, we use AI to triage the incident by understanding what is happening in the image. Is this a serious emergency? Are there casualties or emergency services involved? Our image processing AI can interpret the picture, and add tags and a description to where the file is stored in Office 365 - this can drive some automated actions to be taken, such as alerting particular teams or having a human review the incident. We also looked at the results of feeding in a series of images to the AI to analyze the results of different scenarios.

Overall, this article is part of a series:

  1. AI in Office 365 apps - a scenario, some AI examples and a sample Power App 
  2. AI in Office 365 apps - choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate (this article)
  3. AI in Office 365 apps - pricing and conclusions
In this article, we'll focus on the how - in particular, comparing three approaches we could use in Office 365 to build our incident reporting app and make use of AI. Which is easier? What skills are required? How long might we expect it to take for each approach?

Choosing between Power Apps AI Builder, Azure Cognitive Services and Power Automate

As mentioned in the last article, there are a number of ways we could build this app:
  • Use of Power Apps AI Builder
  • A Power App which talks directly to Azure Cognitive Services (via a Custom Connector)
  • A Power App which uses a Power Automate Flow to consume AI services
For each approach, we’re looking at how the app would be built, pricing, and any constraints and considerations which come with this option.

Option 1 - Power Apps AI Builder

AI Builder is still in preview at the time of writing (February 2020 - release will be April 2020) with four models offered:

As you might expect, the idea is to make AI relatively easy to use and to focus on common scenarios. No coding is required, and anyone with Power Apps experience can now take advantage of what would previously have been highly-complex application capabilities.
How the app would be built
In our scenario, it’s the “Object Detection” model that is relevant. This can detect specific objects in images that are supplied to it, as well as count the number of times the recognized object is found in the image. The first step is to define a model and supply some sample images:



You'll need an entity pre-defined in CDS to use AI Builder - I'm skipping through a few screens here, but ultimately you select a CDS entity representing the object you are detecting:


In my case, I was building an app related to transport and my entity is "Bus". The next step is to start to train the AI model by providing some images containing the entity:



We then tag the object in each image with the entity:



Once all this is done, you can use the Object Detector control in your Power App and configure it to use this model:



Since the whole topic of how to build apps with AI Builder is interesting, I'll most likely go through this process in more detail in a future article - but hopefully you get a feel for what the process looks like.

In the case of our scenario, we said that we wanted the images to be tagged in SharePoint - and here's where we run into a consideration with AI Builder:

Power Apps AI Builder - great for SOME forms of image detection

The Object Detector capability allows us to detect whether a certain object is in an image or not, and how many times. However, our scenario demanded the capability to recognize *what* is happening in an image, not simply whether a predefined object is present! That's all AI Builder provides - a percentage certainty that your single object is present. This is much less flexible than other forms of AI image processing, and we'd need to somehow supplement it to achieve the goals of our application. After all, we can't supply an AI model with every known object in the universe...

Option 2 - Azure Cognitive Services

Another way of bringing AI into an app is to plug directly into Azure Cognitive Services. As you might expect, this is a developer-centric approach which is more low-level - we're not in the Power Platform or another low-code framework here. The big advantage is that there's a wider array of capabilities to use. Compared to the other approaches discussed here, we're not restricted to whatever Microsoft has integrated into the Power Platform. The high-level areas of Cognitive Services currently extend to:
  • Decision - detect anomalies, do content moderation etc.
  • Language - services such as LUIS, text analytics (e.g. sentiment analysis, extract key phrases and entities), translation between 60+ languages
  • Speech - convert between text and speech (both directions), real-time speech translation from audio, speaker recognition etc.
  • Vision - Computer Vision (e.g. tag and describe images, recognize objects, celebrities, landmarks, brands, perform OCR, generate thumbnails etc.), form data extraction, ink/handwriting processing, video indexing, face recognition and more
    • NOTE - this is the service that's relevant to the scenario in this article, in particular the Computer Vision API's ability to tag and describe images) 
  • Web search - Bing autosuggest, Bing entity/image/news/visual/video search and more 
In terms of what this looks like for our scenario, let's take the following image:


How the app would be built
If I can write some code to consume the Computer Vision API and send the above image to it, I get a response that looks like this (notice the tags such as "person", "indoor", "ceiling", "event", "crowd" and so on):

The code I used to do this is:

Code sample:
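For reference, a minimal Python sketch of such a call might look like the following - the resource name and key are placeholders, and the request shape follows the Computer Vision v3.0 "analyze" REST endpoint with the Tags and Description features:

```python
import json
from urllib import request

# Placeholders - the real values come from your Azure Cognitive Services resource
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
ANALYZE_URL = ENDPOINT + "/vision/v3.0/analyze?visualFeatures=Tags,Description"

def analyze_image(image_bytes, subscription_key):
    """POST the raw image bytes to the Computer Vision analyze endpoint."""
    req = request.Request(
        ANALYZE_URL,
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def summarise(analysis):
    """Pull the tag names and the best caption out of the API response."""
    tags = [t["name"] for t in analysis.get("tags", [])]
    captions = analysis.get("description", {}).get("captions", [])
    caption = captions[0]["text"] if captions else ""
    return tags, caption

# With a response shaped like the one shown above:
sample = {"tags": [{"name": "person"}, {"name": "indoor"}],
          "description": {"captions": [{"text": "a crowd of people"}]}}
print(summarise(sample))  # → (['person', 'indoor'], 'a crowd of people')
```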

The relationship between Azure Cognitive Services and other options

Whilst we're talking about Cognitive Services, it's worth recognising of course that all of the options listed in this article are built on these services. Power Apps AI Builder, the Power Automate actions discussed here, and many other facilities in Microsoft cloud technologies all use Azure Cognitive Services underneath. When you're thinking about technology options, it's worth considering that the more direct your approach to Azure is, the cheaper it is likely to be.

Option 3 - Using Power Automate (Flow) to consume AI

The final option presented here is to create a Flow which will do the work of tagging and describing the incident report images. This is by far the easiest way I think, and is perhaps an overlooked approach for building AI into your apps - I recommend it highly, Power Automate is your friend. Note, however, that these are premium Flow actions - we'll cover licensing and pricing more in the next post, but for now understand that bringing AI capabilities this way does incur additional cost (as it does with the other two approaches).

In the scenario of our Power App for incident reporting, the simplest implementation is probably this:
  1. Power App uploads image to SharePoint document library
  2. Flow runs using the "SharePoint - when a file is created in a folder" trigger
  3. The Flow calls Azure Cognitive Services (using the native Flow actions for this)
  4. Once the tags and image descriptions have been obtained, they are written back to the file in SharePoint as metadata
The beauty is really in step 3. Since Microsoft provide hooks into many AI services as Flow actions, infusing AI into your app this way is extremely simple - no code is required, and it's just a question of lining up some actions in the Flow. Some skills and experience in Power Automate are required, but the bar is certainly much lower than with the other options. 

Here's what my end-to-end Flow looks like:

In more specific terms, the trigger is on a new file being created in the SharePoint site and list where my Power App pushes the image to:

For each file that has been added, I get the file and then call:
  • Describe Image Content 
  • Tag Image
The Flow section below shows how we get the details of the file added to SharePoint to be able to pass the contents to the Azure actions for processing:
On the Power App side, of course, I need something to use the camera on the device to take the photo and upload the file to SharePoint - but that's not too complex, and it's just a question of adding the Power Apps camera control to my app to facilitate part of that. A major capability of Power Apps is being able to plug into device facilities such as the camera, GPS and display functions, so it should be no surprise that this is simple. If you remember, I showed my sample app briefly in the last post:



However, I do need to do some work to get my image into SharePoint once it has been captured - in my case I use the integration between Power Apps and Power Automate to do this. I create a Flow which uses the Power Apps trigger and ultimately uses the SharePoint "Create File" action. The important part though, is the Compose action in the middle which uses the "DataUriToBinary" function to translate the image data from how Power Apps captures it to how SharePoint needs it:
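To illustrate what that conversion does, here's the DataUriToBinary step sketched in Python - a data URI is just a mime-type header plus base64-encoded content:

```python
import base64

def data_uri_to_binary(data_uri):
    """The equivalent of Power Apps' DataUriToBinary: strip the
    'data:<mime>;base64,' prefix and decode what's left."""
    header, _, encoded = data_uri.partition(",")
    if ";base64" not in header:
        raise ValueError("not a base64 data URI")
    return base64.b64decode(encoded)

# A tiny illustrative data URI (the bytes "JPEG" rather than a real photo):
uri = "data:image/jpeg;base64," + base64.b64encode(b"JPEG").decode()
print(data_uri_to_binary(uri))  # → b'JPEG'
```

This is why the Compose action is needed: the camera control hands Power Apps a data URI, while SharePoint's "Create File" action expects the raw binary content.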

I then link the Flow to the Power App:

I can then use this in my formulas, as:

UpdateContext( { PictureFilename: "Incident_" & Text( Now(), "[$-en-US]yyyy-mm-dd-hh-mm-ss" ) & ".jpg" } );

IncidentPhotoProcessing.Run(PictureFilename, First(Photos).Url);

..and there we go, a fairly quick and easy way to get the photo for my incident into SharePoint so that the AI can do its processing.

Summary


We've looked at three possible approaches in this post to building an Office 365 application which uses AI - Power Apps AI Builder, use of Azure Cognitive Services from code and use of actions in Power Automate which relate to AI. The findings can be summarized as:
  • Different skills are needed for each approach:-
    • Power Automate is the simplest to use because it provides actions which plug into AI easily - just build a Flow which can receive the image, and then use the Computer Vision actions shown above
    • Direct use of Azure Cognitive Services APIs requires coding skills (either use provided SDKs for .NET and JavaScript etc. or make your own REST requests to the Azure endpoints), but is a powerful approach since the full set of Microsoft AI capabilities are exposed
  • Capabilities are different across the options:-
    • Power Apps AI Builder has some constraints regarding our image processing scenario. The "object detection" model is great for identifying if a known object is present in the image, but can't help with identifying any arbitrary objects or concepts in the image
    •  Azure Cognitive Services underpins all of the AI capabilities in Office 365, and offers many services not exposed in Power Apps or Power Automate. As a result, it offers the most flexibility and power, at the cost of more effort and different skills to implement
  • Requirements and context are important:-
    • In our scenario we're talking about a Power App which captures incident data and stores it in SharePoint - in other words, we've already specified that the front-end should be Power Apps. In this context, integrating Azure Cognitive Services directly would be a bit more challenging than the other two approaches (but is relatively simple from a coded application). In Power Apps, we'd need a custom connector to bring the code in (probably in the form of an Azure Function), and that's certainly more complex than staying purely in the Power Platform 
    • In the design process, other requirements could lead to one technical approach being much more appropriate than another. As another example, if the app needed a rich user interface that was difficult to build in Power Apps, the front-end may well be custom code. At this point, there's an argument for saying that using code for the content services back-end of the application also makes sense too
As usual, having a team or partner who understands this landscape well and has capability across all options can lead to the best result. In this case, all options can be on the table rather than being limited to one or two because of skills. The architecture decision can be merit-based, considering the use cases and scenarios for the app, the AI capabilities needed, the user experience, the range of devices used, speed to implement and cost.

And cost, of course, is the element that we haven't covered yet! Let's discuss that in the next post: