Tuesday, 29 July 2025

Demo video - Microsoft architect autonomous agent (Copilot Studio)

In the previous article in this series on autonomous agents, we talked about what makes an agent autonomous and some implementation fundamentals specific to Copilot Studio. As with anything in AI, seeing an example in context goes a long way towards understanding the possibilities, so this second post provides a video of a real autonomous agent we're starting to use at Advania. The agent effectively becomes a member of our team, using advanced reasoning models to work with complex concepts and accelerate our work. Before that, here's a reminder of what this series looks like:

Articles in this series

  1. Techniques for autonomous agents in Copilot Studio - intro
  2. Scenario video - Microsoft architect with proposal generation (this article)
  3. Technique 1 - Getting AI-suitable descriptions right (for data sources, tools, and agents themselves)
  4. Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate
  5. Technique 3 - Provide tools like Agent Flows for steps the agent can’t easily handle
  6. Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents
  7. Technique 5 - Understand cost, capability, and governance implications of agents you create

Use case for this agent

If you follow me on LinkedIn, you may have seen me post about this agent. We built it to automate some of our work at Advania - in particular, some of the complex Microsoft architecture and technology consultancy work we deliver to clients. The scenario is essentially an 'expert Microsoft architect' agent which understands:

➡️ The various technology estates of key Advania clients and what they have licensed - the agent sources this from an internal system we have
➡️ Microsoft 365 product SKUs and licensing, specifically E3/E5 suites and granular capabilities in sub-products like Defender for Endpoint, Defender for Identity etc. - the agent uses the excellent m365maps.com website for this
➡️ How to take a specific client requirement (e.g. a need to roll out a new endpoint protection technology, automate a legal process, or reach frontline workers with corporate comms), identify any "strong fit" Microsoft technologies, and map the granular requirements specified by the client to product capabilities which support the proposed approach - sketched in rough code terms below
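
To make that last point concrete, here's a loose sketch of the kind of requirement-to-capability mapping the agent works through. The TypeScript below is purely illustrative - the type and function names are hypothetical, not the agent's actual internals:

  // Purely illustrative - hypothetical types, not the agent's real internals.
  type CapabilityMatch = {
    requirementId: string;  // the client requirement being addressed
    product: string;        // e.g. "Defender for Endpoint Plan 2"
    capability: string;     // the granular feature that satisfies the requirement
    licensedVia: string[];  // SKUs that include it, e.g. ["Microsoft 365 E5"]
    upliftNeeded?: boolean; // true if the client's current SKUs don't cover it
  };

  // Given what the client already licenses, flag any licensing uplift required.
  function flagUplifts(matches: CapabilityMatch[], clientSkus: string[]): CapabilityMatch[] {
    return matches.map((m) => ({
      ...m,
      upliftNeeded: !m.licensedVia.some((sku) => clientSkus.includes(sku)),
    }));
  }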

The video shows:

✅ A quick overview of the agent definition (built in Copilot Studio)
✅ Data sources the agent has access to
✅ The agent reasoning through the supplied use case for one fictional client (Ecosphere Solutions)
✅ Proposed approach with clear rationale - including licensing considerations, implementation details, and how specific requirements are met by the proposed technology
✅ Proposal drafted on company-branded template

Demo video



Reflection on AI agents like this

The power of agents is that they can act as virtual team members, automating some of the workload and freeing human effort for higher-order challenges. The interesting thing about this agent, in my view, is the ability to perform advanced reasoning - thinking through the client's need, the technologies they have access to, exactly what's provided in those, and deriving a good fit if there is one.

Of course, we don't see the AI agent as replacing the Advania architects and consultants much-loved by our clients - this is an accelerant for our teams, not a replacement. But we do see agents like this helping us deliver more value to clients - bolstering our expertise and helping us respond faster with the accuracy and depth we're known for. It also helps us level up less experienced consultants or new members of a team. In reality, every business has complex processes and expertise that today's AI agents can unlock - this is an example of what makes sense for us.

Next article

Technique 1 - Getting AI-suitable descriptions right (data, tools, agents themselves) - coming soon

Techniques for autonomous agents in Copilot Studio - blog series part 1

AI agents are the next generation of solutions we'll build over the next few years - it's clear that many "app front-end plus data" business systems will evolve to be more agentic, in the sense that the application itself automates more of the processing, and interfaces become less about forms and more about instructing the AI in natural language. Autonomous agents are one of the most exciting aspects because software and apps simply didn't have this possibility until now, and there isn't much debate that this is a key 'unlock' in how work gets automated and made more efficient, and in how AI starts to have an impact on societies and economies. With advanced LLMs and protocols for bringing agents and systems together, we now have the tools to build agents that can reason, act, and deliver outcomes - not just respond to prompts. But with this power comes complexity, and I see many approaching agents with expectations that don't align with today's capabilities.

This blog article is the first in a series which walks through five key techniques for building effective autonomous agents in Copilot Studio. Each article highlights a common pitfall, explains the underlying concept, and offers practical guidance to help you succeed. Whether you're building agents for internal automation, customer-facing scenarios, or domain-specific copilots, these lessons will help you avoid the traps and unlock the full potential of generative orchestration.

Scenario - an autonomous reasoning agent for Microsoft 365 architecture recommendations
Throughout this series I'll reference an agent I built which acts like one of our most experienced Microsoft architecture consultants at Advania - able to understand the full suite of security and productivity capabilities in Microsoft 365 E3 and E5, consider licensing needs and SKU packaging, and make technology recommendations for a given use case based on a deep understanding of the factors involved. The next article in the series shows a demo video so you can see the agent "thinking through" the scenario, automating the recommendation process through deep reasoning, and drafting a project proposal on a company-templated document - a process which can take hours or days for an experienced architect. This is about accelerating that role, improving accuracy, and levelling up less experienced architects so their thought process and outputs match those of the most experienced.

Something we'll focus on in this initial article is that agents aren't autonomous by default in Copilot Studio - the agent has to be built with specific settings enabled and certain keywords used in agent instructions. This post covers these fundamentals, because using all the right techniques won't get you anywhere if the agent isn't set up to behave autonomously - but we'll also start by explaining what we mean by "autonomy" so you understand where we're heading and what such an agent can do.

What makes an agent autonomous?

There are lots of definitions of this, but I boil it down to four elements - I used this slide in a conference talk recently (at the 2025 European Power Platform Conference):


Importantly, in Copilot Studio some of this is made possible by "generative orchestration" - this isn't enabled by default, so if you want dynamic behaviour you need to toggle it on in your agent settings:

Unlike classic orchestration (where you define every topic and response), generative orchestration allows the agent to decide how to use its knowledge, tools, and topics to fulfil a request. It's powerful - but it also means you need to design your agent carefully to guide that autonomy.
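
If it helps to think about the difference in code terms, here's a loose sketch - purely illustrative pseudocode with hypothetical function names, not how Copilot Studio is actually implemented:

  // Purely illustrative - not Copilot Studio's actual implementation.
  declare function runTopic(name: string): string;
  declare function planWithLlm(utterance: string, assets: object): Promise<string[]>;
  declare function executePlan(plan: string[]): Promise<string>;

  // Classic orchestration: every path is predefined by the maker.
  function classicOrchestrate(utterance: string): string {
    if (utterance.includes("reset password")) return runTopic("PasswordReset");
    if (utterance.includes("order status")) return runTopic("OrderStatus");
    return runTopic("Fallback"); // anything unanticipated dead-ends here
  }

  // Generative orchestration: the LLM plans which knowledge, tools, and
  // topics to combine at runtime to fulfil the request.
  async function generativeOrchestrate(utterance: string): Promise<string> {
    const plan = await planWithLlm(utterance, {
      knowledge: ["client estate system", "m365maps.com"],
      tools: ["lookupLicensing", "draftProposal"],
      topics: ["PasswordReset", "OrderStatus"],
    });
    return executePlan(plan); // steps chosen dynamically, not hardcoded
  }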

So in Copilot Studio you essentially have two modes, where "classic" is the old mode and "generative" is the new possibility:

Most other AI agent and virtual assistant platforms which have been around for a while (e.g. ServiceNow, Salesforce, Google) have evolved from this classic "define each and every step of what a user might ask and how the bot/agent should respond" approach to something like generative orchestration, where the LLM essentially decides how to behave and how to have the conversation with the user.
 
For Copilot Studio, Microsoft has a useful table on the Orchestrate agent behavior with generative AI page which goes into more detail on specific behaviour differences:

 

Enabling reasoning in Copilot Studio agents

In the first image above, we saw that a key element of autonomy is the ability to reason and plan. To use reasoning in your agent, it also needs to be enabled in your agent settings (within the 'Generative AI' section), and it's only possible if you're using generative orchestration. The settings are bundled together in the same area:




As highlighted in the small text in the blue box, the other critical thing is to use the "reason" keyword specifically in your agent instructions. This tells Copilot Studio to use a reasoning model (e.g. OpenAI o1 at the time of writing) rather than a standard LLM - describing the step in other words (e.g. "think carefully" or "consider") won't trigger it.

Here's an example of the reason keyword being used in agent instructions - in this case, from an agent I built, where this is one of the steps the agent completes:

    5. Using the data from the previous step, reason to provide a recommendation of how a solution could be implemented using the available technologies. As part of this reasoning, establish any licensing uplifts which may be required for this client to use the recommended technologies. IMPORTANT - be extensive with the rationale for your decision, detailing how capabilities in the proposed technology meet specific requirements.
I'll show this agent in action throughout this article series. As you can imagine, the instruction above tells the agent to use a reasoning model for this step in order to derive the recommendation I'm asking for - that's important, because we're asking for "thinking" rather than more standard LLM processing.

Agents need help
If it's not clear already, what we're really saying in this series is that agents need help - the work these days is preparing data, creating tools and sub-processes for agents to call into, and refining natural language descriptions and instructions until an agent behaves the way you want. This is a new form of coding in some ways, but it doesn't all happen magically - understanding the critical techniques is key.
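
As a flavour of what "creating tools for agents" can look like, here's a hypothetical tool definition - the shape and names below are illustrative only, not a real Copilot Studio API. The point to notice is that the description is written for the AI, since it's what the orchestrator reasons over when deciding whether to call the tool:

  // Hypothetical shape, purely to illustrate - not a real Copilot Studio API.
  const lookupClientEstate = {
    name: "lookupClientEstate",
    // Written for the orchestrator, not a human reader: say what the tool
    // returns and when the agent should use it.
    description:
      "Returns the Microsoft 365 SKUs and security products a named client " +
      "currently licenses. Use this before recommending or pricing any technology.",
    inputs: { clientName: "The client's name as it appears in internal systems" },
  };

Technique 1 in this series digs into getting descriptions like this right.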

Articles in this series

  1. Techniques for autonomous agents in Copilot Studio - intro (this article)
  2. Scenario video - Microsoft architect with proposal generation
  3. Technique 1 - Getting AI-suitable descriptions right (data, tools, agents themselves)
  4. Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate
  5. Technique 3 - Provide tools like Agent Flows for steps the agent can’t easily handle
  6. Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents
  7. Technique 5 - Understand cost, capability, and governance implications of agents you create

Next article

Scenario video - Microsoft architect with proposal generation