Tuesday, 29 July 2025

Techniques for autonomous agents in Copilot Studio - blog series part 1

AI agents are the next generation of solutions we'll build over the next few years - it's clear that many "app front-end plus data" business systems will evolve to become more agentic, in the sense that the application itself automates more of the processing, and interfaces become less about forms and more about instructing the AI in natural language. Autonomous agents are one of the most exciting aspects because software and apps simply didn't have this possibility until now, and there's little debate that this is a key 'unlock' in how work gets automated and becomes more efficient, and in how AI starts to have a real impact on societies and economies. With advanced LLMs and protocols for bringing agents and systems together, we now have the tools to build agents that can reason, act, and deliver outcomes - not just respond to prompts. But with this power comes complexity, and I see many teams approaching agents with expectations that don't align with today's capabilities.

This blog article is the first in a series which walks through five key techniques for building effective autonomous agents in Copilot Studio. Each article highlights a common pitfall, explains the underlying concept, and offers practical guidance to help you succeed. Whether you're building agents for internal automation, customer-facing scenarios, or domain-specific copilots, these lessons will help you avoid the traps and unlock the full potential of generative orchestration.

Scenario - an autonomous reasoning agent for Microsoft 365 architecture recommendations
Throughout this series I'll reference an agent I built which acts like one of our most experienced Microsoft architecture consultants at Advania - able to understand the full suite of security and productivity capabilities in Microsoft 365 E3 and E5, consider licensing needs and SKU packaging, and make technology recommendations for a given use case based on a deep understanding of these factors. The next article in the series includes a demo video so you can see the agent "thinking through" the scenario, automating the recommendation process through deep reasoning, and drafting a project proposal on a company-templated document - a process which can take an experienced architect hours or days. This is about accelerating that role, improving accuracy, and levelling up less experienced architects so their thought process and outputs match those of the most experienced.

Something we'll focus on in this initial article is that agents aren't autonomous by default in Copilot Studio - the agent has to be built with specific settings enabled and certain keywords used in the agent instructions. This post covers these fundamentals, because using all the right techniques won't get you anywhere if the agent isn't set up to behave autonomously - but we'll also start by explaining what we mean by "autonomy", so you understand where we're heading and what such an agent can do.

What makes an agent autonomous?

There are lots of definitions of this, but I boil it down to four elements - I used this slide in a conference talk recently (at the 2025 European Power Platform Conference):


Importantly, in Copilot Studio some of this is made possible by "generative orchestration" - this isn't enabled by default, so if you want dynamic behaviour you need to toggle it on in your agent settings:

Unlike classic orchestration (where you define every topic and response), generative orchestration allows the agent to decide how to use its knowledge, tools, and topics to fulfil a request. It's powerful - but it also means you need to design your agent carefully to guide that autonomy.

So in Copilot Studio you essentially have two modes, where "classic" is the old mode and generative is the new possibility:

If you look at most other AI agent and virtual assistant platforms which have been around for a while (e.g. ServiceNow, Salesforce, Google), all have evolved from the classic "define each and every step of what a user might ask and how the bot/agent should respond" approach to something like generative orchestration, where the LLM essentially decides how to behave and how to conduct the conversation with the user.
 
For Copilot Studio, Microsoft has a useful table on the Orchestrate agent behavior with generative AI page which goes into more detail on specific behaviour differences:

 

Enabling reasoning in Copilot Studio agents

In the first image above, we saw that a key element of autonomy is being "able to reason and plan". To use reasoning in your agent, this also needs to be enabled in your agent settings (within the 'Generative AI' section), and it's only possible if you're using generative orchestration. The settings are bundled together in the same area:




As highlighted in the small text in the blue box, the other critical thing is to use the "reason" keyword specifically in your agent instructions. This tells Copilot Studio to use a reasoning model (OpenAI o1 at the time of writing) rather than a standard LLM, and it won't happen if you describe the behaviour in other words.

Here's an example of the reason keyword being used in agent instructions - in this case, an agent I built where this is one of the agent steps to complete:

  5. Using the data from the previous step, reason to provide a recommendation of how a solution could be implemented using the available technologies. As part of this reasoning, establish any licensing uplifts which may be required for this client to use the recommended technologies. IMPORTANT - be extensive with the rationale for your decision, detailing how capabilities in the proposed technology meet specific requirements.
I'll show this agent in action throughout this article series. As you can imagine, the instruction above tells the agent to use a reasoning model for this step in order to derive the recommendation I'm asking for - that's important, because we're asking for "thinking" rather than more standard LLM processing.
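
To make the trigger behaviour concrete, here's a hypothetical pair of instruction steps - the wording is my own illustration, not taken from Microsoft's documentation. The first uses synonyms for reasoning and would be handled by the standard LLM; the second uses the "reason" keyword and would invoke the reasoning model:

  A. Analyse the licensing data and think carefully about which SKUs best fit the client's requirements. (no "reason" keyword - standard LLM processing)
  B. Using the licensing data, reason to determine which SKUs best fit the client's requirements, detailing the rationale for each recommendation. (the "reason" keyword invokes the reasoning model)

The two steps ask for essentially the same outcome, which is exactly the trap - only the explicit keyword changes which model does the work.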

Agents need help
If it's not clear already, what we're really saying in this series is that agents need help - the work these days is preparing data, creating tools and sub-processes for agents to call into, and refining natural language descriptions and instructions until an agent behaves in the way you want. In some ways this is a new form of coding, but it doesn't all happen magically - understanding the critical techniques is key.

Articles in this series

  1. Techniques for autonomous agents in Copilot Studio - intro (this article)
  2. Scenario video - Microsoft architect with proposal generation
  3. Technique 1 - Getting AI-suitable descriptions right (data, tools, agents themselves)
  4. Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate
  5. Technique 3 - Provide tools like Agent Flows for steps the agent can’t easily handle
  6. Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents
  7. Technique 5 - Understand cost, capability, and governance implications of agents you create

Next article

Scenario video - Microsoft architect with proposal generation
