AI agents are the next generation of solutions we'll be building over the coming years - it's clear that many "app front-end plus data" business systems will evolve to become more agentic, in the sense that the application itself automates more of the processing, and interfaces become less about forms and more about instructing the AI in natural language. Autonomous agents are one of the most exciting aspects, because software and apps simply didn't have this possibility until now, and there isn't much conjecture that this is a key 'unlock' in how work gets automated and how AI starts to have a real impact on societies and economies. With advanced LLMs and protocols for bringing agents and systems together, we now have the tools to build agents that can reason, act, and deliver outcomes, not just respond to prompts. But with this power comes complexity, and I see many people approaching agents with expectations that don't align with today's capabilities.
This article is the first in a series that walks through five key techniques for building effective autonomous agents in Copilot Studio. Each article highlights a common pitfall, explains the underlying concept, and offers practical guidance to help you succeed. Whether you're building agents for internal automation, customer-facing scenarios, or domain-specific copilots, these lessons will help you avoid the traps and unlock the full potential of generative orchestration.
Something we'll focus on in this initial article is that agents aren't autonomous by default in Copilot Studio - the agent has to be built with specific settings enabled and certain keywords used in the agent's instructions. This post covers these fundamentals, because using all the right techniques won't get you anywhere if the agent isn't set up to behave autonomously - but we'll also start by explaining what we mean by "autonomy", so you understand where we're heading and what such an agent can do.
What makes an agent autonomous?
There are lots of definitions of this, but I boil it down to four elements - the slide below is one I used in a recent conference talk (at the 2025 European Power Platform Conference):
Enabling reasoning in Copilot Studio agents
The first image above highlighted that a key element of autonomy is being "able to reason and plan". To use reasoning in your agent, this needs to be enabled in your agent settings (within the 'Generative AI' section), and it's only possible if you're using generative orchestration. The settings are bundled together in the same area:
The settings are only half the story, though - reasoning also has to be invoked from the agent instructions, by using the word "reason" in the step where you want the agent to deliberate. Here's an example of such a step (step 5 from a longer instruction sequence):
5. Using the data from the previous step, reason to provide a recommendation of how a solution could be implemented using the available technologies. As part of this reasoning, establish any licensing uplifts which may be required for this client to use the recommended technologies. IMPORTANT - be extensive with the rationale for your decision, detailing how capabilities in the proposed technology meet specific requirements.
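To generalise the pattern, here's a hypothetical fragment of agent instructions (illustrative only - the steps, data sources, and wording are not from the actual scenario used later in this series) showing data-gathering steps feeding an explicit reasoning step:

1. When a new client requirements document is received, retrieve its contents from the configured knowledge source.
2. Extract the key business and technical requirements from the document.
3. Reason over the extracted requirements to recommend suitable technologies, explaining how each requirement is met by the recommendation.

The thing to notice is that the verb "reason" appears explicitly in the step where deliberation is expected - plain retrieval or extraction steps don't use it.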
Articles in this series
- Techniques for autonomous agents in Copilot Studio - intro (this article)
- Scenario video - Microsoft architect with proposal generation
- Technique 1 - Getting AI-suitable descriptions right (data, tools, agents themselves)
- Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate
- Technique 3 - Provide tools like Agent Flows for steps the agent can’t easily handle
- Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents
- Technique 5 - Understand cost, capability, and governance implications of agents you create
Next article
Scenario video - Microsoft architect with proposal generation