As I said in the first article in this series, AI agents are the next generation of solution we'll be building over the next few years - this is how at least some work gets automated: business systems that take away some of the human processing so we can focus on more impactful work, powered by highly-capable LLMs with the ability to consume data, reason and plan, and use tools like web browsers. In this series we focus on agent-building techniques, and in particular what it takes to build autonomous agents successfully - i.e. the ability for an agent to dynamically reason and plan. Guidance here orients around Microsoft Copilot Studio agents, though many of the techniques apply across nearly all AI agent platforms.
In this article, we focus on descriptions - in the context of AI agents, this means the descriptions given to data, tools, sub-processes and other things the agent might use to reach its goal. It's an interesting area because descriptions have been pretty innocuous in the apps, solutions and automations we've built over the last few decades - much like comments in code, only humans read them. Things are very different when AI is reading them and deciding how to proceed based on the words provided.
But first, here's a recap of the full series:
Articles in this series
- Techniques for autonomous agents in Copilot Studio - intro
- Scenario video - Microsoft architect with proposal generation
- Technique 1 - Getting AI-suitable descriptions right - data, tools, agents themselves (this article)
- Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate
- Technique 3 - Provide tools like Agent Flows for steps the agent can’t easily handle
- Technique 4 - Leveraging Power Platform and Microsoft 365 capabilities in your agents
- Technique 5 - Understand cost, capability, and governance implications of agents you create
Descriptions - why they're suddenly vital
- Agents have some autonomy in terms of getting to an outcome (rather than being defined step-by-step, like a coded application)
- Agents can use data and knowledge sources
- Agents can call on tools and sub-processes
- Agents can call other agents (if they understand what they do)
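
To make this concrete, here's a minimal sketch of how descriptions actually reach the model - not Copilot Studio itself, which builds this payload for you declaratively, but the same pattern expressed directly against an OpenAI-style tool-calling API in Python. The tool name and parameters below are hypothetical; the point is that the description string is essentially the only signal the model has when deciding whether a tool or knowledge source is relevant:

```python
import json

# A vague description - the model has no idea when this tool is relevant.
vague_tool = {
    "type": "function",
    "function": {
        "name": "search_licensing_data",
        "description": "Searches the spreadsheet.",  # tells the model nothing
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

# A descriptive version - the model can now confidently match client
# licensing questions to this tool.
descriptive_tool = {
    "type": "function",
    "function": {
        "name": "search_licensing_data",
        "description": (
            "Looks up current Microsoft licensing for key clients: "
            "Microsoft 365 E3/E5 license counts, plus the technologies "
            "used for security & compliance, messaging, endpoint "
            "protection, and document management."
        ),
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

# This JSON is what the model actually sees - nothing more.
print(json.dumps(descriptive_tool, indent=2))
```

If an agent keeps ignoring a knowledge source or tool, a description like the first one is usually the reason - nothing in it matches the user's intent.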
Bad example (this is the default description generated by Copilot Studio):
“This knowledge source searches information contained in Microsoft license coverage by client.xlsx.”
Good example:
Instead, a far better description helps the agent understand what's actually in the data or knowledge source - for example:
“This knowledge source details the technologies, plans, and current Microsoft licensing in place for key Advania clients. It covers aspects like how many Microsoft 365 E3 and E5 licenses are held, and which technologies are used for Security & Compliance, messaging, endpoint protection, and document management.”
The agent can now understand exactly what's in this data and how it can be used. That context is essential when the agent is working out how to approach the process and which sources to use at each step.
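
For completeness, here's a hypothetical end-to-end version of the same idea using the openai Python package (assuming the package is installed and OPENAI_API_KEY is set) - again, this isn't what Copilot Studio does under the hood, just an illustration of how a well-described knowledge source gives the model grounds to route a question correctly:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool wrapping the licensing knowledge source, described
# along the lines of the "good example" above.
tools = [{
    "type": "function",
    "function": {
        "name": "search_licensing_data",
        "description": (
            "Details the technologies, plans and current Microsoft "
            "licensing in place for key clients, including Microsoft 365 "
            "E3/E5 license counts and the technologies used for security "
            "& compliance, messaging, endpoint protection, and document "
            "management."
        ),
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Which of our clients hold Microsoft 365 E5 licenses?",
    }],
    tools=tools,
)

# With a description like the one above, the model tends to route this
# question to the tool; with the vague default, it's far more likely to
# answer from general knowledge instead.
print(response.choices[0].message.tool_calls)
```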
Agent behaviour with poor knowledge descriptions
Agent behaviour with good knowledge descriptions

Better - but still not right
- Unfortunately, the agent still isn't truly considering the client requirements passed to it (as per the demo video in the previous article, I'm asking it to consider a particular client's need to replace endpoint protection, along with some granular requirements). At best, I'm still getting 'lightweight consideration' and generic Microsoft 365 product info
- The agent is also not delivering on another element - no draft proposal document is being generated, even though the agent instructions ask for one
On to the next challenge - getting agent instructions right
Next article
Technique 2 - Define explicit steps in agent instructions when "reasoning the process" isn't appropriate - coming soon