MS-4010 – Extend Microsoft 365 Copilot with declarative agents using Visual Studio Code

How declarative agents work

Now that we know the basics of a declarative agent, let's look at how one works behind the scenes. You'll learn about the components of a declarative agent and see how they come together to form an agent. This knowledge will help you decide whether a declarative agent is a good fit for your needs.

Custom Knowledge

Declarative agents use custom knowledge to provide additional data and context to Microsoft 365 Copilot, tailored to a specific scenario or task.

Custom knowledge consists of two parts:

  • Custom instructions: define how the agent should behave and how it should formulate its responses.
  • Custom grounding: defines the data sources the agent can use in its responses.

What are custom instructions?

Instructions are specific guidelines or recommendations given to the base model to guide its responses. They can include:

  • Task definitions: specify what the model should do, such as answering questions, summarizing text, or generating creative content.
  • Behavioral guidelines: define the tone, style, and level of detail of responses to match user expectations.
  • Content restrictions: indicate what the model should avoid, such as sensitive topics or copyrighted content.
  • Formatting rules: show how to structure the output, for example using bullet points or specific styles.

Example: In our IT support scenario, our agent receives the following instructions:

You are an intelligent IT support assistant designed to answer common employee questions at Contoso Electronics and manage support tickets. You can use the Tickets action and documents from the SharePoint Online helpdesk site as information sources. If you cannot find the necessary information, say that you don't know. Prioritize documents from the helpdesk site over your training knowledge and ensure your responses are grounded in information specific to Contoso Electronics. Always include a cited source in your responses. Your answers should be concise and suitable for a non-technical audience.
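
In a Visual Studio Code project, instructions like these are stored in the agent's manifest file. Here's a minimal sketch, assuming the v1.0 declarative agent manifest schema; the name and description are illustrative, and the instructions are abbreviated:

  {
    "$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.0/schema.json",
    "version": "v1.0",
    "name": "Contoso IT Support",
    "description": "Answers common employee IT questions and manages support tickets.",
    "instructions": "You are an intelligent IT support assistant for Contoso Electronics. [...] Your answers should be concise and suitable for a non-technical audience."
  }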

What is custom grounding?

Grounding is the process of connecting large language models (LLMs) to real-world information, enabling more accurate and relevant responses. Grounding data provides context and support to the LLM when generating answers. This reduces the LLM’s reliance on its training data alone and improves response quality.

By default, a declarative agent is not connected to any data source. You can configure it to use one or more Microsoft 365 data sources:

  • Documents stored in OneDrive
  • Documents stored in SharePoint Online
  • Content integrated into Microsoft 365 via a Copilot connector

Additionally, a declarative agent can be configured to use web search results from Bing.com.

Example: In our IT support scenario, a SharePoint Online document library is used as the grounding data source.
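
In the agent manifest, grounding data sources are declared in the capabilities array. Here's a minimal sketch, again assuming the v1.0 schema; the SharePoint site URL is illustrative:

  "capabilities": [
    {
      "name": "OneDriveAndSharePoint",
      "items_by_url": [
        { "url": "https://contoso.sharepoint.com/sites/helpdesk" }
      ]
    },
    { "name": "WebSearch" }
  ]

If no URLs are listed, the agent can draw on all OneDrive and SharePoint content the user has access to; listing URLs scopes grounding to specific sites or libraries. Content from a Copilot connector is declared in the same way, with a GraphConnectors capability that lists connection IDs.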

When Copilot uses grounding data in a response, the data source is referenced and cited in the answer.

Custom Actions

Custom actions allow declarative agents to interact with external systems in real time. You create custom actions and integrate them into the declarative agent so it can read and update data in those systems through their APIs.

Example: In our IT support scenario, a custom action is used to read and write data in the ticket management system via an API.
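
In the manifest, each custom action references an API plugin definition, which describes the operations of the external API (typically derived from an OpenAPI specification). Here's a minimal sketch; the id and file name are hypothetical:

  "actions": [
    {
      "id": "ticketsPlugin",
      "file": "ai-plugin.json"
    }
  ]

The referenced file is an API plugin manifest that tells Copilot which operations it may call, such as creating, reading, or updating tickets.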

How does a declarative agent use custom knowledge and custom actions to answer questions?

Let’s see how custom knowledge and custom actions are used together in a declarative agent to solve our IT support problem.

You create a declarative agent with the following configuration:

  • Custom instructions: use instructions to shape responses so they are suitable for non-technical users.
  • Custom grounding data: use grounding data to improve the relevance and accuracy of responses. For example, use information stored in knowledge base articles on a SharePoint Online site.
  • Custom action: use actions to access real-time data from external systems. For example, use an action to interact with data from the ticket management system via its API to manage tickets in natural language.

The following steps describe how Microsoft 365 Copilot processes user prompts and generates a response:

  1. Input: The user submits a prompt.
  2. Pre-checks: Copilot performs responsible AI checks and security measures to ensure the prompt poses no risk.
  3. Reasoning: Copilot creates a plan to respond to the user’s prompt.
  4. Grounding data: Copilot retrieves relevant information from the grounding data.
  5. Actions: Copilot retrieves data from relevant actions.
  6. Instructions: Copilot retrieves the declarative agent’s instructions.
  7. Response: The Copilot orchestrator compiles all the data gathered during the reasoning process and sends it to the LLM to create a final response.
  8. Output: Copilot delivers the response to the user interface and updates the conversation.

Next unit: When to use declarative agents
