
Create and publish agents with Microsoft Copilot Studio

Test your agents


Since an agent is composed of multiple topics, it is important to ensure that each topic works correctly and can be used as intended. For example, if you want to verify that your Store opening hours topic is triggered when someone enters text asking for store hours, you can test your agent to make sure it responds correctly.

You can test your agent in real time using the agent test panel, which you can enable by selecting Test in the upper-right corner of the application. To hide the test panel, select the Test button again.

The Test your agent window interacts with your agent’s topics exactly as a user would, so anything you type in the test window is handled just as it would be for a real user. Because your agent likely contains multiple topics, it is helpful to have the application open the topic being exercised as you chat. You can accomplish this by enabling Track between topics at the top of the test window. This option follows the agent as it moves through the different topics.

For example, typing hello would trigger the Greeting topic, and then the application would open the Greeting topic and display its conversation path in the window. If you type What time is your store open?, the application switches to display the Store opening hours topic. As each topic is displayed, you can observe the progression of the path, which helps you evaluate the performance of your topics.

The following image shows that the message “What time is your store open?” was sent to the agent. Notice that you are automatically directed to the Store opening hours topic, and the conversation path is highlighted in red. The agent is now waiting for your response and offers two suggestion buttons to reply. These buttons reflect the user options Seattle and Bellevue that were defined when the topic was created. In the agent test, you can select one of these suggestion buttons to continue.

When you select an option, you continue along the conversation path until you reach the end. The chat stops when you arrive at the bottom of that branch.

By testing your agents regularly throughout the creation process, you can ensure that the conversation flow works as intended. If the dialogue does not reflect your intent, you can modify it and save it. The most recent content is integrated into the test agent, and you can try it again.

Test generative responses


The agent uses generative responses as a fallback when it cannot identify a topic that provides an acceptable answer.

When testing the generative response feature, you should ask a question relevant to the data sources you have defined for generative AI, but one that cannot be answered by any of your topics. Your agent uses the defined data sources to find the correct answer. Once a response is displayed, you can ask additional follow-up questions. The agent remembers the context, so you do not need to provide extra clarification.

For example, if you ask an agent connected to Microsoft Learn as a data source:
“What is an IF statement used for in Microsoft Excel?”
The agent returns details about the IF function. If you then ask:
“Give me an example.”
The agent understands that you are still talking about Microsoft Excel and provides an example.
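The fallback-plus-context behavior can be pictured as: try the topics first, and if none match, hand the message and the conversation history to a generative answerer. The sketch below is a toy model of that flow under those assumptions; the class, topic data, and the stand-in generative function are all hypothetical, not Copilot Studio's implementation.

```python
# Toy sketch of topic routing with a generative fallback and simple
# conversation context. All names and data here are hypothetical.

TOPICS = {
    "Store opening hours": ["store hours", "what time is your store open"],
}

class Agent:
    def __init__(self) -> None:
        self.context: list[str] = []  # remembered conversation history

    def generative_answer(self, question: str) -> str:
        # Stand-in for a call to a generative model over the defined data
        # sources; here it just echoes the question and the remembered context.
        return f"[generative answer to: {question!r} with context {self.context}]"

    def reply(self, message: str) -> str:
        self.context.append(message)  # context persists across turns
        text = message.lower()
        for topic, phrases in TOPICS.items():
            if any(p in text for p in phrases):
                return f"[topic: {topic}]"
        # No topic matched: fall back to a generative response.
        return self.generative_answer(message)

agent = Agent()
print(agent.reply("What time is your store open?"))  # [topic: Store opening hours]
print(agent.reply("Give me an example."))            # falls back to the generative stand-in
```

Because the history is carried along, the follow-up question reaches the generative stand-in together with the earlier turn, which is what lets a real agent keep answering in the same context.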

Next unit: Publish agents and analyze performance
