Merge pull request #46 from videlalvaro/accessibility
updates wording based on writing guidelines
leestott authored Feb 18, 2025
2 parents 63e6f98 + 4f008a8 commit 344ad63
Showing 7 changed files with 21 additions and 21 deletions.
4 changes: 2 additions & 2 deletions 00-course-setup/README.md
@@ -15,7 +15,7 @@ To begin, please clone or fork the GitHub Repository. This will make your own ve

This can be done by clicking link to [fork the repo](https://github.com/microsoft/ai-agents-for-beginners/fork){target="_blank"}.

-You should now have your own forked version of this course like below:
+You should now have your own forked version of this course, as shown in the following image:

![Forked Repo](./images/forked-repo.png)

@@ -37,7 +37,7 @@ Copy your new token that you have just created. You will now add this to your `.

## Add this to your Environment Variables

-To create your `.env` file run the below command in your terminal:
+To create your `.env` file, run the following command in your terminal:

```bash
cp .env.example .env
```
14 changes: 7 additions & 7 deletions 02-explore-agentic-frameworks/README.md
@@ -98,7 +98,7 @@
```csharp
kernelBuilder.Plugins.AddFromType<BookTravelPlugin>("BookTravel");
Kernel kernel = kernelBuilder.Build();

/*
-Behind the scenes, i.e recognizes the tool to call, what arguments it already has (location) and what it needs (date)
+Behind the scenes, it recognizes the tool to call, what arguments it already has (location) and what it needs (date)
{
"tool_calls": [
```
@@ -123,7 +123,7 @@
```csharp
Console.WriteLine(response);
chatHistory.AddAssistantMessage(response);

// AI Response: "Before I can book your flight, I need to know your departure date. When are you planning to travel?"
-// I.e above it figures out the tool to call, what arguments it already has (location) and what it needs (date) from the user input, at this point it ends up asking the user for the missing information
+// That is, in the previous code it figures out the tool to call and the arguments it already has (location) and still needs (date) from the user input; at that point, it asks the user for the missing information
```

What you can see from this example is how you can leverage a pre-built parser to extract key information from user input, such as the origin, destination, and date of a flight booking request. This modular approach allows you to focus on the high-level logic.
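What this looks like in practice: the returned tool call can be parsed with a few lines of code. A minimal, framework-free Python sketch; the payload below is a hypothetical example shaped like the tool-call snippet in this section:

```python
import json

# Hypothetical tool-call payload: the model names the function to invoke and
# supplies the arguments it extracted from the user's request as a JSON string.
tool_call = {
    "function": {
        "name": "BookTravel-book_flight",
        "arguments": json.dumps({"location": "New York"}),
    }
}

name = tool_call["function"]["name"]
args = json.loads(tool_call["function"]["arguments"])  # arguments arrive as JSON text

# Compare what the model already has against what the function needs.
missing = [p for p in ("location", "date") if p not in args]

print(name)     # which tool to invoke
print(args)     # what the model already knows
print(missing)  # what it still has to ask the user for
```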
@@ -171,7 +171,7 @@
```python
stream = team.run_stream(task="Analyze data", max_turns=10)
await Console(stream)
```

-What you see in above code is how you can create a task that involves multiple agents working together to analyze data. Each agent performs a specific function, and the task is executed by coordinating the agents to achieve the desired outcome. By creating dedicated agents with specialized roles, you can improve task efficiency and performance.
+What you see in the previous code is how you can create a task that involves multiple agents working together to analyze data. Each agent performs a specific function, and the task is executed by coordinating the agents to achieve the desired outcome. By creating dedicated agents with specialized roles, you can improve task efficiency and performance.
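The coordination itself need not be mysterious. Here is a framework-agnostic sketch of the round-robin turn taking a team like this performs; the classes are plain-Python stand-ins, not the AutoGen API:

```python
# Conceptual sketch: agents with specialized roles take turns on a shared task.
class Agent:
    def __init__(self, name, role):
        self.name, self.role = name, role

    def step(self, task, notes):
        # A real agent would call an LLM here; we just record the contribution.
        return f"{self.name} ({self.role}) worked on: {task}"

def run_team(agents, task, max_turns=10):
    notes = []
    for turn in range(max_turns):
        agent = agents[turn % len(agents)]  # round-robin turn taking
        notes.append(agent.step(task, notes))
    return notes

team = [Agent("fetcher", "data retrieval"), Agent("analyst", "data analysis")]
log = run_team(team, "Analyze data", max_turns=4)
print("\n".join(log))
```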

### Learn in Real-Time

@@ -225,7 +225,7 @@ Here are some important core concepts of AutoGen:
```python
print(f"{self.id.type} responded: {response.chat_message.content}")
```

-In above code, `MyAssistant` has been created and inherits from `RoutedAgent`. It has a message handler that prints the content of the message and then sends a response using the `AssistantAgent` delegate. Especially note how we assign to `self._delegate` an instance of `AssistantAgent` which is a pre-built agent that can handle chat completions.
+In the previous code, `MyAssistant` has been created and inherits from `RoutedAgent`. It has a message handler that prints the content of the message and then sends a response using the `AssistantAgent` delegate. Note especially how we assign to `self._delegate` an instance of `AssistantAgent`, a pre-built agent that can handle chat completions.
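The delegation pattern itself can be sketched without the framework. A minimal stand-in with hypothetical class names; the real types are AutoGen's `RoutedAgent` and `AssistantAgent`:

```python
# Plain-Python stand-ins for the delegation pattern: a routing agent receives
# a message, logs it, and hands the actual completion to a pre-built delegate.
class AssistantStandIn:
    def handle(self, message):
        # A real AssistantAgent would produce a chat completion here.
        return f"assistant reply to: {message}"

class MyAssistantStandIn:
    def __init__(self):
        self._delegate = AssistantStandIn()  # pre-built agent does the real work

    def on_message(self, message):
        print(f"received: {message}")
        return self._delegate.handle(message)

agent = MyAssistantStandIn()
reply = agent.on_message("Hello, World!")
```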


Let's let AutoGen know about this agent type and kick off the program next:
@@ -240,7 +240,7 @@ Here are some important core concepts of AutoGen:
```python
await runtime.send_message(MyMessageType("Hello, World!"), AgentId("my_agent", "default"))
```

-Above the agents is registered with the runtime and then a message is sent to the agent resulting in the below output:
+In the previous code, the agent is registered with the runtime and a message is then sent to it, resulting in the following output:

```text
# Output from the console:
```
@@ -290,7 +290,7 @@ Here are some important core concepts of AutoGen:
```python
)
```

-Above we have a `GroupChatManager` that is registered with the runtime. This manager is responsible for coordinating the interactions between different types of agents, such as writers, illustrators, editors, and users.
+In the previous code, we have a `GroupChatManager` that is registered with the runtime. This manager is responsible for coordinating the interactions between different types of agents, such as writers, illustrators, editors, and users.

- **Agent Runtime**. The framework provides a runtime environment, enabling communication between agents, managing their identities and lifecycles, and enforcing security and privacy boundaries. This means that you can run your agents in a secure and controlled environment, ensuring that they can interact safely and efficiently. There are two runtimes of interest:
- **Stand-alone runtime**. This is a good choice for single-process applications where all agents are implemented in the same programming language and run in the same process. Here's an illustration of how it works:
@@ -470,7 +470,7 @@ Azure AI Agent Service has the following core concepts:
```python
print(f"Messages: {messages}")
```

-In the above code, a thread is created. Thereafter, a message is sent to the thread. By calling `create_and_process_run`, the agent is asked to perform work on the thread. Finally, the messages are fetched and logged to see the agent's response. The messages indicate the progress of the conversation between the user and the agent. It's also important to understand that the messages can be of different types such as text, image, or file, that is the agents work has resulted in for example an image or a text response for example. As a developer, you can then use this information to further process the response or present it to the user.
+In the previous code, a thread is created. Thereafter, a message is sent to the thread. By calling `create_and_process_run`, the agent is asked to perform work on the thread. Finally, the messages are fetched and logged to see the agent's response. The messages indicate the progress of the conversation between the user and the agent. It's also important to understand that the messages can be of different types, such as text, image, or file; that is, the agent's work may have resulted in, for example, an image or a text response. As a developer, you can then use this information to further process the response or present it to the user.
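That message-type dispatch can be sketched in a few lines: once the messages are fetched, branch on their type before presenting them to the user. The message shape below is an assumption for illustration, not the exact service schema:

```python
# Sketch: render fetched messages of different types (text, image, file).
def render(message):
    kind = message.get("type")
    if kind == "text":
        return message["text"]
    if kind == "image":
        return f"[image available as {message['file_id']}]"
    if kind == "file":
        return f"[file attachment {message['file_id']}]"
    return "[unsupported message type]"

messages = [
    {"type": "text", "text": "Here is your chart."},
    {"type": "image", "file_id": "img_123"},
]
for m in messages:
    print(render(m))
```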

- **Integrates with other AI frameworks**. Azure AI Agent Service can interact with other frameworks like AutoGen and Semantic Kernel, which means you can build part of your app in one of these frameworks and, for example, use the Agent Service as an orchestrator, or you can build everything in the Agent Service.

2 changes: 1 addition & 1 deletion 03-agentic-design-patterns/README.md
@@ -69,7 +69,7 @@ These are the key elements in the core of an agent’s design.

## The Guidelines To Implement These Principles

-When you’re using the above design principles, use the following guidelines:
+When you’re applying the previous design principles, use the following guidelines:

1. **Transparency**: Inform the user that AI is involved, how it functions (including past actions), and how to give feedback and modify the system.
2. **Control**: Enable the user to customize, specify preferences and personalize, and have control over the system and its attributes (including the ability to forget).
6 changes: 3 additions & 3 deletions 04-tool-use/README.md
@@ -80,7 +80,7 @@ Let's use the example of getting the current time in a city to illustrate:
1. **Create a Function Schema**:

Next we will define a JSON schema that contains the function name, description of what the function does, and the names and descriptions of the function parameters.
-We will then take this schema and pass it to the client created above, along with the users request to find the time in San Francisco. Whats important to note is that a **tool call** is what is returned, **not** the final answer to the question. As mentioned earlier, the LLM returns the name of the function it selected for the task, and the arguments that will be passed to it.
+We will then take this schema and pass it to the client created previously, along with the user's request to find the time in San Francisco. What's important to note is that a **tool call** is what is returned, **not** the final answer to the question. As mentioned earlier, the LLM returns the name of the function it selected for the task, and the arguments that will be passed to it.

```python
# Function description for the model to read
```
@@ -265,11 +265,11 @@
The Agent Service allows us to use these tools together as a `toolset`.

Imagine you are a sales agent at a company called Contoso. You want to develop a conversational agent that can answer questions about your sales data.

-The image below illustrates how you could use Azure AI Agent Service to analyze your sales data:
+The following image illustrates how you could use Azure AI Agent Service to analyze your sales data:

![Agentic Service In Action](./images/agent-service-in-action.jpg?WT.mc_id=academic-105485-koreyst)

-To use any of these tools with the service we can create a client and define a tool or toolset. To implement this practically we can use the Python code below. The LLM will be able to look at the toolset and decide whether to use the user created function, `fetch_sales_data_using_sqlite_query`, or the pre-built Code Interpreter depending on the user request.
+To use any of these tools with the service, we can create a client and define a tool or toolset. To implement this practically, we can use the following Python code. The LLM will be able to look at the toolset and decide whether to use the user-created function, `fetch_sales_data_using_sqlite_query`, or the pre-built Code Interpreter, depending on the user request.

```python
import os
```
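The decision the LLM makes between the two tools can be sketched as a simple dispatcher. The keyword check below is only a stand-in for what the model infers from the tool descriptions; the SQL function name follows this section, everything else is illustrative:

```python
# Conceptual sketch: given a user request, pick either the user-defined SQL
# function or the built-in code interpreter, based on the tool descriptions.
TOOLS = {
    "fetch_sales_data_using_sqlite_query": "Run a SQL query against the sales database",
    "code_interpreter": "Run Python code for analysis and charting",
}

def choose_tool(user_request):
    # Stand-in for the LLM's choice: sales-related requests go to the SQL tool.
    if any(word in user_request.lower() for word in ("sales", "revenue", "orders")):
        return "fetch_sales_data_using_sqlite_query"
    return "code_interpreter"

print(choose_tool("Show me total sales per region"))
print(choose_tool("Plot a sine wave"))
```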
2 changes: 1 addition & 1 deletion 05-agentic-rag/README.md
@@ -102,7 +102,7 @@ As these systems become more autonomous in their reasoning, governance and trans
- **Bias Control and Balanced Retrieval:** Developers can tune retrieval strategies to ensure balanced, representative data sources are considered, and regularly audit outputs to detect bias or skewed patterns, using custom models in Azure Machine Learning for advanced data science organizations.
- **Human Oversight and Compliance:** For sensitive tasks, human review remains essential. Agentic RAG doesn’t replace human judgment in high-stakes decisions—it augments it by delivering more thoroughly vetted options.

-Having tools that provide a clear record of actions is essential. Without them, debugging a multi-step process can be very difficult. See below example from Literal AI (company behind Chainlit) for an Agent run:
+Having tools that provide a clear record of actions is essential. Without them, debugging a multi-step process can be very difficult. See the following example from Literal AI (the company behind Chainlit) of an agent run:

![AgentRunExample](./images/AgentRunExample.png)

8 changes: 4 additions & 4 deletions 07-planning-design/README.md
@@ -45,7 +45,7 @@ This modular approach also allows for incremental enhancements. For instance, yo

Large Language Models (LLMs) can generate structured output (e.g. JSON) that is easier for downstream agents or services to parse and process. This is especially useful in a multi-agent context, where we can action these tasks after the planning output is received. Refer to this <a href="https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/cookbook/structured-output-agent.html" target="_blank">blogpost</a> for a quick overview.

-Below is an example Python snippet that demonstrates a simple planning agent decomposing a goal into subtasks and generating a structured plan:
+The following Python snippet demonstrates a simple planning agent decomposing a goal into subtasks and generating a structured plan:

### Planning Agent with Multi-Agent Orchestration

Expand All @@ -57,7 +57,7 @@ The planner then:
* Lists Agents and Their Tools: The agent registry holds a list of agents (e.g., for flight, hotel, car rental, and activities) along with the functions or tools they offer.
* Routes the Plan to the Respective Agents: Depending on the number of subtasks, the planner either sends the message directly to a dedicated agent (for single-task scenarios) or coordinates via a group chat manager for multi-agent collaboration.
* Summarizes the Outcome: Finally, the planner summarizes the generated plan for clarity.
-Below is the Python code sample illustrating these steps:
+The following Python code sample illustrates these steps:

```python

```
@@ -132,7 +132,7 @@
```python
if response_content is None:
pprint(json.loads(response_content))
```

-Below is the output from the above code and you can then use this structured output to route to `assigned_agent` and summarize the travel plan to the end user.
+What follows is the output from the previous code; you can then use this structured output to route to `assigned_agent` and summarize the travel plan for the end user.

```json
{
```
@@ -163,7 +163,7 @@
```json
}
```
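The routing step itself can be sketched in a few lines: walk the plan's subtasks and hand each one to the agent named in `assigned_agent`. The plan shape and the agent registry below are assumptions for illustration:

```python
# Sketch: route each subtask in the structured plan to its assigned agent.
plan = {
    "main_task": "Plan a family trip",
    "subtasks": [
        {"assigned_agent": "flight_booking", "task_details": "Book flights"},
        {"assigned_agent": "hotel_booking", "task_details": "Reserve a hotel"},
    ],
}

def route(plan, registry):
    results = []
    for sub in plan["subtasks"]:
        handler = registry[sub["assigned_agent"]]  # look up the named agent
        results.append(handler(sub["task_details"]))
    return results

# Illustrative registry: real handlers would be agents, not lambdas.
registry = {
    "flight_booking": lambda t: f"flight agent handling: {t}",
    "hotel_booking": lambda t: f"hotel agent handling: {t}",
}
for line in route(plan, registry):
    print(line)
```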

-An example notebook with the above code sample is available [here](07-autogen.ipynb).
+An example notebook with the previous code sample is available [here](07-autogen.ipynb).

### Iterative Planning

6 changes: 3 additions & 3 deletions 08-multi-agent/README.md
@@ -116,7 +116,7 @@ Consider a scenario where a customer is trying to get refund for a product, ther

**Agents specific for the refund process**:

-Below are some agents that could be involved in the refund process:
+The following agents could be involved in the refund process:

- **Customer agent**: This agent represents the customer and is responsible for initiating the refund process.
- **Seller agent**: This agent represents the seller and is responsible for processing the refund.
@@ -139,13 +139,13 @@ These agents can be used by other parts of your business.
- **Security agent**: This agent represents the security process and is responsible for ensuring the security of the refund process.
- **Quality agent**: This agent represents the quality process and is responsible for ensuring the quality of the refund process.

-There's quite a few agents listed above both for the specific refund process but also for the general agents that can be used in other parts of your business. Hopefully this gives you an idea on how you can decide on which agents to use in your multi agent system.
+Quite a few agents are listed here, both agents specific to the refund process and general agents that can be used in other parts of your business. Hopefully this gives you an idea of how to decide which agents to use in your multi-agent system.

## Assignment

Design a multi-agent system for a customer support process. Identify the agents involved in the process, their roles and responsibilities, and how they interact with each other. Consider both agents specific to the customer support process and general agents that can be used in other parts of your business.

-> Have a think before you read the solution below, you may need more agents than you think.
+> Have a think before you read the following solution; you may need more agents than you think.
> TIP: Think about the different stages of the customer support process and also consider agents needed for any system.
