Multi-Agents using Workflows


This is a LlamaIndex multi-agents project using Workflows.

Overview

This example uses three agents to generate a blog post:

  • a researcher that retrieves content via a RAG pipeline,
  • a writer that specializes in writing blog posts, and
  • a reviewer that reviews the blog post.

There are three different ways the agents can interact to reach their goal:

  1. Choreography - the agents decide themselves when to delegate a task to another agent
  2. Orchestrator - a central orchestrator decides which agent should execute a task
  3. Explicit Workflow - a pre-defined workflow specific to the task is used to execute the tasks (a minimal sketch follows this list)
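
As an illustration, an explicit workflow for this task could be modeled with LlamaIndex's Workflow API roughly as follows. This is a minimal sketch, not the code in this repository: the class, event, and step names are made up, and the agent calls are stubbed out.

from llama_index.core.workflow import Event, StartEvent, StopEvent, Workflow, step


class DraftEvent(Event):
    # Carries the draft produced by the research/writing step.
    draft: str


class BlogPostFlow(Workflow):
    @step
    async def research_and_write(self, ev: StartEvent) -> DraftEvent:
        # In the real project, the researcher agent would query the RAG pipeline
        # and the writer agent would turn the findings into a draft blog post.
        return DraftEvent(draft=f"Draft about {ev.topic}")

    @step
    async def review(self, ev: DraftEvent) -> StopEvent:
        # The reviewer agent would check and polish the draft here.
        return StopEvent(result=ev.draft)


# Usage (inside an async context):
# result = await BlogPostFlow(timeout=120).run(topic="LlamaIndex Workflows")

The return type annotations tell the workflow engine which step handles which event, so the steps are chained explicitly rather than an agent deciding the routing itself.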

Getting Started

First, set up the environment with Poetry:

Note: This step is not needed if you are using the dev-container.

poetry install

Then check the parameters that have been pre-configured in the .env file in this directory. For example, you might need to configure an OPENAI_API_KEY if you're using OpenAI as the model provider.
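
A minimal .env for OpenAI could look like this (the value shown is a placeholder, not a real key):

OPENAI_API_KEY=<your-openai-api-key>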

Second, generate the embeddings of the documents in the ./data directory:

poetry run generate

Third, run the agents with one command:

poetry run python main.py

By default, the example uses the explicit workflow. You can change the example by setting the EXAMPLE_TYPE environment variable to choreography or orchestrator.
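
For example, one way to run the choreography variant is to set the variable inline for a single run:

EXAMPLE_TYPE=choreography poetry run python main.py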

To add an API endpoint, set the FAST_API environment variable to true.
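
For example, again setting the variable inline for a single run:

FAST_API=true poetry run python main.py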

Learn More

To learn more about LlamaIndex, check out the LlamaIndex GitHub repository - your feedback and contributions are welcome!
