LLM-related docs & refactoring
whimo committed Sep 6, 2024
1 parent b3d3709 commit 5c2a59c
Showing 10 changed files with 147 additions and 197 deletions.
3 changes: 0 additions & 3 deletions docs/source/another_llm.nblink

This file was deleted.

108 changes: 108 additions & 0 deletions docs/source/choosing_llms.rst
@@ -0,0 +1,108 @@
Choosing LLMs
====================

Generally, the interaction with an LLM is up to the agent implementation.
However, as motleycrew integrates with several agent frameworks, there is some common ground for how to choose LLMs.


Providing an LLM to an agent
----------------------------

In general, you can pass a specific LLM to the agent you're using.

.. code-block:: python

    from motleycrew.agents.langchain import ReActToolCallingMotleyAgent
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o", temperature=0)
    agent = ReActToolCallingMotleyAgent(llm=llm, tools=[...])

The LLM class depends on the agent framework you're using.
That's why we have an ``init_llm`` function to help you set up the LLM.

.. code-block:: python

    from motleycrew.common.llms import init_llm
    from motleycrew.common import LLMFramework, LLMProvider

    llm = init_llm(
        llm_framework=LLMFramework.LANGCHAIN,
        llm_provider=LLMProvider.ANTHROPIC,
        llm_name="claude-3-5-sonnet-20240620",
        llm_temperature=0,
    )
    agent = ReActToolCallingMotleyAgent(llm=llm, tools=[...])

The currently supported frameworks (:py:class:`motleycrew.common.enums.LLMFramework`) are:

- :py:class:`Langchain <motleycrew.common.enums.LLMFramework.LANGCHAIN>` for Langchain-based agents from Langchain, CrewAI, motleycrew, etc.
- :py:class:`LlamaIndex <motleycrew.common.enums.LLMFramework.LLAMA_INDEX>` for LlamaIndex-based agents.

The currently supported LLM providers (:py:class:`motleycrew.common.enums.LLMProvider`) are:

- :py:class:`OpenAI <motleycrew.common.enums.LLMProvider.OPENAI>`
- :py:class:`Anthropic <motleycrew.common.enums.LLMProvider.ANTHROPIC>`
- :py:class:`Groq <motleycrew.common.enums.LLMProvider.GROQ>`
- :py:class:`Together <motleycrew.common.enums.LLMProvider.TOGETHER>`
- :py:class:`Replicate <motleycrew.common.enums.LLMProvider.REPLICATE>`
- :py:class:`Ollama <motleycrew.common.enums.LLMProvider.OLLAMA>`

Please raise an issue if you need to add support for another LLM provider.
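Under the hood, ``init_llm`` resolves the (framework, provider) pair to a factory function via the ``LLM_MAP`` registry (described in the *Using custom LLMs* section). A minimal pure-Python sketch of that dispatch follows; all names here are simplified, hypothetical stand-ins, not motleycrew's actual implementation:

```python
# Simplified, hypothetical sketch of how a (framework, provider) registry
# can dispatch to a model factory. Names are illustrative stand-ins for
# motleycrew's LLM_MAP / init_llm, not its actual code.

def fake_langchain_openai_factory(llm_name: str, llm_temperature: float = 0.0, **kwargs):
    # A real factory would return e.g. a Langchain ChatOpenAI instance.
    return {"framework": "langchain", "model": llm_name, "temperature": llm_temperature}

LLM_MAP = {
    ("langchain", "openai"): fake_langchain_openai_factory,
}

def init_llm(llm_framework: str, llm_provider: str, llm_name: str, llm_temperature: float = 0.0):
    factory = LLM_MAP.get((llm_framework, llm_provider))
    if factory is None:
        # motleycrew raises LLMProviderNotSupported in this situation
        raise ValueError(
            f"LLM provider `{llm_provider}` is not supported via the framework `{llm_framework}`"
        )
    return factory(llm_name=llm_name, llm_temperature=llm_temperature)

llm = init_llm("langchain", "openai", "gpt-4o")
```

Unsupported pairs simply have no registry entry, which is why adding a provider is a matter of registering new factories rather than changing the dispatch logic.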


Default LLM
-----------

At present, we default to OpenAI's latest ``gpt-4o`` model for our agents,
and rely on the user to set the ``OPENAI_API_KEY`` environment variable.

You can control the default LLM as follows:

.. code-block:: python

    from motleycrew.common import Defaults

    Defaults.DEFAULT_LLM_PROVIDER = "the_new_default_LLM_provider"
    Defaults.DEFAULT_LLM_NAME = "name_of_the_new_default_model_from_the_provider"


Using custom LLMs
-----------------

To use a custom LLM provider, either as the default or via the ``init_llm`` function,
make sure that ``LLM_MAP`` has an entry for that provider for each framework you're using
(currently Langchain and LlamaIndex at most), for example:

.. code-block:: python

    from motleycrew.common import LLMFramework
    from motleycrew.common.llms import LLM_MAP

    LLM_MAP[(LLMFramework.LANGCHAIN, "MyLLMProvider")] = my_langchain_llm_factory
    LLM_MAP[(LLMFramework.LLAMA_INDEX, "MyLLMProvider")] = my_llamaindex_llm_factory

Here, each LLM factory is a function with the signature
``def llm_factory(llm_name: str, llm_temperature: float, **kwargs)`` that returns a model object for the relevant framework.

For example, this is the built-in OpenAI model factory for Langchain:

.. code-block:: python

    def langchain_openai_llm(
        llm_name: str = Defaults.DEFAULT_LLM_NAME,
        llm_temperature: float = Defaults.DEFAULT_LLM_TEMPERATURE,
        **kwargs,
    ):
        from langchain_openai import ChatOpenAI

        return ChatOpenAI(model=llm_name, temperature=llm_temperature, **kwargs)

You can also overwrite existing ``LLM_MAP`` values, e.g. for the OpenAI models,
if you want to use an in-house wrapper around the Langchain or LlamaIndex model adapters
(for instance, to route requests through an internal gateway instead of hitting the OpenAI endpoints directly).
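As a sketch of such an override — the gateway URL and factory name here are hypothetical; only the ``LLM_MAP`` registration pattern comes from the section above — one could point the Langchain OpenAI entry at an internal gateway:

```python
# Hypothetical factory that routes OpenAI traffic through an internal gateway.
# The gateway URL and factory name are placeholders, not part of motleycrew.
def gateway_openai_llm(llm_name: str, llm_temperature: float = 0.0, **kwargs):
    # Deferred import, mirroring the built-in factory's style.
    from langchain_openai import ChatOpenAI

    return ChatOpenAI(
        model=llm_name,
        temperature=llm_temperature,
        base_url="https://llm-gateway.example.internal/v1",  # hypothetical gateway
        **kwargs,
    )

# Registering it would follow the same pattern as above:
# from motleycrew.common import LLMFramework, LLMProvider
# from motleycrew.common.llms import LLM_MAP
# LLM_MAP[(LLMFramework.LANGCHAIN, LLMProvider.OPENAI)] = gateway_openai_llm
```

Because the override is keyed on the same (framework, provider) pair, every agent that requests an OpenAI model through ``init_llm`` would then transparently use the gateway.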

Note that at present, if you use Autogen with motleycrew, you will need to configure
the models that Autogen uses separately, via the Autogen-specific APIs.
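With the classic Autogen API, for instance, this is typically done via a config list passed as ``llm_config``. A hedged sketch — the model name and key handling are placeholders, and the agent construction is shown only as a comment:

```python
# Hypothetical Autogen-side model configuration; motleycrew does not manage this.
config_list = [
    {
        "model": "gpt-4o",
        "api_key": "YOUR_OPENAI_API_KEY",  # placeholder; usually read from the environment
    }
]
llm_config = {"config_list": config_list, "temperature": 0}

# An Autogen agent would then receive this config, e.g.:
# import autogen
# assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
```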
2 changes: 1 addition & 1 deletion docs/source/usage.rst
@@ -9,5 +9,5 @@ Usage
key_concepts
agents
knowledge_graph
another_llm
choosing_llms
caching_observability
152 changes: 0 additions & 152 deletions examples/Using another LLM.ipynb

This file was deleted.

6 changes: 2 additions & 4 deletions motleycrew/common/__init__.py
@@ -3,14 +3,12 @@
from .defaults import Defaults
from .enums import AsyncBackend
from .enums import GraphStoreType
from .enums import LLMFamily
from .enums import LLMFramework
from .enums import LLMProvider
from .enums import LunaryEventName
from .enums import LunaryRunType
from .enums import TaskUnitStatus

from .logging import logger, configure_logging

from .types import MotleyAgentFactory
from .types import MotleySupportedTool

@@ -22,7 +22,7 @@
"configure_logging",
"AsyncBackend",
"GraphStoreType",
"LLMFamily",
"LLMProvider",
"LLMFramework",
"LunaryEventName",
"LunaryRunType",
5 changes: 2 additions & 3 deletions motleycrew/common/defaults.py
@@ -1,15 +1,14 @@
from motleycrew.common.enums import GraphStoreType
from motleycrew.common.enums import LLMFamily
from motleycrew.common.enums import LLMProvider


class Defaults:
"""Default values for various settings."""

DEFAULT_REACT_AGENT_MAX_ITERATIONS = 15
DEFAULT_LLM_FAMILY = LLMFamily.OPENAI
DEFAULT_LLM_PROVIDER = LLMProvider.OPENAI
DEFAULT_LLM_NAME = "gpt-4o"
DEFAULT_LLM_TEMPERATURE = 0.0
LLM_MAP = {}

DEFAULT_GRAPH_STORE_TYPE = GraphStoreType.KUZU

2 changes: 1 addition & 1 deletion motleycrew/common/enums.py
@@ -1,7 +1,7 @@
"""Various enums used in the project."""


class LLMFamily:
class LLMProvider:
OPENAI = "openai"
ANTHROPIC = "anthropic"
REPLICATE = "replicate"
10 changes: 5 additions & 5 deletions motleycrew/common/exceptions.py
@@ -5,15 +5,15 @@
from motleycrew.common import Defaults


class LLMFamilyNotSupported(Exception):
"""Raised when an LLM family is not supported in motleycrew via a framework."""
class LLMProviderNotSupported(Exception):
"""Raised when an LLM provider is not supported in motleycrew via a framework."""

def __init__(self, llm_framework: str, llm_family: str):
def __init__(self, llm_framework: str, llm_provider: str):
self.llm_framework = llm_framework
self.llm_family = llm_family
self.llm_provider = llm_provider

def __str__(self) -> str:
return f"LLM family `{self.llm_family}` is not supported via the framework `{self.llm_framework}`"
return f"LLM provider `{self.llm_provider}` is not supported via the framework `{self.llm_framework}`"


class LLMFrameworkNotSupported(Exception):
