any-llm

A single interface to use and evaluate different LLM providers.

Key Features

any-llm offers:

  • Simple, unified interface - one function for all providers; switch providers with just a string change (see the sketch after this list)
  • Developer friendly - full type hints for better IDE support and clear, actionable error messages
  • Leverages official provider SDKs when available, reducing maintenance burden and ensuring compatibility
  • Stays framework-agnostic so it can be used across different projects and use cases
  • Actively maintained - we use this in our own product (any-agent), ensuring continued support
  • No proxy or gateway server required, so you don't need to set up any additional service to talk to whichever LLM provider you need
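
For instance, switching providers only requires changing the model string. A minimal sketch (the model ids are illustrative, and each call assumes the matching API key environment variable is set):

from any_llm import completion

messages = [{"role": "user", "content": "Hello!"}]

# Same call, different provider: only the <provider_id>/<model_id> string changes.
mistral_response = completion(model="mistral/mistral-small-latest", messages=messages)
openai_response = completion(model="openai/gpt-4o-mini", messages=messages)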

Motivation

The landscape of LLM provider interfaces is a fragmented ecosystem, with several challenges that any-llm aims to address:

The Challenge with API Standardization:

While the OpenAI API has become the de facto standard for LLM provider interfaces, providers implement slight variations. Some providers are fully OpenAI-compatible, while others may have different parameter names, response formats, or feature sets. This creates a need for light wrappers that can gracefully handle these differences while maintaining a consistent interface.

Existing Solutions and Their Limitations:

  • LiteLLM: while popular, it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected changes in behavior.
  • AISuite: offers a clean, modular approach but lacks active maintenance, comprehensive testing, and modern Python typing standards.
  • Framework-specific solutions: some agent frameworks either depend on LiteLLM or implement their own provider integrations, creating further fragmentation.
  • Proxy-only solutions: services like OpenRouter and Portkey require a hosted proxy to sit between your code and the LLM provider.

Quickstart

Requirements

  • Python 3.11 or newer
  • An API key for whichever LLM provider you choose to use

Installation

When installing, include the extras for the providers you plan to use, or use the all extra to install support for every provider any-llm supports.

pip install 'any-llm-sdk[mistral,ollama]'
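
Or, to pull in every supported provider at once via the all extra mentioned above:

pip install 'any-llm-sdk[all]'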

Make sure the appropriate API key environment variable is set for your provider. Alternatively, you can pass the api_key parameter when making a completion call instead of setting an environment variable.

export MISTRAL_API_KEY="YOUR_KEY_HERE"  # or OPENAI_API_KEY, etc
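
If you prefer not to use environment variables, the same key can be supplied directly via the api_key parameter mentioned above. A minimal sketch (the key value is a placeholder):

from any_llm import completion

# Pass the key explicitly instead of exporting MISTRAL_API_KEY.
response = completion(
    model="mistral/mistral-small-latest",
    messages=[{"role": "user", "content": "Hello!"}],
    api_key="YOUR_KEY_HERE",  # placeholder: substitute your real key
)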

Basic Usage

The provider_id portion of the model string must be one of the provider ids supported by any-llm. The model_id portion is passed directly to the provider: to see which model ids are available, refer to that provider's documentation.

import os

from any_llm import completion

# Make sure you have the appropriate environment variable set
assert os.environ.get("MISTRAL_API_KEY")

# Basic completion
response = completion(
    model="mistral/mistral-small-latest",  # <provider_id>/<model_id>
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)