added xai #45

Merged (2 commits, Jan 1, 2025)
3 changes: 3 additions & 0 deletions .env.dev
```diff
@@ -27,6 +27,9 @@ LLM_PROVIDER=openai
 OPENAI_MODEL=gpt-4o-mini
 OPENAI_API_KEY=
 
+XAI_MODEL=grok-2-latest
+XAI_API_KEY=
+
 # === Third-party services settings ===
 
 # Perplexity
```
8 changes: 4 additions & 4 deletions docs/agent/llm.md
````diff
@@ -6,7 +6,7 @@ Large Language Models are the backbone of the Autonomous Agent. They are the cor
 
 The LLM integration is primarily handled through the `src/llm` directory, which provides:
 
-- API interaction with OpenAI/Anthropic models
+- API interaction with OpenAI/Anthropic/xAI models
 - Embeddings generation for memory storage
 - Context management
 - Response processing & generation
@@ -21,7 +21,7 @@ For embeddings generation we recommend using OpenAI's [`text-embedding-3-large`]
 
 ### 2. Response Processing & Generation
 
-The agent uses the LLM class to generate responses through different providers (currently supported: OpenAI or Anthropic). The `src/llm/llm.py` module provides:
+The agent uses the LLM class to generate responses through different providers. The `src/llm/llm.py` module provides:
 
 - Unified interface for multiple LLM providers through the `LLM` class
 - Automatic system message injection with agent personality and goals
@@ -33,13 +33,13 @@ The agent uses the LLM class to generate responses through different providers (
 
 ## Configuration
 
-Currently, the agent is configured to use OpenAI's `gpt-4o` model, but it can be easily configured to use other models (e.g. `gpt-4o-mini`, `gpt-4`, as well as Anthropic's models like `claude-3-5-sonnet` model).
+Currently, the agent is configured to use OpenAI's `gpt-4o` model, but it can be easily configured to use other models (e.g. `gpt-4o-mini`, `gpt-4`, Anthropic models such as `claude-3-5-sonnet`, or xAI's `grok-2-latest`).
 
 ### Environment Variables
 To choose model of your choice, set the following environment variables:
 ```
 OPENAI_API_KEY=your-api-key
-OPENAI_MODEL=gpt-4 # or other supported models
+OPENAI_MODEL=gpt-4 # or other supported OpenAI models (e.g. gpt-4o-mini, gpt-4o); for Anthropic or xAI, set LLM_PROVIDER and the matching *_MODEL instead
 OPENAI_EMBEDDING_MODEL=text-embedding-3-large # or other supported embedding models
 ```
````
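To make the "unified interface" point above concrete, here is a minimal usage sketch. It assumes only the `LLM` class and `generate_response` signature shown in `src/llm/llm.py` later in this diff; the prompt text is purely illustrative:

```python
import asyncio

from src.llm.llm import LLM


async def main():
    # LLM() reads settings.LLM_PROVIDER ("openai", "anthropic", or "xai"),
    # so switching providers is a configuration change, not a code change.
    llm = LLM()
    reply = await llm.generate_response(
        [{"role": "user", "content": "Summarize the latest market signal."}]
    )
    print(reply)


asyncio.run(main())
```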
2 changes: 1 addition & 1 deletion docs/index.md
```diff
@@ -26,7 +26,7 @@ This framework is built on top of:
 ## Core Features
 
 - **Autonomous Decision Making**: Nevron uses Q-learning algorithm for intelligent decision making
-- **LLM Integration**: Powered by OpenAI & Anthropic Large Language Models
+- **LLM Integration**: Powered by a wide range of Large Language Models (e.g., OpenAI, Anthropic, xAI)
 - **Modular Workflows**: Predefined autonomous agent task execution patterns
   - Analyze signal workflow
   - Research news workflow
```
3 changes: 3 additions & 0 deletions docs/quickstart.md
```diff
@@ -39,6 +39,9 @@ Required environment variables:
 OPENAI_API_KEY=your_key_here # Required for embeddings
 ENVIRONMENT=development # Set environment (development or production)
 
+# xAI API (optional)
+XAI_API_KEY=
+
 # Perplexity API (optional)
 PERPLEXITY_API_KEY=
 
```
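For completeness, a hypothetical `.env` fragment that switches the agent to xAI under this PR's settings (the key values are placeholders):

```
LLM_PROVIDER=xai
XAI_MODEL=grok-2-latest
XAI_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here # Still required for embeddings
```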
4 changes: 4 additions & 0 deletions src/core/config.py
```diff
@@ -68,6 +68,10 @@ class Settings(BaseSettings):
     OPENAI_MODEL: str = "gpt-4o-mini"
     OPENAI_EMBEDDING_MODEL: str = "text-embedding-3-small"
 
+    #: xAI
+    XAI_API_KEY: str = ""
+    XAI_MODEL: str = "grok-2-latest"
+
     # ==========================
     # Agent settings
     # ==========================
```
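Since `Settings` extends pydantic's `BaseSettings`, the new fields pick up `XAI_API_KEY` and `XAI_MODEL` from the environment automatically. A minimal sketch of that behavior, assuming `Settings` can be instantiated directly and the key value is a made-up placeholder:

```python
import os

from src.core.config import Settings

# BaseSettings reads environment variables matching field names,
# falling back to the defaults declared on the class.
os.environ["XAI_API_KEY"] = "xai-placeholder-key"  # hypothetical value

settings = Settings()
assert settings.XAI_API_KEY == "xai-placeholder-key"  # from the environment
assert settings.XAI_MODEL == "grok-2-latest"          # class default retained
```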
1 change: 1 addition & 0 deletions src/core/defs.py
```diff
@@ -41,3 +41,4 @@ class LLMProviderType(str, Enum):
 
     OPENAI = "openai"
     ANTHROPIC = "anthropic"
+    XAI = "xai"
```
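Because `LLMProviderType` subclasses `str`, its members compare equal to raw strings, which is what lets the routing in `src/llm/llm.py` compare `settings.LLM_PROVIDER` against enum members directly. A quick illustration:

```python
from src.core.defs import LLMProviderType

# str-backed enums compare equal to their raw values...
assert LLMProviderType.XAI == "xai"
# ...and can be constructed from the configured string.
assert LLMProviderType("xai") is LLMProviderType.XAI
```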
5 changes: 4 additions & 1 deletion src/llm/llm.py
```diff
@@ -8,6 +8,7 @@
 from src.core.exceptions import LLMError
 from src.llm.providers.anthropic import call_anthropic
 from src.llm.providers.oai import call_openai
+from src.llm.providers.xai import call_xai
 
 
 class LLM:
@@ -18,7 +19,7 @@ class LLM:
     def __init__(self):
         """
         Initialize the LLM class based on the selected provider from settings.
-        Supported providers: 'openai', 'anthropic'
+        Supported providers: 'openai', 'anthropic', 'xai'
         """
         self.provider = settings.LLM_PROVIDER
         logger.debug(f"Using LLM provider: {self.provider}")
@@ -46,6 +47,8 @@ async def generate_response(self, messages: List[Dict[str, Any]], **kwargs) -> s
             return await call_openai(messages, **kwargs)
         elif self.provider == LLMProviderType.ANTHROPIC:
             return await call_anthropic(messages, **kwargs)
+        elif self.provider == LLMProviderType.XAI:
+            return await call_xai(messages, **kwargs)
         else:
             raise LLMError(f"Unknown LLM provider: {self.provider}")
```
43 changes: 43 additions & 0 deletions src/llm/providers/xai.py
```python
from typing import Dict, List

import openai
from loguru import logger

from src.core.config import settings
from src.core.exceptions import LLMError


async def call_xai(messages: List[Dict[str, str]], **kwargs) -> str:
    """
    Call the xAI ChatCompletion endpoint.

    Args:
        messages: A list of dicts with 'role' and 'content'.
        kwargs: Additional parameters (e.g., model, temperature).

    Returns:
        str: Response content from xAI.
    """
    #: xAI's API is OpenAI-compatible, so the OpenAI SDK client is reused,
    #: pointed at xAI's endpoint.
    client = openai.AsyncOpenAI(
        api_key=settings.XAI_API_KEY,
        base_url="https://api.x.ai/v1",
    )

    model = kwargs.get("model", settings.XAI_MODEL)
    temperature = kwargs.get("temperature", 0.2)

    logger.debug(f"Calling xAI with model={model}, temperature={temperature}, messages={messages}")

    try:
        response = await client.chat.completions.create(
            model=model,
            messages=messages,  # type: ignore
            temperature=temperature,
        )
        if not response.choices[0].message.content:
            raise LLMError("No content in xAI response")

        content = response.choices[0].message.content.strip()
        logger.debug(f"xAI response: {content}")
        return content
    except Exception as e:
        logger.error(f"xAI call failed: {e}")
        raise LLMError("Error during xAI API call") from e
```
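A direct usage sketch of the new provider function (hypothetical snippet; in the agent this path is normally reached through `LLM.generate_response` with `LLM_PROVIDER=xai`):

```python
import asyncio

from src.llm.providers.xai import call_xai


async def demo():
    # model and temperature are optional overrides; they default to
    # settings.XAI_MODEL and 0.2 respectively.
    reply = await call_xai(
        [{"role": "user", "content": "Say hello in one sentence."}],
        temperature=0.0,
    )
    print(reply)


asyncio.run(demo())  # requires a valid XAI_API_KEY in the environment
```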
71 changes: 71 additions & 0 deletions tests/llm/providers/test_xai.py
```python
from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from src.core.exceptions import LLMError
from src.llm.providers.xai import call_xai


@pytest.mark.asyncio
async def test_call_xai_success():
    """Test a successful call to xAI."""
    # mock the message response
    mock_message = MagicMock()
    mock_message.content = "This is a mock response from xAI."

    mock_choice = MagicMock()
    mock_choice.message = mock_message

    mock_response = MagicMock()
    mock_response.choices = [mock_choice]

    # mock the openai client
    mock_client = AsyncMock()
    mock_client.chat.completions.create.return_value = mock_response

    # patch the openai client constructor
    with patch("src.llm.providers.xai.openai.AsyncOpenAI", return_value=mock_client):
        messages = [{"role": "user", "content": "Test message"}]
        result = await call_xai(messages, model="grok-2-latest", temperature=0.7)

        assert result == "This is a mock response from xAI."
        mock_client.chat.completions.create.assert_called_once_with(
            model="grok-2-latest",
            messages=messages,
            temperature=0.7,
        )


@pytest.mark.asyncio
async def test_call_xai_no_content():
    """Test when xAI returns no content in the response."""
    mock_message = MagicMock()
    mock_message.content = ""

    mock_choice = MagicMock()
    mock_choice.message = mock_message

    mock_response = MagicMock()
    mock_response.choices = [mock_choice]

    mock_client = AsyncMock()
    mock_client.chat.completions.create.return_value = mock_response

    with patch("src.llm.providers.xai.openai.AsyncOpenAI", return_value=mock_client):
        messages = [{"role": "user", "content": "Test message"}]

        with pytest.raises(LLMError):
            await call_xai(messages)


@pytest.mark.asyncio
async def test_call_xai_exception():
    """Test when xAI raises an exception."""
    mock_client = AsyncMock()
    mock_client.chat.completions.create.side_effect = Exception("API call failed")

    with patch("src.llm.providers.xai.openai.AsyncOpenAI", return_value=mock_client):
        messages = [{"role": "user", "content": "Test message"}]

        with pytest.raises(LLMError, match="Error during xAI API call"):
            await call_xai(messages)
```
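Note that both the client constructor and the completion call are mocked, so these tests run offline without a real `XAI_API_KEY`. Running them presumably requires `pytest` with `pytest-asyncio` (assumed to be part of the repo's dev dependencies), e.g. `pytest tests/llm/providers/test_xai.py`.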