Add new adapters, update interface, finalize 2.0 (#9)
* begin anthropic adapter implementation

* add non-streaming claude implementation

* complete basic adapter

* add anthropic vertex support

* add bedrock support

* add error retry logic

* version bump

* remove bedrock and refactor vertex anthropic

* add gemini vertex_ai adapter and fix gemini bugs

* add platform agnostic vertex constructor

* refactor example

* version bump

* refactor files and folder, minor tweaks

* rewrite docs and rename model to avoid namespacing

* remove namespacing and refactor docs for adapters

* refactor examples

* update docs & version bump
dev-mush authored Sep 23, 2024
1 parent 783a8be commit eb7535c
Showing 46 changed files with 2,789 additions and 1,406 deletions.
105 changes: 9 additions & 96 deletions README.md
@@ -8,104 +8,19 @@ _My name is Bot, JAIms Bot._ 🕶️

JAIms is a lightweight Python package that lets you build powerful LLM-Based agents or LLM powered applications with ease. It is platform agnostic, so you can focus on integrating AI into your software and let JAIms handle the boilerplate of communicating with the LLM API.
The main goal of JAIms is to provide a simple and easy-to-use interface to leverage the power of LLMs in your software, without having to worry about the specifics of the underlying provider, and to seamlessly integrate LLM functionality with your own codebase.
JAIms natively supports OpenAI's GPT models, Google's gemini models and Mistral models, and it can be easily extended to connect to your own model and endpoints.
JAIms currently supports mainstream foundation LLMs such as OpenAI's GPT models, Google's Gemini models (also on Vertex), Mistral models, and Anthropic models (hosted on both the Anthropic and Vertex endpoints). JAIms can be easily extended to connect to your own models and endpoints.

## Installation
Check out the [getting started guide](docs/getting_started.md) to quickly get up and running with JAIms.

To avoid overcluttering your project with dependencies, the core install is kept minimal. By running:

```bash
pip install jaims-py
```

you will get the core package, which is provider independent (meaning it won't install any dependencies other than Pillow and Pydantic). To also install the built-in providers (currently openai, google and mistral), run:

```bash
pip install jaims-py[openai,google,mistral]
```

## 👨‍💻 Usage

Building an agent is as simple as this:

```python
from jaims import JAImsAgent, JAImsMessage

agent = JAImsAgent.build(
    model="gpt-4o",
    provider="openai",
)

response = agent.run([JAImsMessage.user_message("Hello, how are you?")])

print(response)
```

### ⚙️ Function Tools

Of course, an agent is just a chatbot if it doesn't support functions. JAIms leverages LLMs' function-calling features, seamlessly integrating with your Python code.
It can either invoke your Python functions directly, or use a platform-agnostic tool descriptor to return formatted results that are easily consumed by your code (using Pydantic models).

#### Function Invocation

```python
from jaims import JAImsAgent, JAImsMessage, jaimsfunctiontool

@jaimsfunctiontool()
def sum(a: int, b: int):
    print("invoked sum function")
    return a + b

agent = JAImsAgent.build(
    model="gpt-4o",
    provider="openai",
    tools=[sum],
)

response = agent.message([JAImsMessage.user_message("What is the sum of 42 and 420?")])
print(response)
```

#### Formatted Results

```python
import jaims
from typing import Optional


class MotivationalQuote(jaims.BaseModel):
    quote: str = jaims.Field(description="a motivational quote")
    author: Optional[str] = jaims.Field(
        default=None, description="the author of the quote, omit if it's your own"
    )


tool_descriptor = jaims.JAImsFunctionToolDescriptor(
    name="store_motivational_quote",
    description="use this tool to store a random motivational quote based on user's preferences",
    params=MotivationalQuote,
)


random_quote = jaims.JAImsAgent.run_tool_model(
    model="gpt-4o",
    provider="openai",
    descriptor=tool_descriptor,
    messages=[
        jaims.JAImsMessage.user_message("Motivate me in becoming a morning person.")
    ],
)
print(f"Quote: {random_quote.quote}\nAuthor: {random_quote.author or 'By an AI Poet'}")
```

But there is much more; check out the examples folder for more advanced or nuanced use cases.
Also consider checking out the [examples](examples) folder for more advanced use cases.

### ✨ Main Features

- Built-in support for OpenAI, Google's Gemini and Mistral models (more coming soon).
- Function-calling support even in streamed conversations with built-in providers (openai, google, mistral).
- Built-in support for most common foundation models.
- Built-in conversation history management for fast creation of chatbots; this can be easily extended to support more advanced history-management strategies.
- Image support for multimodal LLMs 🖼️
- Image support for multimodal LLMs 🖼️.
- Support for function calling, both streamed and non-streamed.
- Fast integration with dataclasses and pydantic models.
- Error handling and exponential backoff for built-in providers (openai, google, mistral).
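The exponential backoff mentioned in the last bullet can be pictured with a generic retry sketch (an illustration of the pattern, not JAIms's internal implementation):

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0, sleep=time.sleep):
    """Retry `call`, doubling the delay after each failure (with jitter)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(delay * random.uniform(0.5, 1.0))  # jitter spreads out retries

# Demo: a call that fails twice before succeeding.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = with_backoff(flaky, sleep=lambda _: None)  # skip real sleeping in the demo
```

Doubling the wait (capped at a maximum) and adding jitter keeps concurrent clients from retrying an overloaded API in lockstep.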

### 🧠 Guiding Principles
@@ -120,15 +35,13 @@ In case you'd like to contribute, please keep in mind that I try to keep the code:
- **Application focused**: I'm not trying to build a library similar to LangChain or LlamaIndex for data-driven operations on LLMs; I'm building a very simple and lightweight framework that leverages LLMs' function calling so that LLMs can easily be integrated in software applications.
- **Extensible**: I'm planning to add more providers and more features.

As a side note, I've only recently begun to use Python for production code, so I may have "contaminated" the codebase with approaches, patterns or choices that aren't idiomatic or "pythonic". I'm more than happy to receive feedback and am open to suggestions on how to make the codebase cleaner and more idiomatic, hopefully without too many breaking changes.

## ⚠️ Project status

I'm using this library in many of my projects without problems; that said, I've just revamped it entirely to support multiple providers and refactored the codebase to streamline function calling. I've done my best to test it thoroughly, but I can't guarantee nothing will break.

I'm actively working on this project and I'm open to contributions, so feel free to open an issue or a PR if you find something that needs fixing or improving.
In the [roadmap](docs/roadmap.md) I'm tracking the next steps I'm planning to take to improve the library.

My [next steps](docs/roadmap.md#-next---high-priority) will be to improve tests and documentation, and to extend the built in providers to support more models.
I'm actively working on this project and I'm open to contributions, so feel free to open an issue or a PR if you find something that needs fixing or improving.

Since I started developing JAIms, a few similar projects have appeared; I haven't had time to check them out yet, and some may well be more advanced. Still, I've used this library widely in my own projects and those of the company I work for, and I've been actively maintaining it, so I plan to keep it up to date and improve it as much as I can.

111 changes: 111 additions & 0 deletions docs/getting_started.md
@@ -0,0 +1,111 @@
# JAIms - Getting Started Guide

## Installation

To install JAIms, use pip:

```bash
pip install jaims-py
```

Note: Installing `jaims-py` alone doesn't include the built-in providers (a deliberate choice, to avoid overcluttering a project's dependencies). To use specific providers, you need to install their dependencies separately.

For example, to install OpenAI support:

```bash
pip install jaims-py[openai]
```

You can install multiple providers at once:

```bash
pip install jaims-py[openai,google,mistral,anthropic]
```

## Basic Usage

Here's a simple example using OpenAI's GPT-4o model:

```python
from jaims import Agent, Message, LLMParams, Config

# Create an agent
agent = Agent.build(
    model="gpt-4o",
    provider="openai",
    llm_params=LLMParams(temperature=0.7, max_tokens=150)
)

# Send a message and get a response
user_message = Message.user_message("Hello, can you tell me a joke?")
response = agent.message([user_message])

print(response)
```

## Function Calling Example

JAIms supports function calling. Here's a simple example:

```python
from jaims import Agent, Message, jaimsfunctiontool

# Define a function tool
@jaimsfunctiontool(
    description="Calculates the sum of two numbers",
    params_descriptions={"a": "first number", "b": "second number"}
)
def add_numbers(a: int, b: int):
    return a + b

# Create an agent with the function tool
agent = Agent.build(
    model="gpt-4",
    provider="openai",
    tools=[add_numbers]
)

# Use the function in a conversation
user_message = Message.user_message("What's the sum of 5 and 7?")
response = agent.message([user_message])

print(response)
```

In this example, the agent can use the `add_numbers` function when appropriate to perform calculations.
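Conceptually, a decorator like `@jaimsfunctiontool` can derive the call schema the LLM sees from the function's signature. The sketch below is a dependency-free illustration of that idea; `make_tool_schema` is hypothetical and not part of JAIms:

```python
import inspect

def make_tool_schema(fn, description, params_descriptions=None):
    """Build a JSON-schema-like dict from a function's type hints (hypothetical helper)."""
    params_descriptions = params_descriptions or {}
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {
            "type": type_map.get(param.annotation, "string"),
            "description": params_descriptions.get(name, ""),
        }
    return {
        "name": fn.__name__,
        "description": description,
        "parameters": {"type": "object", "properties": props, "required": list(props)},
    }

def add_numbers(a: int, b: int):
    return a + b

schema = make_tool_schema(
    add_numbers,
    "Calculates the sum of two numbers",
    {"a": "first number", "b": "second number"},
)
```

A provider adapter would then ship such a schema with the request and dispatch the model's tool call back to the decorated function.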

## Fast Calling Example

JAIms also supports fast calling for quick, one-off function executions. Here's an example:

```python
from jaims import Agent, Message, BaseModel, Field, FunctionToolDescriptor

# Define a model for the function output
class SumResult(BaseModel):
    result: int = Field(description="The sum of the two numbers")

# Create a function tool descriptor
sum_numbers = FunctionToolDescriptor(
    name="sum_numbers",
    description="Calculates the sum of two numbers",
    params=SumResult
)

# Use fast calling to execute the function
result = Agent.run_tool_model(
    model="gpt-4",
    provider="openai",
    descriptor=sum_numbers,
    messages=[Message.user_message("What's the sum of 10 and 15?")]
)

print(f"The sum is: {result.result}")
```

This example demonstrates how to use fast calling to quickly execute a function and get a structured result.
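Under the hood, a fast call like the one above amounts to forcing a single tool call and parsing its JSON arguments into the declared model. The round trip can be illustrated with plain dataclasses (the flow and names here are assumptions, not library code):

```python
import json
from dataclasses import dataclass

@dataclass
class SumResult:
    result: int

def parse_tool_call(raw_arguments: str) -> SumResult:
    """Validate the LLM's JSON arguments into the typed result."""
    data = json.loads(raw_arguments)
    return SumResult(result=int(data["result"]))

# A provider would hand back something like this for the forced tool call:
fake_llm_arguments = '{"result": 25}'
parsed = parse_tool_call(fake_llm_arguments)
```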

## Next Steps

- Explore more advanced features like streaming responses and complex function tools
- Check out the full documentation for detailed information on all features
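As a preview of streaming, the repository examples iterate over text chunks yielded by `message_stream`. The sketch below stubs the stream so it runs without an API key; the generator stands in for the real agent call:

```python
def fake_message_stream(messages):
    # Stub for agent.message_stream(...), which yields text chunks as they arrive.
    for chunk in ["Hello", ", ", "world", "!"]:
        yield chunk

pieces = []
for chunk in fake_message_stream(["Say hi"]):
    pieces.append(chunk)  # a real app would print(chunk, end="", flush=True)
full_text = "".join(pieces)
```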
4 changes: 2 additions & 2 deletions docs/roadmap.md
@@ -6,12 +6,12 @@ Here I track the next features and improvements that I plan to implement in the

- Improve Docs
- Add Logging
- Add Anthropic support
- Add VLLM support
- Improve Tests
- Configure CI tests and deployment with Github Actions

## 📅 Future - Medium Priority

- Add config from yaml
- Add config and run from .prompt file
- Add config and run from .prompt file
- Improve image passing: remove plain `str`; an image should be either an `Image.Image`, a URL, or a base64 string with a MIME type
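The image-passing item above could be served by a normalizer that folds the accepted forms into one shape; a hypothetical sketch of the idea, not existing JAIms code:

```python
import base64

def normalize_image(source):
    """Normalize a URL, data URI, or (mime, bytes) pair into one dict (hypothetical)."""
    if isinstance(source, str) and source.startswith(("http://", "https://")):
        return {"kind": "url", "value": source}
    if isinstance(source, str) and source.startswith("data:"):
        header, b64 = source.split(",", 1)
        mime = header[len("data:"):].split(";")[0]
        return {"kind": "b64", "mime": mime, "value": b64}
    if isinstance(source, tuple):  # (mime, raw bytes)
        mime, raw = source
        return {"kind": "b64", "mime": mime, "value": base64.b64encode(raw).decode()}
    raise TypeError("unsupported image source")

data_uri = normalize_image("data:image/png;base64,AAAA")
url_ref = normalize_image("https://example.com/cat.png")
```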
22 changes: 0 additions & 22 deletions examples/_examples_utils.py

This file was deleted.

@@ -1,8 +1,9 @@
from jaims import (
JAImsAgent,
JAImsDefaultHistoryManager,
JAImsMessage,
JAImsLLMConfig,
Agent,
DefaultHistoryManager,
Message,
LLMParams,
Config,
jaimsfunctiontool,
)

@@ -12,7 +13,7 @@
params_descriptions={"a": "the first operand", "b": "the second operand"},
)
def sum(a: int, b: int):
print("----performing sum----")
print("\n----performing sum----")
print(a, b)
print("----------------------")
return a + b
@@ -23,47 +24,44 @@ def sum(a: int, b: int):
params_descriptions={"a": "the first operand", "b": "the second operand"},
)
def multiply(a: int, b: int):
print("----performing multiplication----")
print("\n----performing multiplication----")
print(a, b)
print("----------------------------------")
return a * b


@jaimsfunctiontool(
description="use this function when the user wants to multiply two numbers"
description="use this function when the user wants to store the result of an operation",
)
def store_sum(result: int):
print("----storing sum----")
def store_result(result: int):
print("\n----storing result----")
print(result)
print("-------------------")


def main():
stream = True
model = "gpt-4o" # use JAImsModelCode.GPT_4o to avoid typing / remembering the model name
# model = "gemini-1.5-pro-latest"
provider = "openai" # either "openai" or "google"
# provider = "google"
model = "claude-3-5-sonnet@20240620"
provider = "vertex"

agent = JAImsAgent.build(
agent = Agent.build(
model=model,
provider=provider,
config=JAImsLLMConfig(
config=Config(
platform_specific_options={
"project_id": "your-project-id",
"location": "europe-west1",
}
),
llm_params=LLMParams(
temperature=0.5,
max_tokens=2000,
),
history_manager=JAImsDefaultHistoryManager(
history=[
JAImsMessage.assistant_message(
text="Hello, I am JAIms, your personal assistant."
),
JAImsMessage.assistant_message(text="How can I help you today?"),
]
),
history_manager=DefaultHistoryManager(),
tools=[
sum,
multiply,
store_sum,
store_result,
],
)

@@ -76,14 +74,14 @@ def main():

if stream:
response = agent.message_stream(
[JAImsMessage.user_message(text=user_input)],
[Message.user_message(text=user_input)],
)
for chunk in response:
print(chunk, end="", flush=True)
print("\n")
else:
response = agent.message(
[JAImsMessage.user_message(text=user_input)],
[Message.user_message(text=user_input)],
)
print(response)

