✨ model and provider upgrades
juftin committed Jul 18, 2024
1 parent 96b8b88 commit 48ecc61
Showing 6 changed files with 283 additions and 157 deletions.
45 changes: 31 additions & 14 deletions README.md
@@ -19,7 +19,6 @@ Chat with LLM models directly from the command line.

https://github.com/juftin/llm-term/assets/49741340/c305f636-dfcf-4d6f-884f-81d378cf0684


</details>

<h2><a href="https://juftin.com/llm-term">Check Out the Docs</a></h2>
@@ -30,6 +29,24 @@ https://github.com/juftin/llm-term/assets/49741340/c305f636-dfcf-4d6f-884f-81d378cf0684
pipx install llm-term
```

### Install with Extras

You can install llm-term with extra dependencies for different providers:

```bash
pipx install "llm-term[anthropic]"
```

```bash
pipx install "llm-term[mistralai]"
```

Or, you can install all the extras:

```bash
pipx install "llm-term[all]"
```

## Usage

Then, you can chat with the model directly from the command line:
@@ -57,10 +74,10 @@ export LLM_API_KEY="xxxxxxxxxxxxxx"
```

Optionally, you can set a custom model. llm-term defaults
-to `gpt-3.5-turbo` (this can also be set via the `--model` / `-m` flag in the CLI):
+to `gpt-4o` (this can also be set via the `--model` / `-m` flag in the CLI):

```shell
export LLM_MODEL="gpt-4"
export LLM_MODEL="gpt-4o-mini"
```

Want to start the conversation directly from the command line? No problem,
@@ -83,13 +100,13 @@ export LLM_SYSTEM_MESSAGE="You are a helpful assistant who talks like a pirate."
### OpenAI

By default, llm-term uses OpenAI as your LLM provider. The default model is
-`gpt-3.5-turbo` and you can also use the `OPENAI_API_KEY` environment variable
+`gpt-4o` and you can also use the `OPENAI_API_KEY` environment variable
to set your API key.

### Anthropic

You can request access to Anthropic [here](https://www.anthropic.com/). The
-default model is `claude-2.1`, and you can use the `ANTHROPIC_API_KEY` environment
+default model is `claude-3-5-sonnet-20240620`, and you can use the `ANTHROPIC_API_KEY` environment
variable. To use `anthropic` as your provider you must install the `anthropic`
extra.

@@ -105,7 +122,7 @@ llm-term --provider anthropic

You can request access to the [MistralAI](https://mistral.ai/)
[here](https://console.mistral.ai/). The default model is
-`mistral-small`, and you can use the `MISTRAL_API_KEY` environment variable.
+`mistral-small-latest`, and you can use the `MISTRAL_API_KEY` environment variable.

```shell
pipx install "llm-term[mistralai]"
@@ -115,18 +132,18 @@ pipx install "llm-term[mistralai]"
llm-term --provider mistralai
```

-### GPT4All
+### Ollama

-GPT4All is an open source LLM provider. These models run locally on your
+Ollama is an open source LLM provider. These models run locally on your
machine, so you don't need to worry about API keys or rate limits. The default
-model is `mistral-7b-openorca.Q4_0.gguf`, and you can see what models are available on the [GPT4All
-Website](https://gpt4all.io/index.html). Models are downloaded automatically when you first use them.
-To use GPT4All as your provider you must install the `gpt4all` extra.
+model is `llama3`, and you can see what models are available on the [Ollama
+Website](https://ollama.com/library). Make sure to
+[download Ollama](https://ollama.com/download) first.

-```bash
-pipx install "llm-term[gpt4all]"
+```shell
+ollama pull llama3
```

```shell
-llm-term --provider gpt4all --model mistral-7b-openorca.Q4_0.gguf
+llm-term --provider ollama --model llama3
```
56 changes: 37 additions & 19 deletions docs/index.md
@@ -3,7 +3,7 @@
Chat with LLM models directly from the command line.

<p align="center">
<img width="600" alt="image" src="https://i.imgur.com/1BUegLB.png">
<img width="600" alt="image" src="https://i.imgur.com/453xL6I.png">
</p>

[![PyPI](https://img.shields.io/pypi/v/llm-term?color=blue&label=🤖%20llm-term)](https://github.com/juftin/llm-term)
@@ -28,6 +28,24 @@ Chat with LLM models directly from the command line.
pipx install llm-term
```

### Install with Extras

You can install llm-term with extra dependencies for different providers:

```bash
pipx install "llm-term[anthropic]"
```

```bash
pipx install "llm-term[mistralai]"
```

Or, you can install all the extras:

```bash
pipx install "llm-term[all]"
```

## Usage

Then, you can chat with the model directly from the command line:
@@ -55,10 +73,10 @@ export LLM_API_KEY="xxxxxxxxxxxxxx"
```

Optionally, you can set a custom model. llm-term defaults
-to `gpt-3.5-turbo` (this can also be set via the `--model` / `-m` flag in the CLI):
+to `gpt-4o` (this can also be set via the `--model` / `-m` flag in the CLI):

```shell
export LLM_MODEL="gpt-4"
export LLM_MODEL="gpt-4o-mini"
```

Want to start the conversation directly from the command line? No problem,
@@ -81,15 +99,15 @@ export LLM_SYSTEM_MESSAGE="You are a helpful assistant who talks like a pirate."
### OpenAI

By default, llm-term uses OpenAI as your LLM provider. The default model is
-`gpt-3.5-turbo` and you can also use the `OPENAI_API_KEY` environment variable
+`gpt-4o` and you can also use the `OPENAI_API_KEY` environment variable
to set your API key.

### Anthropic

-Anthropic is a new LLM provider that is currently in private beta. You can
-request access to the beta [here](https://www.anthropic.com/). The default
-model is `claude`, and you can use the `ANTHROPIC_API_KEY` environment variable.
-To use `anthropic` as your provider you must install the `anthropic` extra.
+You can request access to Anthropic [here](https://www.anthropic.com/). The
+default model is `claude-3-5-sonnet-20240620`, and you can use the `ANTHROPIC_API_KEY` environment
+variable. To use `anthropic` as your provider you must install the `anthropic`
+extra.

```shell
pipx install "llm-term[anthropic]"
@@ -101,9 +119,9 @@ llm-term --provider anthropic

### MistralAI

-[MistralAI](https://mistral.ai/) is a European LLM provider. You can request
-access to the MistralAI [here](https://console.mistral.ai/). The default model is
-`mistral-small`, and you can use the `MISTRAL_API_KEY` environment variable.
+You can request access to the [MistralAI](https://mistral.ai/)
+[here](https://console.mistral.ai/). The default model is
+`mistral-small-latest`, and you can use the `MISTRAL_API_KEY` environment variable.

```shell
pipx install "llm-term[mistralai]"
@@ -113,18 +131,18 @@ pipx install "llm-term[mistralai]"
llm-term --provider mistralai
```

-### GPT4All
+### Ollama

-GPT4All is an open source LLM provider. These models run locally on your
+Ollama is an open source LLM provider. These models run locally on your
machine, so you don't need to worry about API keys or rate limits. The default
-model is `mistral-7b-openorca.Q4_0.gguf`, and you can see what models are available on the [GPT4All
-Website](https://gpt4all.io/index.html). Models are downloaded automatically when you first use them.
-To use GPT4All as your provider you must install the `gpt4all` extra.
+model is `llama3`, and you can see what models are available on the [Ollama
+Website](https://ollama.com/library). Make sure to
+[download Ollama](https://ollama.com/download) first.

-```bash
-pipx install "llm-term[gpt4all]"
+```shell
+ollama pull llama3
```

```shell
-llm-term --provider gpt4all --model mistral-7b-openorca.Q4_0.gguf
+llm-term --provider ollama --model llama3
```
15 changes: 10 additions & 5 deletions llm_term/cli.py
@@ -4,14 +4,18 @@

from __future__ import annotations

+import os
+import runpy

import click
import rich.traceback
from rich.console import Console

from llm_term.__about__ import __application__, __version__
from llm_term.utils import chat_session, get_llm, print_header, providers, setup_system_message

-rich.traceback.install(show_locals=True)
+debug_mode = os.getenv("DEBUG", None) is not None
+rich.traceback.install(show_locals=debug_mode, suppress=[click, runpy])
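With this change, verbose locals in rich tracebacks are opt-in: they appear only when a `DEBUG` environment variable is set (for example, `DEBUG=1 llm-term`), and frames from the `click` and `runpy` modules are suppressed to keep traces readable.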


@click.command()
@@ -32,7 +36,7 @@
envvar="LLM_PROVIDER",
show_envvar=True,
default="openai",
-    type=click.Choice(providers),
+    type=click.Choice(list(providers.keys())),
)
@click.argument(
"chat",
@@ -83,8 +87,9 @@
"-a",
help="The avatar to use",
type=click.STRING,
default="🤓",
default="👤",
show_default=True,
envvar="LLM_AVATAR",
)
def cli(
model: str,
@@ -103,8 +108,8 @@ def cli(
rich_console: Console = Console(width=console)
chat_message = " ".join(chat)
try:
-        client, model_name = get_llm(provider=provider, api_key=api_key, model=model)
-        print_header(console=rich_console, model=model_name, provider=provider)
+        client, model_name, provider_name = get_llm(provider=provider, api_key=api_key, model=model)
+        print_header(console=rich_console, model=model_name, provider=provider_name)
system_message = setup_system_message(message=system)
chat_session(
client=client,
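For context on the new `envvar`-backed avatar option above, here is a minimal, self-contained click sketch (the `demo` command is hypothetical, not part of llm-term): a flag value overrides the `LLM_AVATAR` environment variable, which overrides the default.

```python
import click


@click.command()
@click.option(
    "--avatar",
    "-a",
    help="The avatar to use",
    type=click.STRING,
    default="👤",
    show_default=True,
    envvar="LLM_AVATAR",  # read from the environment when the flag is absent
)
def demo(avatar: str) -> None:
    """Echo the resolved avatar: CLI flag > LLM_AVATAR env var > default."""
    click.echo(avatar)


if __name__ == "__main__":
    demo()
```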
83 changes: 59 additions & 24 deletions llm_term/utils.py
@@ -6,7 +6,7 @@

from pathlib import Path
from textwrap import dedent
-from typing import Iterator
+from typing import Iterator, TypedDict

from click.exceptions import ClickException
from langchain.llms.base import BaseLLM
@@ -16,6 +16,7 @@
from prompt_toolkit import PromptSession
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
from prompt_toolkit.history import FileHistory
+from rich.align import Align
from rich.columns import Columns
from rich.console import Console
from rich.live import Live
@@ -31,58 +32,92 @@ def print_header(console: Console, model: str, provider: str) -> None:
"""
Print the header
"""
+    renderable = (
+        "[bold cyan] ✺✺✺✺ [/bold cyan]"
+        f"[bold red]{provider}[/bold red]"
+        "[bold cyan] ✺✺✺✺ [/bold cyan]"
+    )
    console.print(
        Panel(
-            f"[bold red]{__application__}: "
-            f"Chat with Language Models from the Command Line[/bold red]",
+            Align(renderable=renderable, align="center"),
            title=f"[blue bold]{__application__} v{__version__}[/blue bold]",
-            subtitle=f"[yellow bold]{model} ({provider})[/yellow bold]",
+            subtitle=f"[yellow bold]{model}[/yellow bold]",
style="green bold",
expand=False,
),
)
console.print("")


-providers: list[str] = [
-    "openai",
-    "anthropic",
-    "gpt4all",
-    "mistralai",
-]
+class ProviderConfig(TypedDict):
+    """
+    Model configuration
+    """
+
+    default_model: str
+    name: str
+
+
+providers: dict[str, ProviderConfig] = {
+    "openai": ProviderConfig(default_model="gpt-4o", name="OpenAI"),
+    "anthropic": ProviderConfig(default_model="claude-3-5-sonnet-20240620", name="Anthropic"),
+    "mistralai": ProviderConfig(default_model="mistral-small-latest", name="MistralAI"),
+    "ollama": ProviderConfig(default_model="llama3", name="Ollama"),
+}
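For illustration, the lookup pattern that `get_llm` repeats below could be read as the following helper (`resolve_provider` is hypothetical, not part of this diff; it only shows how the registry is consumed):

```python
def resolve_provider(provider: str, model: str | None) -> tuple[str, str]:
    """Resolve (chat_model, provider_name) from the `providers` registry."""
    config = providers[provider]  # a KeyError here means an unsupported provider
    return model or config["default_model"], config["name"]
```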


-def get_llm(provider: str, api_key: str, model: str | None) -> tuple[BaseChatModel | BaseLLM, str]:
+def get_llm(
+    provider: str, api_key: str, model: str | None
+) -> tuple[BaseChatModel | BaseLLM, str, str]:
"""
Check the credentials
"""
if provider == "openai":
from langchain_openai import ChatOpenAI

chat_model = model or "gpt-3.5-turbo"
return ChatOpenAI(openai_api_key=api_key, model_name=chat_model), chat_model
chat_model = model or providers[provider]["default_model"]
provider_name = providers[provider]["name"]
return ChatOpenAI(openai_api_key=api_key, model_name=chat_model), chat_model, provider_name
elif provider == "anthropic":
-        from langchain_anthropic import ChatAnthropic
-
-        chat_model = model or "claude-2.1"
-        return ChatAnthropic(anthropic_api_key=api_key, model_name=chat_model), chat_model
-    elif provider == "gpt4all":
-        from langchain_community.llms import GPT4All
-
-        chat_model = model or "mistral-7b-openorca.Q4_0.gguf"
-        return GPT4All(model=chat_model, allow_download=True), chat_model
+        try:
+            from langchain_anthropic import ChatAnthropic
+
+            chat_model = model or providers[provider]["default_model"]
+            provider_name = providers[provider]["name"]
+            return (
+                ChatAnthropic(anthropic_api_key=api_key, model_name=chat_model),
+                chat_model,
+                provider_name,
+            )
+        except ImportError as ie:
+            msg = (
+                "The `anthropic` provider requires the `anthropic` extra to be installed: "
+                'pipx install "llm-term[anthropic]"'
+            )
+            raise ClickException(msg) from ie
elif provider == "mistralai":
try:
from langchain_mistralai import ChatMistralAI

chat_model = model or "mistral-small"
return ChatMistralAI(mistral_api_key=api_key, model=chat_model), chat_model
chat_model = model or providers[provider]["default_model"]
provider_name = providers[provider]["name"]
return (
ChatMistralAI(mistral_api_key=api_key, model=chat_model),
chat_model,
provider_name,
)
except ImportError as ie:
msg = (
"The `mistralai` provider requires the `mistralai` extra to be installed: "
'pipx install "llm-term[mistralai]"'
)
raise ClickException(msg) from ie
elif provider == "ollama":
from langchain_community.chat_models import ChatOllama

chat_model = model or providers[provider]["default_model"]
provider_name = providers[provider]["name"]
return ChatOllama(model=chat_model), chat_model, provider_name
else:
msg = f"Provider {provider} is not supported... yet"
raise ClickException(msg)
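The optional-dependency guards in the `anthropic` and `mistralai` branches follow a common lazy-import pattern. A generic sketch (the `import_or_fail` helper is hypothetical, assuming the same pipx-extra naming used above; the diff itself inlines this logic per branch):

```python
from click.exceptions import ClickException


def import_or_fail(module_name: str, extra: str):
    """Import an optional provider module, or exit with an install hint."""
    try:
        return __import__(module_name)
    except ImportError as ie:
        msg = (
            f"The `{extra}` provider requires the `{extra}` extra to be installed: "
            f'pipx install "llm-term[{extra}]"'
        )
        raise ClickException(msg) from ie
```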