Initial support for attachments for multi-modal models #590

Merged · 18 commits · Oct 28, 2024

Changes from 1 commit:
Bring in changes from main
simonw committed Oct 28, 2024
commit df2f32251a9f015dc8c48b7313a0a6c2f4330309
.github/workflows/publish.yml (4 changes: 2 additions & 2 deletions)

@@ -12,7 +12,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
     steps:
     - uses: actions/checkout@v4
     - name: Set up Python ${{ matrix.python-version }}
@@ -38,7 +38,7 @@ jobs:
     - name: Set up Python
       uses: actions/setup-python@v5
       with:
-        python-version: '3.12'
+        python-version: '3.13'
         cache: pip
         cache-dependency-path: setup.py
     - name: Install dependencies
.github/workflows/test.yml (2 changes: 1 addition & 1 deletion)

@@ -11,7 +11,7 @@ jobs:
     strategy:
       matrix:
         os: [ubuntu-latest, macos-latest, windows-latest]
-        python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
         pydantic: ["==1.10.2", ">=2.0.0"]
     steps:
     - uses: actions/checkout@v4
docs/plugins/directory.md (4 changes: 3 additions & 1 deletion)

@@ -7,7 +7,8 @@ The following plugins are available for LLM. Here's {ref}`how to install them <i
 
 These plugins all help you run LLMs directly on your own computer:
 
-- **[llm-llama-cpp](https://github.com/simonw/llm-llama-cpp)** uses [llama.cpp](https://github.com/ggerganov/llama.cpp) to run models published in the GGUF format.
+
+- **[llm-gguf](https://github.com/simonw/llm-gguf)** uses [llama.cpp](https://github.com/ggerganov/llama.cpp) to run models published in the GGUF format.
 - **[llm-mlc](https://github.com/simonw/llm-mlc)** can run local models released by the [MLC project](https://mlc.ai/mlc-llm/), including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
 - **[llm-gpt4all](https://github.com/simonw/llm-gpt4all)** adds support for various models released by the [GPT4All](https://gpt4all.io/) project that are optimized to run locally on your own machine. These models include versions of Vicuna, Orca, Falcon and MPT - here's [a full list of models](https://observablehq.com/@simonw/gpt4all-models).
 - **[llm-mpt30b](https://github.com/simonw/llm-mpt30b)** adds support for the [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) local model.
@@ -52,6 +53,7 @@ If an API model host provides an OpenAI-compatible API you can also [configure L
 - **[llm-cmd](https://github.com/simonw/llm-cmd)** accepts a prompt for a shell command, runs that prompt and populates the result in your shell so you can review it, edit it and then hit `<enter>` to execute or `ctrl+c` to cancel.
 - **[llm-python](https://github.com/simonw/llm-python)** adds a `llm python` command for running a Python interpreter in the same virtual environment as LLM. This is useful for debugging, and also provides a convenient way to interact with the LLM {ref}`python-api` if you installed LLM using Homebrew or `pipx`.
 - **[llm-cluster](https://github.com/simonw/llm-cluster)** adds a `llm cluster` command for calculating clusters for a collection of embeddings. Calculated clusters can then be passed to a Large Language Model to generate a summary description.
+- **[llm-jq](https://github.com/simonw/llm-jq)** lets you pipe in JSON data and a prompt describing a `jq` program, then executes the generated program against the JSON.
 
 ## Just for fun
 
setup.py (2 changes: 1 addition & 1 deletion)

@@ -64,5 +64,5 @@ def get_long_description():
             "types-setuptools",
         ]
     },
-    python_requires=">=3.8",
+    python_requires=">=3.9",
 )
tests/test_keys.py (3 changes: 3 additions & 0 deletions)

@@ -52,6 +52,9 @@ def test_keys_list(monkeypatch, tmpdir, args):
     assert result2.output.strip() == "openai"
 
 
+@pytest.mark.httpx_mock(
+    assert_all_requests_were_expected=False, can_send_already_matched_responses=True
+)
 def test_uses_correct_key(mocked_openai_chat, monkeypatch, tmpdir):
     user_dir = tmpdir / "user-dir"
     pathlib.Path(user_dir).mkdir()
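
The new decorator uses pytest-httpx's marker-based options, which replaced the keyword arguments that httpx_mock.reset() used to accept. A minimal sketch of what the two options do, assuming a pytest-httpx release recent enough to support the httpx_mock marker (0.31 or later, as an assumption); the test name and URL are hypothetical:

import httpx
import pytest


@pytest.mark.httpx_mock(
    assert_all_requests_were_expected=False,  # don't fail teardown over unmatched requests
    can_send_already_matched_responses=True,  # let one registered response serve repeat requests
)
def test_repeated_requests(httpx_mock):
    # Register a single response, then hit the same URL twice;
    # with the options above, both requests are served by that one response.
    httpx_mock.add_response(url="https://example.com/", json={"ok": True})
    with httpx.Client() as client:
        assert client.get("https://example.com/").json() == {"ok": True}
        assert client.get("https://example.com/").json() == {"ok": True}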
tests/test_templates.py (8 changes: 5 additions & 3 deletions)

@@ -133,19 +133,21 @@ def test_templates_prompt_save(templates_path, args, expected_prompt, expected_e
             "Summarize this: Input text",
             None,
         ),
-        (
+        pytest.param(
             "boo",
             ["-s", "s"],
             None,
             None,
             "Error: Cannot use -t/--template and --system together",
+            marks=pytest.mark.httpx_mock(assert_all_responses_were_requested=False),
         ),
-        (
+        pytest.param(
             "prompt: 'Say $hello'",
             [],
             None,
             None,
             "Error: Missing variables: hello",
+            marks=pytest.mark.httpx_mock(assert_all_responses_were_requested=False),
         ),
         (
             "prompt: 'Say $hello'",

@@ -183,4 +185,4 @@ def test_template_basic(
     else:
         assert result.exit_code == 1
         assert result.output.strip() == expected_error
-    mocked_openai_chat.reset(assert_all_responses_were_requested=False)
+    mocked_openai_chat.reset()
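
Both test changes track the same pytest-httpx update: reset() no longer accepts assertion options (they now travel on the httpx_mock marker), and switching a plain tuple to pytest.param lets a mark attach to a single parametrized case rather than the whole test function. A short sketch of that pytest.param pattern, with a hypothetical parameter list and test body:

import pytest


@pytest.mark.parametrize(
    "template,expected_error",
    (
        # A plain tuple carries no extra marks.
        ("prompt: 'Say hello'", None),
        # pytest.param lets this one case opt out of the
        # all-responses-requested assertion; other cases keep the default.
        pytest.param(
            "boo",
            "Error: Cannot use -t/--template and --system together",
            marks=pytest.mark.httpx_mock(assert_all_responses_were_requested=False),
        ),
    ),
)
def test_template_errors(template, expected_error):
    if expected_error is not None:
        assert expected_error.startswith("Error:")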