
Commit df2f322: Bring in changes from main

Parent: 68c113d

6 files changed (+15, -8 lines)


.github/workflows/publish.yml (+2, -2)

@@ -12,7 +12,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
     steps:
     - uses: actions/checkout@v4
     - name: Set up Python ${{ matrix.python-version }}
@@ -38,7 +38,7 @@ jobs:
     - name: Set up Python
       uses: actions/setup-python@v5
       with:
-        python-version: '3.12'
+        python-version: '3.13'
         cache: pip
         cache-dependency-path: setup.py
     - name: Install dependencies

.github/workflows/test.yml (+1, -1)

@@ -11,7 +11,7 @@ jobs:
     strategy:
       matrix:
         os: [ubuntu-latest, macos-latest, windows-latest]
-        python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
+        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
         pydantic: ["==1.10.2", ">=2.0.0"]
     steps:
     - uses: actions/checkout@v4

docs/plugins/directory.md (+3, -1)

@@ -7,7 +7,8 @@ The following plugins are available for LLM. Here's {ref}`how to install them <i
 
 These plugins all help you run LLMs directly on your own computer:
 
-- **[llm-llama-cpp](https://github.com/simonw/llm-llama-cpp)** uses [llama.cpp](https://github.com/ggerganov/llama.cpp) to run models published in the GGUF format.
+
+- **[llm-gguf](https://github.com/simonw/llm-gguf)** uses [llama.cpp](https://github.com/ggerganov/llama.cpp) to run models published in the GGUF format.
 - **[llm-mlc](https://github.com/simonw/llm-mlc)** can run local models released by the [MLC project](https://mlc.ai/mlc-llm/), including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
 - **[llm-gpt4all](https://github.com/simonw/llm-gpt4all)** adds support for various models released by the [GPT4All](https://gpt4all.io/) project that are optimized to run locally on your own machine. These models include versions of Vicuna, Orca, Falcon and MPT - here's [a full list of models](https://observablehq.com/@simonw/gpt4all-models).
 - **[llm-mpt30b](https://github.com/simonw/llm-mpt30b)** adds support for the [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) local model.
@@ -52,6 +53,7 @@ If an API model host provides an OpenAI-compatible API you can also [configure L
 - **[llm-cmd](https://github.com/simonw/llm-cmd)** accepts a prompt for a shell command, runs that prompt and populates the result in your shell so you can review it, edit it and then hit `<enter>` to execute or `ctrl+c` to cancel.
 - **[llm-python](https://github.com/simonw/llm-python)** adds a `llm python` command for running a Python interpreter in the same virtual environment as LLM. This is useful for debugging, and also provides a convenient way to interact with the LLM {ref}`python-api` if you installed LLM using Homebrew or `pipx`.
 - **[llm-cluster](https://github.com/simonw/llm-cluster)** adds a `llm cluster` command for calculating clusters for a collection of embeddings. Calculated clusters can then be passed to a Large Language Model to generate a summary description.
+- **[llm-jq](https://github.com/simonw/llm-jq)** lets you pipe in JSON data and a prompt describing a `jq` program, then executes the generated program against the JSON.
 
 ## Just for fun
 
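Several of the plugins listed in this file, llm-jq among them, work by registering extra subcommands on the `llm` CLI. As a rough sketch (not llm-jq's actual implementation), a command plugin hooks in through LLM's documented `register_commands` plugin hook; the `shout` command below is hypothetical:

```python
# Minimal sketch of an LLM command plugin, loosely modeled on plugins such as
# llm-jq. The "shout" command and its behaviour are made up for illustration.
import click
import llm


@llm.hookimpl
def register_commands(cli):
    @cli.command()
    @click.argument("text")
    def shout(text):
        "Echo TEXT back in upper case"
        click.echo(text.upper())
```

Once installed (for example with `llm install <plugin>`), a plugin like this would add a `llm shout` subcommand alongside the built-in ones.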
setup.py (+1, -1)

@@ -64,5 +64,5 @@ def get_long_description():
             "types-setuptools",
         ]
     },
-    python_requires=">=3.8",
+    python_requires=">=3.9",
 )

tests/test_keys.py (+3)

@@ -52,6 +52,9 @@ def test_keys_list(monkeypatch, tmpdir, args):
     assert result2.output.strip() == "openai"
 
 
+@pytest.mark.httpx_mock(
+    assert_all_requests_were_expected=False, can_send_already_matched_responses=True
+)
 def test_uses_correct_key(mocked_openai_chat, monkeypatch, tmpdir):
     user_dir = tmpdir / "user-dir"
     pathlib.Path(user_dir).mkdir()
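The two keyword arguments added here are pytest-httpx options. A self-contained sketch of the same marker pattern, assuming a recent pytest-httpx release (one that configures the mock through the `httpx_mock` marker) and using a made-up test name and URL:

```python
import httpx
import pytest


@pytest.mark.httpx_mock(
    # Don't fail the test at teardown if it issued requests that matched no
    # registered response.
    assert_all_requests_were_expected=False,
    # Allow a registered response to be replayed for several matching requests.
    can_send_already_matched_responses=True,
)
def test_reuses_mocked_response(httpx_mock):
    httpx_mock.add_response(json={"ok": True})
    # Both requests are served by the single registered response.
    for _ in range(2):
        assert httpx.get("https://example.invalid/api").json() == {"ok": True}
```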

tests/test_templates.py (+5, -3)

@@ -133,19 +133,21 @@ def test_templates_prompt_save(templates_path, args, expected_prompt, expected_e
             "Summarize this: Input text",
             None,
         ),
-        (
+        pytest.param(
             "boo",
             ["-s", "s"],
             None,
             None,
             "Error: Cannot use -t/--template and --system together",
+            marks=pytest.mark.httpx_mock(assert_all_responses_were_requested=False),
         ),
-        (
+        pytest.param(
             "prompt: 'Say $hello'",
             [],
             None,
             None,
             "Error: Missing variables: hello",
+            marks=pytest.mark.httpx_mock(assert_all_responses_were_requested=False),
         ),
         (
             "prompt: 'Say $hello'",
@@ -183,4 +185,4 @@ def test_template_basic(
     else:
         assert result.exit_code == 1
         assert result.output.strip() == expected_error
-    mocked_openai_chat.reset(assert_all_responses_were_requested=False)
+    mocked_openai_chat.reset()
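The first hunk above uses the standard trick for attaching marks to individual parametrize cases: wrap the case in `pytest.param(...)`. A minimal sketch of the pattern, with a hypothetical `xfail` mark standing in for the `httpx_mock` one:

```python
import pytest


@pytest.mark.parametrize(
    "value, expected",
    [
        (2, 4),  # a plain tuple case carries no marks
        # pytest.param lets marks apply to this one case only
        pytest.param(3, 10, marks=pytest.mark.xfail(reason="wrong on purpose")),
    ],
)
def test_square(value, expected):
    assert value * value == expected
```

The second hunk drops the argument to `mocked_openai_chat.reset()`, presumably because newer pytest-httpx releases no longer accept `assert_all_responses_were_requested` there.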
