Releases: simonw/llm
0.10
The two major features in this release are the llm chat command and support for embedding binary data.
llm chat
The new llm chat command starts an ongoing chat conversation with a model in your terminal. It works with all models supported by LLM and its plugins, including locally installed models such as Llama 2. #231
This offers a big performance boost for local models, since they don't need to be freshly loaded into memory for each prompt.
Here's an example chat with Llama 2 13B, provided by the llm-mlc plugin.
llm chat -m mlc-chat-Llama-2-13b-chat-hf-q4f16_1
This starts a chat session:
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Who are you?
Hello! I'm just an AI, here to assist you with any questions you may have.
My name is LLaMA, and I'm a large language model trained to provide helpful
and informative responses to a wide range of questions and topics. I'm here
to help you find the information you need, while ensuring a safe and
respectful conversation. Please feel free to ask me anything, and I'll do my
best to provide a helpful and accurate response.
> Tell me a joke about otters
Sure, here's a joke about otters:
Why did the otter go to the party?
Because he heard it was a "whale" of a time!
(Get it? Whale, like a big sea mammal, but also a "wild" or "fun" time.
Otters are known for their playful and social nature, so it's a lighthearted
and silly joke.)
I hope that brought a smile to your face! Do you have any other questions or
topics you'd like to discuss?
> exit
Chat sessions are logged to SQLite - use llm logs to view them. They can accept system prompts, templates and model options - consult the chat documentation for details.
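Multi-turn conversations are available from LLM's Python API as well. Here's a minimal sketch using the conversation interface - the model ID is just an example, and it assumes an OpenAI key (or a local model plugin) is configured:

    import llm

    # Conversations keep context between prompts, like `llm chat` does
    model = llm.get_model("gpt-3.5-turbo")  # assumes an OpenAI key is set
    conversation = model.conversation()

    # Each prompt sees the previous turns in the conversation
    print(conversation.prompt("Who are you?").text())
    print(conversation.prompt("Tell me a joke about otters").text())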
Binary embedding support
LLM's embeddings feature has been expanded to provide support for embedding binary data, in addition to text. #254
This enables models like CLIP, supported by the new llm-clip plugin.
CLIP is a multi-modal embedding model which can embed images and text into the same vector space. This means you can use it to create an embedding index of photos, and then search for the embedding vector for "a happy dog" and get back images that are semantically closest to that string.
To create embeddings for every JPEG in a directory stored in a photos collection, run:
llm install llm-clip
llm embed-multi photos --files photos/ '*.jpg' --binary -m clip
Now you can search for photos of raccoons using:
llm similar photos -c 'raccoon'
This spits out a list of images, ranked by how similar they are to the string "raccoon":
{"id": "IMG_4801.jpeg", "score": 0.28125139257127457, "content": null, "metadata": null}
{"id": "IMG_4656.jpeg", "score": 0.26626441704164294, "content": null, "metadata": null}
{"id": "IMG_2944.jpeg", "score": 0.2647445926996852, "content": null, "metadata": null}
...
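The same CLIP workflow can be driven from the Python API. A rough sketch using the llm.Collection class (described under 0.9 below) - the file name is a placeholder, and it assumes llm-clip is installed and that binary-capable models accept raw bytes:

    import llm
    import sqlite_utils

    # Store embeddings in the same kind of SQLite database the CLI uses
    db = sqlite_utils.Database("photos.db")
    collection = llm.Collection("photos", db, model_id="clip")

    # CLIP accepts binary input, so an image can be embedded as raw bytes
    with open("IMG_4801.jpeg", "rb") as f:
        collection.embed("IMG_4801.jpeg", f.read())

    # Text and images share a vector space, so a string finds similar photos
    for entry in collection.similar("raccoon", number=3):
        print(entry.id, entry.score)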
Also in this release
- The LLM_LOAD_PLUGINS environment variable can be used to control which plugins are loaded when llm starts running. #256
- The llm plugins --all option includes builtin plugins in the list of plugins. #259
- The llm embed-db family of commands has been renamed to llm collections. #229
- llm embed-multi --files now has an --encoding option and defaults to falling back to latin-1 if a file cannot be processed as utf-8. #225
0.10a1
- Support for embedding binary data. #254
- llm chat now works for models with API keys. #247
- llm chat -o for passing options to a model. #244
- llm chat --no-stream option. #248
- LLM_LOAD_PLUGINS environment variable. #256
- llm plugins --all option for including builtin plugins. #259
- llm embed-db has been renamed to llm collections. #229
- Fixed bug where llm embed -c option was treated as a filepath, not a string. Thanks, mhalle. #263
0.10a0
0.9
The big new feature in this release is support for embeddings.
Embedding models take a piece of text - a word, sentence, paragraph or even a whole article - and convert it into an array of floating point numbers. #185
This embedding vector can be thought of as representing a position in many-dimensional space, where the distance between two vectors indicates how semantically similar two pieces of content are, as understood by the language model.
Embeddings can be used to find related documents, and also to implement semantic search - where a user can search for a phrase and get back results that are semantically similar to that phrase even if they do not share any exact keywords.
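LLM compares embedding vectors using cosine similarity. As a toy illustration of that arithmetic (not LLM's internal code):

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        # 1.0 means identical direction, values near 0.0 mean unrelated
        dot = sum(x * y for x, y in zip(a, b))
        magnitude_a = sum(x * x for x in a) ** 0.5
        magnitude_b = sum(x * x for x in b) ** 0.5
        return dot / (magnitude_a * magnitude_b)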
LLM now provides both CLI and Python APIs for working with embeddings. Embedding models are defined by plugins, so you can install additional models using the plugins mechanism.
The first two embedding models supported by LLM are:
- OpenAI's ada-002 embedding model, available via an inexpensive API if you set an OpenAI key using llm keys set openai.
- The sentence-transformers family of models, available via the new llm-sentence-transformers plugin.
See Embedding with the CLI for detailed instructions on working with embeddings using LLM.
The new commands for working with embeddings are:
- llm embed - calculate embeddings for content and return them to the console or store them in a SQLite database.
- llm embed-multi - run bulk embeddings for multiple strings, using input from a CSV, TSV or JSON file, data from a SQLite database or data found by scanning the filesystem. #215
- llm similar - run similarity searches against your stored embeddings - starting with a search phrase or finding content related to a previously stored vector. #190
- llm embed-models - list available embedding models.
- llm embed-db - commands for inspecting and working with the default embeddings SQLite database.
There's also a new llm.Collection class for creating and searching collections of embeddings from Python code, and an llm.get_embedding_model() interface for embedding strings directly. #191
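A minimal sketch of how those pieces fit together - the content strings are illustrative, and it assumes an OpenAI key has been set with llm keys set openai:

    import llm
    import sqlite_utils

    # Embed a single string directly
    model = llm.get_embedding_model("ada-002")
    vector = model.embed("my happy hound")  # a list of floating point numbers

    # Build a persistent collection and run a similarity search against it
    db = sqlite_utils.Database("embeddings.db")
    collection = llm.Collection("entries", db, model_id="ada-002")
    collection.embed("hound", "my happy hound", store=True)
    for entry in collection.similar("happy puppy", number=3):
        print(entry.id, entry.score)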
0.9a1
0.9a0
0.8.1
0.8
- The output format for llm logs has changed. Previously it was JSON - it's now a much more readable Markdown format suitable for pasting into other documents. #160
  - The new llm logs --json option can be used to get the old JSON format.
  - Pass llm logs --conversation ID or --cid ID to see the full logs for a specific conversation.
- You can now combine piped input and a prompt in a single command: cat script.py | llm 'explain this code'. This works even for models that do not support system prompts. #153
- Additional OpenAI-compatible models can now be configured with custom HTTP headers. This enables platforms such as openrouter.ai to be used with LLM, which can provide Claude access even without an Anthropic API key.
- Keys set in keys.json are now used in preference to environment variables. #158
- The documentation now includes a plugin directory listing all available plugins for LLM. #173
- New related tools section in the documentation describing ttok, strip-tags and symbex. #111
- The llm models, llm aliases and llm templates commands now default to running the same command as llm models list, llm aliases list and llm templates list. #167
- New llm keys (aka llm keys list) command for listing the names of all configured keys. #174
- Two new Python API functions, llm.set_alias(alias, model_id) and llm.remove_alias(alias), can be used to configure aliases from within Python code - see the sketch after this list. #154
- LLM is now compatible with both Pydantic 1 and Pydantic 2. This means you can install llm as a Python dependency in a project that depends on Pydantic 1 without running into dependency conflicts. Thanks, Chris Mungall. #147
- llm.get_model(model_id) is now documented as raising llm.UnknownModelError if the requested model does not exist. #155
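For example, a short sketch of those alias functions in use (the alias name here is arbitrary):

    import llm

    # Register an alias, use it to look up a model, then remove it again
    llm.set_alias("turbo", "gpt-3.5-turbo")
    model = llm.get_model("turbo")
    print(model.model_id)  # gpt-3.5-turbo
    llm.remove_alias("turbo")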
0.7.1
0.7
The new Model aliases commands can be used to configure additional aliases for models, for example:
llm aliases set turbo gpt-3.5-turbo-16k
Now you can run the 16,000 token gpt-3.5-turbo-16k model like this:
llm -m turbo 'An epic Greek-style saga about a cheesecake that builds a SQL database from scratch'
Use llm aliases list to see a list of aliases and llm aliases remove turbo to remove one again. #151
Notable new plugins
- llm-mlc can run local models released by the MLC project, including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
- llm-llama-cpp uses llama.cpp to run models published in the GGML format. See Run Llama 2 on your own Mac using LLM and Homebrew for more details.
Also in this release
- OpenAI models now have min and max validation on their floating point options. Thanks, Pavel Král. #115
- Fix for bug where llm templates list raised an error if a template had an empty prompt. Thanks, Sherwin Daganato. #132
- Fixed bug in llm install --editable option which prevented installation of .[test]. #136
- llm install --no-cache-dir and --force-reinstall options. #146