This repository was archived by the owner on Feb 2, 2025. It is now read-only.
Comparing changes

Comparing two branches of simonw/llm-claude-3: base `0.9` against head `main`.
  • 7 commits
  • 6 files changed
  • 1 contributor

Commits on Nov 20, 2024

  1. Use response.set_usage(), closes #29
     simonw committed Nov 20, 2024 (fd898ff)
  2. Release 0.10a0
     simonw committed Nov 20, 2024 (ec5b3bd)

Commits on Dec 2, 2024

  1. Depend on LLM 0.19, refs #29
     simonw committed Dec 2, 2024 (896cd62)
  2. Release 0.10
     Refs #29
     simonw authored Dec 2, 2024 (c62bf24)

Commits on Dec 17, 2024

  1. claude-3-5-sonnet-20240620 supports PDF, closes #30
     simonw committed Dec 17, 2024 (8c08d56)

Commits on Feb 2, 2025

  1. llm-claude-3 rename package for PyPI
     simonw committed Feb 2, 2025 (9d933ff)
  2. Plugin has been renamed
     Closes #31
     simonw authored Feb 2, 2025 (5b42bfc)
Showing with 54 additions and 70 deletions.
  1. +4 −63 README.md
  2. +11 −2 llm_claude_3.py
  3. +5 −0 pypi-rename-package/llm-claude-3/README.md
  4. +23 −0 pypi-rename-package/llm-claude-3/setup.py
  5. +2 −2 pyproject.toml
  6. +9 −3 tests/test_claude_3.py
67 changes: 4 additions & 63 deletions README.md
@@ -1,68 +1,9 @@
 # llm-claude-3
 
-[![PyPI](https://img.shields.io/pypi/v/llm-claude-3.svg)](https://pypi.org/project/llm-claude-3/)
-[![Changelog](https://img.shields.io/github/v/release/simonw/llm-claude-3?include_prereleases&label=changelog)](https://github.com/simonw/llm-claude-3/releases)
-[![Tests](https://github.com/simonw/llm-claude-3/actions/workflows/test.yml/badge.svg)](https://github.com/simonw/llm-claude-3/actions/workflows/test.yml)
-[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/llm-claude-3/blob/main/LICENSE)
+**This plugin has been renamed to [llm-anthropic](https://github.com/simonw/llm-anthropic)**. You should install that instead:
 
-LLM access to Claude 3 by Anthropic
+    llm install llm-anthropic
 
-## Installation
+Installing `llm-claude-3` will cause that plugin to be installed instead.
 
-Install this plugin in the same environment as [LLM](https://llm.datasette.io/).
-```bash
-llm install llm-claude-3
-```
-
-## Usage
-
-First, set [an API key](https://console.anthropic.com/settings/keys) for Claude 3:
-```bash
-llm keys set claude
-# Paste key here
-```
-
-You can also set the key in the environment variable `ANTHROPIC_API_KEY`
-
-Run `llm models` to list the models, and `llm models --options` to include a list of their options.
-
-Run prompts like this:
-```bash
-llm -m claude-3.5-sonnet 'Fun facts about pelicans'
-llm -m claude-3.5-haiku 'Fun facts about armadillos'
-llm -m claude-3-opus 'Fun facts about squirrels'
-```
-Images are supported too:
-```bash
-llm -m claude-3.5-sonnet 'describe this image' -a https://static.simonwillison.net/static/2024/pelicans.jpg
-llm -m claude-3-haiku 'extract text' -a page.png
-```
-
-## Development
-
-To set up this plugin locally, first checkout the code. Then create a new virtual environment:
-```bash
-cd llm-claude-3
-python3 -m venv venv
-source venv/bin/activate
-```
-Now install the dependencies and test dependencies:
-```bash
-llm install -e '.[test]'
-```
-To run the tests:
-```bash
-pytest
-```
-
-This project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record Anthropic API responses for the tests.
-
-If you add a new test that calls the API you can capture the API response like this:
-```bash
-PYTEST_ANTHROPIC_API_KEY="$(llm keys get claude)" pytest --record-mode once
-```
-You will need to have stored a valid Anthropic API key using this command first:
-```bash
-llm keys set claude
-# Paste key here
-```
+Here's [the README](https://github.com/simonw/llm-claude-3/blob/0.10/README.md) for the last release of this package, version 0.10.
13 changes: 11 additions & 2 deletions llm_claude_3.py
@@ -28,8 +28,8 @@ def register_models(register):
     )
     # 3.5 models
     register(
-        ClaudeMessagesLong("claude-3-5-sonnet-20240620"),
-        AsyncClaudeMessagesLong("claude-3-5-sonnet-20240620"),
+        ClaudeMessagesLong("claude-3-5-sonnet-20240620", supports_pdf=True),
+        AsyncClaudeMessagesLong("claude-3-5-sonnet-20240620", supports_pdf=True),
     )
     register(
         ClaudeMessagesLong("claude-3-5-sonnet-20241022", supports_pdf=True),
@@ -231,6 +231,13 @@ def build_kwargs(self, prompt, conversation):
             kwargs["extra_headers"] = self.extra_headers
         return kwargs
 
+    def set_usage(self, response):
+        usage = response.response_json.pop("usage")
+        if usage:
+            response.set_usage(
+                input=usage.get("input_tokens"), output=usage.get("output_tokens")
+            )
+
     def __str__(self):
         return "Anthropic Messages: {}".format(self.model_id)
 
@@ -250,6 +257,7 @@ def execute(self, prompt, stream, response, conversation):
             completion = client.messages.create(**kwargs)
             yield completion.content[0].text
         response.response_json = completion.model_dump()
+        self.set_usage(response)
 
 
 class ClaudeMessagesLong(ClaudeMessages):
@@ -270,6 +278,7 @@ async def execute(self, prompt, stream, response, conversation):
             completion = await client.messages.create(**kwargs)
             yield completion.content[0].text
         response.response_json = completion.model_dump()
+        self.set_usage(response)
 
 
 class AsyncClaudeMessagesLong(AsyncClaudeMessages):
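The pattern this diff introduces (pop `usage` out of the stored `response_json`, then record the counts via `set_usage()`) can be sketched in isolation. The `Response` class below is a hypothetical stub standing in for LLM's real response object; only the helper's logic mirrors the diff:

```python
class Response:
    """Hypothetical stub mimicking the relevant surface of LLM's response object."""

    def __init__(self, response_json):
        self.response_json = response_json
        self.input_tokens = None
        self.output_tokens = None

    def set_usage(self, input=None, output=None):
        # LLM 0.19's response.set_usage() records token counts like this.
        self.input_tokens = input
        self.output_tokens = output


def set_usage(response):
    # Same shape as the method added in the diff: remove "usage" from the
    # stored JSON (so it is not duplicated), then record the token counts.
    usage = response.response_json.pop("usage")
    if usage:
        response.set_usage(
            input=usage.get("input_tokens"), output=usage.get("output_tokens")
        )


response = Response(
    {"type": "message", "usage": {"input_tokens": 17, "output_tokens": 15}}
)
set_usage(response)
print(response.input_tokens, response.output_tokens)  # 17 15
```

Popping `usage` rather than merely reading it is what makes the test updates below necessary: the expected `response_json` dictionaries no longer contain a `"usage"` key.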
5 changes: 5 additions & 0 deletions pypi-rename-package/llm-claude-3/README.md
@@ -0,0 +1,5 @@
+# llm-claude-3 is now llm-anthropic
+
+This package has been renamed. Use `pip install llm-anthropic` instead.
+
+New package: https://pypi.org/project/llm-anthropic/
23 changes: 23 additions & 0 deletions pypi-rename-package/llm-claude-3/setup.py
@@ -0,0 +1,23 @@
+from setuptools import setup
+import os
+
+VERSION = "0.11"
+
+
+def get_long_description():
+    with open(
+        os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md"),
+        encoding="utf8",
+    ) as fp:
+        return fp.read()
+
+
+setup(
+    name="llm-claude-3",
+    description="llm-claude-3 is now llm-anthropic",
+    long_description=get_long_description(),
+    long_description_content_type="text/markdown",
+    version=VERSION,
+    install_requires=["llm-anthropic"],
+    classifiers=["Development Status :: 7 - Inactive"],
+)
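The shim's `get_long_description()` helper reads the README sitting next to `setup.py` so that PyPI renders the rename notice as the package page. The pattern can be exercised standalone; the temporary directory below is purely illustrative, standing in for the package root:

```python
import os
import tempfile


def get_long_description(directory):
    # Mirrors the helper in the shim's setup.py: read the README.md that
    # lives alongside setup.py and return it for PyPI's long description.
    with open(os.path.join(directory, "README.md"), encoding="utf8") as fp:
        return fp.read()


# Demonstrate against a throwaway directory containing a minimal README.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "README.md"), "w", encoding="utf8") as fp:
        fp.write("# llm-claude-3 is now llm-anthropic\n")
    long_description = get_long_description(d)

print(long_description)
```

The other half of the shim is `install_requires=["llm-anthropic"]`, which makes `pip install llm-claude-3` pull in the renamed package, while the `Development Status :: 7 - Inactive` classifier flags the old name as retired.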
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "llm-claude-3"
-version = "0.9"
+version = "0.10"
 description = "LLM access to Claude 3 by Anthropic"
 readme = "README.md"
 authors = [{name = "Simon Willison"}]
@@ -9,7 +9,7 @@ classifiers = [
     "License :: OSI Approved :: Apache Software License"
 ]
 dependencies = [
-    "llm>=0.18",
+    "llm>=0.19",
     "anthropic>=0.39.0",
 ]
 
12 changes: 9 additions & 3 deletions tests/test_claude_3.py
@@ -30,8 +30,10 @@ def test_prompt():
         "stop_reason": "end_turn",
         "stop_sequence": None,
         "type": "message",
-        "usage": {"input_tokens": 17, "output_tokens": 15},
     }
+    assert response.input_tokens == 17
+    assert response.output_tokens == 15
+    assert response.token_details is None
 
 
 @pytest.mark.vcr
@@ -50,8 +52,10 @@ async def test_async_prompt():
         "stop_reason": "end_turn",
         "stop_sequence": None,
         "type": "message",
-        "usage": {"input_tokens": 17, "output_tokens": 15},
     }
+    assert response.input_tokens == 17
+    assert response.output_tokens == 15
+    assert response.token_details is None
 
 
 EXPECTED_IMAGE_TEXT = (
@@ -86,5 +90,7 @@ def test_image_prompt():
         "stop_reason": "end_turn",
         "stop_sequence": None,
         "type": "message",
-        "usage": {"input_tokens": 76, "output_tokens": 75},
     }
+    assert response.input_tokens == 76
+    assert response.output_tokens == 75
+    assert response.token_details is None