Various docs improvements #809

Merged · 8 commits · Jan 30, 2025
18 changes: 9 additions & 9 deletions docs/index.md
@@ -13,32 +13,32 @@ We built PydanticAI with one simple aim: to bring that FastAPI feeling to GenAI

## Why use PydanticAI

-:material-account-group:{ .md .middle .team-blue }&nbsp;<strong class="vertical-middle">Built by the Pydantic Team</strong><br>
+* __Built by the Pydantic Team__:
Built by the team behind [Pydantic](https://docs.pydantic.dev/latest/) (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more).

-:fontawesome-solid-shapes:{ .md .middle .shapes-orange }&nbsp;<strong class="vertical-middle">Model-agnostic</strong><br>
+* __Model-agnostic__:
Supports OpenAI, Anthropic, Gemini, Deepseek, Ollama, Groq, Cohere, and Mistral, and there is a simple interface to implement support for [other models](models.md).

-:logfire-logo:{ .md .middle }&nbsp;<strong class="vertical-middle">Pydantic Logfire Integration</strong><br>
+* __Pydantic Logfire Integration__:
Seamlessly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire) for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.

-:material-shield-check:{ .md .middle .secure-green }&nbsp;<strong class="vertical-middle">Type-safe</strong><br>
+* __Type-safe__:
Designed to make [type checking](agents.md#static-type-checking) as powerful and informative as possible for you.

-:snake:{ .md .middle }&nbsp;<strong class="vertical-middle">Python-centric Design</strong><br>
+* __Python-centric Design__:
Leverages Python's familiar control flow and agent composition to build your AI-driven projects, making it easy to apply standard Python best practices you'd use in any other (non-AI) project.

-:simple-pydantic:{ .md .middle .pydantic-pink }&nbsp;<strong class="vertical-middle">Structured Responses</strong><br>
+* __Structured Responses__:
Harnesses the power of [Pydantic](https://docs.pydantic.dev/latest/) to [validate and structure](results.md#structured-result-validation) model outputs, ensuring responses are consistent across runs.

-:material-puzzle-plus:{ .md .middle .puzzle-purple }&nbsp;<strong class="vertical-middle">Dependency Injection System</strong><br>
+* __Dependency Injection System__:
Offers an optional [dependency injection](dependencies.md) system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions).
This is useful for testing and eval-driven iterative development.

-:material-sine-wave:{ .md .middle }&nbsp;<strong class="vertical-middle">Streamed Responses</strong><br>
+* __Streamed Responses__:
Provides the ability to [stream](results.md#streamed-results) LLM outputs continuously, with immediate validation, ensuring rapid and accurate results.

-:material-graph:{ .md .middle .graph-green }&nbsp;<strong class="vertical-middle">Graph Support</strong><br>
+* __Graph Support__:
[Pydantic Graph](graph.md) provides a powerful way to define graphs using type hints; this is useful in complex applications where standard control flow can degrade to spaghetti code.

!!! example "In Beta"
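Taken together, the features above amount to a small core API. As a minimal sketch of the structured-response and type-safety points (the `result_type` keyword and `openai:gpt-4o` model name follow the documented `Agent` interface; the prompt and output values are illustrative):

```python
from pydantic import BaseModel

from pydantic_ai import Agent


class CityInfo(BaseModel):
    """Structured result that the model's output is validated against."""

    city: str
    country: str


# result_type drives Pydantic validation of the model output, and static
# type checkers infer result.data as CityInfo.
agent = Agent('openai:gpt-4o', result_type=CityInfo)

result = agent.run_sync('What is the windiest capital city in the EU?')
print(result.data)
#> city='Dublin' country='Ireland'
```

If the model returns something that doesn't validate as `CityInfo`, the agent can retry the request, which is what keeps responses consistent across runs.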
22 changes: 21 additions & 1 deletion docs/logfire.md
@@ -59,7 +59,9 @@ import logfire
logfire.configure()
```

-The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire, including how to instrument other libraries like Pydantic, HTTPX and FastAPI.
+The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire,
+including how to instrument other libraries like [Pydantic](https://logfire.pydantic.dev/docs/integrations/pydantic/),
+[HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/).

Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector.

@@ -79,3 +81,21 @@ To demonstrate how Logfire can let you visualise the flow of a PydanticAI run, h
We can also query data with SQL in Logfire to monitor the performance of an application. Here's a real-world example of using Logfire to monitor PydanticAI runs inside Logfire itself:

![Logfire monitoring PydanticAI](img/logfire-monitoring-pydanticai.png)

### Monitoring HTTPX Requests

To monitor HTTPX requests made by models, you can use `logfire`'s [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) integration.

Instrumentation is as easy as adding the following three lines to your application:

```py {title="instrument_httpx.py" test="skip" lint="skip"}
...
import logfire
logfire.configure() # (1)!
logfire.instrument_httpx() # (2)!
...
```

In particular, this can help you trace specific requests, responses, and headers, which is especially useful
if you're using a custom `httpx` client in your model.
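For reference, a self-contained version of the snippet above might look like this (it assumes a configured Logfire project and an OpenAI API key in the environment; the agent and prompt are illustrative):

```python
import logfire

from pydantic_ai import Agent

logfire.configure()  # picks up Logfire credentials for the current project
logfire.instrument_httpx()  # traces every HTTPX request, including model API calls

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
result = agent.run_sync('What is the capital of France?')
print(result.data)
#> Paris
```

With both calls in place, each agent run shows up in Logfire with the underlying HTTP request to the model provider nested inside it.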
16 changes: 10 additions & 6 deletions docs/message-history.md
@@ -166,7 +166,7 @@ print(result1.data)

result2 = agent.run_sync('Explain?', message_history=result1.new_messages())
print(result2.data)
-#> This is an excellent joke invent by Samuel Colvin, it needs no explanation.
+#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
@@ -210,7 +210,7 @@ print(result2.all_messages())
ModelResponse(
parts=[
TextPart(
-                content='This is an excellent joke invent by Samuel Colvin, it needs no explanation.',
+                content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
part_kind='text',
)
],
@@ -229,7 +229,9 @@ Since messages are defined by simple dataclasses, you can manually create and ma

The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.

-```python
+In the example below, we reuse the message from the first agent run, which uses the `openai:gpt-4o` model, in a second agent run using the `google-gla:gemini-1.5-pro` model.
+
+```python {title="Reusing messages with a different model" hl_lines="11"}
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
@@ -239,10 +241,12 @@ print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.

result2 = agent.run_sync(
-    'Explain?', model='gemini-1.5-pro', message_history=result1.new_messages()
+    'Explain?',
+    model='google-gla:gemini-1.5-pro',
+    message_history=result1.new_messages(),
)
print(result2.data)
-#> This is an excellent joke invent by Samuel Colvin, it needs no explanation.
+#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
@@ -286,7 +290,7 @@ print(result2.all_messages())
ModelResponse(
parts=[
TextPart(
-                content='This is an excellent joke invent by Samuel Colvin, it needs no explanation.',
+                content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
part_kind='text',
)
],
6 changes: 6 additions & 0 deletions docs/troubleshooting.md
@@ -19,3 +19,9 @@ Note: This fix also applies to Google Colab.
### `UserError: API key must be provided or set in the [MODEL]_API_KEY environment variable`

If you're running into issues with setting the API key for your model, visit the [Models](models.md) page to learn more about how to set an environment variable and/or pass in an `api_key` argument.

## Monitoring HTTPX Requests

You can use custom `httpx` clients in your models to access specific requests, responses, and headers at runtime.

It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-httpx-requests) to monitor these requests and responses.
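As a sketch of that custom-client approach, assuming `OpenAIModel` accepts an `http_client` argument (the event-hook mechanism is standard `httpx`; treat the exact wiring as illustrative):

```python
import httpx

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel


async def log_request(request: httpx.Request) -> None:
    # Runs before each model API call; inspect the URL and headers here.
    print(f'-> {request.method} {request.url}')


async def log_response(response: httpx.Response) -> None:
    print(f'<- {response.status_code} {response.url}')


client = httpx.AsyncClient(
    event_hooks={'request': [log_request], 'response': [log_response]}
)
model = OpenAIModel('gpt-4o', http_client=client)  # http_client is an assumption here
agent = Agent(model)
result = agent.run_sync('Tell me a joke.')
```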
5 changes: 5 additions & 0 deletions pydantic_ai_slim/pydantic_ai/settings.py
@@ -80,6 +80,7 @@ class ModelSettings(TypedDict, total=False):
"""Whether to allow parallel tool calls.

Supported by:

* OpenAI (some models, not o1)
* Groq
* Anthropic
@@ -89,6 +90,7 @@ class ModelSettings(TypedDict, total=False):
"""The random seed to use for the model, theoretically allowing for deterministic results.

Supported by:

* OpenAI
* Groq
* Cohere
@@ -99,6 +101,7 @@ class ModelSettings(TypedDict, total=False):
"""Penalize new tokens based on whether they have appeared in the text so far.

Supported by:

* OpenAI
* Groq
* Cohere
@@ -110,6 +113,7 @@ class ModelSettings(TypedDict, total=False):
"""Penalize new tokens based on their existing frequency in the text so far.

Supported by:

* OpenAI
* Groq
* Cohere
@@ -121,6 +125,7 @@ class ModelSettings(TypedDict, total=False):
"""Modify the likelihood of specified tokens appearing in the completion.

Supported by:

* OpenAI
* Groq
"""
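For context on how these `ModelSettings` fields are consumed, settings are typically passed per run; a minimal sketch (the field names come from the docstrings above, and as those notes say, support varies by provider):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

# ModelSettings is a TypedDict (total=False), so a plain dict with any
# subset of the fields is accepted; unsupported fields may be ignored or
# rejected depending on the provider.
result = agent.run_sync(
    'Tell me a joke.',
    model_settings={'seed': 42, 'frequency_penalty': 0.5},
)
print(result.data)
```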
2 changes: 1 addition & 1 deletion tests/test_examples.py
@@ -179,7 +179,7 @@ def rich_prompt_ask(prompt: str, *_args: Any, **_kwargs: Any) -> str:
'The weather in West London is raining, while in Wiltshire it is sunny.'
),
'Tell me a joke.': 'Did you hear about the toothpaste scandal? They called it Colgate.',
-    'Explain?': 'This is an excellent joke invent by Samuel Colvin, it needs no explanation.',
+    'Explain?': 'This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
'What is the capital of France?': 'Paris',
'What is the capital of Italy?': 'Rome',
'What is the capital of the UK?': 'London',