
# Agents
(agents)

## Overview

Agents API.

## Available Operations

* [complete](#complete) - Agents Completion
* [stream](#stream) - Stream an Agents completion as server-sent events

## complete

Agents Completion

### Example Usage

```python
from mistralai import Mistral
import os

s = Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
)

res = s.agents.complete(messages=[
    {
        "role": "user",
        "content": "<value>",
    },
], agent_id="<value>")

if res is not None:
    # handle response
    pass
```

### Parameters

| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `messages` | List[models.AgentsCompletionRequestMessages] | ✔️ | The prompt(s) to generate completions for, encoded as a list of dicts with `role` and `content`. | `[{"role": "user", "content": "Who is the best French painter? Answer in one short sentence."}]` |
| `agent_id` | str | ✔️ | The ID of the agent to use for this completion. | |
| `max_tokens` | OptionalNullable[int] | | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. | |
| `min_tokens` | OptionalNullable[int] | | The minimum number of tokens to generate in the completion. | |
| `stream` | Optional[bool] | | Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. Otherwise, the server holds the request open until the timeout or until completion, and the response contains the full result as JSON. | |
| `stop` | Optional[models.AgentsCompletionRequestStop] | | Stop generation if this token is detected, or if one of these tokens is detected when an array is provided. | |
| `random_seed` | OptionalNullable[int] | | The seed to use for random sampling. If set, different calls will generate deterministic results. | |
| `response_format` | Optional[models.ResponseFormat] | | N/A | |
| `tools` | List[models.Tool] | | N/A | |
| `tool_choice` | Optional[models.AgentsCompletionRequestToolChoice] | | N/A | |
| `retries` | Optional[utils.RetryConfig] | | Configuration to override the default retry behavior of the client. | |
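The optional parameters above are plain keyword arguments to `complete`. A minimal sketch of assembling them together (the values here are illustrative, not recommendations, and `<value>` placeholders stay as documented):

```python
# Illustrative keyword arguments for agents.complete, built as a dict
# so the optional parameters documented above are easy to see together.
kwargs = {
    "agent_id": "<value>",  # required: the agent to use for this completion
    "messages": [
        {"role": "user", "content": "Who is the best French painter? Answer in one short sentence."},
    ],
    "max_tokens": 64,    # prompt tokens + max_tokens must fit the model's context length
    "random_seed": 42,   # makes different calls generate deterministic results
    "stop": ["\n\n"],    # array form: stop when any of these tokens is detected
}
# With a configured client `s`, the call would be: s.agents.complete(**kwargs)
print(sorted(kwargs))  # → ['agent_id', 'max_tokens', 'messages', 'random_seed', 'stop']
```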

### Response

**models.ChatCompletionResponse**

### Errors

| Error Object | Status Code | Content Type |
| --- | --- | --- |
| models.HTTPValidationError | 422 | application/json |
| models.SDKError | 4xx-5xx | \*/\* |

## stream

Mistral AI provides the ability to stream responses back to a client in order to allow partial results for certain requests. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
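The event framing described above can be illustrated with a minimal sketch. This is not the SDK's internal implementation; it only handles the `data:` lines and `data: [DONE]` terminator described here:

```python
def parse_sse_data_lines(raw: str):
    """Yield the payload of each data-only server-sent event,
    stopping at the `data: [DONE]` end-of-stream marker."""
    for line in raw.splitlines():
        if not line.startswith("data:"):
            continue  # skip blank separator lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # terminator: no further events follow
        yield payload

# Example framing: two events followed by the terminator.
stream = 'data: {"tok": "Hello"}\n\ndata: {"tok": " world"}\n\ndata: [DONE]\n'
print(list(parse_sse_data_lines(stream)))  # → ['{"tok": "Hello"}', '{"tok": " world"}']
```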

### Example Usage

```python
from mistralai import Mistral
import os

s = Mistral(
    api_key=os.getenv("MISTRAL_API_KEY", ""),
)

res = s.agents.stream(messages=[
    {
        "role": "user",
        "content": [
            {
                "image_url": {
                    "url": "http://possible-veal.org",
                },
            },
        ],
    },
], agent_id="<value>")

if res is not None:
    for event in res:
        # handle event
        print(event, flush=True)
```

### Parameters

| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `messages` | List[models.AgentsCompletionStreamRequestMessages] | ✔️ | The prompt(s) to generate completions for, encoded as a list of dicts with `role` and `content`. | `[{"role": "user", "content": "Who is the best French painter? Answer in one short sentence."}]` |
| `agent_id` | str | ✔️ | The ID of the agent to use for this completion. | |
| `max_tokens` | OptionalNullable[int] | | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. | |
| `min_tokens` | OptionalNullable[int] | | The minimum number of tokens to generate in the completion. | |
| `stream` | Optional[bool] | | N/A | |
| `stop` | Optional[models.AgentsCompletionStreamRequestStop] | | Stop generation if this token is detected, or if one of these tokens is detected when an array is provided. | |
| `random_seed` | OptionalNullable[int] | | The seed to use for random sampling. If set, different calls will generate deterministic results. | |
| `response_format` | Optional[models.ResponseFormat] | | N/A | |
| `tools` | List[models.Tool] | | N/A | |
| `tool_choice` | Optional[models.AgentsCompletionStreamRequestToolChoice] | | N/A | |
| `retries` | Optional[utils.RetryConfig] | | Configuration to override the default retry behavior of the client. | |

### Response

**Union[Generator[models.CompletionEvent, None, None], AsyncGenerator[models.CompletionEvent, None]]**
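The union above means the concrete return type depends on whether the sync or async client path produced it. A minimal sketch of dispatching on that union, using dummy generators rather than real `models.CompletionEvent` streams:

```python
import asyncio
import inspect

async def drain(events):
    """Collect events from either a sync or an async generator,
    matching the Union return type documented above."""
    if inspect.isasyncgen(events):
        return [e async for e in events]
    return list(events)

# Dummy stand-ins for the two halves of the Union:
def sync_events():
    yield "event-1"
    yield "event-2"

async def async_events():
    yield "event-1"
    yield "event-2"

print(asyncio.run(drain(sync_events())))   # → ['event-1', 'event-2']
print(asyncio.run(drain(async_events())))  # → ['event-1', 'event-2']
```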

### Errors

| Error Object | Status Code | Content Type |
| --- | --- | --- |
| models.HTTPValidationError | 422 | application/json |
| models.SDKError | 4xx-5xx | \*/\* |