SDK regeneration
fern-api[bot] committed Nov 27, 2024
1 parent 756515a commit ee4c2e7
Showing 14 changed files with 313 additions and 1,886 deletions.
1 change: 0 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
@@ -3,4 +3,3 @@ dist/
__pycache__/
poetry.toml
.ruff_cache/
.venv/
1,946 changes: 148 additions & 1,798 deletions poetry.lock

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "cohere"
version = "5.11.4"
version = "5.12.0"
description = ""
readme = "README.md"
authors = []
71 changes: 50 additions & 21 deletions reference.md
@@ -2365,6 +2365,7 @@ response = client.v2.chat_stream(
),
)
],
strict_tools=True,
documents=["string"],
citation_options=CitationOptions(
mode="FAST",
@@ -2381,6 +2382,7 @@
p=1.1,
return_prompt=True,
logprobs=True,
stream=True,
)
for chunk in response:
yield chunk
@@ -2422,6 +2424,19 @@ A list of available tools (functions) that the model may suggest invoking before
When `tools` is passed (without `tool_results`), the `text` content in the response will be empty and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.


</dd>
</dl>

<dl>
<dd>

**strict_tools:** `typing.Optional[bool]`

When set to `true`, tool calls in the Assistant message will be forced to follow the tool definition strictly. Learn more in the [Strict Tools guide](https://docs.cohere.com/docs/structured-outputs-json#structured-outputs-tools).

**Note**: The first few requests with a new set of tools will take longer to process.


</dd>
</dl>
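As a sketch of how `strict_tools` might be combined with a tool definition: the tool name, schema, and model below are illustrative assumptions rather than part of the SDK, and the network call is guarded so the snippet runs without credentials.

```python
import os

# Hypothetical tool definition; the name, description, and JSON schema
# here are illustrative assumptions, not part of the SDK.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# The request needs a valid API key, so it is only attempted when
# CO_API_KEY is set in the environment.
if os.environ.get("CO_API_KEY"):
    import cohere

    client = cohere.Client()
    response = client.v2.chat(
        model="command-r-plus",  # assumed model name
        messages=[{"role": "user", "content": "Weather in Toronto?"}],
        tools=[get_weather_tool],
        strict_tools=True,  # force tool calls to follow the schema exactly
    )
    print(response.message.tool_calls)
```

With `strict_tools=True`, any `tool_calls` emitted for `get_weather` would be constrained to the declared schema (a single required `city` string), at the cost of slower first requests while the tool set is processed.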

@@ -2546,7 +2561,7 @@ Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty

**k:** `typing.Optional[float]`

Ensures that only the top `k` most likely tokens are considered for generation at each step. When `k` is set to `0`, k-sampling is disabled.
Defaults to `0`, min value of `0`, max value of `500`.


@@ -2576,7 +2591,7 @@ Defaults to `0.75`, min value of `0.01`, max value of `0.99`.
<dl>
<dd>

**logprobs:** `typing.Optional[bool]` — Whether to return the log probabilities of the generated tokens. Defaults to false.
**logprobs:** `typing.Optional[bool]` — Defaults to `false`. When set to `true`, the log probabilities of the generated tokens will be included in the response.


</dd>
@@ -2640,6 +2655,7 @@ client.v2.chat(
content="messages",
)
],
stream=False,
)

```
@@ -2679,6 +2695,19 @@ A list of available tools (functions) that the model may suggest invoking before
When `tools` is passed (without `tool_results`), the `text` content in the response will be empty and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.


</dd>
</dl>

<dl>
<dd>

**strict_tools:** `typing.Optional[bool]`

When set to `true`, tool calls in the Assistant message will be forced to follow the tool definition strictly. Learn more in the [Strict Tools guide](https://docs.cohere.com/docs/structured-outputs-json#structured-outputs-tools).

**Note**: The first few requests with a new set of tools will take longer to process.


</dd>
</dl>

@@ -2803,7 +2832,7 @@ Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty

**k:** `typing.Optional[float]`

Ensures that only the top `k` most likely tokens are considered for generation at each step. When `k` is set to `0`, k-sampling is disabled.
Defaults to `0`, min value of `0`, max value of `500`.


Expand Down Expand Up @@ -2833,7 +2862,7 @@ Defaults to `0.75`. min value of `0.01`, max value of `0.99`.
<dl>
<dd>

**logprobs:** `typing.Optional[bool]` — Whether to return the log probabilities of the generated tokens. Defaults to false.
**logprobs:** `typing.Optional[bool]` — Defaults to `false`. When set to `true`, the log probabilities of the generated tokens will be included in the response.


</dd>
@@ -3057,7 +3086,15 @@ client.v2.rerank(
<dl>
<dd>

**model:** `str` — The identifier of the model to use, one of: `rerank-english-v3.0`, `rerank-multilingual-v3.0`, `rerank-english-v2.0`, `rerank-multilingual-v2.0`
**model:** `str`

The identifier of the model to use.

Supported models:
- `rerank-english-v3.0`
- `rerank-multilingual-v3.0`
- `rerank-english-v2.0`
- `rerank-multilingual-v2.0`

</dd>
</dl>
@@ -3073,30 +3110,22 @@ client.v2.rerank(
<dl>
<dd>

**documents:** `typing.Sequence[V2RerankRequestDocumentsItem]`

A list of document objects or strings to rerank.
If a document object is provided, the `text` field is required and all other fields will be preserved in the response.

The total max chunks (length of documents * max_chunks_per_doc) must be less than 10000.

We recommend a maximum of 1,000 documents for optimal endpoint performance.

**documents:** `typing.Sequence[str]`

A list of texts that will be compared to the `query`.
For optimal performance we recommend against sending more than 1,000 documents in a single request.

**Note**: long documents will automatically be truncated to the value of `max_tokens_per_doc`.

**Note**: structured data should be formatted as YAML strings for best performance.

</dd>
</dl>

<dl>
<dd>

**top_n:** `typing.Optional[int]` — The number of most relevant documents or indices to return, defaults to the length of the documents

**top_n:** `typing.Optional[int]` — Limits the number of returned rerank results to the specified value. If not passed, all the rerank results will be returned.

</dd>
</dl>

<dl>
<dd>

**rank_fields:** `typing.Optional[typing.Sequence[str]]` — If a JSON object is provided, you can specify which keys you would like to have considered for reranking. The model will rerank based on order of the fields passed in (i.e. rank_fields=['title','author','text'] will rerank using the values in title, author, text sequentially. If the length of title, author, and text exceeds the context length of the model, the chunking will not re-consider earlier fields). If not provided, the model will use the default text field for ranking.

</dd>
</dl>
@@ -3115,7 +3144,7 @@ We recommend a maximum of 1,000 documents for optimal endpoint performance.
<dl>
<dd>

**max_chunks_per_doc:** `typing.Optional[int]` — The maximum number of chunks to produce internally from a document
**max_tokens_per_doc:** `typing.Optional[int]` — Defaults to `4096`. Long documents will be automatically truncated to the specified number of tokens.

</dd>
</dl>
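The rerank parameters above can be sketched together as follows. The YAML helper is a minimal hand-rolled illustration of the "format structured data as YAML strings" note (real code would likely use PyYAML), and the API call itself is an assumption-laden sketch guarded behind a `CO_API_KEY` environment variable.

```python
import os

def to_yaml_str(record: dict) -> str:
    # Minimal flat-dict YAML rendering, enough to illustrate the
    # "structured data as YAML strings" recommendation; a real
    # implementation would use PyYAML's safe_dump.
    return "\n".join(f"{key}: {value}" for key, value in record.items())

records = [
    {"title": "Loon habits", "text": "The common loon winters on the coast."},
    {"title": "Penguin diet", "text": "Penguins feed mostly on krill and fish."},
]
documents = [to_yaml_str(r) for r in records]

# The request needs a valid API key, so it is only attempted when one is set.
if os.environ.get("CO_API_KEY"):
    import cohere

    co = cohere.Client()
    results = co.v2.rerank(
        model="rerank-english-v3.0",
        query="What do penguins eat?",
        documents=documents,
        top_n=1,
        max_tokens_per_doc=4096,  # long documents truncated to this length
    )
    print(results.results[0].index)
```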
@@ -5043,7 +5072,7 @@ client.finetuning.update_finetuned_model(
<dl>
<dd>

**last_used:** `typing.Optional[dt.datetime]` — Timestamp for the latest request to this fine-tuned model.
**last_used:** `typing.Optional[dt.datetime]` — Deprecated: Timestamp for the latest request to this fine-tuned model.

</dd>
</dl>
4 changes: 2 additions & 2 deletions src/cohere/__init__.py
@@ -106,6 +106,7 @@
DatasetType,
DatasetValidationStatus,
DebugStreamedChatResponse,
DebugStreamedChatResponseV2,
DeleteConnectorResponse,
DetokenizeResponse,
Document,
@@ -263,7 +264,6 @@
V2ChatStreamRequestDocumentsItem,
V2ChatStreamRequestSafetyMode,
V2EmbedRequestTruncate,
V2RerankRequestDocumentsItem,
V2RerankResponse,
V2RerankResponseResultsItem,
V2RerankResponseResultsItemDocument,
@@ -391,6 +391,7 @@
"DatasetsGetUsageResponse",
"DatasetsListResponse",
"DebugStreamedChatResponse",
"DebugStreamedChatResponseV2",
"DeleteConnectorResponse",
"DetokenizeResponse",
"Document",
@@ -528,7 +529,6 @@
"V2ChatStreamRequestDocumentsItem",
"V2ChatStreamRequestSafetyMode",
"V2EmbedRequestTruncate",
"V2RerankRequestDocumentsItem",
"V2RerankResponse",
"V2RerankResponseResultsItem",
"V2RerankResponseResultsItemDocument",
2 changes: 1 addition & 1 deletion src/cohere/core/client_wrapper.py
@@ -24,7 +24,7 @@ def get_headers(self) -> typing.Dict[str, str]:
headers: typing.Dict[str, str] = {
"X-Fern-Language": "Python",
"X-Fern-SDK-Name": "cohere",
"X-Fern-SDK-Version": "5.11.4",
"X-Fern-SDK-Version": "5.12.0",
}
if self._client_name is not None:
headers["X-Client-Name"] = self._client_name
4 changes: 2 additions & 2 deletions src/cohere/finetuning/client.py
@@ -543,7 +543,7 @@ def update_finetuned_model(
Timestamp for the completed fine-tuning.
last_used : typing.Optional[dt.datetime]
Timestamp for the latest request to this fine-tuned model.
Deprecated: Timestamp for the latest request to this fine-tuned model.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
@@ -1468,7 +1468,7 @@ async def update_finetuned_model(
Timestamp for the completed fine-tuning.
last_used : typing.Optional[dt.datetime]
Timestamp for the latest request to this fine-tuned model.
Deprecated: Timestamp for the latest request to this fine-tuned model.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
2 changes: 1 addition & 1 deletion src/cohere/finetuning/finetuning/types/finetuned_model.py
@@ -61,7 +61,7 @@ class FinetunedModel(UncheckedBaseModel):

last_used: typing.Optional[dt.datetime] = pydantic.Field(default=None)
"""
read-only. Timestamp for the latest request to this fine-tuned model.
read-only. Deprecated: Timestamp for the latest request to this fine-tuned model.
"""

if IS_PYDANTIC_V2:
10 changes: 10 additions & 0 deletions src/cohere/types/__init__.py
@@ -111,6 +111,7 @@
from .detokenize_response import DetokenizeResponse
from .document import Document
from .document_content import DocumentContent
from .document_source import DocumentSource
from .embed_by_type_response import EmbedByTypeResponse
from .embed_by_type_response_embeddings import EmbedByTypeResponseEmbeddings
from .embed_floats_response import EmbedFloatsResponse
@@ -188,6 +189,7 @@
ContentDeltaStreamedChatResponseV2,
ContentEndStreamedChatResponseV2,
ContentStartStreamedChatResponseV2,
DebugStreamedChatResponseV2,
MessageEndStreamedChatResponseV2,
MessageStartStreamedChatResponseV2,
StreamedChatResponseV2,
@@ -200,8 +202,12 @@
from .summarize_request_format import SummarizeRequestFormat
from .summarize_request_length import SummarizeRequestLength
from .summarize_response import SummarizeResponse
from .system_message import SystemMessage
from .system_message_content import SystemMessageContent
from .system_message_content_item import SystemMessageContentItem, TextSystemMessageContentItem
from .text_content import TextContent
from .text_response_format import TextResponseFormat
from .text_response_format_v2 import TextResponseFormatV2
from .tokenize_response import TokenizeResponse
from .too_many_requests_error_body import TooManyRequestsErrorBody
from .tool import Tool
@@ -210,17 +216,20 @@
from .tool_call_v2 import ToolCallV2
from .tool_call_v2function import ToolCallV2Function
from .tool_content import DocumentToolContent, TextToolContent, ToolContent
from .tool_message import ToolMessage
from .tool_message_v2 import ToolMessageV2
from .tool_message_v2content import ToolMessageV2Content
from .tool_parameter_definitions_value import ToolParameterDefinitionsValue
from .tool_result import ToolResult
from .tool_source import ToolSource
from .tool_v2 import ToolV2
from .tool_v2function import ToolV2Function
from .unprocessable_entity_error_body import UnprocessableEntityErrorBody
from .update_connector_response import UpdateConnectorResponse
from .usage import Usage
from .usage_billed_units import UsageBilledUnits
from .usage_tokens import UsageTokens
from .user_message import UserMessage
from .user_message_content import UserMessageContent

__all__ = [
@@ -329,6 +338,7 @@
"DatasetType",
"DatasetValidationStatus",
"DebugStreamedChatResponse",
"DebugStreamedChatResponseV2",
"DeleteConnectorResponse",
"DetokenizeResponse",
"Document",
18 changes: 18 additions & 0 deletions src/cohere/types/streamed_chat_response_v2.py
@@ -213,6 +213,23 @@ class Config:
extra = pydantic.Extra.allow


class DebugStreamedChatResponseV2(UncheckedBaseModel):
"""
StreamedChatResponse is returned in streaming mode (specified with `stream=True` in the request).
"""

type: typing.Literal["debug"] = "debug"
prompt: typing.Optional[str] = None

if IS_PYDANTIC_V2:
model_config: typing.ClassVar[pydantic.ConfigDict] = pydantic.ConfigDict(extra="allow") # type: ignore # Pydantic v2
else:

class Config:
smart_union = True
extra = pydantic.Extra.allow


StreamedChatResponseV2 = typing_extensions.Annotated[
typing.Union[
MessageStartStreamedChatResponseV2,
@@ -226,6 +243,7 @@ class Config:
CitationStartStreamedChatResponseV2,
CitationEndStreamedChatResponseV2,
MessageEndStreamedChatResponseV2,
DebugStreamedChatResponseV2,
],
UnionMetadata(discriminant="type"),
]
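A consumer of `StreamedChatResponseV2` can dispatch on the `type` discriminant, which is how the new `"debug"` events can be skipped or logged without breaking existing loops. The sketch below assumes that `content-delta` events expose their text at `delta.message.content.text`; that attribute path is an assumption about the generated event models, not confirmed by this diff.

```python
from typing import Any, Iterable

def collect_text(events: Iterable[Any]) -> str:
    # Accumulate text from "content-delta" events; every other member of
    # the union ("message-start", "citation-start", "debug", ...) is
    # simply skipped here.
    parts = []
    for event in events:
        if getattr(event, "type", None) == "content-delta":
            parts.append(event.delta.message.content.text)
    return "".join(parts)

# With a live client this would be driven by the stream itself, e.g.:
#   text = collect_text(client.v2.chat_stream(...))
```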
2 changes: 0 additions & 2 deletions src/cohere/v2/__init__.py
@@ -6,7 +6,6 @@
V2ChatStreamRequestDocumentsItem,
V2ChatStreamRequestSafetyMode,
V2EmbedRequestTruncate,
V2RerankRequestDocumentsItem,
V2RerankResponse,
V2RerankResponseResultsItem,
V2RerankResponseResultsItemDocument,
@@ -18,7 +17,6 @@
"V2ChatStreamRequestDocumentsItem",
"V2ChatStreamRequestSafetyMode",
"V2EmbedRequestTruncate",
"V2RerankRequestDocumentsItem",
"V2RerankResponse",
"V2RerankResponseResultsItem",
"V2RerankResponseResultsItemDocument",