Commit 4ba442b

fix(ai-proxy): remove model options' stream default value (#12013)

shreemaan-abhishek authored Mar 3, 2025
1 parent 88164f4 commit 4ba442b

Showing 3 changed files with 2 additions and 3 deletions.
1 change: 0 additions & 1 deletion apisix/plugins/ai-proxy/schema.lua

@@ -73,7 +73,6 @@ local model_options_schema = {
         stream = {
             description = "Stream response by SSE",
             type = "boolean",
-            default = false,
         }
     }
 }
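With the default gone, a config that omits `stream` validates to `nil` instead of being populated with `false`, so the field is only sent to the LLM provider when the user sets it explicitly. Below is a minimal runnable sketch of the idea in plain Lua; the `properties` wrapper is inferred from the hunk context, and the before/after behavior is described from how JSON-schema defaults generally work, not quoted from the plugin's code.

```lua
-- Minimal sketch of the schema fragment after this commit. The `properties`
-- wrapper is inferred from the hunk context; this is not a verbatim excerpt.
local model_options_schema = {
    type = "object",
    properties = {
        stream = {
            description = "Stream response by SSE",
            type = "boolean",
            -- no `default = false` anymore: an omitted `stream` stays nil
        },
    },
}

-- Effect on a user config that leaves `stream` unset:
local conf = {}
-- before: schema validation injected `stream = false`, and that explicit
--         false could be forwarded to the LLM provider in the request body
-- after:  conf.stream stays nil, so the field is omitted and the provider's
--         own default applies
assert(conf.stream == nil)
print(model_options_schema.properties.stream.default) -- nil
```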
2 changes: 1 addition & 1 deletion docs/en/latest/plugins/ai-proxy-multi.md

@@ -68,7 +68,7 @@ Proxying requests to OpenAI is supported now. Other LLM services will be support
 | provider.options.output_cost | No | number | Cost per 1M tokens in the AI-generated output. Minimum is 0. | |
 | provider.options.temperature | No | number | Defines the model's temperature (0.0 - 5.0) for randomness in responses. | |
 | provider.options.top_p | No | number | Defines the top-p probability mass (0 - 1) for nucleus sampling. | |
-| provider.options.stream | No | boolean | Enables streaming responses via SSE. | false |
+| provider.options.stream | No | boolean | Enables streaming responses via SSE. | |
 | provider.override.endpoint | No | string | Custom host override for the AI provider. | |
 | passthrough | No | boolean | If true, requests are forwarded without processing. | false |
 | timeout | No | integer | Request timeout in milliseconds (1-60000). | 3000 |
2 changes: 1 addition & 1 deletion docs/en/latest/plugins/ai-proxy.md

@@ -61,7 +61,7 @@ Proxying requests to OpenAI is supported now. Other LLM services will be support
 | model.options.output_cost | No | Number | Cost per 1M tokens in the output of the AI. Minimum: 0 |
 | model.options.temperature | No | Number | Matching temperature for models. Range: 0.0 - 5.0 |
 | model.options.top_p | No | Number | Top-p probability mass. Range: 0 - 1 |
-| model.options.stream | No | Boolean | Stream response by SSE. Default: false |
+| model.options.stream | No | Boolean | Stream response by SSE. |
 | override.endpoint | No | String | Override the endpoint of the AI provider |
 | passthrough | No | Boolean | If enabled, the response from LLM will be sent to the upstream. Default: false |
 | timeout | No | Integer | Timeout in milliseconds for requests to LLM. Range: 1 - 60000. Default: 3000 |
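To illustrate the user-facing effect, here is a hypothetical ai-proxy plugin configuration, written as a Lua table shaped after the attribute table above, that leaves `stream` unset. The `provider` and `name` values and the overall nesting are illustrative assumptions, not taken verbatim from the plugin docs.

```lua
-- Hypothetical route-level plugin config, shaped after the attribute table
-- above; field values are illustrative assumptions.
local ai_proxy_conf = {
    model = {
        provider = "openai",          -- assumed provider value
        name = "gpt-4",               -- assumed model name
        options = {
            temperature = 1.0,        -- within the documented 0.0 - 5.0 range
            -- `stream` deliberately omitted: after this commit it is NOT
            -- defaulted to false, so the provider decides whether to stream
        },
    },
    passthrough = false,              -- documented default
    timeout = 3000,                   -- documented default, in milliseconds
}
print(ai_proxy_conf.model.options.stream) -- nil, not false
```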
