Routine checks

Problem description
When streaming long outputs with any of the 01.AI (零一万物) models, the output gets truncated; no max_token is set by default.

Steps to reproduce
Ask a 01.AI model to generate more than about 800 tokens of output; the stream may be cut off.

Expected result
Add a max_token parameter to the request for the affected models.
The last chunk returned was: data: {"id":"****","object":"chat.completion.chunk","created":*****,"model":"yi-large","choices":[{"delta":{"content":"。"},"index":0,"finish_reason":"length"}], — so the output was apparently truncated for length (no max_token was set); after adding it, the problem goes away.
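A client can detect this kind of truncation by inspecting `finish_reason` in the streamed chunks, as in the response above. A minimal sketch, assuming OpenAI-style SSE lines (the chunk below is a mock shaped like the reported response, not real API output):

```python
import json

def is_truncated(sse_line: str) -> bool:
    """Return True if a streamed chat.completion.chunk line reports
    finish_reason == "length", i.e. the output hit the token cap."""
    payload = sse_line.removeprefix("data: ").strip()
    if payload == "[DONE]":
        return False
    chunk = json.loads(payload)
    return any(c.get("finish_reason") == "length"
               for c in chunk.get("choices", []))

# Mock chunk shaped like the one reported in this issue.
line = ('data: {"id":"x","object":"chat.completion.chunk","created":0,'
        '"model":"yi-large","choices":[{"delta":{"content":"."},'
        '"index":0,"finish_reason":"length"}]}')
print(is_truncated(line))  # True
```

A client that sees this could retry with an explicit max_token value instead of silently accepting the cut-off text.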
Doesn't 01.AI use the same format as OpenAI? I don't do any special handling, so the max_token parameter needs to be passed by the caller, right?
Regarding max_token: it does need to be passed by the caller, but if the client omits it, long outputs get cut off, whereas OpenAI apparently doesn't do this. Could you consider automatically adding a max_token parameter for the affected models when the client doesn't pass one? I'm not sure how hard that would be to implement.
This is an issue on 01.AI's side: if the parameter isn't passed, the default seemed to be 512; I don't know what it is now.
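The fix being requested could be sketched as a small request-normalization step on the relay side: if the incoming OpenAI-format payload targets one of the affected models and carries no `max_tokens` (the parameter's actual name in the OpenAI format), inject a generous default. A hedged sketch, where the model list and the default value are assumptions for illustration, not the project's actual config:

```python
# Hypothetical list of models that apply a low server-side cap
# when the client omits max_tokens.
MODELS_NEEDING_DEFAULT = {"yi-large", "yi-medium", "yi-spark"}
DEFAULT_MAX_TOKENS = 4096  # assumed default, not from the project

def normalize_request(payload: dict) -> dict:
    """Inject a default max_tokens for models known to truncate
    long streamed output when the client omits the parameter."""
    if (payload.get("model") in MODELS_NEEDING_DEFAULT
            and "max_tokens" not in payload):
        payload = {**payload, "max_tokens": DEFAULT_MAX_TOKENS}
    return payload

req = {"model": "yi-large",
       "messages": [{"role": "user", "content": "hi"}],
       "stream": True}
print(normalize_request(req)["max_tokens"])  # 4096
```

Leaving an explicitly client-supplied value untouched keeps the behavior backward compatible; only requests that omitted the parameter are changed.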