
support separate thinking process in the stream chunk #196

Open
arvinxx opened this issue Jan 24, 2025 · 1 comment
Labels
enhancement New feature or request

Comments


arvinxx commented Jan 24, 2025

We are integrating Ollama with DeepSeek R1, and we find that the current Ollama implementation adds the CoT to the content field. But in the official implementation it should be in a separate field such as reasoning_content.


As for the gemini-flash-thinking model, it also supports a separate field for the thinking content: https://ai.google.dev/gemini-api/docs/thinking#work-thoughts
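For reference, a client consuming a stream with separated reasoning might look like this (a sketch only; the chunk field names `reasoning_content` and `content` are assumed from the DeepSeek API referenced above, not from Ollama's current output):

```javascript
// Accumulate a parsed stream of chunks into the hidden reasoning and
// the visible answer. Each `chunk` is one parsed JSON stream object
// with optional `reasoning_content` and `content` string fields
// (field names assumed from the official DeepSeek API, a sketch only).
function accumulate(chunks) {
  let reasoning = '';
  let content = '';
  for (const chunk of chunks) {
    if (chunk.reasoning_content) reasoning += chunk.reasoning_content;
    if (chunk.content) content += chunk.content;
  }
  return { reasoning, content };
}
```

With the CoT in its own field, a UI can render or hide the reasoning without any string parsing of the answer itself.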


ecker00 commented Mar 10, 2025

In the meantime, a cheap workaround:

const [think, message] = res.response.split('</think>').map((s) => s.trim());
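A slightly more defensive variant of this workaround (a sketch, not from the thread; the shape of `res.response` is assumed from the snippet above) also covers responses that contain no `</think>` tag at all, where the plain split would put the whole answer into `think`:

```javascript
// Split an Ollama-style response into the model's thinking and the
// visible answer. If no </think> tag is present, treat the entire
// response as the answer (sketch; response shape assumed from above).
function splitThinking(response) {
  const idx = response.indexOf('</think>');
  if (idx === -1) return { think: '', message: response.trim() };
  return {
    think: response.slice(0, idx).replace('<think>', '').trim(),
    message: response.slice(idx + '</think>'.length).trim(),
  };
}
```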

@BruceMacD BruceMacD added the enhancement New feature or request label Mar 11, 2025

3 participants