
Ollama Model Selection and Error Handling Enhancements #135

Closed
wants to merge 1 commit into from

Conversation

Pratik960

-> Implemented model selection for Ollama in settings, allowing users to choose a specific model when Ollama is selected as the API provider.

-> Updated the API call to include the selected model, ensuring the correct model is used.

-> Improved error handling to display error messages returned by Ollama to the user, and fixed the issue of Void going blank on error when using Ollama as the API provider.

Note: I have not removed the comment; if I need to, please let me know.
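The error-handling change described above could look something like this minimal sketch (hypothetical names, not the actual Void code): Ollama returns errors as a JSON body of the form `{ "error": "..." }`, so extracting that field gives the user a message instead of a blank screen.

```typescript
// Sketch, assuming Ollama's JSON error shape { "error": "..." }.
// `ollamaErrorMessage` is a hypothetical helper, not an existing Void function.
type OllamaErrorBody = { error?: string };

// Turn an Ollama error response body into a user-visible message,
// falling back to the HTTP status when no message is present.
function ollamaErrorMessage(body: unknown, status: number): string {
  const err = (body as OllamaErrorBody)?.error;
  return err
    ? `Ollama error: ${err}`
    : `Ollama request failed with status ${status}`;
}
```

The caller would render this string in the UI rather than leaving the panel empty when the request fails.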

@Pratik960
Author

Pratik960 commented Oct 29, 2024

Issue: Use Ollama API to start the right model #89
@andrewpareles

@andrewpareles
Contributor

andrewpareles commented Oct 29, 2024

I only see that you uncommented a few lines of code. This change lets users select an Ollama model now, but the selection doesn't actually do anything. Making it do something is the purpose of #89.

@Pratik960
Author

@andrewpareles The user-selected model is being passed to the Ollama API; here is the code for that in the sendLLMMessage.ts file.
Is this correct?

[code snippet: sendLLMMessage]

I have also logged the model name to verify that the correct model is being passed to Ollama.
[screenshot: log output showing the selected model name]
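For reference, the wiring being discussed could be sketched roughly as follows (hypothetical function and parameter names, not the actual sendLLMMessage.ts code): the model the user selected in settings has to end up in the request body sent to Ollama's `/api/chat` endpoint.

```typescript
// Sketch, assuming Ollama's /api/chat request shape
// { model, messages, stream }. Names here are illustrative only.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the JSON body for Ollama's /api/chat endpoint, forwarding
// the model chosen in settings (e.g. "llama3") instead of a hardcoded one.
function buildOllamaChatBody(selectedModel: string, messages: ChatMessage[]) {
  return { model: selectedModel, messages, stream: true };
}
```

The key point in #89 is that `selectedModel` must come from the settings value rather than a hardcoded default.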
