Enable the use of self-hosted LLMs #36
Conversation
@artivis thanks 👍 You are the 1st contributor 🥇 I will definitely look at the fix.
So this is actually part of #21.
What this PR is trying to do is to be agnostic of OpenAI-specific defaults and required settings, so that https://ollama.com/ can be configured and used as the backend AI system.
IMO we can take this fix as support for that, but there are things that need to be done:
- Documentation update. All of the docs are currently OpenAI-specific; the documentation needs to be generic and cover each supported AI backend system.
- Refactor the code base to be agnostic of OpenAI. https://ollama.com/ is a good example to validate this against.
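To illustrate the backend-agnostic direction, here is a minimal sketch (the helper name and environment variables are hypothetical, not from this repository) of resolving client settings from the environment so that any OpenAI-compatible endpoint, such as a local Ollama server, can be swapped in without code changes:

```python
import os

def backend_config():
    """Hypothetical helper: build LLM client settings from the environment,
    falling back to OpenAI defaults. A self-hosted backend is selected
    simply by overriding the base URL."""
    return {
        # Ollama exposes an OpenAI-compatible API under /v1 by default.
        "base_url": os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1"),
        # Local servers typically accept any placeholder key.
        "api_key": os.environ.get("LLM_API_KEY", "unused"),
        "model": os.environ.get("LLM_MODEL", "gpt-4o"),
    }

# Point the tool at a local Ollama instance instead of OpenAI.
os.environ["LLM_BASE_URL"] = "http://localhost:11434/v1"
print(backend_config()["base_url"])
```

With this shape, the OpenAI-specific assumptions live in the defaults only, which is the kind of decoupling the refactor above asks for.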
Signed-off-by: Tomoya Fujita <[email protected]>
@artivis I added a minor patch to fix the problem with the verification.sh script. After building and verifying all container images, I will take this in.
This PR introduces several fixes and features that enable the use of other OpenAI-API-compatible LLMs, such as those served by the Ollama framework.
Most notably:
- a --dry-run option for exec that prints the answer without executing it