
Feature Request: Integration of Ollama Model #440

Open
x-TheFox opened this issue Nov 29, 2023 · 1 comment
Comments

@x-TheFox

Is your feature request related to a problem? Please describe.
Presently, GPTDiscord relies on the OpenAI GPT model, incurring costs through its API usage. For users looking for a cost-effective solution, this might pose a challenge.

Describe the solution you'd like
I suggest integrating the Ollama model alongside the existing OpenAI GPT model in GPTDiscord. Ollama is a high-performing and cost-free alternative, requiring local machine execution. This would give users the flexibility to choose between the OpenAI GPT model and the Ollama model based on their preferences and budget constraints.

Describe alternatives you've considered
An alternative could be sticking to the current setup with only the OpenAI GPT model. However, incorporating the Ollama model adds a valuable cost-effective option for users.

Additional context
I understand that introducing a non-GPT model like Ollama might seem like a deviation. I apologize for any confusion this might cause. However, the aim is to provide users with an additional, cost-effective option without compromising on performance. Ollama's integration aligns with the ethos of an open-source project, offering users flexibility and choice.

Link to Ollama for additional information

@Kisaragi-ng

I think this request would be better framed as support for API endpoints that follow the OpenAI API format, for example LM Studio https://lmstudio.ai/docs/local-server or text-generation-webui https://github.com/oobabooga/text-generation-webui . The implementation could simply allow setting a custom endpoint URL.

That said, it's understandable why the GPTDiscord maintainers and contributors might not want to implement this, since it would result in more (and unnecessary) issues that cost time and effort. So I guess the middle ground is to do it on your own: fork the repository and change the parts that direct the bot to your local model. As long as your LLM server follows the OpenAI API format, it should be reasonably possible to do.
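To illustrate the "custom endpoint URL" idea, here is a minimal sketch of building an OpenAI-format chat completion request against a local server, using only the Python standard library. The base URL and model name are assumptions for illustration (LM Studio's local server defaults to `http://localhost:1234/v1`); actually sending the request requires a server running locally.

```python
import json
import urllib.request

# Hypothetical local endpoint; LM Studio defaults to
# http://localhost:1234/v1, and other local servers expose a
# similar OpenAI-compatible route.
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(base_url: str, model: str, user_message: str):
    """Build an OpenAI-format /chat/completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Local servers typically ignore the key, but keeping the
            # header makes the request shape identical to OpenAI's API.
            "Authorization": "Bearer local-key",
        },
        method="POST",
    )


# Sending it only works with a local server running, e.g.:
# resp = urllib.request.urlopen(build_chat_request(BASE_URL, "local-model", "Hi"))
```

Because the request shape is identical to OpenAI's hosted API, a bot that already speaks that format only needs the base URL swapped out.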
