Is your feature request related to a problem? Please describe.
GPTDiscord currently relies on OpenAI's GPT models, which incur costs through API usage. For users looking for a cost-effective solution, this can be a barrier.
Describe the solution you'd like
I suggest integrating Ollama support alongside the existing OpenAI GPT models in GPTDiscord. Ollama is a high-performing, free alternative that runs models locally on the user's machine. This would give users the flexibility to choose between OpenAI's GPT models and local Ollama models based on their preferences and budget constraints.
Describe alternatives you've considered
An alternative could be sticking to the current setup with only the OpenAI GPT model. However, incorporating the Ollama model adds a valuable cost-effective option for users.
Additional context
I understand that introducing a non-GPT model like Ollama might seem like a deviation. I apologize for any confusion this might cause. However, the aim is to provide users with an additional, cost-effective option without compromising on performance. Ollama's integration aligns with the ethos of an open-source project, offering users flexibility and choice.
That said, it is understandable why the GPTDiscord maintainers/contributors might not want to implement this, since it would result in more (and arguably unnecessary) issues that cost time and effort. A reasonable middle ground is to do it yourself: fork the repository and change the parts that direct the bot to your local model. As long as your LLM server follows the OpenAI API format, this should be a feasible change.
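To illustrate the fork-and-redirect approach above: Ollama serves an OpenAI-compatible endpoint locally (by default at `http://localhost:11434/v1`), so the main change is pointing the client's base URL at the local server. This is a hedged sketch, not GPTDiscord's actual code; the model name `llama3` and the helper function are illustrative assumptions.

```python
# Sketch: building an OpenAI-format chat request against a local
# Ollama server. Ollama exposes an OpenAI-compatible API at /v1
# on port 11434 by default; "llama3" is an example model name.

def build_chat_request(prompt, model="llama3",
                       base_url="http://localhost:11434/v1"):
    """Return the URL and JSON payload for an OpenAI-style
    chat completion request aimed at a local server."""
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

# With the official openai Python client, the same idea is just
# swapping the base URL (the API key is ignored by a local server):
#   client = openai.OpenAI(base_url="http://localhost:11434/v1",
#                          api_key="ollama")
#   client.chat.completions.create(model="llama3", messages=[...])
```

Because only the base URL and model name change, a fork along these lines can leave the rest of the bot's request/response handling untouched.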
Link to Ollama for additional information