Currently we use Haystack to integrate various LLM providers; however, we found it is still too hard for the community to contribute support for new LLMs. We plan to use litellm instead: it provides more functionality, such as fallbacks, and it exposes an OpenAI-API-compatible interface for all LLMs. We think this will make community contributions easier.
We plan to replace the implementations of LLMProvider and EmbedderProvider using litellm. However, we think we should focus on replacing LLMProvider first.
For the get_generator method in each LLMProvider, we can implement a higher-order function that accepts model params and returns a function. When we invoke the returned function, it executes the LLM generation API, which means acompletion in litellm.
For all LLM API calls, we need to use the async versions (acompletion rather than completion), as in the sketch below.
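A minimal sketch of that higher-order shape, assuming hypothetical names (`get_generator`, `model_kwargs`, `_generate`) since the final interface isn't settled here:

```python
import litellm


def get_generator(model: str, **model_kwargs):
    """Bind model params once; return an async callable that runs generation."""

    async def _generate(prompt: str) -> str:
        # acompletion is litellm's async, OpenAI-compatible completion call
        response = await litellm.acompletion(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            **model_kwargs,
        )
        return response.choices[0].message.content

    return _generate
```

A provider's get_generator would then just return this closure, e.g. `generate = get_generator("gpt-4o-mini", temperature=0)` followed by `await generate(prompt)` inside the pipeline.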
Shall we finish OpenAI, OpenAI-compatible, and Ollama support first?
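If it helps, litellm routes all three through the same call by model-string prefix, so they share one code path. A rough sketch (the model names and local URL below are placeholders, not our config):

```python
import asyncio

import litellm

messages = [{"role": "user", "content": "hi"}]


async def main():
    # OpenAI: plain model name
    await litellm.acompletion(model="gpt-4o-mini", messages=messages)
    # OpenAI-compatible server: "openai/" prefix plus a custom api_base
    await litellm.acompletion(
        model="openai/my-model",
        api_base="http://localhost:8000/v1",
        messages=messages,
    )
    # Ollama: "ollama/" prefix
    await litellm.acompletion(model="ollama/llama3", messages=messages)


asyncio.run(main())
```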