
litellm support #780

Open
cyyeh opened this issue Oct 21, 2024 · 3 comments
@cyyeh
Member

cyyeh commented Oct 21, 2024

Currently we use Haystack to integrate various LLM providers; however, we've found it's still too hard for the community to contribute support for new LLMs. We plan to switch to litellm instead: it provides extra functionality such as fallbacks, and it exposes an OpenAI-API-compatible interface for all LLMs. We think this will make community contributions easier.

@cyyeh
Member Author

cyyeh commented Oct 21, 2024

This task is scheduled to be assigned to the contributor Gomaa.

@cyyeh
Member Author

cyyeh commented Oct 24, 2024

@MGomaa435

Implementation plan:

  1. We plan to replace the implementations of LLMProvider and EmbedderProvider using litellm. However, we think we can focus on replacing LLMProvider first.
  2. For the get_generator method in each LLMProvider, we can implement a higher-order function that accepts model params and returns a function. When we invoke the returned function, it executes the LLM generation API, which means acompletion in litellm (see the sketch after this list).
  3. All LLM API calls need to use the async versions.
  4. Can we finish OpenAI, OpenAI-compatible, and Ollama support first?
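
A minimal sketch of the higher-order get_generator idea from items 2 and 3, assuming litellm's async acompletion API. The name get_generator comes from the plan above; the parameter names, the generate helper, and the usage example are hypothetical:

```python
# Sketch only: a higher-order get_generator that captures model params and
# returns an async generation function backed by litellm's acompletion.
from typing import Any, Awaitable, Callable

from litellm import acompletion


def get_generator(model: str, **model_kwargs: Any) -> Callable[..., Awaitable[Any]]:
    # The returned coroutine function is what callers invoke per request.
    async def generate(messages: list[dict[str, Any]], **kwargs: Any) -> Any:
        # Always use the async API (acompletion), per item 3 of the plan.
        # Per-call kwargs override the params captured at construction time.
        return await acompletion(model=model, messages=messages, **{**model_kwargs, **kwargs})

    return generate


# Hypothetical usage: litellm routes by model-name prefix, so the same
# generator pattern covers OpenAI ("gpt-4o-mini"), OpenAI-compatible
# endpoints (pass api_base), and Ollama ("ollama/llama3").
#
#   generate = get_generator("gpt-4o-mini", temperature=0)
#   response = await generate([{"role": "user", "content": "Hello"}])
```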

@MGomaa435
Contributor

Hi Jimmy, it's my pleasure to support this.
