🎅 I WISH LITELLM HAD... #361
Comments
[LiteLLM Client] Add new models via UI. Thinking aloud, it seems intuitive that you'd be able to add new models / remap completion calls to different models via the UI. Unsure of the real underlying problem, though.
User / API Access Management: different users have access to different models. It'd be helpful if there was a way to leverage the BudgetManager to gate access. E.g. GPT-4 is expensive; I don't want to expose that to my free users, but I do want my paid users to be able to use it.
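A minimal sketch of the gating idea, assuming tiers are tracked in the application layer (the tier table and model lists here are illustrative, not litellm features):

```python
import litellm

# Illustrative tier table - app-level gating, not a litellm API.
MODEL_ACCESS = {
    "free": {"gpt-3.5-turbo"},
    "paid": {"gpt-3.5-turbo", "gpt-4"},
}

def gated_completion(user_tier: str, model: str, messages: list):
    # Refuse expensive models for tiers that aren't allowed to use them.
    if model not in MODEL_ACCESS.get(user_tier, set()):
        raise PermissionError(f"{user_tier} users cannot call {model}")
    return litellm.completion(model=model, messages=messages)
```

BudgetManager could then sit behind the same check to cap spend per user on top of the model gate.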
cc: @yujonglee @WilliamEspegren @zakhar-kogan @ishaan-jaff @PhucTranThanh feel free to add any requests / ideas here.
[Spend Dashboard] View analytics for spend per LLM and per user.
Auto-select the best LLM for a given task. If it's a simple task like responding to "hello", LiteLLM should auto-select a cheaper but faster LLM like j2-light.
Integration with NLP Cloud.
That's awesome @Pipboyguy - DMing you on LinkedIn to learn more!
@ishaan-jaff check out this truncate param in the Cohere API. This looks super interesting - similar to your token trimmer: if the prompt exceeds the context window, trim it in a particular manner. I would maybe only run trimming on user/assistant messages and not touch the system prompt (works for RAG scenarios as well).
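A minimal sketch of that trimming behaviour, assuming litellm.token_counter accepts a model name and a message list (drop the oldest user/assistant turns, never the system prompt):

```python
import litellm

def trim_messages(messages, model="gpt-3.5-turbo", max_tokens=3000):
    # Keep the system prompt intact; only non-system turns are candidates for trimming.
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    # Drop the oldest user/assistant messages until the prompt fits the budget.
    while turns and litellm.token_counter(model=model, messages=system + turns) > max_tokens:
        turns.pop(0)
    return system + turns
```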
Option to use the Inference API so we can use any model from Hugging Face 🤗
@haseeb-heaven you can already do this -

```python
from litellm import completion

response = completion(model="huggingface/gpt2", messages=[{"role": "user", "content": "Hey, how's it going?"}])
print(response)
```
Wow, great, thanks - it's working. Nice feature.
Support for inferencing using models hosted on Petals swarms (https://github.com/bigscience-workshop/petals), both public and private.
@smig23 what are you trying to use Petals for? We found it to be quite unstable, and it would not consistently pass our tests.
Fine-tuning wrapper for OpenAI, Hugging Face, etc.
@shauryr I created an issue to track this - feel free to add any missing details here.
Specifically for my aims, I'm running a private swarm as an experiment, with a view to implementing it within a private organization that has idle but distributed GPU resources. The initial target would be inferencing, and if litellm were able to be the abstraction layer, it would allow the flexibility to go in another direction with hosting in the future.
I wish litellm had direct support for fine-tuning models. Based on the tutorial below, I understand that in order to fine-tune, one needs a specific understanding of the LLM provider and then has to follow their instructions or library for fine-tuning the model. Why not have LiteLLM do all the abstraction and handle the fine-tuning aspects as well? https://docs.litellm.ai/docs/tutorials/finetuned_chat_gpt
I wish LiteLLM had support for open-source embeddings like sentence-transformers, hkunlp/instructor-large, etc. Sorry, based on the documentation below, it seems there's only support for the OpenAI embedding.
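For reference, a sketch of how this could look through litellm's Hugging Face embedding route (the model name is just an example, and Hugging Face embedding support may post-date this comment):

```python
from litellm import embedding

# Assumes a Hugging Face inference endpoint serving a sentence-transformers model.
response = embedding(
    model="huggingface/sentence-transformers/all-MiniLM-L6-v2",
    input=["good morning from litellm"],
)
print(response)
```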
I wish LiteLLM had an integration with the Cerebrium platform. Please check the link below for the prebuilt models.
@ranjancse26 what models on Cerebrium do you want to use with LiteLLM?
@ishaan-jaff Cerebrium has a lot of pre-built models. The focus should be on consuming the open-source models first, e.g. Llama 2, GPT4All, Falcon, FlanT5, etc. I am mentioning this as a first step. However, it's also a good idea to have litellm take care of the internal communication with custom-built models, based on the API that Cerebrium exposes.
@smig23 We've added support for Petals to LiteLLM: https://docs.litellm.ai/docs/providers/petals
I wish litellm had built-in support for the majority of provider operations rather than targeting text generation alone. Consider the example of Cohere: the endpoint below allows users to have conversations with a Large Language Model (LLM) from Cohere.
I wish litellm had plenty of support and examples for users to develop apps with the RAG pattern. It's kind of mandatory to follow the standard best practices, and we would all wish to have that support.
I wish litellm had use-case-driven examples for beginners. Keeping in mind day-to-day use cases, it's a good idea to come up with a great sample that covers the following aspects.
I wish litellm supported various known or popular vector DBs. Here are a couple of them to begin with.
I wish litellm had built-in support for performing web scraping or fetching real-time data using a known provider like SerpApi. It would be helpful for users building custom AI models or integrating with LLMs to perform retrieval-augmented generation. https://serpapi.com/blog/llms-vs-serpapi/#serpapi-google-local-results-parser
Auto-parsing of OpenRouter's models and pricing from https://openrouter.ai/models and auto-updating of the file https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json. It should be easy.
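A rough sketch of what that sync script might look like, assuming OpenRouter's public https://openrouter.ai/api/v1/models endpoint and its pricing/context_length fields (the response field names are assumptions and may differ):

```python
import json
import requests

# Pull OpenRouter's public model list and reshape it into the
# model_prices_and_context_window.json entry format.
resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()

entries = {}
for m in resp.json().get("data", []):
    entries[f"openrouter/{m['id']}"] = {
        "max_tokens": m.get("context_length"),
        "input_cost_per_token": float(m["pricing"]["prompt"]),
        "output_cost_per_token": float(m["pricing"]["completion"]),
        "litellm_provider": "openrouter",
        "mode": "chat",
    }

print(json.dumps(entries, indent=2))
```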
Do you have any plan to support the Vercel AI SDK's stream protocol? It is very useful for most companies, and using the OpenAI streaming approach limits users for tool usage, generative UI, and a lot more. https://sdk.vercel.ai/docs/ai-sdk-ui/stream-protocol#data-stream-protocol
Better integration with Langfuse's prompt management.
@yigitkonur streaming with the Vercel SDK currently works with their OpenAI integration. @GildeshAbhay replied on the issue you created - sample code for how you'd want this to work would be helpful.
Support for the Reranker API in Hugging Face's Text Embedding Inference.
I wish litellm had module federation. With the fast-approaching era of real-time AI, loading only the necessary provider packages will be crucial in keeping system latency low.
Feature Request: Request Throttling/Queueing for Rate Limit Management
Related to @denisergashbaev's comments here and here, which perfectly describe this need.
Desired Functionality
+1 to the request for a global throttling mechanism with queuing. To expand on @denisergashbaev's description:
Current Solutions vs. Desired Behavior
Current: Request Prioritization - the current priority queue implementation (docs) focuses on prioritizing between requests but does not prevent rate limit errors. If there's only one deployment, requests will still fail when hitting rate limits rather than being queued.
Current: Usage-based Routing - the current routing strategy (docs) helps distribute load across multiple deployments but doesn't solve the fundamental issue of managing rate limits through queuing.
Example Use Case
Desired behavior: if 100 requests come in within a minute
Benefits
This feature would be incredibly valuable for the community, as evidenced by multiple users requesting similar functionality. LiteLLM is already an amazing tool for LLM deployment management - I'm still testing it in multiple scenarios, but I think this addition would make it even more robust for production use cases.
@databill86 requests that fail due to rate limit errors are kept in the queue and retried until the timeout for the request is hit.
Thanks for the response! However, there's a crucial distinction to make here. The current retry mechanism can actually worsen rate limit issues, particularly with OpenAI:
The key difference is proactive vs reactive handling:
This would provide much better resource utilization and prevent the "cascade effect" where retries compound the rate limit problem.
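A minimal sketch of the proactive side, assuming a plain client-side requests-per-minute window in front of litellm.acompletion (the 60 RPM figure is illustrative):

```python
import asyncio
import time
import litellm

class RPMThrottle:
    """Client-side requests-per-minute window: callers wait instead of firing and retrying."""

    def __init__(self, rpm: int = 60):
        self.rpm = rpm
        self.timestamps: list[float] = []
        self.lock = asyncio.Lock()

    async def acquire(self):
        while True:
            async with self.lock:
                now = time.monotonic()
                # Keep only calls made within the last minute.
                self.timestamps = [t for t in self.timestamps if now - t < 60]
                if len(self.timestamps) < self.rpm:
                    self.timestamps.append(now)
                    return
                wait = 60 - (now - self.timestamps[0])
            await asyncio.sleep(wait)  # queue instead of hitting the provider's 429

throttle = RPMThrottle(rpm=60)

async def throttled_completion(**kwargs):
    await throttle.acquire()
    return await litellm.acompletion(**kwargs)
```

Requests beyond the budget simply wait in line, rather than consuming the provider's rate limit and then compounding it with retries.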
We wait based on this: litellm/tests/local_testing/test_router.py, line 2349 (commit 1bef645).
This already exists - use rate-limit-aware routing: https://docs.litellm.ai/docs/routing#advanced---routing-strategies-%EF%B8%8F
Allow configuring the API base URL. Currently only OpenAI, Azure and Vertex are supported; it would be nice to allow configuring the API base for other providers as well.
Groups of models: provide the possibility to create groups of models (e.g. "Free tier models", "Public models", etc.), so that a specific virtual key can be given access to such groups. Currently a virtual key can only be given access per team, which doesn't scale if many teams are present, and adding a new public model requires editing all teams.
I would love to have the citations field included in the response body when using Perplexity. Currently, I was able to achieve this for non-streaming responses using the success hook, but I had no luck with streaming responses.
@lazariv this already exists - https://docs.litellm.ai/docs/proxy/tag_routing
Pixtral vision support - mistralai/Pixtral-Large-Instruct-2411
Adding tokenize and detokenize to the LLM utils endpoints, please 🙏
I wish litellm would support updating assistants through "PATCH /assistants/:assistantId" and deleting threads through "DELETE /threads/:threadId". Otherwise: very great project!
Support the Xinference rerank model.
I wish there was support for local Stable Diffusion and/or ComfyUI.
Embedding models on LangChain (currently only the chat interface exists).
I wish this beautiful library supported Bedrock Inference Profiles. We use them to attribute costs.
I wish it had an abstraction to submit traces to its different logging backends, like Langfuse and friends.
Have you thought about adding a "meta model" option where a user could specify
And litellm, with everything it knows, would just pick the best available model. I see you have a JSON file with pricing and model capabilities. I didn't see anything like this exists, nor did Gemini research find anything: https://g.co/gemini/share/e704a93c8938. This would require collecting data on all the benchmarks, e.g. how well each model did on coding benchmarks vs. others, to make a selection. You have the data on cost; I didn't check if you have tokens per second. There is probably some memory required - e.g. validate that project X works on each of the models due to the variations in model execution - but once you do a "benchmark" pass to validate functionality against various tests, that becomes a preferred model selection input for the algorithm deciding which one to pick. I'm asking about this because it feels like one of the major taxes of setting up a new project or a third-party/OSS project is figuring out which model to use, optimizing for cost, etc. Sometimes I have a more powerful machine on my local network with Ollama that I want to use when it's available; other times I use a cloud service or my local Ollama. I want that all to happen automagically... e.g. use AI to select the AI model.
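A hedged sketch of a naive selector built only on data litellm already ships (litellm.model_cost, loaded from that JSON file); real quality/benchmark scoring would need data litellm doesn't have today, and the field names used here are assumptions:

```python
import litellm

def pick_model(min_context: int = 8000, mode: str = "chat") -> str:
    # Score every model litellm knows prices for; the cheapest one that fits wins.
    candidates = []
    for name, info in litellm.model_cost.items():
        try:
            if info.get("mode") != mode:
                continue
            context = info.get("max_input_tokens") or info.get("max_tokens") or 0
            if int(context) < min_context:
                continue
            cost = float(info.get("input_cost_per_token") or 0) + float(
                info.get("output_cost_per_token") or 0
            )
        except (TypeError, ValueError):
            continue  # skip malformed or placeholder entries in the pricing table
        candidates.append((cost, name))
    if not candidates:
        raise ValueError("no model satisfies the constraints")
    return min(candidates)[1]

print(pick_model(min_context=32000))
```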
Another suggestion: I'm lazy, and I don't want to read all of your docs to figure out the answer to what I want. I want to ask ChatGPT, Claude, Gemini, etc. to get the answer for me. The thing is, they aren't very good at browsing your website yet. One suggestion is to create a serialized version of the docs in a /llms.txt like https://llmstxt.org/, and I can just feed it that URL; hopefully they eventually get smart enough to look for this if it exists. For now I'll use https://uithub.com/BerriAI/litellm/tree/main/docs/my-website/docs?accept=text/html&maxTokens=50000&ext=md, but this isn't well known and it may not contain what you want to prioritize in the index. Ideally you'd also have links on your site to "Ask ChatGPT about these docs" with an input box, which then opens https://chat.openai.com/?q=https%3A%2F%2Fdocs.litellm.ai%2Fllms.txt+yourquery&model=gpt-4o, sort of like the old Google site search - hopefully we don't have to do that for too long. Something this would also enable is "I'm using litellm... analyze my code, look at llms.txt, and see what other features I should consider leveraging".
I wish I could enable citations for Perplexity on litellm via the config.yaml, so I would get citations in Open WebUI.
@d4g we already return the Perplexity citations. If there's a param needed, just add it under 'litellm_params'.
Where and how? In the YAML?
Just checked the Perplexity docs - no param needed, it should be returned automatically (see the 200 status code response): https://docs.perplexity.ai/api-reference/chat-completions. For any provider-specific param, see here: https://docs.litellm.ai/docs/completion/provider_specific_params#proxy-usage
This is a ticket to track a wishlist of items you wish LiteLLM had.
COMMENT BELOW 👇
With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond with ❤️ to any request you would also like to see
P.S.: Come say hi 👋 on the Discord