[Feature]: Custom Model Endpoint Support #1485
Closed
georgeseifada started this conversation in General
Replies: 3 comments 1 reply
-
Already supported via the OpenAI-compatible endpoints support: https://docs.litellm.ai/docs/providers/openai_compatible. Anything additional you require here?
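For reference, the linked docs boil down to calls along these lines; the endpoint URL, API key, and model name below are placeholders:

```python
import litellm

# Point LiteLLM at any OpenAI-compatible server by prefixing the model
# with "openai/" and overriding api_base. URL, key, and model name here
# are placeholders for whatever your server exposes.
response = litellm.completion(
    model="openai/my-model",
    api_base="http://localhost:8000/v1",
    api_key="sk-placeholder",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```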
-
I'm not sure how that would help with an API that's different. For example, let's say we have a new LLM Embedding API with its own request and response format.
How can I use that new Embedding API with LiteLLM, with all the standard functionality of routing, fallbacks, etc.?
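For context, LiteLLM documents a custom-handler extension point for exactly this case (under custom_llm_server in the LiteLLM docs). A minimal sketch, assuming the `CustomLLM` interface and the `custom_provider_map` registry; the class, provider prefix, and model names are illustrative, and `mock_response` keeps the sketch runnable without a real backend. The same registry pattern is meant to extend to other endpoints, but completion is the path shown here:

```python
import litellm
from litellm import CustomLLM


class BezosAIHandler(CustomLLM):
    """Illustrative handler translating LiteLLM calls to a new provider's API."""

    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # A real handler would call the provider's own SDK or HTTP API here
        # and map the result into an OpenAI-style ModelResponse.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=kwargs.get("messages", [{"role": "user", "content": "hi"}]),
            mock_response="Hello from the custom provider",
        )


# Register the handler under a provider prefix...
litellm.custom_provider_map = [
    {"provider": "bezos-ai", "custom_handler": BezosAIHandler()}
]

# ...and calls routed through that prefix now hit the handler.
response = litellm.completion(
    model="bezos-ai/bezos-1",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```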
-
The Feature
Let's say OpenAI comes out with GPT-5 and it has a totally different API, or Amazon releases a model called Bezos-AI. Users should be able to easily add support for that model's API and seamlessly integrate it with routing, fallbacks, etc., without waiting for it to be officially supported in the library.
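To make the "routing, fallbacks" part concrete, here is a hypothetical sketch of what that integration would look like with LiteLLM's Router. "bezos-ai/bezos-1" stands in for a newly added custom provider; all names are placeholders:

```python
from litellm import Router

# Hypothetical: once "bezos-ai" is registered as a custom provider, it
# should plug into the Router like any built-in provider.
router = Router(
    model_list=[
        {"model_name": "chat", "litellm_params": {"model": "bezos-ai/bezos-1"}},
        {"model_name": "chat-backup", "litellm_params": {"model": "gpt-4o-mini"}},
    ],
    # Requests to "chat" fail over to "chat-backup" on errors.
    fallbacks=[{"chat": ["chat-backup"]}],
)

response = router.completion(
    model="chat",
    messages=[{"role": "user", "content": "Hello"}],
)
```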
Motivation, pitch
The library should be as extensible as possible so that users can rely on it long-term.