Can you also proxy to embeddings endpoint? #863
Closed
kengoodridge started this conversation in Ideas
Replies: 2 comments
-
Hi @kengoodridge, thanks for the issue. I'll work on this today. Here are our supported embedding models, btw: https://docs.litellm.ai/docs/embedding/supported_embedding
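For context, the library-level call that the supported-models page covers looks roughly like the minimal sketch below. `text-embedding-ada-002` is just one entry from that list; substitute whichever provider/model string you actually use.

```python
# Minimal sketch of a direct (non-proxy) embedding call with litellm.
from litellm import embedding

response = embedding(
    model="text-embedding-ada-002",        # any model from the supported-embeddings page
    input=["good morning from litellm"],
)

# The response follows the OpenAI embedding format, so the vector lives under data[0].
print(len(response["data"][0]["embedding"]))
```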
-
This is now live, @kengoodridge: litellm/litellm/proxy/proxy_server.py, line 607 in 8291f23
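For anyone finding this later, here's a rough sketch of how one might hit the proxy's OpenAI-compatible embeddings route with the OpenAI Python client. The base URL/port and the placeholder API key are assumptions about a local proxy setup, not values taken from this thread; point the client at wherever your litellm proxy is actually running.

```python
# Rough sketch: call the litellm proxy's OpenAI-compatible embeddings route.
# base_url and api_key are placeholders for a local proxy setup; adjust to
# your own deployment (host, port, and whatever key the proxy expects).
import openai

client = openai.OpenAI(
    api_key="sk-anything",           # placeholder; the proxy may ignore or validate this
    base_url="http://0.0.0.0:8000",  # assumed local proxy address
)

response = client.embeddings.create(
    model="text-embedding-ada-002",  # or whichever embedding model the proxy routes
    input=["a chunk of text to embed for RAG"],
)
print(len(response.data[0].embedding))
```

Since the route speaks the OpenAI embeddings format, any OpenAI-compatible client should work the same way, which is what makes it convenient for the RAG use case described in the original request.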
-
Could you find the time to proxy to the embeddings endpoint, supporting the OpenAI API? I'm asking specifically for ollama, but I would guess it should work for any of the LLM providers that support embeddings. I don't know how general embeddings support is.
I think it would be particularly useful and clean for RAG.
Thanks! I find this package invaluable.