LiteLLM v1+ - code cleanup #1280
ishaan-jaff started this conversation in General
-
We should remove support for Batch completions: https://docs.litellm.ai/docs/completion/batching. Why?
-
Use Cache Context Manager

We should update our docs to use cache context managers as the primary way of using caching. Key change on this Quick Start doc: https://docs.litellm.ai/docs/caching/redis_cache#quick-start

    litellm.enable_cache(
        type: Optional[Literal["local", "redis"]] = "local",
        host: Optional[str] = None,
        port: Optional[str] = None,
        password: Optional[str] = None,
        supported_call_types: Optional[
            List[Literal["completion", "acompletion", "embedding", "aembedding"]]
        ] = ["completion", "acompletion", "embedding", "aembedding"],
        **kwargs,
    )

Instead of:

    litellm.cache = Cache(type="redis", host=<host>, port=<port>, password=<password>)

Why change?
-
Rename
-
Deprecate litellm.completion_with_config. Why?
-
Starting this discussion to remove unused functions / features. It's crucial this package remains lite.