Add support for Ollama, Palm, Claude-2, Cohere, Replicate Llama2, CodeLlama (100+ LLMs) using LiteLLM #432

Open · wants to merge 1 commit into main

Conversation

@ishaan-jaff commented Oct 4, 2023

This PR adds support for the above-mentioned LLMs using LiteLLM (https://github.com/BerriAI/litellm/).
LiteLLM is a lightweight package that simplifies LLM API calls: use any LLM as a drop-in replacement for gpt-3.5-turbo.

Example

import os

from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic call
response = completion(model="claude-instant-1", messages=messages)
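Regardless of which provider is called, LiteLLM returns the response in the OpenAI format, so downstream code can read every response the same way (a small illustrative addition, not part of the original example):

# every provider's response follows the OpenAI schema
print(response["choices"][0]["message"]["content"])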

Changes

How I tested this

Tested locally

Notes

Checklist

  • PR has an informative and human-readable title (this will be pulled into the release notes)
  • Changes are limited to a single goal (no scope creep)
  • Code passed the pre-commit check & code is left cleaner/nicer than when first encountered.
  • Any change in functionality is tested
  • New functions are documented (with a description, list of inputs, and expected output)
  • Placeholder code is flagged / future TODOs are captured in comments
  • Project documentation has been updated if adding/changing functionality.

@ishaan-jaff (Author) commented:

@skrawcz @elijahbenizzy can I get a review on this PR?

@ishaan-jaff (Author) commented:

@skrawcz you just need to pass that in as the model parameter to completion():

# anthropic call
response = completion(model="claude-instant-1", messages=messages)

If you want to explicitly set a provider, you can prefix the model name like this:

response = completion(
    model="bedrock/anthropic.claude-instant-v1",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)

@ishaan-jaff (Author) commented:

Did that answer your question?

@skrawcz (Collaborator) commented Oct 4, 2023

@ishaan-jaff I think we should then change this example to not be OpenAI-specific. Generalizing the example beyond OpenAI would showcase this functionality a bit more.

@skrawcz (Collaborator) commented Oct 16, 2023

@ishaan-jaff okay, I've thought about this more and I think this is a good addition, under the assumption that this library stays up to date with all the other APIs out there.

To get this over the line, though, I think we need to refactor the examples to not be OpenAI-specific, and wire through the ability to change the provider appropriately (e.g. something like the sketch below). That will showcase things better and we can tweet about it, etc.
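As a minimal sketch of what such a provider-agnostic Hamilton node might look like (the llm_answer function name and llm_model input are hypothetical, not code from this PR, and it assumes LiteLLM's OpenAI-style response shape): the model string becomes a regular dataflow input, so any LiteLLM-supported provider can be selected at execution time.

from litellm import completion

def llm_answer(prompt: str, llm_model: str) -> str:
    """Hamilton node: returns the completion for `prompt` from whichever
    provider `llm_model` selects, e.g. "gpt-3.5-turbo", "command-nightly",
    or "bedrock/anthropic.claude-instant-v1".
    """
    response = completion(
        model=llm_model,
        messages=[{"content": prompt, "role": "user"}],
    )
    # LiteLLM normalizes every provider's response to the OpenAI format
    return response["choices"][0]["message"]["content"]

Swapping providers would then just mean passing a different llm_model string in the driver's inputs, with no change to the dataflow code.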

@skrawcz (Collaborator) commented Nov 21, 2023

@ishaan-jaff adding litellm to this #548 ;)
