
[BUG] CrewAgentExecutorMixin._create_long_term_memory fails to save long-term memory when using Azure OpenAI as the LLM #1518

Open · guiding opened this issue Oct 28, 2024 · 2 comments
Labels: bug (Something isn't working)


guiding commented Oct 28, 2024

Description

I use Azure OpenAI with a customized endpoint and model name, and the agent fails at CrewAgentExecutorMixin._create_long_term_memory.

Steps to Reproduce

1. Use a customized Azure OpenAI endpoint and model name:

    azure_llm = LLM(
        model=MODEL_NAME,
        base_url=CHAT_API,
        api_key=API_KEY,
        api_version=API_VERSION,
        extra_headers={xxxxxx}
    )

2. Enable memory in the Crew:

    tech_crew = Crew(
        agents=[researcher],
        tasks=[research_task],
        memory=True,
        process=Process.sequential,  # Tasks will be executed one after the other
    )

3. Enable the LiteLLM log by setting litellm.set_verbose = True.
4. Call tech_crew.kickoff() and check the result; it should complete without error (a consolidated repro sketch follows these steps).
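
Put together, a minimal end-to-end repro might look like the sketch below. The researcher Agent, research_task Task, and all endpoint values are placeholders I have filled in, not the reporter's actual setup:

    from crewai import Agent, Crew, LLM, Process, Task
    import litellm

    litellm.set_verbose = True  # step 3: surface LiteLLM's routing decisions

    # Step 1: customized Azure OpenAI endpoint and model name (placeholder values)
    azure_llm = LLM(
        model="aide-gpt-4o-mini",
        base_url="https://example-proxy.azure-api.net",
        api_key="YOUR_API_KEY",
        api_version="2024-02-15-preview",
    )

    # Placeholder agent and task, just enough to drive long-term memory saving
    researcher = Agent(
        role="Researcher",
        goal="Summarize a topic",
        backstory="An analyst.",
        llm=azure_llm,
    )
    research_task = Task(
        description="Summarize recent LLM news.",
        expected_output="A short summary.",
        agent=researcher,
    )

    # Step 2: memory=True routes results through
    # CrewAgentExecutorMixin._create_long_term_memory
    tech_crew = Crew(
        agents=[researcher],
        tasks=[research_task],
        memory=True,
        process=Process.sequential,
    )

    # Step 4: the task itself runs, but memory saving fails with the error below
    tech_crew.kickoff()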

Expected behavior

The agent should run without any error.

Screenshots/Code snippets

Python310\Lib\site-packages\crewai\utilities\internal_instructor.py

The to_pydantic function calls litellm without passing the full set of parameters, so the customized LLM parameters (base URL, API key, API version, extra headers) are dropped:

    def to_pydantic(self):
        messages = [{"role": "user", "content": self.content}]
        if self.instructions:
            messages.append({"role": "system", "content": self.instructions})

        model = self._client.chat.completions.create(
            model=self.llm.model, response_model=self.model, messages=messages
        )
        return model
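
For context, the instructor client used here is presumably created by wrapping litellm's completion function (a sketch assuming the instructor.from_litellm API from recent instructor releases, not copied from the crewAI source):

    import instructor
    import litellm

    # The instructor wrapper only forwards the kwargs it is given, so any
    # parameter omitted in to_pydantic (api_base, api_key, api_version,
    # extra_headers, ...) never reaches litellm.completion.
    client = instructor.from_litellm(litellm.completion)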

Operating System

Windows 10

Python Version

3.10

crewAI Version

0.76.2

crewAI Tools Version

0.13.2

Virtual Environment

Venv

Evidence

Error log:

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Provider List: https://docs.litellm.ai/docs/providers

15:51:15 - LiteLLM:DEBUG: utils.py:4328 - Error occurred in getting api base - litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aide-gpt-4o-mini
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
DEBUG:LiteLLM:Error occurred in getting api base - litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aide-gpt-4o-mini
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
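
The underlying failure is reproducible outside crewAI with a bare litellm call (a minimal sketch; the deployment name is taken from the log above):

    import litellm

    # With only a bare custom deployment name -- exactly what to_pydantic
    # forwards -- litellm cannot infer a provider and raises BadRequestError.
    try:
        litellm.completion(
            model="aide-gpt-4o-mini",
            messages=[{"role": "user", "content": "ping"}],
        )
    except litellm.BadRequestError as err:
        print(err)  # LLM Provider NOT provided. Pass in the LLM provider ...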

Possible Solution

Suggested solution: follow llm.py and pass the detailed parameters when calling the LLM:

    def to_pydantic(self):
        messages = [{"role": "user", "content": self.content}]
        if self.instructions:
            messages.append({"role": "system", "content": self.instructions})

        params = {
            "model": self.llm.model,
            "messages": messages,  # the locally built messages, not self.llm.messages
            "timeout": self.llm.timeout,
            "temperature": self.llm.temperature,
            "top_p": self.llm.top_p,
            "n": self.llm.n,
            "stop": self.llm.stop,
            "max_tokens": self.llm.max_tokens or self.llm.max_completion_tokens,
            "presence_penalty": self.llm.presence_penalty,
            "frequency_penalty": self.llm.frequency_penalty,
            "logit_bias": self.llm.logit_bias,
            "response_format": self.llm.response_format,
            "seed": self.llm.seed,
            "logprobs": self.llm.logprobs,
            "top_logprobs": self.llm.top_logprobs,
            "api_base": self.llm.base_url,
            "api_version": self.llm.api_version,
            "api_key": self.llm.api_key,
            "stream": False,
            "response_model": self.model,
            **self.llm.kwargs,
        }
        # Remove None values to avoid passing unnecessary parameters
        params = {k: v for k, v in params.items() if v is not None}
        model = self._client.chat.completions.create(**params)

        return model

Additional context

None.

guiding added the bug (Something isn't working) label on Oct 28, 2024
bhancockio (Collaborator) commented

Hey @guiding !

What is aide-gpt-4o-mini?

The error you're getting looks like you're trying to use a model that doesn't exist. Here's a list of all valid Azure models:
https://docs.litellm.ai/docs/providers


guiding commented Nov 3, 2024

> Hey @guiding !
>
> What is aide-gpt-4o-mini?
>
> The error you're getting looks like you're trying to use a model that doesn't exist. Here's a list of all valid Azure models: https://docs.litellm.ai/docs/providers

It is an internal proxy endpoint model name based on Azure OpenAI, used only within our company and served in the Azure cloud.
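
For what it's worth, litellm's documented way to address custom Azure deployments is the azure/ model prefix. A hedged sketch of configuring the crewAI LLM that way (all values are placeholders), which lets litellm detect the provider even when the deployment name itself is internal:

    from crewai import LLM

    azure_llm = LLM(
        model="azure/aide-gpt-4o-mini",  # provider prefix + internal deployment name
        base_url="https://example-proxy.azure-api.net",  # placeholder endpoint
        api_key="YOUR_API_KEY",
        api_version="2024-02-15-preview",  # placeholder API version
    )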
