
[BUG] Pydantic output with azure function calling models failed with litellm auth error #1572

mohd-jubair opened this issue Nov 9, 2024

Labels: bug (Something isn't working)

Description

When output_pydantic is set on a task and an Azure OpenAI model is used, the internal instructor fails to convert the result into the Pydantic output.

Steps to Reproduce

  1. Configure Azure OpenAI for testing via the environment variables AZURE_API_KEY, AZURE_API_BASE, and AZURE_API_VERSION.
  2. Use the gpt-4o-mini model as the crew LLM.
  3. Set output_pydantic to your BaseModel output.
  4. Start executing the crew.
  5. Conversion fails with: Failed to convert text into a pydantic model due to the following error: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable. Using raw output instead.

Expected behavior

The converter should recognize that the Azure provider is configured and reuse those parameters when converting the result into a Pydantic output.

Screenshots/Code snippets


from crewai import Agent, Task, Process, LLM, Crew
from pydantic import BaseModel, Field


class Output(BaseModel):
    """Output class for the task"""
    joke: str = Field(..., description="Joke string")

class TestCrew:
    def __init__(self):
        self.llm = LLM(model="gpt-4o-mini", api_version="2024-09-01-preview", azure=True)

    def test_agent(self):
        return Agent(
            role="User Assistant",
            goal="You help the user to find and resolve the user query",
            backstory="Experienced assistant to help user to resolve their query.",
            llm=self.llm,
            verbose=True,
        )
    def test_task(self, query):
        return Task(
            description=f"""resolve the user query below
             {query}.""",
            agent=self.test_agent(),
            expected_output="final output should be summary of outcome.",
            output_pydantic=Output,
            )

    def run(self):
        input = """
          Tell me a joke
        """
        testagent = self.test_agent()
        testtask = self.test_task(input)
        crew = Crew(
          agents=[testagent],
          tasks=[testtask],
          verbose=True,
        )
        result = crew.kickoff()
        return result
       
if __name__ == "__main__":
    crew = TestCrew()
    crew.run()

Operating System

Ubuntu 22.04

Python Version

3.10

crewAI Version

0.76.9

crewAI Tools Version

0.13.4

Virtual Environment

Venv

Evidence

(screenshot attached: evidence)

Possible Solution

In crewai/utilities/internal_instructor.py, line 23:

def set_instructor(self):
    """Set instructor."""
    if self.agent and not self.llm:
        self.llm = self.agent.function_calling_llm or self.agent.llm

    # Lazy import
    import instructor
    from litellm import completion

    self._client = instructor.from_litellm(
        completion,
        mode=instructor.Mode.TOOLS,
    )

Instead of always falling back to the OpenAI provider, this call could forward the provider configuration the user supplied (Azure in this case) so other providers are honored.

For reference, instructor/client.py defaults the provider to OPENAI:

class Instructor:
    client: Any | None
    create_fn: Callable[..., Any]
    mode: instructor.Mode
    default_model: str | None = None
    provider: Provider

    def __init__(
        self,
        client: Any | None,
        create: Callable[..., Any],
        mode: instructor.Mode = instructor.Mode.TOOLS,
        provider: Provider = Provider.OPENAI,
        **kwargs: Any,
    ):

Additional context

In crewai/utilities/converter.py, class Converter: when execution does not enter the if self.llm.supports_function_calling(): branch at line 21, it falls back to the existing LLM instance to make the completion call, and that path works fine:

            return self.llm.call(
                [
                    {"role": "system", "content": self.instructions},
                    {"role": "user", "content": self.text},
                ]
            )
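On the function-calling path, one user-level workaround to try (a sketch, assuming the crewAI LLM wrapper passes the model string through to litellm unchanged) is litellm's azure/ provider prefix, which lets litellm resolve the AZURE_* environment variables itself instead of expecting OPENAI_API_KEY:

# Hypothetical change to the repro above: name the model with litellm's
# "azure/<deployment>" convention so the provider is inferred from the
# model string on every internal call, including the instructor one.
self.llm = LLM(model="azure/gpt-4o-mini", api_version="2024-09-01-preview")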