When we add output_pydantic to a task while using Azure OpenAI models, the internal instructor is unable to convert the result into a Pydantic output.
Steps to Reproduce
Use Azure OpenAI for testing, with the env variables AZURE_API_KEY, AZURE_API_BASE, AZURE_API_VERSION
Use gpt-4o-mini as the Crew LLM
Set output_pydantic to your BaseModel output
Start executing the crew
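For reference, the three Azure variables can also be set from Python before the crew is constructed (the values below are placeholders, not working credentials):

```python
import os

# Placeholder values: substitute your real Azure OpenAI credentials.
os.environ["AZURE_API_KEY"] = "<your-azure-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-09-01-preview"
```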
Failed to convert text into a pydantic model due to the following error: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable Using raw output instead.
Expected behavior
It should detect that we are using the Azure provider and use those credentials when converting the result into a Pydantic output.
Screenshots/Code snippets
from crewai import Agent, Task, Process, LLM, Crew
from pydantic import BaseModel, Field


class Output(BaseModel):
    """Output class for the task"""

    joke: str = Field(..., description="Joke string")


class TestCrew:
    def __init__(self):
        self.llm = LLM(model="gpt-4o-mini", api_version="2024-09-01-preview", azure=True)

    def test_agent(self):
        return Agent(
            role="User Assistant",
            goal="You help the user to find and resolve the user query",
            backstory="Experienced assistant to help user to resolve their query.",
            llm=self.llm,
            verbose=True,
        )

    def test_task(self, query):
        return Task(
            description=f"""resolve the user query below
            {query}.""",
            agent=self.test_agent(),
            expected_output="final output should be summary of outcome.",
            output_pydantic=Output,
        )

    def run(self):
        input = """
        Tell me a joke
        """
        testagent = self.test_agent()
        testtask = self.test_task(input)
        crew = Crew(
            agents=[testagent],
            tasks=[testtask],
            verbose=True,
        )
        result = crew.kickoff()
        return result


if __name__ == "__main__":
    crew = TestCrew()
    crew.run()
Operating System
Ubuntu 22.04
Python Version
3.10
crewAI Version
0.76.9
crewAI Tools Version
0.13.4
Virtual Environment
Venv
Evidence
Possible Solution
crewai\utilities\internal_instructor.py, line 23:
def set_instructor(self):
    """Set instructor."""
    if self.agent and not self.llm:
        self.llm = self.agent.function_calling_llm or self.agent.llm

    # Lazy import
    import instructor
    from litellm import completion

    self._client = instructor.from_litellm(
        completion,
        mode=instructor.Mode.TOOLS,
    )
Instead of directly defaulting to the OpenAI provider, the client could accept an override so users can select other providers:
instructor\client.py
class Instructor:
    client: Any | None
    create_fn: Callable[..., Any]
    mode: instructor.Mode
    default_model: str | None = None
    provider: Provider
Additional context
In crewai\utilities\converter.py, in class Converter: if execution does not enter the
if self.llm.supports_function_calling():
branch (line 21), the existing LLM instance is used to call the completion, and that path works fine.
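The branching described above can be sketched with stubs (illustrative names only, not the real Converter code):

```python
class StubLLM:
    """Stand-in for crewai's LLM object carrying the Azure configuration."""

    def __init__(self, supports_fc: bool):
        self._supports_fc = supports_fc

    def supports_function_calling(self) -> bool:
        return self._supports_fc

    def call(self, prompt: str) -> str:
        # The configured instance already knows its Azure credentials.
        return f"llm:{prompt}"


def convert_to_pydantic(llm: StubLLM, text: str) -> str:
    """Sketch of the Converter's two paths, per the observation above."""
    if llm.supports_function_calling():
        # Failing path: instructor is built around a bare litellm
        # completion, so it expects OPENAI_API_KEY and ignores Azure config.
        return f"instructor:{text}"
    # Working path: reuses the existing, fully-configured LLM instance.
    return llm.call(text)
```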