OpenAILLMContext has no attribute 'append' when used with LangchainProcessor #361

Open
agilebean opened this issue Aug 9, 2024 · 3 comments
agilebean commented Aug 9, 2024

Current Code

Used the Pipecat example code to define the messages and pass them to OpenAILLMContext:

messages = [
    {
        "role": "system",
        "content": "You are a helpful LLM...",
    },
]

context = OpenAILLMContext(messages, tools)
tma_in = LLMUserContextAggregator(context)
tma_out = LLMAssistantContextAggregator(context)

Expected Behavior

The system message should be passed to the LLM when it is instantiated via LangchainProcessor, the same way as with OpenAILLMService.
In that case, the system role would be logged as follows:

Generating chat: [{"role": "system", "content": "Role:\nYou are an experienced ...

Current Behavior

At the first invocation of the LLM, this error is thrown:

AttributeError: 'OpenAILLMContext' object has no attribute 'append'

with the following traceback:

.../python3.12/site-packages/pipecat/processors/aggregators/llm_response.py", line 146, in _push_aggregation
    self._messages.append({"role": self._role, "content": self._aggregation})
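
For illustration, here is a minimal sketch of the mismatch as I read it from the traceback (the internals shown are an assumption, not the actual pipecat source): the aggregator seems to keep whatever it was constructed with in self._messages and later calls list methods on it, so an OpenAILLMContext instance fails where a plain list of message dicts would work.

class AggregatorSketch:
    def __init__(self, messages):
        # Works when this is a plain list of message dicts.
        self._messages = messages

    def push(self, role, aggregation):
        # AttributeError when self._messages is an OpenAILLMContext,
        # because that object is not a list and exposes no .append.
        self._messages.append({"role": role, "content": aggregation})

AggregatorSketch(context).push("user", "hello")   # raises AttributeError
AggregatorSketch(messages).push("user", "hello")  # appends as expected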

Caveat

Further testing confirmed that this error occurs when OpenAILLMContext is used with LangchainProcessor. In contrast, using an LLM instantiated directly from the OpenAI API works:

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o")

context = OpenAILLMContext(messages=messages)

tma_in = LLMUserResponseAggregator(messages)
tma_out = LLMAssistantResponseAggregator(messages)

async def on_first_participant_joined(transport, participant):
    transport.capture_participant_transcription(participant["id"])
    # chain.set_participant_id(participant["id"])

    id = participant["id"]

    time.sleep(1.5)

    print(f"Context is: {context}")
    await task.queue_frames([OpenAILLMContextFrame(context)])
@agilebean agilebean changed the title OpenAILLMContext has no attribute 'append' OpenAILLMContext has no attribute 'append' when used with LangchainProcessor Aug 9, 2024
@agilebean (Author)

Just found out that

await task.queue_frames([OpenAILLMContextFrame(context)])

doesn't produce any response at all if the LLM is instantiated as a LangchainProcessor.

In conclusion, the above error message is most likely generated by

LLMUserContextAggregator(context)
LLMAssistantContextAggregator(context)
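
A possible workaround, based only on the working OpenAILLMService example above and not verified against LangchainProcessor: keep the history as a plain list and use the response aggregators instead of the context aggregators, so that .append() is called on an actual list.

messages = [
    {
        "role": "system",
        "content": "You are a helpful LLM...",
    },
]

tma_in = LLMUserResponseAggregator(messages)
tma_out = LLMAssistantResponseAggregator(messages)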

@agilebean (Author)

Can any Pipecat contributor please answer or give a hint on this issue?
The question of how to pass the context is very important for function calling...
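
For concreteness, here is a sketch of what passing the context for function calling would look like (the tools list follows the standard OpenAI tool schema, and get_current_weather is only a hypothetical example; whether LangchainProcessor forwards it to the model is exactly the open question):

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
                "required": ["city"],
            },
        },
    },
]

context = OpenAILLMContext(messages, tools)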

@Vortigern-source

I still get this error when using an OpenAI LLM. Not sure why.
