AgentScope is an open-source project. To involve a broader community, we recommend asking your questions in English.
Describe the bug
The docstring of OllamaChatWrapper.format states that the message in the formatted messages list should have the role "user". However, the actual implementation sets the role to "system", so the LLM receives only a system message and no user message, and ollama_chat_llama3.1 does not appear to respond to a system message on its own.
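To Reproduce
Steps to reproduce the behavior are listed below. The exact model config from the report is not reproduced here; the snippet is only a minimal sketch, assuming AgentScope's ollama_chat config keys and a locally pulled llama3.1 model:

```python
import agentscope

# Hypothetical model config; "config_name" follows the "[ollama]" name in the
# log below, and "model_name" is an assumption rather than the reporter's value.
agentscope.init(
    model_configs=[
        {
            "config_name": "ollama",
            "model_type": "ollama_chat",
            "model_name": "llama3.1",
        },
    ],
)
```

1. Run /examples/conversation_basic/conversation.py.
2. Add the model config in agentscope.init, roughly as sketched above.
3. Start a conversation: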
2024-09-13 16:31:17.378 | INFO | agentscope.models.model:__init__:203 - Initialize model by configuration [ollama]
Assistant:
User input: Hello
User: Hello
Assistant:
User input: what's ur name
User: what's ur name
Assistant:
Debugging shows that the message returned by the LLM is an empty string.
If OllamaChatWrapper.format is modified to match its docstring, the model produces output normally.

The OllamaChatWrapper.format docstring is in agentscope/src/agentscope/models/ollama_model.py, lines 288 to 302 in 6c823a9, but the actual implementation (lines 361 to 369 in 6c823a9) sets the role to "system".
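For illustration only (paraphrased, not copied from the source), the docstring describes a single merged message carrying the "user" role, while the current code returns the same merged content under the "system" role:

```python
# Paraphrased shape of what the docstring says OllamaChatWrapper.format
# returns: one message with role "user", containing the system prompt and the
# dialogue history merged into its content (content here is illustrative).
prompt_per_docstring = [
    {
        "role": "user",
        "content": (
            "You are a helpful assistant\n"
            "\n"
            "## Conversation History\n"
            "user: Hello\n"
        ),
    },
]

# Paraphrased shape of what the implementation at lines 361 to 369 actually
# returns: the same merged content, but with role "system".
prompt_in_implementation = [
    {
        "role": "system",  # "user" is expected here according to the docstring
        "content": "... same merged system prompt and conversation history ...",
    },
]
```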
When I use ollama_chat_llama3, this problem does not occur.
Perhaps a more general solution could be adopted, for example:

1. Fix the role in the implementation to "user", as described in the docstring.
2. (Optional) Follow the more common pattern of putting the system prompt in a "system" message and the conversation history in a "user" message, which might improve clarity and consistency (a rough sketch of this follows below).
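A minimal sketch of the second option, assuming the conversation history has already been merged into a single string the way format does today; format_split and its parameters are illustrative names, not AgentScope APIs:

```python
# Illustrative only: keep the system prompt in a "system" message and put the
# merged conversation history in a "user" message.
def format_split(sys_prompt: str, history_text: str) -> list[dict]:
    messages = []
    if sys_prompt:
        messages.append({"role": "system", "content": sys_prompt})
    messages.append({"role": "user", "content": history_text})
    return messages


# The model then receives both a system message and a user message.
print(format_split("You are a helpful assistant.", "user: Hello"))
```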
According to the official Ollama documentation, there is not much difference between the templates of llama3 and llama3.1 when there is only one message with the role set to "system".
We successfully tested Llama2, Llama3, Qwen:0.5, and Phi with the role changed from "system" to "user" in #443.
Regarding the suggestion to split the prompt into a system message and a user message, we need to conduct more testing before making any modifications.
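For anyone who wants to check the model behaviour outside AgentScope, a quick probe with the ollama Python client (assuming llama3.1 has been pulled locally; the message content is illustrative) can compare a system-only prompt with the same text sent as a user message:

```python
import ollama

# The shape the current format produces: a single system message.
only_system = ollama.chat(
    model="llama3.1",
    messages=[{"role": "system", "content": "You are a helpful assistant.\nuser: Hello"}],
)
print("system-only reply:", only_system["message"]["content"])

# The shape the docstring describes: the same text sent as a user message.
as_user = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "You are a helpful assistant.\nuser: Hello"}],
)
print("user reply:", as_user["message"]["content"])
```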