
[HOTFIX] Fix the error in format function by adding system message #472

Merged 1 commit on Oct 24, 2024

Conversation

DavdGao (Collaborator) commented Oct 24, 2024

Description

I found that the current format strategy leads to misunderstanding.

For example, with the following formatted prompt, the API provider automatically prepends its own default system prompt, and the LLM then refuses to act as the new role, e.g. "Friday":

prompt = [
    {"role": "user", "content": "You're a helpful assistant named Friday\n#Conversation History\nuser:Hello!"}
]

With the above prompt, Qwen-max responds "I'm Qwen, not Friday."

Solution

We check whether there is a system prompt and, if there is, place it at the beginning of the formatted prompt as a standalone system message.

prompt = [
    {"role": "system", "content": "You're a helpful assistant named Friday"},
    {"role": "user", "content": "#Conversation History\nuser:Hello!"}
]
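The fix described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual AgentScope code; the function name `format_messages` and its exact merging logic are assumptions based on the before/after prompts in this PR.

```python
def format_messages(messages):
    """Pull a system prompt out into its own message.

    `messages` is a list of {"role": ..., "content": ...} dicts.
    Illustrative sketch of this PR's strategy, not the real implementation.
    """
    system_msgs = [m for m in messages if m["role"] == "system"]
    other_msgs = [m for m in messages if m["role"] != "system"]

    prompt = []
    if system_msgs:
        # Keep the system prompt as a standalone system message so the
        # API provider does not prepend its own default one instead.
        prompt.append({"role": "system", "content": system_msgs[0]["content"]})

    # Merge the remaining messages into a single user message under a
    # conversation-history header, mirroring the example format above.
    history = "\n".join(f'{m["role"]}:{m["content"]}' for m in other_msgs)
    prompt.append({"role": "user", "content": "#Conversation History\n" + history})
    return prompt
```

With the "Friday" example, this produces the two-message prompt shown above instead of a single user message containing the persona instruction.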

Checklist

Please check the following items before code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

@DavdGao DavdGao changed the title Fix the error in format function by adding system message [HOTFIX] Fix the error in format function by adding system message Oct 24, 2024
@DavdGao DavdGao added the bug Something isn't working label Oct 24, 2024
xieyxclack (Collaborator) left a comment


LGTM, but I am not sure if all LLM services support setting system prompts (i.e., providing {"role": "system", "content": "xxx"} ).

DavdGao (Collaborator, Author) commented Oct 24, 2024

LGTM, but I am not sure if all LLM services support setting system prompts (i.e., providing {"role": "system", "content": "xxx"} ).

The model APIs involved in this PR are Ollama, DashScope, LiteLLM, Yi, and Zhipu. We have confirmed that Ollama, DashScope, Yi, and Zhipu support system prompts, while LiteLLM handles the system prompt within its own library (if the underlying model does not support it, LiteLLM converts the system message into a user message).
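The fallback behavior described for LiteLLM can be sketched as follows. This is a simplified illustration of the described conversion, not LiteLLM's actual code; the function name and `supports_system` flag are hypothetical.

```python
def demote_system_message(messages, supports_system=False):
    """Convert system messages to user messages when the target
    model does not support the system role.

    Simplified illustration of the fallback described above;
    not LiteLLM's actual implementation.
    """
    if supports_system:
        return list(messages)
    return [
        {"role": "user", "content": m["content"]} if m["role"] == "system" else m
        for m in messages
    ]
```

This keeps the prompt well-formed for providers that reject the system role, at the cost of the persona instruction carrying less weight as a user message.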


ZiTao-Li (Collaborator) left a comment


LGTM

@DavdGao DavdGao merged commit 9cde8a6 into modelscope:main Oct 24, 2024
13 checks passed