
System role problem running Gemma 2 on vLLM #1386

Closed
@juanjuanignacio

Description


Hello,

I'm running chat-ui and trying out some models. With Phi-3 and Llama I had no problems, but when I run Gemma 2 on vLLM I can't make a single successful API request.
In .env.local:
{
  "name": "google/gemma-2-2b-it",
  "id": "google/gemma-2-2b-it",
  "chatPromptTemplate": "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @FIRST}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}}",
  "parameters": {
    "temperature": 0.1,
    "top_p": 0.95,
    "repetition_penalty": 1.2,
    "top_k": 50,
    "truncate": 1000,
    "max_new_tokens": 2048,
    "stop": ["<end_of_turn>"]
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "http://127.0.0.1:8000/v1"
    }
  ]
}
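
For context, with an "openai"-type endpoint chat-ui posts structured messages to vLLM's /v1/chat/completions and, as far as I can tell, sends its preprompt as a "system" message, so the chatPromptTemplate above is not what vLLM actually receives. This is roughly the request that fails for me, as a minimal sketch with the openai Python client (the exact payload chat-ui builds is my assumption):

# Minimal sketch reproducing the 400 outside of chat-ui; assumes the vLLM
# OpenAI-compatible server above is listening on 127.0.0.1:8000.
from openai import OpenAI, BadRequestError

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")

try:
    # A "system" turn (what the chat-ui preprompt presumably becomes) is
    # what Gemma 2's chat template rejects.
    client.chat.completions.create(
        model="google/gemma-2-2b-it",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
    )
except BadRequestError as e:
    print("400 as in the vLLM log:", e)

# The same request without the system turn goes through fine.
resp = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)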

and I always get the same response from the vLLM server:

ERROR 08-05 12:39:06 serving_chat.py:118] Error in applying chat template from request: System role not supported
INFO: 127.0.0.1:42142 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
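
The error text itself seems to come from the model rather than from vLLM: the chat template shipped in google/gemma-2-2b-it's tokenizer_config.json calls raise_exception('System role not supported') as soon as the first message has the system role, and vLLM just surfaces that as a 400. It can be reproduced with transformers alone (sketch; the repo is gated, so it needs a Hugging Face token with access):

# Sketch: trigger the same error directly from the tokenizer's chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

try:
    tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
except Exception as e:
    # jinja2 TemplateError: System role not supported
    print(type(e).__name__, e)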

Does anyone know whether I have to change the chat template (and how), or how to deactivate the system role? Is this a vLLM problem or a chat-ui problem?

Thank you!
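
One workaround I'm experimenting with (untested sketch) is to give vLLM a custom chat template via its --chat-template option, using a variant of the Gemma template that folds a system message into the first user turn instead of raising. The template below is my own adaptation, not something shipped with the model:

# Untested sketch: write a modified Gemma chat template that merges a system
# message into the first user turn, then point vLLM at it with --chat-template.
from pathlib import Path

GEMMA2_TEMPLATE_WITH_SYSTEM = """\
{{ bos_token }}\
{%- if messages[0]['role'] == 'system' -%}
    {%- set system_message = messages[0]['content'] | trim -%}
    {%- set loop_messages = messages[1:] -%}
{%- else -%}
    {%- set system_message = '' -%}
    {%- set loop_messages = messages -%}
{%- endif -%}
{%- for message in loop_messages -%}
    {%- set role = 'model' if message['role'] == 'assistant' else message['role'] -%}
    {{- '<start_of_turn>' + role + '\\n' -}}
    {%- if loop.first and system_message -%}{{- system_message + '\\n\\n' -}}{%- endif -%}
    {{- message['content'] | trim + '<end_of_turn>\\n' -}}
{%- endfor -%}
{%- if add_generation_prompt -%}{{- '<start_of_turn>model\\n' -}}{%- endif -%}
"""

Path("gemma2_with_system.jinja").write_text(GEMMA2_TEMPLATE_WITH_SYSTEM)

# Then start the server with the custom template, e.g.:
#   python -m vllm.entrypoints.openai.api_server \
#       --model google/gemma-2-2b-it \
#       --chat-template ./gemma2_with_system.jinja

With that in place vLLM should accept the system message chat-ui sends, though I haven't verified that the whitespace exactly matches what the model was trained with, so treat it as a starting point.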

Labels: support (a request for help setting things up)
