Add fp8 support for llama model family on Navi4x #873
Workflow: ruff.yml (on: pull_request)
Triggered via pull request, October 25, 2024 05:32
Status: Failure
Total duration: 22s
Artifacts: none
Matrix: ruff (Python 3.8, 3.9, 3.10, 3.11, 3.12)

Annotations

28 errors: the same five E501 violations reported by each matrix job that ran, plus the per-job process-failure and cancellation notices.
ruff (3.12)
vllm/model_executor/layers/quantization/fp8.py:430:81: E501 Line too long (83 > 80)
vllm/model_executor/models/llama.py:90:81: E501 Line too long (101 > 80)
vllm/model_executor/models/llama.py:97:81: E501 Line too long (99 > 80)
vllm/model_executor/models/llama.py:231:81: E501 Line too long (101 > 80)
vllm/utils.py:430:81: E501 Line too long (82 > 80)
Process completed with exit code 1.
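
All five annotations are E501 violations of the 80-column limit. The flagged source lines themselves are not shown in this log, so the snippet below is only an illustrative sketch of the usual fix (the function and names are hypothetical, not from the vllm codebase): wrapping a long expression in parentheses so that no physical line exceeds 80 characters.

```python
def describe_fp8_support(model: str, gpu: str, enabled: bool) -> str:
    # Returning this f-string on a single line would exceed 80 columns
    # and trip E501. Implicit string concatenation inside parentheses
    # keeps every physical line short without changing the result.
    return (
        f"model={model} gpu={gpu} "
        f"fp8_quantization_enabled={enabled}"
    )

print(describe_fp8_support("llama", "Navi4x", True))
```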

ruff (3.8)
vllm/model_executor/layers/quantization/fp8.py:430:81: E501 Line too long (83 > 80)
vllm/model_executor/models/llama.py:90:81: E501 Line too long (101 > 80)
vllm/model_executor/models/llama.py:97:81: E501 Line too long (99 > 80)
vllm/model_executor/models/llama.py:231:81: E501 Line too long (101 > 80)
vllm/utils.py:430:81: E501 Line too long (82 > 80)
Process completed with exit code 1.

ruff (3.10)
The job was canceled because "_3_12" failed.
vllm/model_executor/layers/quantization/fp8.py:430:81: E501 Line too long (83 > 80)
vllm/model_executor/models/llama.py:90:81: E501 Line too long (101 > 80)
vllm/model_executor/models/llama.py:97:81: E501 Line too long (99 > 80)
vllm/model_executor/models/llama.py:231:81: E501 Line too long (101 > 80)
vllm/utils.py:430:81: E501 Line too long (82 > 80)
Process completed with exit code 1.

ruff (3.11)
The job was canceled because "_3_12" failed.
vllm/model_executor/layers/quantization/fp8.py:430:81: E501 Line too long (83 > 80)
vllm/model_executor/models/llama.py:90:81: E501 Line too long (101 > 80)
vllm/model_executor/models/llama.py:97:81: E501 Line too long (99 > 80)
vllm/model_executor/models/llama.py:231:81: E501 Line too long (101 > 80)
vllm/utils.py:430:81: E501 Line too long (82 > 80)
Process completed with exit code 1.

ruff (3.9)
The job was canceled because "_3_12" failed.
The operation was canceled.
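
Every job that ran fails on the same five lines, so the failure is reproducible locally by running ruff on the repository before pushing. As a rough stand-in, the sketch below is a simplified re-implementation of the E501 check only, not ruff's actual logic; it scans the given files for lines over 80 characters and prints matches in the same file:line:col format as the annotations above.

```python
import sys
from pathlib import Path

MAX_LEN = 80  # the column limit this CI run enforces

def long_lines(path: Path):
    """Yield (line_number, line_length) for lines longer than MAX_LEN."""
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if len(line) > MAX_LEN:
            yield lineno, len(line)

if __name__ == "__main__":
    # Example: python check_e501.py vllm/utils.py
    for arg in sys.argv[1:]:
        for lineno, length in long_lines(Path(arg)):
            print(f"{arg}:{lineno}:{MAX_LEN + 1}: E501 "
                  f"Line too long ({length} > {MAX_LEN})")
```

For authoritative results run ruff itself, which reads its line-length setting from the project configuration; a scan like this is only enough to catch the plain over-length cases reported above.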