Add fp8 support for llama model family on Navi4x #873
Annotations
6 errors
Analysing the code with ruff:
vllm/model_executor/layers/quantization/fp8.py#L430
vllm/model_executor/layers/quantization/fp8.py:430:81: E501 Line too long (83 > 80)
Analysing the code with ruff:
vllm/model_executor/models/llama.py#L90
vllm/model_executor/models/llama.py:90:81: E501 Line too long (101 > 80)
Analysing the code with ruff:
vllm/model_executor/models/llama.py#L97
vllm/model_executor/models/llama.py:97:81: E501 Line too long (99 > 80)
Analysing the code with ruff:
vllm/model_executor/models/llama.py#L231
vllm/model_executor/models/llama.py:231:81: E501 Line too long (101 > 80)
Analysing the code with ruff:
vllm/utils.py#L430
vllm/utils.py:430:81: E501 Line too long (82 > 80)
Analysing the code with ruff
Process completed with exit code 1.
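All six annotations above are ruff E501 (Line too long) violations against the default 80-column limit. A hypothetical sketch of the usual fix, wrapping a long call with implicit line continuation inside parentheses (the function and argument names here are illustrative, not the actual vllm code on the flagged lines):

```python
# Before (a single call pushed past the 80-column limit, triggering E501):
# cfg = make_layer_config(name="llama_fp8", quant_dtype="fp8_e4m3", block_size=128, scheme="dynamic")

def make_layer_config(name, quant_dtype, block_size, scheme):
    """Build a small config dict for a quantized layer (illustrative)."""
    return {
        "name": name,
        "quant_dtype": quant_dtype,
        "block_size": block_size,
        "scheme": scheme,
    }

# After: implicit continuation inside the parentheses keeps every line
# within ruff's default 80-column limit, one argument per line.
cfg = make_layer_config(
    name="llama_fp8",
    quant_dtype="fp8_e4m3",
    block_size=128,
    scheme="dynamic",
)
print(cfg["quant_dtype"])
```

Note that E501 is not auto-fixed by `ruff check --fix`; running the project's formatter (if one is configured) will often rewrap such lines automatically, otherwise they need to be wrapped by hand as above.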