
[TESTS] Use FP32 inference precision, FP16 KV cache precision for pipelines #21

Triggered via pull request: January 6, 2025 17:17
Status: Cancelled
Total duration: 32s
Artifacts: none

Workflow: genai-tools.yml
Trigger: on: pull_request

Jobs:
- Download OpenVINO (0s)
- Matrix: LLM bench tests
- Matrix: WWB tests
- ci/gha_overall_status_llm_bench (0s)
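For orientation, a minimal sketch of how a workflow with this job layout is typically wired together. The job IDs, matrix axes, runner labels, and step contents below are assumptions for illustration, not the actual contents of genai-tools.yml.

```yaml
# Hypothetical skeleton; not the actual genai-tools.yml contents.
name: genai tools

on:
  pull_request:

jobs:
  download_openvino:
    name: Download OpenVINO
    runs-on: ubuntu-latest
    steps:
      - run: echo "fetch OpenVINO archive here"     # placeholder step

  llm_bench:
    name: LLM bench tests
    needs: download_openvino
    strategy:
      matrix:
        python-version: ["3.10", "3.11"]            # assumed matrix axis
    runs-on: ubuntu-latest
    steps:
      - run: echo "run LLM bench tests"             # placeholder step

  wwb:
    name: WWB tests
    needs: download_openvino
    strategy:
      matrix:
        python-version: ["3.10", "3.11"]            # assumed matrix axis
    runs-on: ubuntu-latest
    steps:
      - run: echo "run WWB tests"                   # placeholder step

  overall_status:
    name: ci/gha_overall_status_llm_bench
    needs: [llm_bench, wwb]
    if: always()
    runs-on: ubuntu-latest
    steps:
      - run: echo "aggregate results; exit 1 if any dependency failed or was cancelled"  # placeholder step
```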

Annotations

2 errors and 1 warning
Error, Download OpenVINO: Canceling since a higher priority waiting request for 'refs/pull/1485/merge-llm-bench-python' exists
Error, ci/gha_overall_status_llm_bench: Process completed with exit code 1.
Warning, ci/gha_overall_status_llm_bench: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
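The "Canceling since a higher priority waiting request … exists" message is the standard GitHub Actions output when a workflow defines a concurrency group with cancel-in-progress, so a newer run for the same PR ref supersedes the in-flight one. A minimal sketch of such a block, assuming the group is keyed on the ref plus a workflow-specific suffix (the exact key in genai-tools.yml may differ):

```yaml
# Hypothetical concurrency block; the actual group key in genai-tools.yml may differ.
concurrency:
  # Groups runs per ref (e.g. refs/pull/1485/merge) with a workflow-specific suffix,
  # producing group names like 'refs/pull/1485/merge-llm-bench-python'.
  group: ${{ github.ref }}-llm-bench-python
  # Cancel the in-flight run when a newer run for the same group is queued.
  cancel-in-progress: true
```

The ci/gha_overall_status_llm_bench error (exit code 1) is then a consequence of the cancellation: the aggregate status job reports failure because its dependent jobs did not complete successfully.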