OpenVino GPU single matmul test #379

Open · WangJialei-A wants to merge 1 commit into main from wangjial/ov_test

Conversation

WangJialei-A (Contributor):

This PR currently depends on a development repo and the branch 'dchigarev/openvino/gc-gpu'.

So it is not ready to merge yet, but it is ready for review.

This PR adds support for a single matmul test with OpenVINO on GPU.

The test will be triggered automatically and remotely by the nightly job in the gc-perf repository.
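For context, this kind of remote trigger is typically wired up with a repository_dispatch event that the gc-perf nightly job sends through the GitHub API; the sketch below only illustrates the receiving side in this repository's workflow file, and the event type name is an assumption, not something defined by this PR.

on:
  repository_dispatch:
    types: [gc-perf-nightly]   # hypothetical event type sent by the gc-perf nightly job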

@kurapov-peter Please comment on the PR if you need to run more test cases or collect more metrics.

cmake -B build -G Ninja -DLLVM_DIR=${LLVM_INST_PATH}/lib/cmake/llvm -DMLIR_DIR=${LLVM_INST_PATH}/lib/cmake/mlir -DENABLE_GRAPH_COMPILER=ON -DENABLE_INTEL_GPU=ON -DENABLE_TESTS=ON
cmake --build build --target all
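# Note (assumption, not part of this PR's diff): before the Benchmark step below it can be
# useful to confirm that the runner actually exposes a GPU device to OpenVINO once the
# 'openvino' Python package is installed; a minimal check would be:
#   python3 -c "import openvino as ov; print(ov.Core().available_devices)"   # expect 'GPU' in the list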

- name: Benchmark
dchigarev (Contributor) commented on Oct 14, 2024:

Let's also add a step that runs the sanity tests for the GPU integration in OV:

OV_MLIR_MODE=GC_GPU ./bin/intel64/Release/ov_gpu_func_tests --gtest_filter=MLIRExecution.*
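Wrapped as a workflow step, in the same style as the Benchmark step in this diff, that could look roughly like the sketch below; the step name and the assumption that the step runs from the OpenVINO build root (where bin/intel64/Release lives) are mine:

- name: OV GPU MLIR sanity tests
  run: |
    OV_MLIR_MODE=GC_GPU ./bin/intel64/Release/ov_gpu_func_tests --gtest_filter=MLIRExecution.*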

WangJialei-A force-pushed the wangjial/ov_test branch 4 times, most recently from 71a4de7 to 02640c7 on October 15, 2024 at 03:17.
  run: |
    pip install openvino torch
    for param in 'linear[512,512,512]' 'linear[1024,1024,1024]' 'linear[2048,2048,2048]' 'linear[4096,4096,4096]' 'linear[8192,8192,8192]' 'linear[4096,512,4096]'; do
      python3 ./tools/mlir_bench/ov_model_gen.py -l=$param -t f16 -n test.xml
dchigarev (Contributor) commented on Oct 15, 2024:

I think the root cause of the error we see in CI is this bug (#360):

loc(fused<{name = "aten::linear/Add", type = "Add"}>["aten::linear/Add"]): error: operand #0 does not dominate this use

The GPU pipeline currently does not work for models generated by ov_model_gen.py; only simple matmul modules like this are working (but I'm already working on a fix).
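A hedged local reproduction sketch, assuming that OV_MLIR_MODE=GC_GPU also routes ordinary inference (for example via OpenVINO's benchmark_app) through the graph-compiler GPU pipeline; that routing is an assumption on my part, not something stated in this thread:

# Generate one of the failing aten::linear models, then run it once on GPU to hit the
# "operand #0 does not dominate this use" error tracked in #360.
python3 ./tools/mlir_bench/ov_model_gen.py -l='linear[512,512,512]' -t f16 -n test.xml
OV_MLIR_MODE=GC_GPU benchmark_app -m test.xml -d GPU -niter 1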
