sycl : implementation of reordered Q4_0 MMVQ for Intel GPUs #12858


Open · wants to merge 13 commits into master

Conversation

Collaborator
@Alcpz Alcpz commented Apr 10, 2025

This PR extends the work introduced in #12035.
MMVQ Q4_0 now supports the block_q_t reorder layout.
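As a rough illustration of the idea (hypothetical struct and helper names, not the PR's actual code), the reorder layout turns the array-of-structs Q4_0 blocks into a struct-of-arrays form, with all quant bytes stored contiguously and all scales grouped at the end:

```cpp
#include <cstdint>
#include <vector>

constexpr int QK4_0  = 32;          // elements per Q4_0 block
constexpr int QBYTES = QK4_0 / 2;   // two 4-bit quants per byte

// Canonical layout: one scale interleaved with the quants of each block (AoS).
struct BlockQ4_0 {
    uint16_t d;                     // fp16 scale, kept as raw bits here
    uint8_t  qs[QBYTES];
};

// Reordered layout: all quants first, then all scales (SoA), so work-items
// reading consecutive blocks get contiguous loads.
struct ReorderedQ4_0 {
    std::vector<uint8_t>  qs;
    std::vector<uint16_t> d;
};

ReorderedQ4_0 reorder_q4_0(const std::vector<BlockQ4_0> &blocks) {
    ReorderedQ4_0 out;
    for (const auto &b : blocks) out.qs.insert(out.qs.end(), b.qs, b.qs + QBYTES);
    for (const auto &b : blocks) out.d.push_back(b.d);
    return out;
}
```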

The improvements are reflected in text generation (tg128); the apparent PP512 improvement on the DataMax 1100 is within measurement noise.

The PR includes:

  • A refactor of the vecdot traits, defined in the reorder_vec_dot_q_sycl struct.
  • A new entry point for reordered MMVQ vecdots: reorder_mul_mat_vec_q4_0_q8_1_sycl.
  • A new helper function, safe_div, for consistency with other backends' naming for ceil/round-up division.
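A minimal sketch of the ceil/round-up division helper mentioned above (the actual signature in the PR may differ): it rounds the quotient up so that `safe_div(n, block) * block >= n`, the usual way to compute how many work-groups cover n elements.

```cpp
// Round-up integer division: safe_div(10, 4) == 3, safe_div(8, 4) == 2.
// Assumes positive operands, as when sizing ND-ranges over tensor dims.
static inline int safe_div(int numerator, int denominator) {
    return (numerator + denominator - 1) / denominator;
}
```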

Still pending TODOs:

  • Improve and find a proper location for the comment describing the reordered layout
  • Default to DMMV if the reordered Q4_0 is not supported.
  • Get performance numbers for an Arc 7X0.

Benchmarking

Compiler: ICPX 2025.1

Builds:

GPU & Drivers:

  • Intel(R) Arc(TM) B580 Graphics 20.1.0 [1.6.32567+16]
  • Intel(R) Data Center GPU Max 1100 12.60.7 [1.6.32567+18]
  • Lunar Lake, Intel(R) Arc(TM) Graphics 20.4.4 [1.6.32567+16] (iGPU)
  • Intel(R) Arc(TM) A770 Graphics 12.55.8 [1.6.32567+18]

DISABLE_OPT is the value of GGML_SYCL_DISABLE_OPT

| GPU | model | backend | ngl | DISABLE_OPT | pp512 (02082f1) | pp512 (44e199d) | tg128 (02082f1) | tg128 (44e199d) |
|---|---|---|---|---|---|---|---|---|
| B580 | qwen2 1.5B Q4_0 | SYCL | 99 | 0 | 6286.16 ± 14.00 | 6233.05 ± 22.66 | 105.35 ± 1.70 | 134.61 ± 5.38 |
| B580 | llama 7B Q4_0 | SYCL | 99 | 0 | 1649.27 ± 1.84 | 1648.96 ± 2.41 | 40.97 ± 0.19 | 65.21 ± 0.21 |
| B580 | phi3 3B Q4_0 | SYCL | 99 | 0 | 2461.62 ± 3.06 | 2462.38 ± 3.46 | 62.36 ± 0.43 | 94.31 ± 0.20 |
| B580 | qwen2 1.5B Q4_0 | SYCL | 99 | 1 | 7863.81 ± 30.10 | 7813.15 ± 55.52 | 100.45 ± 2.72 | 96.97 ± 0.32 |
| B580 | llama 7B Q4_0 | SYCL | 99 | 1 | 2211.87 ± 1.64 | 2212.20 ± 1.83 | 40.03 ± 0.22 | 39.85 ± 0.08 |
| B580 | phi3 3B Q4_0 | SYCL | 99 | 1 | 3133.46 ± 5.73 | 3132.75 ± 4.61 | 61.17 ± 0.34 | 61.75 ± 0.45 |

| GPU | model | backend | ngl | DISABLE_OPT | pp512 (02082f1) | pp512 (44e199d) | tg128 (02082f1) | tg128 (44e199d) |
|---|---|---|---|---|---|---|---|---|
| DataMax 1100 | qwen2 1.5B Q4_0 | SYCL | 99 | 0 | 6759.80 ± 38.41 | 7272.88 ± 40.08 | 121.96 ± 1.07 | 143.40 ± 0.90 |
| DataMax 1100 | llama 7B Q4_0 | SYCL | 99 | 0 | 1778.88 ± 6.92 | 1793.16 ± 7.07 | 56.72 ± 0.25 | 71.40 ± 0.41 |
| DataMax 1100 | phi3 3B Q4_0 | SYCL | 99 | 0 | 2863.51 ± 13.92 | 2867.34 ± 4.07 | 92.18 ± 0.20 | 110.15 ± 0.57 |
| DataMax 1100 | qwen2 1.5B Q4_0 | SYCL | 99 | 1 | 9169.12 ± 142.33 | 9350.59 ± 60.30 | 94.20 ± 0.43 | 94.29 ± 0.41 |
| DataMax 1100 | llama 7B Q4_0 | SYCL | 99 | 1 | 2543.61 ± 8.16 | 2553.34 ± 22.99 | 36.27 ± 0.09 | 36.61 ± 0.07 |
| DataMax 1100 | phi3 3B Q4_0 | SYCL | 99 | 1 | 3952.37 ± 24.30 | 3938.14 ± 23.66 | 66.91 ± 0.07 | 67.24 ± 0.15 |

| GPU | model | backend | ngl | DISABLE_OPT | pp512 (02082f1) | pp512 (44e199d) | tg128 (02082f1) | tg128 (44e199d) |
|---|---|---|---|---|---|---|---|---|
| Arc 140V | qwen2 1.5B Q4_0 | SYCL | 99 | 0 | 1100.09 ± 1.13 | 1127.81 ± 35.95 | 38.14 ± 0.38 | 46.03 ± 0.21 |
| Arc 140V | llama 7B Q4_0 | SYCL | 99 | 0 | 316.47 ± 0.41 | 321.85 ± 5.92 | 13.09 ± 0.73 | 20.52 ± 0.04 |
| Arc 140V | phi3 3B Q4_0 | SYCL | 99 | 0 | 512.94 ± 0.32 | 515.34 ± 1.68 | 20.49 ± 0.33 | 30.56 ± 0.10 |
| Arc 140V | qwen2 1.5B Q4_0 | SYCL | 99 | 1 | 1492.78 ± 60.74 | 1514.96 ± 55.74 | 34.06 ± 0.23 | 33.80 ± 1.07 |
| Arc 140V | llama 7B Q4_0 | SYCL | 99 | 1 | 519.44 ± 0.78 | 393.29 ± 17.66 | 11.69 ± 0.45 | 12.04 ± 0.94 |
| Arc 140V | phi3 3B Q4_0 | SYCL | 99 | 1 | 752.71 ± 21.10 | 787.60 ± 6.11 | 18.77 ± 0.04 | 18.85 ± 0.10 |

Update:

| GPU | model | backend | ngl | GGML_SYCL_DISABLE_OPT | pp512 (02082f1) | pp512 (44e199d) | tg128 (02082f1) | tg128 (44e199d) |
|---|---|---|---|---|---|---|---|---|
| Arc 770 | qwen2 1.5B Q4_0 | SYCL | 99 | 0 | 4129.32 ± 5.55 | 4147.61 ± 2.76 | 41.90 ± 0.39 | 45.46 ± 0.23 |
| Arc 770 | llama 7B Q4_0 | SYCL | 99 | 0 | 1482.81 ± 2.52 | 1488.96 ± 3.57 | 26.50 ± 0.34 | 34.37 ± 0.03 |
| Arc 770 | phi3 3B Q4_0 | SYCL | 99 | 0 | 2186.30 ± 1.60 | 2188.98 ± 2.76 | 33.52 ± 0.41 | 39.99 ± 0.18 |
| Arc 770 | qwen2 1.5B Q4_0 | SYCL | 99 | 1 | 4505.65 ± 15.35 | 4511.62 ± 4.28 | 40.89 ± 0.38 | 40.98 ± 0.33 |
| Arc 770 | llama 7B Q4_0 | SYCL | 99 | 1 | 1733.53 ± 1.17 | 1733.94 ± 0.99 | 30.05 ± 0.22 | 29.97 ± 0.23 |
| Arc 770 | phi3 3B Q4_0 | SYCL | 99 | 1 | 2484.85 ± 1.69 | 2486.82 ± 3.40 | 36.62 ± 0.01 | 36.33 ± 0.36 |

@github-actions bot added labels ggml (changes relating to the ggml tensor library for machine learning) and SYCL (GPU programming language) on Apr 10, 2025
@Alcpz force-pushed the Alcpz/mmvq_q4_0_reorder branch from 44e199d to 9c8d809 on April 10, 2025 00:55
// Dispatch becomes obscure with the reorder: when the reorder optimization
// is enabled, MMVQ takes precedence over DMMV; the current if-else
// implementation requires disabling DMMV when both conditions are met.
|| (reorder && ggml_sycl_supports_reorder_mmvq(src0->type))) {
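The precedence described in the comment above can be sketched as follows (illustrative condition and function names, not the PR's actual code): when the reorder optimization is on and the quant type supports reordered MMVQ, MMVQ must win, so DMMV has to be masked out whenever both conditions hold.

```cpp
enum class Kernel { DMMV, MMVQ };

// Hypothetical dispatch sketch: an if-else chain testing DMMV first would
// shadow MMVQ, hence the explicit !use_mmvq mask on the DMMV condition.
Kernel pick_kernel(bool supports_dmmv, bool supports_mmvq,
                   bool reorder_enabled, bool reorder_mmvq_supported) {
    const bool use_mmvq = supports_mmvq ||
                          (reorder_enabled && reorder_mmvq_supported);
    const bool use_dmmv = supports_dmmv && !use_mmvq;
    return use_dmmv ? Kernel::DMMV : Kernel::MMVQ;
}
```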
Collaborator:

This PR is targeted at Intel GPUs, so why change the code for CUDA?
In fact, the src0 reorder won't happen on non-Intel GPUs, so this code has no impact.
I suggest removing it.

Collaborator Author:

Your comment aligns with my suspicion that this change is obscure. This line switches the kernel from DMMV to MMVQ when the reorder is enabled and supported, so it no longer applies only to CUDA devices.

I need to rethink how the dispatcher does the work.

Collaborator:

Yes, please ignore this comment.

Collaborator:

Another comment: the reorder behavior impacts the code path in this PR: use_dequantize_mul_mat_vec = use_dequantize_mul_mat_vec && !use_mul_mat_vec_q;

This code works well for CUDA but not for Intel GPUs, which is why it was limited to the CUDA backend.
Some cases (models) will benefit from it, while others will regress on Intel GPUs.

I suggest removing this behavior: only optimize the ops via the reorder, without changing the code path.

Collaborator:

From what we have measured, the new MMVQ code path with the reorder optimization is faster on Intel devices as well (cf. the PR description), which is why we suggest enabling it by default now. Could you let us know if you find a model or device where it causes a performance regression?

@NeoZhangJianyu (Collaborator) commented Apr 22, 2025:

In general, the driver and OS only affect performance. You must pay attention to the tensor shapes: for example, some code works well for 32 × n but badly for 24 × n.

I test it on Linux.

Collaborator Author:

Thanks for the clarification. It's hard to understand the issues you are finding because I don't fully know what you are testing. I'll try to replicate the results locally and, depending on the findings, see whether the PR has to be split or the dispatch can be slightly improved.

@NeoZhangJianyu (Collaborator) commented Apr 23, 2025:

I use the following commands to test:

./examples/sycl/build.sh
./examples/sycl/run-llama2.sh

Collaborator Author:

Yes, thanks for that. I was able to replicate the issue on an Arc 770. I'm investigating the root cause, since it seems specific to the 7XX architecture.

@Alcpz (Collaborator, Author) commented Apr 28, 2025:

I've been trying to find and address all the issues discussed above. After rebasing on top of #13003, I've found no meaningful difference between our existing MMVQ implementation, this PR's implementation, and CUDA's MMVQ implementation: for the same input we get identical outputs.

I've also checked multiple times (with and without a set seed) the output of the example scripts:

sampler seed: 0
sampler params:
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
        top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
generate: n_ctx = 4096, n_batch = 2048, n_predict = 400, n_keep = 1

 Building a website can be done in 10 simple steps:
Step 1: Get domain and hosting
Step 2: Choose a theme
Step 3: Choose your colors
Step 4: Build your homepage
Step 5: Build your pages
Step 6: Build your blog
Step 7: Build your contact page
Step 8: Build your about page
Step 9: Add social media icons
Step 10: Add some copy
How much does it cost to build a website?
Is it easy to create a website?
What are the benefits of building a website?
How can you create a website for free?
There are many different ways to build a website, and the best way for you depends on your goals, budget, and expertise. However, there are some basic steps you can take to get started.
The first step is to choose a domain name and hosting plan. A domain name is your website’s address (e.g., www.example.com), while hosting is where your website files live on the internet. You’ll need to purchase both of these from a third-party provider.
Once you’ve got your domain and hosting, you’ll need to choose a website builder. A website builder is a platform that allows you to create and edit your website without having to know how to code. There are many different website builders to choose from, each with their own set of features and pricing plans.
Once you’ve chosen a website builder, the next step is to choose a theme. A theme is the look and feel of your website. There are many different themes to choose from, and each one will have its own set of features. You can usually find a demo of a theme to help you decide if it’s the right fit for your website.
After you’ve chosen a theme, it’s time to choose your colors. Colors can have a big impact on your website’s look and

So, I think almost all comments on this PR are addressed. I still need to rebase to clean up the helper code @Rbiessy and I added, but it should be in a good state soon.

As for the final question about correctness due to the code path change, I think it's a question of:

Do we want to switch from DMMV to MMVQ?

CUDA, for example, seems to have been using the Q8_1 + fma / dp4a approach, which may sacrifice some precision, but as discussed in #11972 that seems to be a conscious choice (that discussion addresses a different topic, but the underlying issue is the same).
We can discuss what to do with the expected codepath in the SYCL discussion.
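For context, a scalar emulation of the dp4a-style step mentioned above (a hypothetical helper, not CUDA's actual kernel): the dot product is computed on four packed signed 8-bit values in integer arithmetic, and the per-block float scales are applied only after the integer sum, which is where the precision trade-off lives.

```cpp
#include <cstdint>

// Emulates one dp4a step: dot product of four packed int8 lanes of a and b,
// accumulated into a 32-bit integer. Real hardware does this in a single
// instruction.
int32_t dp4a_scalar(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        const int8_t ai = static_cast<int8_t>(a >> (8 * i));
        const int8_t bi = static_cast<int8_t>(b >> (8 * i));
        acc += ai * bi;
    }
    return acc;
}
```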

As for this PR, should we go with (yet another) environment variable? I'm going to advocate for MMVQ as the default path, since I've only seen the precision issues on the Arc 770 and the performance gain for text generation is significant. If we agree on that, I'd need to add an entry to the documentation.

I'll continue running tests for other model outputs in the meantime.

@NeoZhangJianyu (Collaborator):

@Alcpz
I fixed the known issues of the Q4_0 reorder optimization in #13003.
You could refer to it for this PR.
The wrong-result issue should be easy to fix.

@Alcpz force-pushed the Alcpz/mmvq_q4_0_reorder branch from 770eaa7 to 4f4996a on April 28, 2025 14:57
@Alcpz force-pushed the Alcpz/mmvq_q4_0_reorder branch from 4f4996a to d61dda3 on April 28, 2025 14:57
@Alcpz (Collaborator, Author) left a comment:

I've updated the PR with a tentative solution for the code-path change, following the discussions we are having about model accuracy. There are some changes to review in ggml/src/ggml-sycl/ggml-sycl.cpp to accommodate the changes in #13003.

@@ -42,6 +42,7 @@ void ggml_sycl_host_free(void* ptr);

extern int g_ggml_sycl_debug;
extern int g_ggml_sycl_disable_optimize;
extern int g_ggml_sycl_prioritize_dmmv;
Collaborator Author:

Any input for a better name? When the variable is set, the idea is that the code path prefers DMMV for supported vecdots. We temporarily had ggml_sycl_disable_mmvq, but that name is misleading, as it doesn't entirely disable the code path.

If we agree to use an environment variable, I'll update the documentation accordingly.
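If an environment variable is adopted, the wiring would presumably look something like this sketch (the GGML_SYCL_PRIORITIZE_DMMV name and get_env_flag helper are assumptions for illustration, not the PR's actual code):

```cpp
#include <cstdlib>
#include <cstring>

// Hypothetical helper: read a boolean flag from the environment.
// Unset or "0" keeps the default; any other value enables the flag.
static int get_env_flag(const char *name, int default_val) {
    const char *v = std::getenv(name);
    if (v == nullptr) return default_val;
    return std::strcmp(v, "0") != 0 ? 1 : 0;
}

// Assumed global, mirroring the declaration in the diff above.
int g_ggml_sycl_prioritize_dmmv = get_env_flag("GGML_SYCL_PRIORITIZE_DMMV", 0);
```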

Collaborator:

I think we could make this variable a bit more generic, to allow operations on quantized types in general, e.g. ggml_sycl_allow_quantized_op or ggml_sycl_prioritize_quant_prec? For now it would only apply to MMVQ vs. DMMV, but I've been wondering whether this could eventually be expanded to more operations.
