
Sync Sub-channel quantized type from llvm-project (pre-merge) to npu-plugin #101


Open
wants to merge 1 commit into base: npu/release/18.x

Conversation

@dbudii commented Feb 25, 2025:

Summary

  • Trim implementation details to include only what is necessary for npu-plugin
  • Remove the limitation on negative scales to align with the other quantized data types
  • Add documentation for a better understanding of current and new quantized data type

The original work from llvm-project is nearing completion: llvm/llvm-project#120172

EISW-156316

Related PR in NPU Compiler and/or OpenVINO repository with sub-module update

  • PR-xxx

Other related tickets

List tickets for additional work, e.g., something found during review that you agreed to address in another Jira.

  • E-xxxxx

…plugin-llvm

- Trim implementation details to include only what is necessary for npu-plugin
- Remove the limitation on negative scales to align with the other quantized data types
- Add documentation for a better understanding of current and new quantized data type
@dbudii requested a review from a team as a code owner on February 25, 2025, 12:32
module @parseUniformSubChannel attributes {
// CHECK: !quant.uniform<i8:f32:{0:1,1:2}, {{\{}}{2.000000e+00:10, 3.000000e+00:20}, {4.000000e+00:30, 5.000000e+00:40}}>
bytecode.test = !quant.uniform<i8:f32:{0:1, 1:2}, {{2.0:10, 3.0:20}, {4.0:30, 5.0:40}}>
} {}
@dbudii (Author) commented Feb 25, 2025:

I will update this with the next changes.
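For readers new to the syntax, the type in the test above pairs each quantized axis with a block size: {0:1, 1:2} means block size 1 along axis 0 and block size 2 along axis 1, so a 2x4 tensor is covered by the 2x2 table of scale:zero_point pairs. A minimal Python sketch of the index-to-parameter mapping, assuming the {axis:block_size} interpretation from llvm/llvm-project#120172 (all names here are illustrative only):

# Sub-channel type from the test:
#   !quant.uniform<i8:f32:{0:1, 1:2}, {{2.0:10, 3.0:20}, {4.0:30, 5.0:40}}>
block_sizes = {0: 1, 1: 2}
params = [[(2.0, 10), (3.0, 20)],
          [(4.0, 30), (5.0, 40)]]

def params_for(i, j):
    # Element [i][j] uses the (scale, zero_point) of the block it falls into.
    return params[i // block_sizes[0]][j // block_sizes[1]]

assert params_for(0, 0) == (2.0, 10)
assert params_for(0, 3) == (3.0, 20)  # columns 2-3 share the second block
assert params_for(1, 1) == (4.0, 30)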

];
}

#endif // QUANT_BYTECODE
A contributor commented:
eof

@dbudii (Author) commented Feb 25, 2025:

Validation looks good.
[Screenshot of validation results, 2025-02-25]

@@ -14,6 +14,7 @@
#include "mlir/IR/Diagnostics.h"
#include "mlir/Support/LogicalResult.h"
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/STLExtras.h"
A contributor commented:

Is this necessary?

@dbudii (Author) commented Feb 26, 2025:

Tests akin to the following need to be integrated:

!qalias = !quant.uniform<u8:f32:{0:1,1:2},
    {{2.000000e+02:120,9.987200e-01:127}, {2.000000e+02,9.987200e-01}}>
func.func @sub_channel_quantization(%arg0: tensor<2x4xi8>) -> tensor<2x4xi8> {
  %0 = quant.scast %arg0 : tensor<2x4xi8> to tensor<2x4x!qalias>
  %1 = quant.dcast %0 : tensor<2x4x!qalias> to tensor<2x4xf32>
  %2 = quant.qcast %1 : tensor<2x4xf32> to tensor<2x4x!qalias>
  %3 = quant.scast %2 : tensor<2x4x!qalias> to tensor<2x4xi8>
  return %3 : tensor<2x4xi8>
}

I don't see similar tests for other quantized data types in our fork. Would we want these tested only for npu-plugin custom quantize operations? @ZoranZomborat @hrotuna @nikita-kud
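For context on what the round-trip above exercises: quant.dcast dequantizes and quant.qcast re-quantizes with the per-block parameters, using the usual affine formulas. A rough per-element sketch, assuming standard affine semantics (the helper names are illustrative, not the actual implementation):

# Affine (de)quantization with sub-channel parameters.
# scales/zps hold one entry per block along each quantized axis.
def dequantize(q, i, j, scales, zps, block_sizes):
    bi, bj = i // block_sizes[0], j // block_sizes[1]
    return (q - zps[bi][bj]) * scales[bi][bj]

def quantize(f, i, j, scales, zps, block_sizes, qmin=0, qmax=255):
    bi, bj = i // block_sizes[0], j // block_sizes[1]
    q = round(f / scales[bi][bj]) + zps[bi][bj]
    return max(qmin, min(qmax, q))  # clamp to the storage range (u8 here)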

@nikita-kud (Contributor) commented:
I expect it will be much harder to upgrade LLVM to version 19/20. If we really need it, could you put some effort into updating LLVM first?

@ZoranZomborat (Contributor) commented:
> I expect it will be much harder to upgrade LLVM to version 19/20. If we really need it, could you put some effort into updating LLVM first?

Unfortunately, we can't take on the upstream effort ourselves, not with the current task prioritization.

@nikita-kud (Contributor) left a review:

Sorry, but I have to "block" the PR until we clarify some questions.

I really don't think we can merge:

  • something that is more than one release cycle ahead of us. I would not mind merging something from the 19.x branch into the 18.x branch, or from main into the 20.x branch (the latest release branch right now)
  • something that is not even part of the LLVM main branch (the sub-channel quantized type implementation, llvm/llvm-project#120172, is not merged yet)

I understand this is inconvenient, but it just means the team should pay more attention to the LLVM integration issue. We need to address it properly, not make it worse.

@ZoranZomborat (Contributor) commented:

Alright, let's work through all the open questions here.

Regarding taking in work from the LLVM repo that is not yet merged: it's still better than attempting our own solution and paying a bigger price later when we update to LLVM 21. From what I've read in the LLVM discussions on extending/refactoring the quant dialect, as well as on extending the APFloat semantics for more custom formats, the proposal here falls in line with the LLVM community's direction.

FYI, we also plan to deprecate the QuantileQuantization class in favor of QuantileUniform: https://jira.devtools.intel.com/browse/EISW-158454

While there may still be some changes inbound for llvm/llvm-project#120172 before it's merged, we can help with future LLVM updates where the Quant dialect is concerned. But we can't take over the full upstream effort.

// [1][2] and [1][3] use scale `s11` and zero point `z11`,
// [2][0] and [2][1] use scale `s20` and zero point `z20`,
// [2][2] and [2][3] use scale `s21` and zero point `z21`,
tensor<3x4x!quant.uniform<i8:f32:{0:1, 1:2},
A contributor commented:

This fits your GPTQ quantization use cases perfectly.

For a weight set of 4096x1024 where the first dimension has a group size of 128, we can represent it either as the unrolled case of 32x128x1024 with !quant.uniform<i8:f32:{1:2, 1:1}, ...
or as the aggregate case of 4096x1024 with !quant.uniform<i8:f32:{0:1, 32:1}, ...
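The two layouts index the same parameters. A quick sanity check, assuming a group size of 128 along dim 0 and per-column parameters along dim 1 (the exact block-size spelling in the type shorthand above may differ; the helpers here are hypothetical):

# GPTQ-style grouping: 4096x1024 weights, group size 128 on dim 0,
# i.e. 4096 // 128 == 32 groups, a scales table of shape 32x1024.
GROUP = 128

def scale_index_aggregate(i, j):
    # Aggregate view (4096x1024): element [i][j] -> scales[i // GROUP][j].
    return (i // GROUP, j)

def scale_index_unrolled(g, r, j):
    # Unrolled view (32x128x1024): parameters vary per group and column only.
    return (g, j)

# Element [i][j] becomes [i // GROUP][i % GROUP][j] after reshaping,
# and both views pick the same (group, column) parameter entry.
i, j = 1000, 7
assert scale_index_aggregate(i, j) == scale_index_unrolled(i // GROUP, i % GROUP, j)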
