
Add NPU Engine #31

Open
wants to merge 11 commits into
base: main
Conversation

szeyu (Contributor) commented Sep 5, 2024

Add NPU Engine #28

Reference : Phi-3 Cookbook - Intel NPU acceleration library


Update in:

  • modelui.py
  • engine.py
  • setup.py
  • README.md

Add:

  • npu_engine.py
  • requirements-npu.txt
  • npu_models.md

@szeyu szeyu added the type: enhancement / feature New feature or request label Sep 5, 2024
@tjtanaa tjtanaa linked an issue Sep 25, 2024 that may be closed by this pull request
tjtanaa (Contributor) commented Sep 25, 2024

@szeyu The NPU support looks good. I added some reviews; once those are resolved, it should be good to merge.

Before merging this PR, can you first merge the OpenVINO vision model PR, and only then merge this NPU Engine?

@szeyu szeyu linked an issue Sep 26, 2024 that may be closed by this pull request
merge npu
tjtanaa (Contributor) left a comment

I have added some comments, please take a look and let me know if you have any thoughts or questions.

@@ -56,6 +56,16 @@ def __init__(self, model_path: str, vision: bool, device: str = "xpu", backend:

self.engine = OnnxruntimeEngine(self.model_path, self.vision, self.device)
logger.info(f"Initializing onnxruntime backend ({backend.upper()}): OnnxruntimeEngine")

elif self.backend == "npu":

Can you find a way to detect whether this is an Intel or an AMD machine?

If the user specifies the npu device, you have to check whether it is Intel or AMD first.
> If the machine is Intel, continue loading the model with intel_npu_engine.py.
> If the machine is AMD, throw an error message saying that NPU support on AMD platforms is not supported yet.
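The vendor check suggested above could be sketched roughly as follows. This is a hypothetical illustration, not code from this PR: the helper names (`detect_cpu_vendor`, `load_npu_engine`) and the `intel_npu_engine` module/class names are assumptions, and the `platform`-based detection is best-effort (on some Linux setups `platform.processor()` is empty, so the result falls back to `"unknown"`).

```python
import platform

def detect_cpu_vendor() -> str:
    """Best-effort CPU vendor detection using only the stdlib platform module."""
    info = (platform.processor() or platform.machine()).lower()
    if "intel" in info or "genuineintel" in info:
        return "intel"
    if "amd" in info or "authenticamd" in info:
        return "amd"
    return "unknown"

def load_npu_engine(model_path: str):
    """Dispatch to the Intel-only NPU engine, or fail fast on other vendors."""
    vendor = detect_cpu_vendor()
    if vendor == "intel":
        # intel_npu_engine / NPUEngine are assumed names for the file
        # this review asks to rename; adjust to the actual module.
        from intel_npu_engine import NPUEngine
        return NPUEngine(model_path)
    raise NotImplementedError(
        f"NPU support on {vendor.upper()} platforms is not supported yet."
    )
```

A more robust check could read `/proc/cpuinfo` on Linux or query WMI on Windows, but the sketch above shows the dispatch shape the review is asking for.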

@@ -0,0 +1,268 @@
import contextlib

Can you rename npu_engine.py to intel_npu_engine.py, since this is NPU code for Intel only?

DM me on WhatsApp to discuss this if you think otherwise.

tjtanaa (Contributor) commented Oct 6, 2024

@szeyu can you also resolve the conflicts?

szeyu (Contributor, Author) commented Oct 6, 2024

> @szeyu can you also resolve the conflicts?

Resolved the conflicts.

Labels
type: enhancement / feature New feature or request
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[FEAT] Add NPU Engine
2 participants