# Print delegation info in export_llama in verbose #7803

## Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7803. Note: links to docs will display an error until the docs builds have completed. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
```diff
@@ -816,6 +818,13 @@ def _export_llama(args) -> LLMEdgeManager:  # noqa: C901

     builder = builder.to_executorch(passes=additional_passes)

+    if args.verbose:
+        graph_module = builder.edge_manager.exported_program().graph_module
```
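The hunk only shows the first lines of the added block. As a rough sketch, assuming the `get_delegation_info` helper from `executorch.devtools.backend_debug` and `tabulate` for table rendering (the PR's actual imports and formatting may differ), the verbose printout inside `_export_llama` could look like:

```python
# Sketch only -- assumes executorch.devtools.backend_debug.get_delegation_info
# and tabulate; the actual code in the PR may differ.
from executorch.devtools.backend_debug import get_delegation_info
from tabulate import tabulate

if args.verbose:
    graph_module = builder.edge_manager.exported_program().graph_module
    delegation_info = get_delegation_info(graph_module)
    # High-level counts: how many nodes were delegated to a backend
    # versus left to the portable runtime.
    print(delegation_info.get_summary())
    # Per-operator breakdown (a pandas DataFrame) rendered as a table.
    df = delegation_info.get_operator_delegation_dataframe()
    print(tabulate(df, headers="keys", tablefmt="fancy_grid"))
```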
The `EdgeManager` is modified in place, so after `to_executorch` it inserts `memory.alloc` nodes for out-variant ops. These `memory.alloc` nodes are not generated in the final PTE file but are only used to allocate memory for the out tensors of out-variant ops, so the delegation count given here will be inaccurate. I think we should print this right after `to_backend` above.
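As an illustrative sketch of that suggestion (reusing the hypothetical `get_delegation_info` call from above; `partitioners` is assumed to be the name in the surrounding code), the printout would move before `to_executorch`:

```python
# Illustrative placement only: inspect delegation right after to_backend,
# before to_executorch mutates the edge program in place by inserting
# memory.alloc nodes for out-variant ops.
builder = builder.to_backend(partitioners)

if args.verbose:
    # The graph now reflects partitioning but contains no memory.alloc
    # nodes, so delegated/non-delegated node counts are accurate.
    graph_module = builder.edge_manager.exported_program().graph_module
    print(get_delegation_info(graph_module).get_summary())

builder = builder.to_executorch(passes=additional_passes)
```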
That makes sense. Let me update it. Thanks @tarun292!
Force-pushed from 020b0f6 to ce4c066.
@pytorchbot label "topic: not user facing"
Force-pushed from ce4c066 to 48b6c76.
Force-pushed from 48b6c76 to 0e27a3b.
@pytorchbot merge
Mergebot is not configured for this repository. Please use the merge button provided by GitHub.
## Context

#7803 added an import to export_llama but did not add it to the buck target.

Differential Revision: [D68716129](https://our.internmc.facebook.com/intern/diff/D68716129/)

ghstack-source-id: 263234883
Pull Request resolved: #7963

Co-authored-by: Stephen Jia <[email protected]>
Co-authored-by: Martin Yuan <[email protected]>
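For illustration, the fix described above amounts to adding the new dependency to the export_llama Buck target; the target and dependency names below are hypothetical, not the actual entries in the ExecuTorch tree:

```python
# Hypothetical Buck (Starlark) sketch -- names are illustrative only.
python_library(
    name = "export_library",
    srcs = ["export_llama.py"],
    deps = [
        # ... existing deps elided ...
        # New dep backing the import that #7803 added to export_llama:
        "//executorch/devtools/backend_debug:delegation_info",
    ],
)
```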
## Summary

For a better dev and debugging experience, print delegation info in export_llama when verbose. It only shows up with the `-v` option.

## Test plan