Performance Testing Strategy #1169
Perf testing

Torchbench analysis

Torchbench: https://github.com/pytorch/benchmark (benchmarks both training and inference performance)
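For context, each Torchbench model is wrapped in a small, uniform `Model` class whose `get_module()` returns the underlying module and example inputs. Below is a minimal sketch of loading and timing one model in eager mode, assuming a local Torchbench checkout on PYTHONPATH; the constructor arguments (`test`, `device`) have changed between Torchbench versions, and `resnet50` is only an example entry:

```python
# Minimal sketch: load a Torchbench model and time eager inference.
# Assumes https://github.com/pytorch/benchmark is checked out and on PYTHONPATH;
# the Model constructor signature differs between Torchbench versions.
import time
import torch
from torchbenchmark.models.resnet50 import Model  # resnet50 is only an example

bench = Model(test="eval", device="cuda")
model, example_inputs = bench.get_module()

model.eval()
with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        model(*example_inputs)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):                     # timed iterations
        model(*example_inputs)
    torch.cuda.synchronize()
    print(f"avg latency: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms")
```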
Models of interest currently in Torchbench:
List of all models: https://github.com/pytorch/benchmark/tree/main/torchbenchmark/models. The above models are directly inherited from torchvision or timm, which is the same as what we are doing in our perf benchmark today.

Integrating Torchbench: pain points

There won't be any code in https://github.com/pytorch/TensorRT/tree/master/tools/perf. Instead, the workflow would look like:
Alternative

We can expand our perf utility with the same model list as used by Meta; this gives us more control and keeps the setup hermetic. We can also add fx2trt as a backend to our existing utility (see the sketch below). @ncomly-nvidia @narendasan Let me know your thoughts or any missing items we need to explore.
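A minimal sketch of what registering a TensorRT path as an additional backend in the existing perf utility could look like. Everything here is illustrative: the `BACKENDS` registry and `run_backend` helper are hypothetical names, and the fx2trt entry point (`torch_tensorrt.fx.compile`) has moved between torch_tensorrt releases, so check the installed version.

```python
# Sketch of a backend registry for the perf utility; names here are hypothetical,
# and the fx2trt entry point may differ between torch_tensorrt releases.
import torch
import torchvision.models as models

def compile_ts_trt(model, example_inputs):
    # TorchScript path via torch_tensorrt.compile
    import torch_tensorrt
    return torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input(example_inputs[0].shape)],
        enabled_precisions={torch.float},
    )

def compile_fx_trt(model, example_inputs):
    # FX path; treat the exact call as a placeholder for the installed release.
    import torch_tensorrt.fx as trt_fx
    return trt_fx.compile(model, example_inputs)

BACKENDS = {
    "eager": lambda model, inputs: model,   # baseline: run the model as-is
    "ts_trt": compile_ts_trt,
    "fx_trt": compile_fx_trt,
}

def run_backend(name, model, example_inputs, iters=100):
    compiled = BACKENDS[name](model, example_inputs)
    with torch.no_grad():
        for _ in range(iters):
            compiled(*example_inputs)
    return compiled

if __name__ == "__main__":
    net = models.resnet50().eval().cuda()
    inputs = [torch.randn(1, 3, 224, 224, device="cuda")]
    run_backend("fx_trt", net, inputs)
```

The same model list and timing/reporting code would then be shared across backends, so adding fx2trt (or any future path) is a matter of registering one more compile function.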
Then is the plan to submit PRs to TB to add such functionality?
By "latest", do you mean PyT master or PyT main?
What are some trade-offs of just using the TB models in our perf utility, instead of modifying TB directly? Can we characterize TB's perf/evaluation vs. our perf utility's?
Adding TB PoC @xuzhao9 for some discussion. A good question for @xuzhao9: does TB plan to roll out with release tags? I think it is good to maintain our own benchmark, as it is controllable and debuggable. It could also be a playground for our users to try things quickly without installing TB.
@dheerajperi can you please update the current status? Integration w/ Torchbench, the stand-alone benchmark tool, etc.
Performance Testing Strategy for Torch-TensorRT
Questions that need to be answered:
@dheerajperi to update this ticket with the answers to the above by 7/14