[WIP] Thunderfx report #1636
base: main
Conversation
Hi @mruberry @IvanYashchuk @kshitij12345, I've prepared an initial version of the report, which you can find in the PR description. Could you let me know if this aligns with the direction you had in mind? I'd also appreciate any suggestions you have regarding the report's content. Thank you!
FYI @riccardofelluga, too, since he's had some experience with benchmarking.
Super cool! A few notes and questions:
For this PR, let's focus on clarifying these small issues and merging it. For the next PR, I think it'd be useful to create an FXReport object. What I'm thinking is:
So someone could do something like:
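Sketching the shape of it with made-up names (none of FXReport, FXGraphReport, fx_graph_reports, or write_repro exist in this PR; the stubs below are only there so the sketch runs on its own):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the proposed API shape; these are stub classes,
# not code from this PR.

@dataclass
class FXGraphReport:
    name: str

    def write_repro(self, folder: str, executor: str) -> None:
        # The real object would emit a runnable reproducer script for this
        # FX graph using the named executor; the stub just reports the file name.
        print(f"writing {folder}/{self.name}_{executor}.py")


@dataclass
class FXReport:
    fx_graph_reports: list[FXGraphReport] = field(default_factory=list)


report = FXReport([FXGraphReport("graph0"), FXGraphReport("graph1")])
for graph_report in report.fx_graph_reports:
    for executor in ("torch_eager", "torch_compile", "thunder", "thunder_cugraph"):
        graph_report.write_repro("/tmp/repros", executor)
```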
And that would generate four reproducers for each of the FX graphs in the program, each one using a different requested executor. Writing the reproducers directly like this may not be super interesting in isolation, but once that's working we can add more of the data that … What are your thoughts, @kiya00?
Hi @mruberry, thank you so much for your detailed suggestion!
Yes, that's the naming; it corresponds to the name saved in the subgraph_info.
This part comes from the custom_builtins that torch uses.
It depends on whether we can convert any executor code into a corresponding string; I'm not very familiar with that. Is it possible if I simply use …?
That's a great suggestion about creating the FXReport/FXGraphReport objects!
Just a pair of nits, but for the rest it looks good to me! One thing I would add is printing the output folder path, so that it's easier for users to go and fish out the reproducer files.
On a more technical note, is there a way to make sure that the memory reports are accurate? I've tested it a bunch and they seem to be correct, though sometimes with repeated tests one might interfere with the other. Maybe we can add a little warning of some sort.
def save_reproducer_to_folder(
    self,
    reproducer_folder: str | PathLike,
    use_pytest_benchmark: bool = False,
):
There does not seem to be any check to validate the folder string passed here; maybe it's better to add it here, or later in the reproducer function, since it's user facing.
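For instance, a minimal sketch of the kind of check meant here (the helper name is made up; this is not the PR's code):

```python
from os import PathLike
from pathlib import Path


def _validated_folder(folder: str | PathLike) -> Path:
    """Reject paths that exist but are not directories, and create the folder otherwise."""
    path = Path(folder)
    if path.exists() and not path.is_dir():
        raise ValueError(f"{path} exists and is not a directory")
    path.mkdir(parents=True, exist_ok=True)
    return path
```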
bench_executors_dict["thunder_cugraph"]=partial(thunder.jit, transform=CUDAGraphTransform()) | ||
""" | ||
|
||
COMMAND_LINE_ARGS = f""" |
No need for the f prefix, as there is no variable included in the string:
- COMMAND_LINE_ARGS = f"""
+ COMMAND_LINE_ARGS = """
print(
    f"Failed to run the benchmarking script({benchmark_folder / file}), exception: {benchmark_result.stderr}"
)
Do we need to add a file cleanup in case of failure?
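Something like this, for example (a sketch only; the names mirror the snippet above and the exact behavior is an assumption):

```python
import subprocess
from pathlib import Path


def run_benchmark_script(benchmark_folder: Path, file: str) -> None:
    # Run the generated benchmark script in a subprocess and capture its output.
    benchmark_result = subprocess.run(
        ["python", str(benchmark_folder / file)], capture_output=True, text=True
    )
    if benchmark_result.returncode != 0:
        print(
            f"Failed to run the benchmarking script({benchmark_folder / file}), exception: {benchmark_result.stderr}"
        )
        # Suggested cleanup: remove the failing script so it does not linger in the report folder.
        (benchmark_folder / file).unlink(missing_ok=True)
```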
Sounds good.
No worries, that sounds great.
Great question! So it is, in general, difficult to convert a Python callable into a string representing how to call that callable. Using a fixed set of executors that you know how to print is the right thing to do. Passing in a name to select which to run is a good start. In the future we can maybe let developers specify their own strings, too (per my comment above).
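Concretely, a sketch of the fixed-set approach (the dict and function names are assumptions that mirror the bench_executors_dict snippet reviewed above):

```python
import torch
import thunder

# A fixed set of executors the report knows how to run...
bench_executors_dict = {
    "torch_eager": lambda fn: fn,
    "torch_compile": torch.compile,
    "thunder": thunder.jit,
}

# ...and matching strings that can be written verbatim into a generated reproducer script.
executor_str_dict = {
    "torch_eager": "lambda fn: fn",
    "torch_compile": "torch.compile",
    "thunder": "thunder.jit",
}


def executor_for(name: str):
    # Selecting by name avoids converting an arbitrary callable into source text.
    return bench_executors_dict[name], executor_str_dict[name]
```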
That's right!
Before submitting
What does this PR do?
1. Adds the `thunderfx_report` API:
`thunderfx_report(fn: Callable, *args, compile_kwargs: dict = None, folder_path: str | PathLike = "/tmp/thunderfx_report", check_consistency: bool = True, check_benchmark: bool = True, save_consistency_inputs: bool = False, save_benchmark_inputs: bool = False, **kwargs,)`
- If `thunderfx` fails, it saves the repro scripts if possible.
- If `check_consistency=True`, it checks the results of `thunder.jit` (with `compile_kwargs`) against eager. The scripts are saved in the `consistency` folder, and the inputs are saved in a `.pt` file if `save_consistency_inputs=True`, otherwise random tensors are used.
- If `check_benchmark=True`, the benchmark scripts are saved in the `benchmark` folder, with the option to save the input tensors.

Example
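A hedged usage sketch based on the signature above (the toy function and inputs are made up, and the import location of `thunderfx_report` inside thunder is omitted here):

```python
import torch


def fn(x):
    y = torch.sin(x)
    # Data-dependent control flow introduces a graph break, so Dynamo produces two FX graphs.
    if y.sum() > 0:
        return y + x
    return y - x


x = torch.randn(4, 4)

# thunderfx_report is the API added by this PR; its import path is not shown here.
thunderfx_report(
    fn,
    x,
    folder_path="/tmp/thunderfx_report",
    check_consistency=True,
    check_benchmark=True,
    save_benchmark_inputs=True,
)
```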
The saved file structure (when `save_benchmark_inputs=True`):
The report is displayed on the screen as follows (there are 2 FX graphs: graph0_thunder_0, graph1_thunder_0):
Content of the report
2. Modifies the reproducer script to have command line options.
`save_reproducer_to_folder(self, reproducer_folder: str | PathLike, use_pytest_benchmark: bool = False, save_input_tensor=False,)` can be used to save the benchmark/reproducer script with random or dumped `.pt` input tensors.
The reproducer script has configurable options; the default is `--check_consistency=False --compute_type=forward --executor=thunder`.
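For reference, a sketch of how such options could be parsed inside a generated script (option names and defaults come from the description above; the parsing details are assumptions, not the PR's code):

```python
import argparse


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="ThunderFX reproducer options")
    # Convert "True"/"False" strings explicitly; argparse's bool() would treat any non-empty string as True.
    parser.add_argument("--check_consistency", type=lambda s: s.lower() == "true", default=False)
    parser.add_argument("--compute_type", default="forward")
    parser.add_argument("--executor", default="thunder")
    return parser.parse_args()
```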
The repro script example
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃