Includes all deferred conversion costs in benchmarks #34
Conversation
…fully evaluate so benchmark includes all costs.
```diff
@@ -489,7 +502,7 @@ def bench_shortest_path(benchmark, graph_obj, backend_wrapper):
     node = get_highest_degree_node(graph_obj)

     result = benchmark.pedantic(
-        target=backend_wrapper(nx.shortest_path),
+        target=backend_wrapper(nx.shortest_path, force_unlazy_eval=True),
```
I think it would be nice to be able to run this benchmark with both `force_unlazy_eval=True` and `force_unlazy_eval=False`.
Yes, I agree that's useful. I'll add a FIXME/issue to parameterize that. FWIW, I ran this locally with and without it: fortunately, run times don't show a noticeable change until the hollywood (57M edges) or larger datasets.
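A minimal sketch of how that parameterization might look, assuming the benchmarks run under pytest with pytest-benchmark. The fixture names mirror the diff above; the `args`/`kwargs` passed to `benchmark.pedantic` are assumptions, since the real benchmark's full call is not shown in this excerpt:

```python
import networkx as nx
import pytest


# Sketch only: run the benchmark in both modes so the cost of the deferred
# device-to-host conversion can be measured separately.
@pytest.mark.parametrize("force_unlazy_eval", [True, False])
def bench_shortest_path(benchmark, graph_obj, backend_wrapper, force_unlazy_eval):
    node = get_highest_degree_node(graph_obj)  # helper from the benchmark module
    benchmark.pedantic(
        target=backend_wrapper(nx.shortest_path, force_unlazy_eval=force_unlazy_eval),
        args=(graph_obj,),          # assumed call signature
        kwargs={"source": node},    # assumed call signature
        rounds=1,
        iterations=1,
    )
```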
This refactors the helper that returns the callable to be benchmarked so that it fully expands Mapping return types. This is needed because those return types may lazily convert values from device to host, and expanding them ensures the complete cost is included in the measured run. It is similar to exhausting returned generators that defer computation in NetworkX, and doing so keeps us consistent in benchmarking the full compute and/or conversion costs for both libraries.
I changed the option name from `exhaust_returned_iterator` to `force_unlazy_eval` since this is no longer being used only for iterators, but I'm taking suggestions for a better name.
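For reference, here is a rough sketch of what "fully expanding Mapping return types" behind a `force_unlazy_eval` option can look like. This is an illustrative reconstruction, not the actual nx-cugraph helper: the wrapper structure and type checks below are assumptions, and only the parameter name `force_unlazy_eval` comes from this PR.

```python
from collections.abc import Mapping
from functools import wraps


def backend_wrapper(func, force_unlazy_eval=False):
    """Illustrative sketch of a benchmark wrapper (not the actual nx-cugraph code).

    When force_unlazy_eval is True, fully expand lazy return types so that any
    deferred device-to-host conversion happens inside the timed call.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if force_unlazy_eval:
            if isinstance(result, Mapping):
                # Touch every key/value so a Mapping that lazily converts values
                # from device to host pays that cost here, inside the benchmark.
                result = dict(result)
            elif hasattr(result, "__next__"):
                # Exhaust generators/iterators that defer computation, mirroring
                # how lazy NetworkX results are handled for comparable timings.
                result = list(result)
        return result

    return wrapper
```

Expanding Mappings and exhausting iterators in the same place keeps the measured cost comparable between NetworkX and the GPU backend, per the description above.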