There currently seem to be no benchmarks, which may make finding and validating potential optimizations harder.
Benefits of having benchmarks
Adding benchmarks for some self-contained components, such as the A* pathfinder, could give a clearer picture of performance improvements and regressions across changes and better guide any optimization attempts. As an example of those potential benefits, issue #442 suggests that the A* pathfinding implementation could be improved, but there is currently no way to reliably measure performance gains. Adding benchmarks would make it easy to see the effects of any change.
Drawbacks
Adding benchmarks may require depending on more crates. While those dependencies would be development-only, it is still something to consider. Additionally, while not very hard to maintain, benchmarks would need to be updated whenever the APIs they test change.
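For illustration, a rough sketch of how such a dependency could be kept development-only in Cargo.toml, assuming the criterion crate proposed below (the version number and benchmark name are placeholders):

```toml
# Sketch: criterion stays a dev-dependency, so it is only compiled for
# benchmarks and tests, never for release builds of the engine.
[dev-dependencies]
criterion = "0.5"   # version is illustrative

# Each benchmark file (e.g. benches/astar.rs) is registered as a bench
# target with the default harness disabled, as criterion requires.
[[bench]]
name = "astar"
harness = false
```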
Proposed solution
Add benchmarks with criterion. It gives a high-quality overview of changes in performance (with detailed graphs containing multiple samples and highlighting outliers) and is fairly easy to use. To gauge how much effort would be required, I have already written a simple benchmark for A* pathfinding with criterion. The entire benchmark fit in under 90 LOC and was simply a modification of the relevant test. It seems like many benchmarks could be written similarly, requiring little to no effort. Some changes are still needed, but they are mostly minor (e.g. switching random generators to seeded ones for consistency). I could write benchmarks for other parts of the engine that are easy to test standalone (e.g. the rest of the algorithms in fyrox::utilis), if the added dev-dependency on criterion is not an issue.
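For reference, a minimal sketch of what such a benchmark could look like. The grid setup and the `bfs_path_len` stand-in workload are purely illustrative so the example compiles on its own; the real benchmark would instead call fyrox's A* pathfinder, adapted from the existing test. The seeded rand RNG illustrates the "seeded generator" change mentioned above:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use rand::{rngs::StdRng, Rng, SeedableRng};
use std::collections::VecDeque;
use std::hint::black_box;

/// Stand-in workload: shortest path length on a boolean grid via BFS.
/// In the real benchmark this would be a call into fyrox's A* pathfinder.
fn bfs_path_len(grid: &[Vec<bool>], start: (usize, usize), goal: (usize, usize)) -> Option<usize> {
    let (w, h) = (grid[0].len(), grid.len());
    let mut dist = vec![vec![usize::MAX; w]; h];
    let mut queue = VecDeque::from([start]);
    dist[start.1][start.0] = 0;
    while let Some((x, y)) = queue.pop_front() {
        if (x, y) == goal {
            return Some(dist[y][x]);
        }
        for (nx, ny) in [(x.wrapping_sub(1), y), (x + 1, y), (x, y.wrapping_sub(1)), (x, y + 1)] {
            if nx < w && ny < h && grid[ny][nx] && dist[ny][nx] == usize::MAX {
                dist[ny][nx] = dist[y][x] + 1;
                queue.push_back((nx, ny));
            }
        }
    }
    None
}

fn pathfinding_benchmark(c: &mut Criterion) {
    // Seeded RNG so every run benchmarks exactly the same grid.
    let mut rng = StdRng::seed_from_u64(42);
    let size = 64;
    let mut grid: Vec<Vec<bool>> = (0..size)
        .map(|_| (0..size).map(|_| rng.gen_bool(0.8)).collect())
        .collect();
    // Keep start and goal cells walkable.
    grid[0][0] = true;
    grid[size - 1][size - 1] = true;

    c.bench_function("path 64x64 grid", |b| {
        // black_box keeps the compiler from optimizing the work away.
        b.iter(|| bfs_path_len(black_box(&grid), (0, 0), (size - 1, size - 1)))
    });
}

criterion_group!(benches, pathfinding_benchmark);
criterion_main!(benches);
```

With something like this in place, `cargo bench` runs the benchmark and criterion writes its report (including the graphs mentioned above) under target/criterion.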