As the project evolves and we add more features, it will be important to continually check that we aren't regressing on performance metrics. This issue can be used to discuss which performance metrics we consider important. A few that have been discussed previously:
C20: this seems to be used externally as a benchmark molecule
DFT throughput, e.g. the number of single-point DFT evaluations we can perform per second per chip
Beyond this, we may want some microbenchmarks for performance-critical steps in the SCF algorithm (e.g. the eigensolve or evaluation of the electron repulsion integrals, ERI)
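As a starting point, such a microbenchmark could be as simple as a wall-clock timer with warmup. The sketch below is only illustrative, not the project's actual harness: the `benchmark` helper, the matrix size, and the use of NumPy's `eigh` as a stand-in for the SCF eigensolve are all assumptions.

```python
import time
import numpy as np

def benchmark(fn, *args, warmup=2, repeats=5):
    """Time fn(*args) and return the best wall-clock time in seconds.

    Warmup iterations are run first and discarded so that one-off costs
    (JIT compilation, cache warming) don't pollute the measurement.
    """
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)

# Illustrative microbenchmark: a dense symmetric eigensolve, one of the
# performance-critical SCF steps mentioned above. The size (256) is a
# placeholder, not tied to any particular molecule or basis set.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
h = (a + a.T) / 2  # symmetrize, as a Fock matrix would be
best_time = benchmark(np.linalg.eigh, h)
throughput = 1.0 / best_time  # eigensolves per second on this host
```

Reporting the best of several repeats (rather than the mean) is a common choice for microbenchmarks, since the minimum is the least contaminated by scheduler noise.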
Ideally we would track this continuously for every PR, so that we get feedback close to the time when possible regressions are introduced. This may require running these checks on IPU hardware. Further requirements are to be scoped as part of addressing this issue.
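One lightweight way to turn per-PR timings into a pass/fail signal is to compare each measurement against a stored baseline with a tolerance. The function name, baseline source, and 10% threshold below are all hypothetical, a sketch of the idea rather than a proposed implementation.

```python
def check_regression(measured_s, baseline_s, tolerance=0.10):
    """Return True if a measured time is within `tolerance` of the baseline.

    `measured_s` and `baseline_s` are wall-clock times in seconds; a
    measurement up to 10% slower than baseline passes by default, which
    leaves headroom for run-to-run noise on shared CI machines.
    """
    return measured_s <= baseline_s * (1.0 + tolerance)

# A 5% slowdown passes; a 20% slowdown would be flagged for review.
ok = check_regression(1.05, 1.00)
flagged = not check_regression(1.20, 1.00)
```

The tolerance would need tuning once we see how noisy the CI runners (or IPU hardware) actually are; too tight and every PR fails spuriously, too loose and slow drift goes unnoticed.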