We should consider preparing a model output test for some of our base case settings so that, as large model changes are introduced, we can monitor and measure drift in key outputs such as prevalence, incidence, mortality, etc.
I imagine this would take the form of a small population run for several model iterations, where we capture the mean and std_dev for a tagged version release (these baselines would be saved historically for each release). As we prepare a new release, the same models should be run against the updated codebase and statistically verified as not drifting or deviating. We can't just use raw integration tests, because the random number calls will get disrupted as the model functions change.
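For illustration, a drift check could look roughly like the sketch below (pytest-style). This is only a sketch of the idea, not an existing API: `run_base_case_model`, the baseline file path, the metric names, and the z-score tolerance are all placeholder assumptions for whatever the codebase actually exposes and whatever thresholds we agree on.

```python
import json
import math
import statistics

import pytest

# Hypothetical entry point for running the small base-case population once.
from mymodel import run_base_case_model  # placeholder, not an existing function

N_ITERATIONS = 30   # number of replicate runs of the small population
Z_TOLERANCE = 3.0   # flag drift beyond ~3 standard errors of the baseline mean
METRICS = ["prevalence", "incidence", "mortality"]


def load_baseline(path="tests/baselines/v1.2.0.json"):
    """Baseline mean/std_dev per metric, captured at a tagged release (placeholder path)."""
    with open(path) as f:
        # e.g. {"prevalence": {"mean": 0.12, "std_dev": 0.01}, ...}
        return json.load(f)


@pytest.mark.parametrize("metric", METRICS)
def test_no_output_drift(metric):
    baseline = load_baseline()[metric]

    # Re-run the same small-population scenario on the current codebase.
    results = [run_base_case_model(seed=i)[metric] for i in range(N_ITERATIONS)]
    new_mean = statistics.mean(results)

    # Compare the new mean against the stored baseline mean, scaled by the
    # standard error implied by the baseline std_dev and the number of runs.
    sem = baseline["std_dev"] / math.sqrt(N_ITERATIONS)
    z = abs(new_mean - baseline["mean"]) / sem

    assert z < Z_TOLERANCE, (
        f"{metric} drifted: baseline mean {baseline['mean']:.4f}, "
        f"new mean {new_mean:.4f} (z={z:.2f})"
    )
```

The key point is that the assertion is statistical (a tolerance around the baseline mean) rather than an exact-value comparison, so it tolerates changes in random number consumption while still catching real shifts in the outputs.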