
Integration Test for Reproducibility #56

Open
MaximilianKing opened this issue Mar 6, 2020 · 0 comments
We should consider preparing a model output test for some of our base-case settings so that, as large model changes are introduced, we can monitor and measure drift in key outputs such as prevalence, incidence, and mortality.

I imagine this would take the form of a small population run for several model iterations, where we capture the mean and std_dev of key outputs for a tagged version release (saved historically for each release). As we prepare a new release, the same scenarios should be run against the updated codebase and the outputs statistically checked to verify that we are not drifting or deviating. We can't just use raw integration tests, because the sequence of random-number calls gets disrupted as the model functions change.
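A rough sketch of what such a test might look like, assuming a hypothetical `run_simulation(scenario, seed)` entry point that returns a dict of the key outputs; the baseline file path, scenario name, and 2-std_dev tolerance are placeholders, not existing code:

```python
import json
import statistics

from model import run_simulation  # hypothetical entry point returning {"prevalence": ..., ...}

N_ITERATIONS = 20
BASELINE_FILE = "tests/baselines/base_case_small.json"  # saved for each tagged release
KEY_OUTPUTS = ["prevalence", "incidence", "mortality"]


def summarize(results):
    """Collapse per-iteration outputs into mean/std_dev per key output."""
    return {
        key: {
            "mean": statistics.mean(r[key] for r in results),
            "std_dev": statistics.stdev(r[key] for r in results),
        }
        for key in KEY_OUTPUTS
    }


def test_base_case_reproducibility():
    # Re-run the small base-case population on the current codebase.
    results = [
        run_simulation(scenario="base_case_small", seed=i) for i in range(N_ITERATIONS)
    ]
    current = summarize(results)

    # Load the mean/std_dev captured at the last tagged release.
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)

    # Flag drift if the new mean falls outside ~2 std_devs of the saved baseline.
    for key in KEY_OUTPUTS:
        drift = abs(current[key]["mean"] - baseline[key]["mean"])
        assert drift <= 2 * baseline[key]["std_dev"], (
            f"{key} drifted: {current[key]['mean']:.4f} vs "
            f"baseline {baseline[key]['mean']:.4f} ± {baseline[key]['std_dev']:.4f}"
        )
```

Comparing distributions rather than exact values is what lets this survive changes to the order of random-number calls; the tolerance (2 std_devs here) would need tuning per output.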
