Contribution Review Criteria
When contributing to VisionEval, the contribution will be evaluated against the set of questions below. These review criteria help ensure that contributions to VisionEval satisfy the Goals and Objectives of the system. Make sure to review the Developer Orientation before making a submittal. Questions are categorized by review type, and the automated test system provides helpful diagnostics where indicated.
To submit a contribution for review, issue a pull request with a comment introducing your contribution. The comment should include a brief overview, responses to the questions below, and pointers to related information. The submittal should ideally be self-contained, so documentation of the methods should be included in the R package as well. The Repository Manager and Review Team Chair will handle the review request, comment on each question, complete the feedback form, and reply to the pull request. It's a good idea to set up a pre-submittal meeting to discuss questions and better understand expectations. See the example submittal and review for more information.
| Review Question | Test System Help | Review Team - Software | Review Team - Documentation | Review Team - Methods |
| --- | --- | --- | --- | --- |
| 1. Consistent with Design Specs. Does it contain all the elements that are required by the VisionEval system specifications? To help ease maintenance of VE, minimizing the use of new R libraries, and reusing R libraries used by existing VE modules when possible, is recommended. | x | x | x | |
| 2. Valid Theory and Methods. Why is it better than and/or different from existing modules? Does it reflect good science, with documentation justifying this claim? Is it consistent with good practice in strategic modeling? How might it overlap with existing modules? How does it fit within the VE ecosystem of modules? If the module contains multiple functions, documentation summarizing the functions and their variables is recommended (see the example submittal and review). | x | x | | |
| 3. Documentation. Is the module documentation complete? Does it include documentation of model estimation, algorithms, and instructions for use? | x | x | | |
| 4. Regional Estimation Ready (if applicable). If the module allows the estimation of regional parameters, does it provide default data, clearly document what the estimation data must contain and how it must be formatted, and include proper data specifications to ensure that the user's input data are correct (see the specification sketch following this table)? | x | x | x | |
| 5. Geography. Is it based on geographic definitions that are consistent with the model system definitions? | x | x | | |
| 6. Runtime. Does the module compute quickly enough, and does the documentation justify this claim? | x | x | | |
| 7. Complete Code and Data. Does it include all source files and data? If a contributed module does not include all source data, it should include a minimal example data file for testing, so that the data structure needed to run the module is clear. It should also include clear instructions on how to fetch the data and/or a clear explanation of why non-included data are confidential, along with contact information for the data owners. | x | x | x | |
| 8. Non-R Code (if applicable). Does the module only call R code and packages that work on all operating systems? If the code includes any non-R code (e.g., FORTRAN, C++), will that code compile on all operating systems? | x | x | | |
| 9. License. Is it licensed with the VisionEval license, which allows the code to be freely distributed and modified and includes attribution so that the 'provenance' of the code can be tracked? Does it include an official release of ownership from the funding agency? | x | | | |
| 10. Framework. Does it interact with the computing environment only by returning a properly structured list to the framework (i.e., it does not modify the global environment, does not read or write files, and only calls framework functions that are allowed)? Does it pose any security risk to the user (e.g., does it use unsecured methods for authentication)? See the module function sketch following this table. | x | x | | |
| 11. Pass Automated Tests. Does it include regression tests to enable checking that consistent results will be returned when updates are made to the framework and/or the R programming environment? | x | x | | |
| 12. Sufficient Automated Tests. Does it include sufficient test coverage and test data? Does it pass the 'testModule' test, which validates that it will run correctly in the model system (see the test script sketch following this table)? If possible, does it test against the existing RVMPO example? | x | x | | |
| 13. Other. Any other comments, such as implementation issues (e.g., impact on other modules or the VisionEval framework, changes to automated testing, confidential or oversized data issues) or suggestions for changes to improve the developer experience with VisionEval? | x | x | x | |
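Criterion #4 refers to data specifications. As an illustration, the sketch below shows a minimal `Inp` specification for a hypothetical Azone-level input. The `item()` and `items()` helpers are the framework's list-building aliases; every name and value shown here is illustrative rather than taken from an actual module:

```r
# A minimal sketch (hypothetical names and values) of a module
# specifications list. The Inp block declares what the user input file
# must contain so the framework can validate input data before running.
MyModuleSpecifications <- list(
  RunBy = "Region",
  Inp = items(
    item(
      NAME = "AveDvmtPerCapita",    # hypothetical input field
      FILE = "azone_ave_dvmt.csv",  # hypothetical input file
      TABLE = "Azone",
      GROUP = "Year",
      TYPE = "compound",
      UNITS = "MI/PRSN/DAY",
      NAVALUE = -1,
      SIZE = 0,
      PROHIBIT = c("NA", "< 0"),
      ISELEMENTOF = ""
    )
  )
)
```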
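Criterion #10 is easiest to see in code. The sketch below outlines a module function that touches the computing environment only through its argument and return value. The names and the calculation are hypothetical, and `initDataList()` is assumed here to be the framework helper for initializing a results list:

```r
# A minimal sketch (hypothetical names) of a module function. The
# framework passes a list L containing the datasets named in the module's
# Get specifications; the function returns a structured results list
# matching its Set specifications. It reads and writes no files and does
# not modify the global environment.
MyModule <- function(L) {
  # Hypothetical calculation using only data supplied by the framework
  Dvmt_ <- 0.001 * L$Year$Household$Income
  # Return results in the structure the framework expects
  Out_ls <- initDataList()
  Out_ls$Year$Household <- list(Dvmt = Dvmt_)
  Out_ls
}
```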
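Criteria #11 and #12 are typically addressed with a test script in the module package (conventionally `tests/scripts/test.R`) that calls the framework's `testModule()` function. A minimal sketch, assuming the package's test data are already set up as in existing VE module packages (the module name is hypothetical):

```r
# A minimal sketch of a module test script (module name is hypothetical).
library(visioneval)
# testModule() checks the module's specifications and runs it in the
# model system using the package's test data.
testModule(
  ModuleName = "MyModule",
  LoadDatastore = TRUE,  # start from a datastore saved by an earlier test
  SaveDatastore = TRUE,  # save results for downstream module tests
  DoRun = TRUE           # run the module rather than only checking specs
)
```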
The Review Team will provide feedback on each review criterion above and tag each submittal category as follows:
| Status | Software | Documentation | Methods |
| --- | --- | --- | --- |
| Accept | | | |
| Accept but recommend revisions | | | |
| Do not accept | | | |
| Abstain | | | |