[PROPOSAL & DISCUSSION] Organization-wide Testing Requirements #135
I am going to go ahead and get this ball rolling. Some things I would be interested in seeing are:
To help establish guidelines, I am going to start going through each repository, reviewing its testing framework, and confirming its current requirements. I will then keep a running document with each repository's requirements listed. From there, I will synthesize a set of requirements that should be established.
Follow-up from: SDK, SQL, Search Relevance, Dashboards Anywhere, OpenSearch-js, OpenSearch, OpenSearch Net, k-NN, Neural Search, OpenSearch Dashboards, Anomaly Detection, Benchmark, Build, helm-charts, opensearch-dashboards-functional-tests, OUI, ansible, Search Processor. Repositories filed: 60
What does the outcome of this proposal look like: is it a process, a tool, or a product feature? How would we distinguish passable testing requirements from great ones? If we had those requirements and executed on them, how would the project be better for it?
Hi @peternied, thank you for following up. The purpose of this discussion is to finalize a set of testing requirements that all development repositories will be required to maintain. This will take the form of guidelines in the Developer Guide stating that repositories must require "x, y, and z" before code is merged, and changes to those repositories will be expected to pass those tests before merging. The hope is that this post will garner feedback (either here or directly) on what requirements people would like to see, and then we can go from there.

As things stand, there is a very wide range of testing requirements across the organization. If Repository A requires 85% code coverage and runs unit, integration, and backwards-compatibility tests, it may expect Repository B to do the same. However, because there is no standardized bar, Repository B instead has 98% code coverage but only unit and integration tests. This non-standard quality bar creates a poor release experience and has resulted in features being released that don't meet user expectations.

You bring up a really good point about what the requirement could look like. The plan is to include it in documentation and require the development repositories to implement it. That said, a tool or GitHub workflow that checked each repository's state could help keep everyone accountable (a rough sketch follows this comment). The idea is not to audit repositories but to help ensure we are all on the same page. I spoke with quite a few people about what they would like to see change with releases, and the quality of changes was a recurring topic.
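To make that concrete, here is a minimal sketch of such an accountability workflow. The file names it checks for (CONTRIBUTING.md and a test workflow) are placeholders for whatever the final requirements turn out to be, not an agreed design.

```yaml
# Hypothetical accountability check: verifies a repository declares the
# agreed requirements rather than auditing its test results.
name: Testing requirements present

on:
  pull_request:
  schedule:
    - cron: '0 0 * * 1'  # weekly sanity check

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Confirm required documents and workflows exist
        run: |
          # Placeholder file list; the real list would come out of this
          # discussion.
          for f in CONTRIBUTING.md .github/workflows/test.yml; do
            [ -f "$f" ] || { echo "Missing required file: $f"; exit 1; }
          done
```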
Thanks @scrawfor99 for driving this. To summarize, but also to make sure I understand: the problem is an uneven quality bar across the repositories in our organization. Does this capture what you're thinking too?
Hi @davidlago, you are exactly correct. I am a bit busy this week but am hoping to file the same type of documentation issue for all repositories and then produce a small, easy-to-digest analysis. After seeing where things stand, I will take further steps along the lines of my response to Peter and what you mentioned in your comment.
Hi @scrawfor99, I see BWC confirmation being added to dashboards plugin repo issues. Do we have BWC for dashboards plugins? I checked https://github.com/opensearch-project/opensearch-plugins/blob/main/TESTING.md#backwards-compatibility-testing, but it seems to cover only OpenSearch plugins.
Hi @joshuali925, thank you for following up on this discussion. You are correct: to my knowledge, there are no BWC tests for dashboards plugins. The BWC column in the table can effectively be ignored for dashboards plugins, since it is unlikely that would ever be an expectation; it is not clear what backwards compatibility would even entail for the frontend-focused repositories. It is included in the table for consistency and readability.
Statistics based on findings
This table provides presence metrics for each of the tests documented in this process.
TL;DR: What do you want the organization to use as testing requirements? What certifies something is "done"?
What are you proposing?
A major topic discussed at the milestone meetings hosted by @CEHENKLE has been establishing a universal quality bar. An actionable mechanism for this is using GitHub Actions and checks to assert that changes meet expectations.
This issue should be used as a discussion board for what these expectations are. Everyone across the @opensearch-project is encouraged to contribute, as this is a change that will affect us all. The final result of this discussion will be a "universal" set of expectations that all repositories will employ. This means that maintainers will have an established quality bar for merging changes, and contributors can know all requirements before they start working.
Which users have asked for this feature?
The milestone group worked with numerous repositories to learn their pain points: the things that were preventing them from releasing on time and making a great product. One of the consistent patterns was a lack of established expectations. One group wanted changes to have "a, b, and c," while another wanted "x, y, and z." This proposal seeks to establish a set of accepted expectations across all repositories.
What is the contributor experience going to be?
Under the new system, a contributor will be able to reference the CONTRIBUTING.md document of any @opensearch-project repository and see that repository's expectations. Most of these expectations will be standardized across the organization, but there may be additional requirements depending on each repository's use case. For example, the Security repository may require that the plugin-install workflow passes, but this would not be required of the whole organization.

This discussion establishes a base quality bar across the organization. If any repository wants to add further requirements specific to it, that is encouraged; it will need to list the additional requirements alongside the standardized ones in its CONTRIBUTING.md.

How does this impact flaky tests?
Currently, we are trying to reduce the number of flaky tests in the project. Unfortunately, this is easier said than done, and the process is ongoing. Part of this discussion should be to establish expectations for handling existing flaky tests, and the new quality bar will likely expect that no new flaky tests are introduced. One method is to run all new tests in a separate workflow, isolated from the older tests, which prevents currently flaky tests from impacting newly added changes (a sketch follows below).
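As an illustration of that isolation, here is a minimal sketch. It assumes a Gradle-based repository and a hypothetical newTests task that selects tests carrying a JUnit 5 @Tag("new") annotation; neither name is an agreed convention.

```yaml
# Hypothetical workflow: runs only newly added tests in isolation, so
# failures in pre-existing (potentially flaky) suites cannot block them.
name: New tests (isolated)

on:
  pull_request:

jobs:
  new-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: '17'
      - name: Run newly added tests only
        # "newTests" is a hypothetical Gradle task filtering on a
        # JUnit 5 @Tag("new") annotation; the existing suites run in a
        # separate workflow and cannot fail this check.
        run: ./gradlew newTests
```

The design point is the separation itself: the existing suites keep their own workflow, so a flaky legacy test failing does not mark a contributor's new, passing tests as red.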
What will it take to implement?
In order for repositories to adopt the new quality bar, they should update their CONTRIBUTING.md document and their testing frameworks. Since the expectations are being standardized, it should be relatively easy to port them between repositories; if tests are written as GitHub Actions workflows, they can likely be copied directly with minimal changes (a sketch follows below).

Examples of some of the types of tests that may be helpful can be found in the identity project, which has workflows for testing a wide range of functions on commit. There may be many more tests worth mentioning, however, and expectations can also include things like updated developer guides.
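As a concrete illustration of that portability, here is a minimal sketch of a baseline check. The Gradle task names (test, jacocoTestReport) and the Java toolchain are assumptions; a Node-based repository would substitute its own commands, and the portable part is the workflow shape rather than its contents.

```yaml
# Illustrative only: a baseline check that could be copied between
# repositories with minimal edits. Task names and versions are
# assumptions, not agreed requirements.
name: Baseline quality bar

on:
  pull_request:

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: '17'
      - name: Run unit tests and produce a coverage report
        run: ./gradlew test jacocoTestReport
```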
Any remaining open questions?
We are still trying to determine what these expectations should be, so it is important that all opinions are heard. If you have any comments, concerns, or suggestions, please share them below.