Replies: 6 comments 2 replies
-
Apologies for talking obliquely here - I'll respond to your points more
directly later today.
I would prefer that packages in "testing" be accepted more freely than they
are now, and "stable" be reserved for those which have been sufficiently
validated while on the testing branch.
Currently it seems difficult for things to even reach the testing branch,
possibly because it is being used as most devs' primary branch.
I would also prefer bug fixes to reach stable much more quickly than
feature updates.
These seem like reasonable goals. :)
…On Thu, 21 Jan 2021, 15:27, raisjn wrote:
Currently, the way in which PRs are approved is not well documented. Until
now, my expectation has been: when merging into testing, the PR gets a
look-over to make sure it fits the toltec guidelines, and the package gets
tested to make sure it installs properly. During the weekly merge into
stable, the packages get more thorough end-to-end testing, and any packages
with obvious software or functionality bugs have to be addressed before the
merge is accepted.
After talking with others, I realize we aren't all on the same page: some
maintainers expect software testing to happen during the testing PR. I've
also seen the term "integration testing" used to explain that, at stable
merge time, the packages are tested alongside each other to make sure they
work together.
What should the expectations on approval process be?
To highlight some reasons I bring this up:
- i want to see more test plans in PRs, so i know what to test
- i want to see people mention explicitly when they test a package,
instead of relying on an unspoken assumption about whether it was tested
- why is toltec doing software testing in the first place? notably,
the contributors to toltec are also software authors, so they are wearing
many hats: software author, package maintainer and quality assurance for
other people's packages
- i think more people should be on the hook for testing during stable
merge
-
Both arguments sound pretty good. I probably tend to "over-test" software as well, but that has also caught numerous bugs, which validated the effort for me.
I would like this as well, since it would lessen the testing burden for the testing branch. But I'm also torn here: I wouldn't want a buggy software update to accidentally get merged into testing. Some people rigorously test their software before merging into toltec, some less so, and some bugs simply arise from the fact that the author usually has a slightly different setup, or tests features in isolation instead of all at once.
I initially thought of the stable branch as holding software that people on the testing branch had "battle-tested" long enough to surface the bugs that only appear over time. Currently, though, it seems more like a second phase of the same testing, with perhaps some extra package-interoperability verification. As raisjn said, we really need a clear statement of what exactly gets tested before merging into testing. We should also have rules on whether a package needs an rM1 and/or rM2 approval. Right now this has more or less doubled all my testing, since I now test on both devices to be sure, which usually only catches software bugs rather than packaging problems.
-
OK, it seems like my original comments weren't too oblique after all! In terms of a specific policy, what about something like this: PRs to :testing are accepted with minimal fuss (a cursory check, though maybe requiring sponsorship from a toltec-dev member if the project or contributor is new), and once a PR is accepted into testing, a follow-up PR for merging into :stable may be opened right away. That :stable PR is the one that should be scrutinised carefully, and it may take an arbitrarily long time to be accepted. If bugs are found on testing, the :stable PR should be rejected and a new one created for the fixed version of the package once it is pushed to testing. I also think that use of testing as an upstream repository should be discouraged as much as possible.
-
PRs to :testing are accepted with minimal fuss (a cursory check, though maybe requiring sponsorship from a toltec-dev member if the project/contributor are new?); and then once accepted to testing, a PR may be made right away for merge into :stable.
That's a good point. This way we could have a consistent time for a package to go from testing to stable, while still allowing the same day-to-day testing for people who run the testing branch. It would increase the number of open PRs, but I really like it. Idea:
If successful, a package gets merged into testing and a PR to move it into stable is created right away! Requirements for stable:
This would mean that each package has at least (in this case) 7 days on testing, during which people using the software can find the rarer bugs. It would, however, clutter the repo with more pending PRs (e.g. labelled "[To Stable] Package X M.m.p-c -> M.m.p-c'" or similar). We would also have to decide how to handle an existing stable PR when a new version lands on testing before the merge to stable: should we restart the timer, separate the versions somehow, or keep the original timer?
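To make the timing rule concrete, the 7-day soak period could be checked mechanically before a "[To Stable]" PR is approved. This is only a sketch of the arithmetic under the proposal above; the function name and the idea of knowing the date a package was merged into testing are hypothetical, not part of any existing toltec tooling.

```python
from datetime import datetime, timedelta

# Hypothetical soak period from the proposal: 7 days on testing
# before a "[To Stable]" PR becomes eligible for merge.
SOAK_PERIOD = timedelta(days=7)

def stable_merge_allowed(merged_to_testing: datetime, now: datetime,
                         soak: timedelta = SOAK_PERIOD) -> bool:
    """Return True once the package has spent the full soak period on testing."""
    return now - merged_to_testing >= soak
```

A bot watching the "[To Stable]" PR queue could run this check daily and flip a label when a package becomes eligible; the restart-vs-keep question above is then just a choice of which `merged_to_testing` date to feed in.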
-
I think @raisjn's initial point was that we need to make our workflow more explicit as to what testing effort is expected at each step of the process, not necessarily that we need to overhaul the process itself.
-
yes, my original question was about the level of testing expected during each branch's PR review, but it seems the purpose of each branch is up for discussion too.
regarding testing, to be up front: as a human, i do not like doing QA for other people's software. it feels like being punished for other people's bugs by having to re-run manual tests for each one. if everyone had the same amount of bugs it would be fine, but that's usually not the case. and as linus mentioned, people do different amounts of QA before they put up a PR. if someone didn't do QA on their code and expects me to do it, they are not correctly valuing my time. for the above reasons, i want to 1) reduce the manual QA work i have to do, and 2) make test plans clear, so people know what has been tested by the maintainer and what has been tested by the reviewer.
as a developer, i highly value velocity and lean towards dixonary's viewpoint: make it easy for a dev to get their code into a public repo where it can be tested by end users. as a maintainer, i highly value stability and keeping the experience bug-free for people. as a person, my goal is to expand the rM ecosystem and make it easy for devs to build and publish their apps (and hotfixes) and for people to install and use them. it would be nice if we came up with a branch arrangement that fits these seemingly conflicting goals.
UPDATE: it would be nice to add automated testing. i think pl-semiotics was able to get qemu working on an arm image and run xochitl inside it. we could do something similar, then write tests that inject key presses / button presses and verify behavior (either in the framebuffer or the process tree).
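The input-injection idea in the UPDATE could be sketched roughly as follows: a harness running against the qemu image packs Linux input_event structs and writes them to the emulated input device, then checks the framebuffer or process tree for the expected reaction. This is a minimal sketch assuming the standard 64-bit evdev struct layout and constants from linux/input-event-codes.h; none of this is existing toltec tooling, and the actual device path a harness would write to is left out.

```python
import struct
import time

# Linux input_event layout on a 64-bit system: struct timeval (two longs),
# then type and code (unsigned shorts) and value (int).
EVENT_FORMAT = "llHHi"
EV_SYN, EV_KEY = 0x00, 0x01   # event types from linux/input-event-codes.h
SYN_REPORT = 0
KEY_POWER = 116

def pack_event(ev_type: int, code: int, value: int) -> bytes:
    """Serialise one input_event (value 1 = press, 0 = release)."""
    return struct.pack(EVENT_FORMAT, int(time.time()), 0, ev_type, code, value)

def press_and_release(code: int) -> bytes:
    # key down, sync, key up, sync -- the byte stream a test harness would
    # write to the emulated input device before checking xochitl's reaction.
    return b"".join([
        pack_event(EV_KEY, code, 1),
        pack_event(EV_SYN, SYN_REPORT, 0),
        pack_event(EV_KEY, code, 0),
        pack_event(EV_SYN, SYN_REPORT, 0),
    ])
```

On a real device or emulator the result of `press_and_release(...)` would be written to the appropriate `/dev/input/event*` node; verifying behavior afterwards (framebuffer diff or process-tree check) is the harder, still-open part.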