Automated changeset testing #44
Comments
It sounds awesome, and I think I would use it.
Assuming that we implemented the functionality in the issue description, would we also want to rewrite/adapt Bayes CT to leverage it? That is, shouldn't our default automated testing leverage some of those features, such as (a) running minimal required tests (based on understanding the dependencies) and (b) identifying visual diffs?
Yes, presumably we'd share whatever code would be desired.
Yes, see #22. This would need design for how it should show up when there is a visual difference.
Not something that I've encountered a need for. But if it's something that others need, and the benefit/cost ratio is high enough...
The proposed solution requires pushing the code to branches; why not check out a 2nd working copy, so you can run tests on the 1st working copy while developing in the 2nd? Then we won't have to work out any client-server protocol, etc., and you can test changes that haven't been pushed at all. This approach requires essentially zero investment and is something you can use right away. The main disadvantages that I see are (a) it could be confusing to switch back and forth between two working copies and (b) it takes more disk space. But I think we could figure out how to deal with (a) after getting a little experience with this as a strategy.
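The two-working-copy idea could be sketched with `git worktree`, which shares one repository between two checkouts. This is only one possible mechanism, and the directory names and test command here are hypothetical:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand in for the developer's main checkout.
git init -q main-copy
cd main-copy
git config user.email dev@example.com
git config user.name dev
echo hello > file.txt
git add file.txt
git commit -qm "work to be tested"

# Create a second working copy pinned to the current commit; the test
# suite (whatever the project uses, e.g. `grunt lint`) can run there
# while development continues in this checkout.
git worktree add ../test-copy HEAD
cat ../test-copy/file.txt
```

Because a worktree shares the object database with the main checkout, nothing has to be pushed anywhere, which matches the "test changes that haven't been pushed at all" point above.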
I'm not concerned about disk space. Switching between working copies sounds inconvenient, so much so that I'd prefer pushing to a branch (even locally) so I could check out the "testable" point on my 2nd working copy. Also, offloading the computational load to a server would be nice, since the more "comprehensive" testing that I'd like would take a lot of time (and depending on the development device, might slow down editors). It's probably worth implementing this (local) style first, and adding on the server/client bit if it's worth it.
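The local-branch variant could look like the following sketch: mark a "testable" point on a branch, then pick it up in a second working copy via a local clone. The directory names and branch name are hypothetical:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand in for the developer's main checkout.
git init -q dev
cd dev
git config user.email dev@example.com
git config user.name dev
echo v1 > code.txt
git add code.txt
git commit -qm "work to be tested"

# Mark the point to test on a dedicated local branch; nothing is
# pushed to a shared remote.
git branch testable

# Second working copy: a local clone checks out the testable branch,
# and the (project-specific) test suite would run here.
cd ..
git clone -q dev test-copy
cd test-copy
git checkout -q testable
cat code.txt
```

Updating the `testable` branch (`git branch -f testable HEAD`) and fetching in the second copy would repeat the cycle; the same layout would also work with the clone living on a test server instead of locally.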
If we go with the approach of using a branch, +1 to keeping the branch local, or to doing whatever is necessary to avoid an explosion of branches.
I'm not sure anything would cause an explosion of branches, because it would typically look like:

7/12/18 dev meeting:
@ariel-phet no progress on this since 7/12/2018. How should we proceed? |
Although this kind of feature would likely be "nice to have", people have been getting by with our current tools. Considering that the issue is basically 2 years old, and we have not had time or a pressing need to work on it, it feels appropriate to close.
There are many times where I've been waiting for local testing (aqua, unit tests, snapshot comparison) to complete before pushing local commits to master. It's been inconvenient, since it prevents me from starting code changes until the testing is complete.
I'd like to consider something like the following (very open to modification):
run tests against commits on master and commits on a feature branch. I'm not sure how important some complexity would be (e.g. "only run tests for area-model sims"), but it would be possible to start with a simple interface and add anything needed.
Tagging for developer meeting to discuss if this would be helpful for others, priorities, features, etc.