Description
This "issue" is meant for discussion. We should agree upon an approach here before starting any coding.
#85 brings up an issue I wanted to open anyway (just didn't find the opportunity until now).
I think we still have a problematic mix of purposes with the automated tests.
We currently collect all files from a `usage-examples` directory for the automated tests unless they are explicitly excluded. This sounds like a good idea, but it has turned out to be problematic because the usage examples we have so far (for GridLY and e.g. partial compilation) don't cover all situations.
Usage examples are part of the documentation, which is particularly obvious with the GridLY example. With such usage examples it is natural to provide alternatives that the user can experiment with by uncommenting certain lines to activate alternative behaviour.
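To make this concrete, here is a minimal, purely hypothetical sketch of such a usage example (not taken from GridLY; the music and names are invented for illustration):

```lilypond
\version "2.19.80"

music = \relative c' { c4 d e f }

\score {
  \new Staff \music
  % Uncomment the next line to try an alternative layout:
  % \layout { indent = 0 }
}
```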
So it seems usage examples are similar to, but not identical with, unit tests in their organization. I therefore propose different policies for usage examples and unit tests:
Libraries can (i.e. are strongly encouraged to) have:

- an (optionally hierarchical) `usage-examples` directory (usually at the library's top level).
  All `*.ly` files in this directory (recursively) will be used for the auto-generated documentation (which hasn't been implemented yet; we'll have to think about the explicit in-/exclude options in that context separately).
- a (flat) `unit-tests` directory.
  All `*.ly` files in this directory will be used for automated tests with Travis. These tests are like LilyPond's regression tests, and library maintainers are responsible for keeping the tests up to date and comprehensive. All relevant commands and constellations should be covered by tests. Usually it is a good idea to write one `*.ily` file containing the main includes and a bunch of smaller `*.ly` files covering individual tests or coherent groups of tests (see the sketch after this list). This will also make possible failures point more directly to the cause.
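As a sketch, the resulting library layout could look like this (all names are hypothetical):

```
mylib/
├── usage-examples/
│   ├── basic.ly
│   └── advanced/
│       └── partial-compilation.ly
└── unit-tests/
    ├── common.ily
    ├── test-command-a.ly
    └── test-command-b.ly
```

And a minimal test pair, again with invented names and assuming the library's entry file is reachable through the include path:

```lilypond
% unit-tests/common.ily -- shared includes for all tests
\version "2.19.80"
\include "mylib/main.ily"  % placeholder for the library's actual entry file
```

```lilypond
% unit-tests/test-command-a.ly -- one small, focused test
\version "2.19.80"
\include "common.ily"

\score {
  \new Staff \relative c' { c4 d e f }
}
```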
In cases where usage examples are appropriate as unit tests, it is not necessary to duplicate them as test files; instead they can simply be included through the `.automated-tests-include` approach.
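For illustration only, and assuming `.automated-tests-include` is a plain list of paths relative to the library root (one file per line; the actual format is whatever the current test collection script expects), this could look like:

```
usage-examples/basic.ly
usage-examples/advanced/partial-compilation.ly
```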
I think this approach would avoid the current collision of concerns while still being straightforward and not imposing too much overhead or complexity on library maintainers.
And changing the implementation shouldn't be complicated. Now is a good moment because a) we do have a few examples we can use as proof of concept, and b) we don't yet have so many examples that updating them would be a burden.