Overview
This PR introduces a new way of writing tests for validation errors that uses comments in the schema file.
A simple example: a schema line is annotated with an `//expect-error` comment on the line directly above it. The comment is parsed to extract a start column (9), an end column (13), an error code (E009), and an error message (the last bit). An assertion then checks that a validation error matching that expectation is reported on the following line.
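The test harness itself isn't shown in this description, so purely as an illustration, here is a minimal Rust sketch of that parsing step. The comment grammar assumed here, the `ErrorExpectation` struct, and `parse_expectation` are all hypothetical names for the sake of the example, not the actual implementation:

```rust
/// Hypothetical shape of a parsed `//expect-error` comment.
#[derive(Debug, PartialEq)]
struct ErrorExpectation {
    start_column: usize,
    end_column: usize,
    code: String,
    message: String,
}

/// Illustrative parser, assuming a comment of the form
/// `//expect-error <start-column> <end-column> <code> <message...>`.
/// The real comment grammar may well differ.
fn parse_expectation(comment: &str) -> Option<ErrorExpectation> {
    let rest = comment.trim_start().strip_prefix("//expect-error")?;
    let mut parts = rest.split_whitespace();
    let start_column = parts.next()?.parse().ok()?;
    let end_column = parts.next()?.parse().ok()?;
    let code = parts.next()?.to_string();
    // Whatever is left over is the message ("the last bit").
    let message = parts.collect::<Vec<_>>().join(" ");
    Some(ErrorExpectation { start_column, end_column, code, message })
}

fn main() {
    // Columns 9-13, error code E009, then the message (invented here).
    let parsed = parse_expectation("  //expect-error 9 13 E009 Type does not exist");
    println!("{parsed:?}");
}
```

The real harness presumably also records which schema line the comment refers to (the line after it), so that the expectation can be matched against reported errors by line as well as by column span.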
Migration
I didn't manually change all 766 files 😆 I converted all the existing tests using a little script.
Missing expectations
If a validation error is found for which there is no `//expect-error` comment, the test fails and reports the unexpected error.
Missing errors
If no validation error is found that matches a `//expect-error` comment, the test fails and reports the unmatched expectation.
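As before, this is only a sketch with invented names (`ErrorSpec`, `check`), not the real harness or its actual failure output; it just shows that the assertion is essentially a two-way diff between expected and reported errors:

```rust
/// Hypothetical shape shared by parsed `//expect-error` comments and by the
/// errors the validator actually reports: the line the error is on, the
/// column span, the error code, and the message.
#[derive(Debug, PartialEq)]
struct ErrorSpec {
    line: usize,
    start_column: usize,
    end_column: usize,
    code: String,
    message: String,
}

/// Two-way check: every reported error needs a matching expectation
/// ("missing expectations"), and every expectation needs a matching
/// reported error ("missing errors"). Returns one message per failure.
fn check(expectations: &[ErrorSpec], reported: &[ErrorSpec]) -> Vec<String> {
    let mut failures = Vec::new();

    // A real validation error with no `//expect-error` comment.
    for err in reported {
        if !expectations.contains(err) {
            failures.push(format!(
                "line {}: error {} ({}) has no //expect-error comment",
                err.line, err.code, err.message
            ));
        }
    }

    // An `//expect-error` comment that no reported error matched.
    for exp in expectations {
        if !reported.contains(exp) {
            failures.push(format!(
                "line {}: expected error {} ({}) was not reported",
                exp.line, exp.code, exp.message
            ));
        }
    }

    failures
}

fn main() {
    let expected = vec![ErrorSpec {
        line: 3,
        start_column: 9,
        end_column: 13,
        code: "E009".to_string(),
        message: "Type does not exist".to_string(),
    }];
    // No errors were reported at all, so the expectation above is unmatched.
    for failure in check(&expected, &[]) {
        println!("{failure}");
    }
}
```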
Easier to see where the error occurs
Because the start and end columns are part of the expectation, it's much easier to see exactly which span an error points at, which should help us avoid weird issues where an error highlights the wrong part of the schema.
One test, many errors
In our current setup we don't tend to put many errors in each test case, which I think is partly because the `errors.json` file starts to get a bit hard to manage. With this approach it's much easier to have lots of expected errors in one schema file, which I hope will lead to fewer test cases, each with many expectations.
Duplicate errors
This approach makes it much easier to spot where we have duplicate error reporting: cases where two errors basically say the same thing, which isn't very helpful.
Proto tests
I think at this stage we should move away from the proto tests (e.g. the valid schema examples). Any valid use case should be covered by our integration tests.