
Improve test maintainability #63

Open
ScanMountGoat opened this issue Aug 12, 2024 · 3 comments
Comments

@ScanMountGoat
Owner

ScanMountGoat commented Aug 12, 2024

The current tests are difficult to edit when output needs to change. One approach is to place test input and output data in separate files for easier editing and syntax highlighting. The generated code can also be tested to see if it actually compiles.
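One way to sketch the separate-files idea (file names and the normalize helper below are hypothetical, not from this repo): keep the expected Rust output in its own .rs file, pull it in with include_str!, and compare after collapsing whitespace so the expected file can stay nicely formatted:

```rust
/// Collapse insignificant whitespace so a pretty-printed expected-output
/// file can be compared against generated code.
/// (Hypothetical helper; the real tests might compare token streams instead.)
fn normalize(source: &str) -> String {
    source.split_whitespace().collect::<Vec<_>>().join(" ")
}

// A test could then keep both input and expected output in separate files:
//
// #[test]
// fn bind_groups_output() {
//     let expected = include_str!("data/bind_groups.expected.rs");
//     let actual = generate_bind_groups(include_str!("data/shader.wgsl"));
//     assert_eq!(normalize(&actual), normalize(expected));
// }
```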

@9SMTM6

9SMTM6 commented Aug 14, 2024

Trying to collect some pros and cons of the current process (inline quote!-ed reference output) vs. some sort of snapshot testing.

I'll try to keep this up-to-date.

Pro:

  • No dependencies needed.
  • It's what's been done so far, so no migration work is needed.

Mixed:

  • Due to the manual process, differences in the generated code have to be explicitly acknowledged by changing the quoted code, vs. implicitly acknowledged by deciding "this is fine" when reading a diff.
    • This can be good, since it encourages more thought about the results of a change.
    • However, because many aspects are repeated so often (e.g. the BindGroupX impl), this can be an arduous process that discourages internal improvements or adjustments because they are too much work. This will grow in relevance as more tests are added.

Contra:

  • This bloats the source code: bindgroup.rs is currently about two thirds tests, with probably more than 60% of that being quoted bindings.
    • Diffs on GitHub make it hard to distinguish changes to the actual code from changes to the generated code.
  • Quoted source code isn't guaranteed to compile, and it doesn't get linted.
    • Currently I have to add a bunch of lint exceptions to the generated code; if that code lived in the tree and were compiled and linted, some of these issues might surface and get addressed.
  • Dependencies for snapshot testing might not be required, but at minimum they should be declarable as dev- or build-dependencies.
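Snapshot-style testing doesn't strictly need an external crate. A dependency-free sketch of the idea (the function name and the UPDATE_SNAPSHOTS variable are hypothetical; a real dev-dependency like insta would add diffing and review tooling on top):

```rust
use std::{env, fs};

/// Compare `actual` against a stored snapshot file, or rewrite the file
/// when UPDATE_SNAPSHOTS is set in the environment. A std-only sketch.
fn assert_snapshot(path: &str, actual: &str) {
    if env::var_os("UPDATE_SNAPSHOTS").is_some() {
        // Accept the current output as the new expected output.
        fs::write(path, actual).expect("failed to update snapshot");
        return;
    }
    let expected = fs::read_to_string(path)
        .unwrap_or_else(|_| panic!("missing snapshot {path}; rerun with UPDATE_SNAPSHOTS=1"));
    assert_eq!(expected, actual, "snapshot {path} is out of date");
}
```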

@9SMTM6

9SMTM6 commented Aug 14, 2024

We should check the output of assert_eq on changes. The easiest way would probably be to simply test how a change surfaces in wgsl_bindgen.

FYI, these are the lints I currently disable in my application. A lot of this is probably caused by stricter-than-normal lint rules.

#[allow(
    unused,
    elided_lifetimes_in_paths,
    clippy::approx_constant,
    clippy::module_name_repetitions,
    clippy::pattern_type_mismatch,
    clippy::unreadable_literal,
)]

@ScanMountGoat
Owner Author

this bloats the size of the source-code

This is pretty normal if there are a lot of unit tests. The public API is pretty minimal, so these tests wouldn't work as integration tests in the tests folder. I am looking into consolidating some of the unit tests to avoid some repetition.

this can be an arduous process that suppresses internal improvements or adjustments, because its too much work

Most required changes to the tests have been fairly minor from my experience. Please open an issue if something requires more work to change than feels reasonable.

diffs on github make it hard to differentiate between changes to the actual code and generated one

I've started separating out the large WGSL inputs and Rust outputs into separate files. This should make the process easier and give a nicer experience with formatting and syntax highlighting.

dependencies for 'snapshot-testing' might not be required

You can temporarily write the current output to a file in the tests using std::fs::write and the defined pretty printing functions. If all the test cases pass, I don't see a reason to also include the actual output as a separate file from the expected output. I'll reassess whether snapshot testing is worth it after I finish reworking the test suite.
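The temporary dump described above could be as small as the following (the path and the generated string are placeholders; the real call would pass the crate's pretty-printed output):

```rust
use std::fs;

// Hypothetical one-off while reworking the tests: write the current
// generated output to disk so the expected-output file can be updated
// by hand, then delete this call before committing.
fn dump_actual(path: &str, generated: &str) -> std::io::Result<()> {
    fs::write(path, generated)
}
```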
