report.py needs generalization #3
Comments
I'm currently looking into this for the twinotter and am going to write some tests for it. I was thinking that the tests could run report.py for each platform, using data from an example flight plus its YAML file. So that I don't break the original functionality while adapting this for the twinotter, could someone point me to a good choice of HALO data to use for this? @d70-t?
@LSaffin - It would be useful to clarify what the report is for. @d70-t mentioned that it had been used primarily to help debug segment assignment, but not all the tests would be applicable to all platforms. Perhaps tangential, but @d70-t and I discussed the idea of lightweight validation and reporting at pushes and/or pull requests (#2).
@RobertPincus - I would say that report.py is for generating easy-to-look-at plots for manually refining the .yaml files after they have been initially generated. Although I wouldn't need the information about sondes and lining up the circles, the default plot with the track, altitude, roll, pitch, and heading, as well as the zoomed versions of these plots, will be useful for refining the twinotter segment times. The tests I'm thinking of writing would just check that report.py actually runs and produces an HTML report for an individual flight on each platform, so that I don't break it. This would be the kind of thing that could be run on pull requests (although I don't know how to set that part up).
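A minimal sketch of such a smoke test, assuming a hypothetical `generate_report` entry point (report.py's real interface may differ); `generate_report` is stubbed here so the example is self-contained:

```python
# Smoke-test sketch: check that report generation runs and emits an HTML
# file for every platform. `generate_report` is a stand-in for whatever
# function report.py actually exposes per flight (an assumption, not the
# real API), so the example runs on its own.
from pathlib import Path
import tempfile

PLATFORMS = ["HALO", "TWINOTTER"]  # illustrative platform names

def generate_report(platform: str, flight_id: str, out_dir: Path) -> Path:
    """Placeholder for report.py's entry point: writes an HTML report."""
    out = out_dir / f"{platform}_{flight_id}.html"
    out.write_text("<html><body>report stub</body></html>")
    return out

def test_report_produces_html():
    # One example flight per platform; only checks the report is created,
    # not its contents, which is enough to catch "I broke report.py".
    with tempfile.TemporaryDirectory() as tmp:
        for platform in PLATFORMS:
            report = generate_report(platform, "flight_20200126", Path(tmp))
            assert report.exists()
            assert report.suffix == ".html"

test_report_produces_html()
```

With a real `generate_report`, the loop over `PLATFORMS` could become a pytest `parametrize` so each platform shows up as its own test case in CI output.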
Yes, in its current state the report does two separate things: formal validation of the segments, and rendering rich (HTML) output.
I would try to separate these two aspects a bit more, such that the formal validation part could also be run as a separate script returning textual output, but could still be embedded into rich (e.g. HTML) reports, like it is now. I'd work on the separation now in order to enable #4. Running the full reports would also be nice, but we have to figure out how to get the track data into CI. I think I'll have to look a bit closer into Aeris/OPeNDAP/zarr/intake for that.
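One way the separation could look, as a rough sketch (the names `check_segments` and `Issue` and the specific checks are assumptions, not the actual SegmentChecker interface): the checks return plain findings, which a CLI script can print as text and a report generator can render as HTML.

```python
# Sketch: formal segment checks decoupled from any output format.
# Check names and segment fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Issue:
    segment_id: str
    message: str

def check_segments(segments: list) -> list:
    """Run formal checks, returning findings independent of how they are shown."""
    issues = []
    for seg in segments:
        if seg["end"] <= seg["start"]:
            issues.append(Issue(seg["segment_id"], "end time is not after start time"))
        if not seg.get("kinds"):
            issues.append(Issue(seg["segment_id"], "no segment kinds assigned"))
    return issues

def print_report(issues: list) -> None:
    """Plain-text front end, suitable for CI logs; an HTML front end
    would consume the same list of Issue objects."""
    for issue in issues:
        print(f"{issue.segment_id}: {issue.message}")
```

A validation script would then just call `check_segments` on a parsed YAML file and exit non-zero if the list is non-empty, while the HTML report embeds the same findings alongside the plots.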
@d70-t I can help with getting track data into CI via OPeNDAP etc.
@d70-t - Does your comment mean you are actively modifying report.py right now? If so, I'll hold off on making changes. I think the main limitation to running it with other flight data is that the specific assumptions about data layout and the naming conventions for HALO data are spread among the various functions. I was going to add something like a platform class to gather those assumptions in one place.
@LSaffin I did not start modifying the files yet. I see what you mean, but I'd argue against creating new platform classes. The code is already using quite some amount of xarray, so I'd propose to use that as well: could we have platform-specific loading functions that return the data as xarray datasets following a common naming convention? What I planned to do was more to move the SegmentChecker out into a separate file and start with the validation script. But maybe it is still a bit too interwoven.
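That xarray-based idea might look like the following sketch. In real code the renaming would be `ds.rename(mapping)` on an `xarray.Dataset`; plain dicts stand in here to keep the example self-contained, and every variable name below is an illustrative assumption, not an actual HALO or twinotter convention:

```python
# Sketch: per-platform rename maps onto a common naming convention, so
# report.py itself stays platform-agnostic. All names are hypothetical.
RENAME = {
    "HALO": {"alt": "altitude", "hdg": "heading", "roll": "roll"},
    "TWINOTTER": {"ALT_OXTS": "altitude", "HDG_OXTS": "heading", "ROLL_OXTS": "roll"},
}

def normalize(platform: str, data: dict) -> dict:
    """Return the flight data under common variable names.

    With xarray this would be `ds.rename(mapping)`; variables not in the
    mapping keep their original names.
    """
    mapping = RENAME[platform]
    return {mapping.get(name, name): values for name, values in data.items()}
```

The appeal of this over platform classes is that everything downstream of `normalize` (plotting, validation) only ever sees the common names, so adding a platform means adding one rename map and one loader rather than subclassing.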
scripts/report.py was written specifically for HALO and needs generalization for other platforms.