Test all cases of DIT & NDIT vs. exptime and AutoExposure #428
Conversation
The previous absence of this keyword, which is usually present in the default.yaml, caused a failure in ScopeSim. There is no reason not to include that keyword in the basic_instrument.
Its absence caused an error in `OpticalTrain._repr_pretty_`.
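As a toy illustration of that failure mode (the key name and repr text here are hypothetical, not ScopeSim's actual code): a pretty-repr that indexes a keyword directly raises `KeyError` when the keyword is missing from the defaults, whereas a lookup with a fallback degrades gracefully.

```python
# Hypothetical illustration only; names do not correspond to the real code.
def repr_pretty(cmds: dict) -> str:
    # cmds["!OBS.instrument"] would raise KeyError if the keyword is
    # absent from default.yaml; .get() with a fallback does not.
    name = cmds.get("!OBS.instrument", "<unknown>")
    return f"OpticalTrain for {name}"

print(repr_pretty({}))  # falls back instead of raising
```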
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main     #428      +/-   ##
==========================================
+ Coverage   74.83%   75.02%   +0.19%
==========================================
  Files          56       56
  Lines        7823     7823
==========================================
+ Hits         5854     5869      +15
+ Misses       1969     1954      -15
```

View full report in Codecov by Sentry.
Looking good! I agree with the flowchart. The tests look good; I didn't know it was possible to xfail parametrized tests like that. Is the dit-and-ndit-in-kwargs-while-also-having-autoexposure case missing?
My main question is: did we properly think about how the Quantization fits in? `_should_apply` of Quantization currently checks `!OBS.autoexpset` and `!OBS.ndit`. Is that sufficient?
I suppose it could be sufficient, depending on the implementation of the functionality required to make `test_kwargs_override_obs_dict` pass. Maybe we could add Quantization to the optical train, call `_should_apply()` in each test, and verify that it is properly applied or not?
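For concreteness, a minimal sketch of the check described above, with the command structure simplified to a flat dict (the real effect's implementation may differ):

```python
# Simplified sketch: quantization should only apply when neither
# "!OBS.autoexpset" nor "!OBS.ndit" has been set by the user.
def should_apply(cmds: dict) -> bool:
    return not cmds.get("!OBS.autoexpset") and not cmds.get("!OBS.ndit")

assert should_apply({})
assert not should_apply({"!OBS.ndit": 10})
assert not should_apply({"!OBS.autoexpset": True})
```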
Yeah, me neither until last week, when I just googled to see if this can be done 😅
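For reference, individual parametrized cases can be marked as expected failures with `pytest.param(..., marks=pytest.mark.xfail(...))`; a toy example (not the actual test from this PR):

```python
import pytest

# Toy example: the second parameter set is expected to fail until the
# corresponding dit/ndit handling is implemented.
@pytest.mark.parametrize("dit, ndit", [
    (1.0, 1),
    pytest.param(None, None,
                 marks=pytest.mark.xfail(reason="not implemented yet")),
])
def test_exposure_settings(dit, ndit):
    assert dit is not None and ndit is not None
```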
Indeed, I think I missed that. Thanks for spotting it, I'll add.
Seems useful 👍 In general I think once we manage to find a better solution to the whole "cmds" thing (i.e. #387 et al.), a lot of these things might be smoother as well. But it should be possible to solve the issue at hand without that for now...
Added missing combinations and Quantization. All tests that pass without Quantization also pass with it. It's a bit harder to tell which of the xfail ones would trip on the Quantization once they pass otherwise, but we'll see that when we implement the missing stuff. I tried to keep the …
Thanks for adding the Quantization!
I could say that we should also test the Quantization when the AutoExposure is not included, because in principle those are independent effects. That is, from the pipeline-developer perspective, AutoExposure is not necessary to simulate the raw data, but Quantization is. But you have gone out of your way to do TDD, so let's just do this.
It wasn't much work to add this, so I did...
This is a hack to ensure AutoExposure and Quantization work correctly for METIS. It should probably be removed once #428 is fixed properly.
Even after some recent improvements to AutoExposure (#424, #396), there are still some inconsistencies, which are also preventing #426 from moving forward.
To solve this systematically, I came up with the following tree of possible combinations:
(using bestagons for decisions instead of diamonds to limit vertical extent of the chart...)
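As a rough sketch of the precedence the chart implies (all names and the fallback order here are illustrative assumptions, not the actual ScopeSim API): explicit `dit`/`ndit` kwargs win, then `!OBS` values, and only then does AutoExposure derive them from `exptime`.

```python
# Illustrative precedence only; not ScopeSim's actual resolution code.
def resolve_dit_ndit(kwargs: dict, obs: dict, autoexposure=None):
    """Return (dit, ndit), preferring kwargs, then !OBS, then AutoExposure."""
    dit = kwargs.get("dit") or obs.get("!OBS.dit")
    ndit = kwargs.get("ndit") or obs.get("!OBS.ndit")
    if dit is not None and ndit is not None:
        return dit, ndit
    exptime = kwargs.get("exptime") or obs.get("!OBS.exptime")
    if autoexposure is not None and exptime is not None:
        return autoexposure(exptime)  # e.g. split exptime into DIT x NDIT
    raise ValueError("Cannot determine DIT and NDIT.")

# kwargs override the !OBS dict:
print(resolve_dit_ndit({"dit": 1.0, "ndit": 5},
                       {"!OBS.dit": 2.0, "!OBS.ndit": 1}))  # → (1.0, 5)
```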
I created `TestDitNdit` in `test_basic_instrument` to (hopefully) cover all of this. I'm open to discussion about the location of this test class, but since it's imo more of an integration test (simulating the full optical train rather than an individual effect), I decided to put it here. It might also be worth considering putting the `AutoExposure` effect in the basic instrument test package, maybe in a separate mode (e.g. `autoimg`), instead of hacking it in as is done currently. Oh, and (as commented) all the `["!OBS.xy"]` meddling should really be done with context managers using `unittest.mock.patch.dict`, but for some (nested mapping) reason, I couldn't get that to work 🙄

Anyway, some of these tests currently xfail, which is expected. I'll put the actual solution(s) in a separate PR; this is just so we at least know what's working and what isn't...
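For reference, `unittest.mock.patch.dict` works fine on a plain (sub-)dict and restores the original contents on exit; a custom nested mapping that resolves bang-keys like `"!OBS.dit"` itself may not expose the plain-dict interface `patch.dict` relies on, which could be the problem here. A minimal sketch with a toy stand-in for the command mapping:

```python
from unittest.mock import patch

# Toy stand-in for the nested command mapping; not the real structure.
cmds = {"!OBS": {"dit": None, "ndit": None}}

# patch.dict overrides keys inside the context and restores them on exit.
with patch.dict(cmds["!OBS"], {"dit": 1.0, "ndit": 10}):
    assert cmds["!OBS"]["dit"] == 1.0  # patched inside the context

assert cmds["!OBS"]["dit"] is None  # restored afterwards
```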