
Test all cases of DIT & NDIT vs. exptime and AutoExposure #428

Merged
merged 8 commits into from
Jun 26, 2024

Conversation

teutoburg
Contributor

@teutoburg teutoburg commented Jun 14, 2024

Even after some recent improvements to AutoExposure (#424, #396), there are still some inconsistencies, which are also preventing #426 from moving forward.

To solve this systematically, I came up with the following tree of possible combinations:

flowchart TD
  ditkw{{"dit & ndit in kwargs?"}} -- yes --> done1["Just use those"]
  expkw{{"exptime in kwargs?"}}
  ae1{{"AutoExposure in
           optical train?"}} -- yes --> ae1b["`Get dit & ndit from AutoExposure using exptime from kwargs`"]
  ditobs{{"dit & ndit in !OBS?"}} -- yes --> done2["Use dit & ndit from !OBS"]
  expobs{{"exptime in !OBS?"}} -- no  --> ValueError
  ae2{{"AutoExposure in
           optical train?"}} -- no --> ValueError
  ae2b["`Get dit & ndit from AutoExposure using exptime from !OBS`"]
  ditkw  -- no  --> expkw
  expkw  -- yes --> ae1
  expkw  -- no  --> ditobs
  ditobs -- no  --> expobs
  expobs -- yes --> ae2
  ae2 -- yes --> ae2b
  ae1 -- no --> ValueError

(using bestagons for decisions instead of diamonds to limit vertical extent of the chart...)
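The resolution order in the flowchart can also be sketched as plain Python. This is only an illustration of the tree, not the actual ScopeSim implementation; the function name and the `obs`/`autoexpose` parameters are made up for the sketch:

```python
def resolve_dit_ndit(kwargs, obs, has_autoexposure, autoexpose):
    """Illustrative sketch of the flowchart above (not ScopeSim code).

    kwargs/obs are plain dicts standing in for call kwargs and !OBS;
    autoexpose(exptime) stands in for the AutoExposure effect.
    """
    if "dit" in kwargs and "ndit" in kwargs:
        return kwargs["dit"], kwargs["ndit"]      # just use those
    if "exptime" in kwargs:
        if has_autoexposure:
            return autoexpose(kwargs["exptime"])  # dit & ndit from AutoExposure
        raise ValueError("exptime given, but no AutoExposure in optical train")
    if "dit" in obs and "ndit" in obs:
        return obs["dit"], obs["ndit"]            # use dit & ndit from !OBS
    if "exptime" not in obs:
        raise ValueError("no dit & ndit and no exptime anywhere")
    if has_autoexposure:
        return autoexpose(obs["exptime"])         # dit & ndit from AutoExposure
    raise ValueError("exptime in !OBS, but no AutoExposure in optical train")
```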

I created TestDitNdit in test_basic_instrument to (hopefully) cover all of this. I'm open to discussion about the location of this test class, but since it's imo more of an integration test (simulating the full optical train rather than an individual effect), I decided to put it here. It might also be worth considering putting the AutoExposure effect in the basic instrument test package, maybe in a separate mode (e.g. autoimg), instead of hacking it in as is done currently. Oh, and (as commented) all the ["!OBS.xy"] meddling should really be done with context managers using unittest.mock.patch.dict, but for some (nested-mapping) reason, I couldn't get that to work 🙄
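For reference, `patch.dict` only swaps keys on the exact mapping object it receives, which may be why the nested `!OBS` case resisted it; a minimal sketch with a plain nested dict (not the actual ScopeSim `UserCommands` object):

```python
from unittest.mock import patch

cfg = {"!OBS": {"dit": 1, "ndit": 1}}

# patch.dict replaces keys shallowly on the mapping it is given, so to
# patch a nested value, the inner dict must be targeted directly:
with patch.dict(cfg["!OBS"], {"dit": 10}):
    assert cfg["!OBS"]["dit"] == 10

assert cfg["!OBS"]["dit"] == 1  # restored on exiting the context manager
```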

Anyway, some of these tests currently xfail, which is expected. I'll put the actual solution(s) in a separate PR, this is just so we at least know what's working and what isn't...

The previous absence of this keyword, which is usually present in the
default.yaml, caused a failure in scopesimple. There is no reason not to
include that keyword in the basic_instrument.
The absence of this created an error in OpticalTrain._repr_pretty_.
@teutoburg teutoburg self-assigned this Jun 14, 2024
@teutoburg teutoburg added the tests Related to unit or integration tests label Jun 14, 2024

codecov bot commented Jun 14, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 75.02%. Comparing base (4228950) to head (9f3c668).
Report is 4 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #428      +/-   ##
==========================================
+ Coverage   74.83%   75.02%   +0.19%     
==========================================
  Files          56       56              
  Lines        7823     7823              
==========================================
+ Hits         5854     5869      +15     
+ Misses       1969     1954      -15     

☔ View full report in Codecov by Sentry.

@teutoburg teutoburg changed the title Fh/ditkwargs Test all cases of DIT & NDIT vs. exptime and AutoExposure Jun 17, 2024
@teutoburg teutoburg marked this pull request as ready for review June 17, 2024 09:17
@teutoburg teutoburg added the effects Related to a ScopeSim effect label Jun 17, 2024
@teutoburg teutoburg requested a review from hugobuddel June 17, 2024 09:18
Collaborator

@hugobuddel hugobuddel left a comment


Looking good! I agree with the flowchart. The tests look good; I didn't know it was possible to xfail parameterized tests like that. Is the dit-and-ndit-in-kwargs-while-also-having-autoexposure case missing?

My main question is: did we properly think about how the Quantization fits in?

_should_apply of Quantization currently checks !OBS.autoexpset and !OBS.ndit. Is that sufficient?

I suppose it could be sufficient, depending on the implementation of the functionality required to make test_kwargs_override_obs_dict pass. Maybe we could add Quantization to the optical train, call _should_apply() in each test, and verify whether it is properly applied?

@teutoburg
Contributor Author

I didn't know it was possible to xfail parameterized tests like that.

Yeah, me neither until last week, when I just googled to see if this can be done 😅
I might go over our whole test suite in a separate PR and check for anything that XPASSes consistently, then remove the xfail from those, or parametrize where needed...
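For anyone else who didn't know: individual parameter sets can be xfailed by wrapping them in `pytest.param(..., marks=...)`. A generic sketch, not one of the actual tests from this PR:

```python
import pytest

@pytest.mark.parametrize("dit, ndit", [
    (1.0, 10),  # this case is expected to pass
    # this case runs but is reported as xfail/xpass instead of fail/pass:
    pytest.param(None, None,
                 marks=pytest.mark.xfail(reason="not yet implemented")),
])
def test_exposure(dit, ndit):
    assert dit is not None and ndit is not None
```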

Is the dit-and-ndit-in-kwargs-while-also-having-autoexposure case missing?

Indeed, I think I missed that. Thanks for spotting it, I'll add.

Maybe we could add Quantiazation to the optical train and in each test call _should_apply() and verify that it is properly applied or not?

Seems useful 👍

In general I think once we manage to find a better solution to the whole "cmds" thing (i.e. #387 et al.), a lot of these things might be smoother as well. But it should be possible to solve the issue at hand without that for now...

@teutoburg
Contributor Author

Added missing combinations and Quantization. All tests that pass without Quantization also pass with it. It's a bit harder to tell which of the xfail ones would trip on the Quantization once they pass otherwise, but we'll see that when we implement the missing stuff.

I tried to keep the reason in the xfail as precise as possible (as far as I can tell why they're failing), so that should be a good starting point to summarize what's actually not working (getting closer to TDD I guess).

@teutoburg teutoburg requested a review from hugobuddel June 25, 2024 09:49
Collaborator

@hugobuddel hugobuddel left a comment


Thanks for adding the Quantization!

I could say that we should also test the Quantization when the AutoExposure is not included, because in principle those are independent effects. As in, from the pipeline-developer perspective, AutoExposure is not necessary to simulate the raw data, but Quantization is. But you have gone out of your way to do TDD, so let's just do this.

@teutoburg
Contributor Author

I could say that we should also test the Quantization when the AutoExposure is not included

It wasn't much work to add this, so I did...

@teutoburg teutoburg merged commit 9d23900 into main Jun 26, 2024
25 checks passed
@teutoburg teutoburg deleted the fh/ditkwargs branch June 26, 2024 10:13
hugobuddel added a commit that referenced this pull request Jul 16, 2024
This is a hack to ensure AutoExposure and Quantization work properly
for METIS. Should probably be removed once #428 is fixed properly.