
LOWEFFTIME fibers for one amplifier of petal=1 for 20250114+ #274

Open
araichoor opened this issue Jan 19, 2025 · 3 comments
Labels
dailyops For listing individual dailyops problems

Comments

@araichoor
Contributor

Starting on 20250114 (and still present as of 20250117), the fibers of one amplifier of petal=1 have lower effective times.
Maybe the effect was there before, but it was not visible in backup tiles.

On 20250114, this mostly appeared as lower-than-normal fibers, e.g. here for tileid=23058:
[image attached]

It then became more frequent, with all those fibers being flagged as LOWEFFTIME; e.g. here for tileid=27283 (20250115, expid=273667):
[image attached]

It is not systematic, though: successive exposures may or may not be affected.

On 20250117, it affected 5/5 of the observed dark tiles.

If relevant, recent work was done on r1 and z1.
I don't know if that is relevant here, but just in case: the z1 dark frame shows some features, e.g.:
[image attached]

@araichoor
Contributor Author

If I read @julienguy's slack message correctly (https://desisurvey.slack.com/archives/C01HNN87Y7J/p1737417891193489), this is due to incorrectly measured gains after the changes made to the r1 readout.
Incorrect gains can lead to incorrect throughput, which in turn triggers the LOWEFFTIME flag.

So, if I understand correctly, a re-processing with the correct gains would make those LOWEFFTIME flags disappear, right?

My question is:

  • I qa-validated those tiles; hence, if we make an mtl-update with the current processing, would all those LOWEFFTIME fibers be put back for re-observation?

If so, we may want to re-process (+re-qa) before mtl-updating?

For reference, the mtl-update ignores fibers flagged with NODATA or BAD_SPECQA|BAD_PETALQA:
https://github.com/desihub/desitarget/blob/5df2bfa454b51149df4d49240e962d6bf8c76437/py/desitarget/mtl.py#L658-L670
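
As a rough illustration of that filter (not the actual desitarget implementation; the table and mask objects below are assumptions), the logic is essentially:

```python
# Illustrative sketch only, not desitarget.mtl code.
# Assumes `zcat` is a redshift-catalog table with a ZWARN bitmask column, and
# `zwarn_mask` is a mapping from flag name to bit value defining NODATA,
# BAD_SPECQA and BAD_PETALQA.
import numpy as np

def fibers_ignored_by_mtl(zcat, zwarn_mask):
    """True where the mtl-update would not count the fiber's observation as usable."""
    nodata = (zcat["ZWARN"] & zwarn_mask["NODATA"]) != 0
    badqa = (zcat["ZWARN"] & (zwarn_mask["BAD_SPECQA"] | zwarn_mask["BAD_PETALQA"])) != 0
    return nodata | badqa
```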

The BAD_SPECQA flag is set here, based on the bad_qafstatus_mask:
https://github.com/desihub/desispec/blob/4d2afb2dcda7ae0b3215db017104d13ef3600957/py/desispec/zmtl.py#L266-L274
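
In spirit (this is a hedged paraphrase, not the exact desispec code), that step does something like:

```python
# Hedged sketch of the BAD_SPECQA logic, with all inputs passed in explicitly:
# any fiber whose QA fiberstatus intersects bad_qafstatus_mask (which includes
# LOWEFFTIME) gets the BAD_SPECQA bit added to its ZWARN.
import numpy as np

def flag_bad_specqa(zwarn, qafiberstatus, bad_qafstatus_mask, bad_specqa_bit):
    """Return a copy of ZWARN with BAD_SPECQA set for fibers hitting the bad QA mask."""
    zwarn = zwarn.copy()
    bad = (qafiberstatus & bad_qafstatus_mask) != 0
    zwarn[bad] |= bad_specqa_bit
    return zwarn
```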

and LOWEFFTIME is among the flags listed in bad_qafstatus_mask:
https://github.com/desihub/desispec/blob/4d2afb2dcda7ae0b3215db017104d13ef3600957/py/desispec/data/qa/qa-params.yaml#L40
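
One can double-check that from an installed desispec; a minimal sketch (the internal structure of qa-params.yaml is assumed, so the snippet just searches for the key):

```python
# Sketch: load qa-params.yaml from the installed desispec package and look up
# bad_qafstatus_mask, to confirm that LOWEFFTIME is part of it.
from importlib import resources
import yaml

def find_key(node, key):
    """Recursively search nested dicts/lists for `key`."""
    if isinstance(node, dict):
        if key in node:
            return node[key]
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        return None
    for child in children:
        found = find_key(child, key)
        if found is not None:
            return found
    return None

with resources.files("desispec").joinpath("data/qa/qa-params.yaml").open() as f:
    qa_params = yaml.safe_load(f)

print(find_key(qa_params, "bad_qafstatus_mask"))  # expect LOWEFFTIME among the flags
```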

@schlafly
Contributor

Yes, it would be better to reprocess first, for the reasons you describe. But we're only saving 5% of the fibers, so if we don't have enough tiles to observe we should go ahead with an MTL update anyway.

@geordie666
Collaborator

I'm comfortable with losing ~5% of the fibers. But I think we can also set the QA for the relevant tiles back to "none" while we re-process, and then do an MTL update without picking up the LOWEFFTIME tiles.

That's probably a reasonable solution, even to just recover ~5% of the fibers, as it's unlikely we have many, or any, tiles that overlap these LOWEFFTIME tiles anyway.
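
For concreteness, the "set QA back to none" step could look roughly like the sketch below; the file name, column names, and status values are assumptions for illustration, not the actual ops procedure:

```python
# Hypothetical sketch: reset the QA status of the affected tiles to "none" in a
# tile-status table, so the next MTL update skips them until they are re-processed
# and re-validated.
import numpy as np
from astropy.table import Table

AFFECTED_TILEIDS = [23058, 27283]  # example TILEIDs mentioned in this issue

tiles = Table.read("tiles-specstatus.ecsv")        # hypothetical tile-status file
sel = np.isin(tiles["TILEID"], AFFECTED_TILEIDS)   # rows for the affected tiles
tiles["QA"][sel] = "none"                          # mark them as not yet qa-validated
tiles.write("tiles-specstatus.ecsv", overwrite=True)
```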
