
hard-coded petal alignments data file #61

Open

joesilber opened this issue Apr 21, 2020 · 6 comments

@joesilber
Contributor

In ptl2fp.py, there is a utility function for getting petal alignment data:

def get_petal_alignment_data():
    global petal_alignment_dict
    if petal_alignment_dict is None:
        filename = resource_filename('desimeter', "data/petal-alignments.yaml")
        ifile = open(filename)
        petal_alignment_dict = yaml.safe_load(ifile)
        ifile.close()
    return petal_alignment_dict

The path to the alignment data file is hard-coded. This will cause versioning confusion in either of the following future cases:

  1. Running desimeter on the spare petal at LBNL.
  2. Running desimeter if any petals are replaced or retracted / inserted at the Mayall.

To avoid stateful confusion, I suggest that we keep multiple configurations in petal-alignments.yaml, each identified by a unique key.

The ptl2fp() function would then need an additional, required argument providing this key.
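
For illustration, a minimal sketch of what the loader could look like with such a key; the argument name alignment_key and the multi-configuration file layout are invented for this example, not existing desimeter API:

    import yaml
    from pkg_resources import resource_filename

    # Hypothetical: petal-alignments.yaml holds several configurations,
    # each under its own top-level key (e.g. '2019-12-16 KPNO').
    _petal_alignment_cache = {}

    def get_petal_alignment_data(alignment_key):
        """Return the petal alignment dictionary for the requested configuration key."""
        if alignment_key not in _petal_alignment_cache:
            filename = resource_filename('desimeter', "data/petal-alignments.yaml")
            with open(filename) as ifile:
                all_configs = yaml.safe_load(ifile)
            if alignment_key not in all_configs:
                raise KeyError("unknown petal alignment configuration '{}'".format(alignment_key))
            _petal_alignment_cache[alignment_key] = all_configs[alignment_key]
        return _petal_alignment_cache[alignment_key]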

@julienguy
Collaborator

This transform comes directly from the online DB. It's generated by the script load_petal_alignments_from_db. The plan has been to replace this with a transform we would fit with desimeter, but that has not been implemented yet.

I would suggest different yaml files for totally different hardware configurations, like KPNO vs the LBNL petal test, rather than different configs in the same file.

However, we could have several configurations in the yaml file to accommodate changes of petals, with date ranges for validity, like we have for the spectrograph calibrations, but I don't think we need that now.
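
If we did go the validity-range route, one possible shape is sketched below; the field names (valid_from, valid_to) and the selection helper are purely illustrative assumptions, not an existing desimeter format:

    import datetime
    import yaml

    # Hypothetical layout: a list of alignment sets, each with a validity window.
    EXAMPLE_YAML = """
    - name: KPNO-2019-12-16
      valid_from: 2019-12-16
      valid_to: null          # still current
      alignments: {0: {dx: 0.0, dy: 0.0, rot: 0.0}}
    - name: KPNO-2020-06-01
      valid_from: 2020-06-01
      valid_to: null
      alignments: {0: {dx: 0.01, dy: -0.02, rot: 0.0}}
    """

    def select_alignments(entries, when):
        """Return the alignment entry whose validity window contains 'when' (a date)."""
        best = None
        for entry in entries:
            start = entry['valid_from']
            end = entry['valid_to'] or datetime.date.max
            if start <= when <= end and (best is None or start > best['valid_from']):
                best = entry
        if best is None:
            raise ValueError("no petal alignment set valid on {}".format(when))
        return best

    entries = yaml.safe_load(EXAMPLE_YAML)
    print(select_alignments(entries, datetime.date(2020, 4, 21))['name'])  # KPNO-2019-12-16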

@joesilber
Contributor Author

So the ptl2fp() function would get an additional argument specifying the file name to use?

@julienguy
Collaborator

We have to think about hardware configurations more generally. It's not just petal-alignments.yaml but also the coordinates of positioners and fiducials in the table fp-metrology.csv , the transform from tangent plane to focal plane in raytrace-tan2fp.json , the default transform from FVC to focal plane single-lens-fvc2fp.json.
I would have different data subdirectories for totally different hardware configurations like LBNL test petal vs KPNO focal plane. We may use an environment variable for this, maybe with a default corresponding to the current data directory if the variable is not set.
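
A minimal sketch of the environment-variable idea, assuming a hypothetical variable name DESIMETER_DATA_CONFIG and subdirectory layout (neither is an existing desimeter convention):

    import os
    from pkg_resources import resource_filename

    def data_path(filename):
        """Resolve a data file inside an optional hardware-configuration subdirectory.

        If DESIMETER_DATA_CONFIG is set (e.g. 'lbnl-petal'), look in
        data/<config>/<filename>; otherwise fall back to the current data/ directory.
        (Hypothetical variable name and layout, for illustration only.)
        """
        config = os.environ.get("DESIMETER_DATA_CONFIG")
        if config:
            return resource_filename('desimeter', os.path.join('data', config, filename))
        return resource_filename('desimeter', os.path.join('data', filename))

    # Example: data_path('petal-alignments.yaml')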

@joesilber
Contributor Author

Yes, I see.

My inclination is to have a single file with all the different configuration options. Then when you run an analysis, you pick one of those options, which gives you a bunch of key/value pairs saying which data to use. In pseudo-code:

The configuration file would contain:

{'config a': {'petal-alignments': '2019-12-16 KPNO'},
 'config b': {'petal-alignments': '2020-04-21 LBNL'}
}

A data file like the petal-alignments file would contain:

{'2019-12-16 KPNO': {the_data_from_KPNO_that_day...},
 '2020-04-21 LBNL': {the_data_from_LBNL_that_day...}
}

But of course you guys have much more expertise in how you want these things handled. And I understand there may be configuration management tools you already have in place that you want to use.

I do think this issue may arise quite soon, when we start running desimeter for the spare petal at LBNL.

@sbailey
Collaborator

sbailey commented Apr 21, 2020

Documenting some thoughts before thinking some more:

  • Julien confirmed that everything that needs to be versioned is under py/desimeter/data, i.e. it is at least contained
  • the two things that need to be tracked are location and a timestamp (though see below for two possible flavors of timestamp)
  • If we have to make usability tradeoffs, let's try to keep it easy to do "now at KPNO", with overrides for "some other time at LBNL 6040", instead of making it equally difficult to do both.

Two different versions of timestamps:

  1. our current best knowledge of what the state was on a given datetime
  2. what we thought the state was on that datetime (i.e. what we used at the time even if we now know that was incorrect)

(1) is mostly what we want now, and we are still regularly post-facto updating the params to apply to previous data. If desimeter starts being used more for operations, then (2) becomes more important. Good versioning could cover that, but it would be a pain if we ever needed to replay what we thought the state was on N>>1 different nights without wanting to check out N>>1 different versions of desimeter. desimodel faced similar questions for fiberassign, and effectively only supports (2) without supporting (1).
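
To make the distinction concrete, here is one possible (entirely hypothetical) shape for records that carry both kinds of timestamp; nothing like this exists in desimeter today, and the field names are invented:

    import datetime

    # Hypothetical record: each entry notes both the interval it describes
    # (valid_from) and when that description was entered (knowledge_time).
    ENTRIES = [
        {'valid_from': datetime.date(2019, 12, 16),
         'knowledge_time': datetime.datetime(2019, 12, 16, 0, 0),
         'alignments': 'original KPNO values'},
        {'valid_from': datetime.date(2019, 12, 16),
         'knowledge_time': datetime.datetime(2020, 4, 1, 0, 0),
         'alignments': 'post-facto refit of the same interval'},
    ]

    def best_knowledge(entries, obs_date):
        """Case (1): our latest knowledge of the state on obs_date."""
        candidates = [e for e in entries if e['valid_from'] <= obs_date]
        return max(candidates, key=lambda e: (e['valid_from'], e['knowledge_time']))

    def knowledge_at(entries, obs_date, as_of):
        """Case (2): what we believed about obs_date at time as_of."""
        candidates = [e for e in entries
                      if e['valid_from'] <= obs_date and e['knowledge_time'] <= as_of]
        return max(candidates, key=lambda e: (e['valid_from'], e['knowledge_time']))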

@joesilber
Contributor Author

I like a single 2D master table, with one column per configurable option and one row per config name. An option is sometimes a parameter, but more often a unique key that references some lower-level dataset. If you want to run a different state, you add a new row. But I'm no expert on this stuff.
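
As a concrete sketch of what such a master table could look like (the column names and values below are invented for illustration):

    import csv
    import io

    # Hypothetical master table: one row per configuration name, one column per
    # key into a lower-level dataset.
    MASTER_CSV = """\
    config,petal_alignments,fp_metrology,tan2fp,fvc2fp
    kpno_2019-12-16,2019-12-16 KPNO,fp-metrology.csv,raytrace-tan2fp.json,single-lens-fvc2fp.json
    lbnl_2020-04-21,2020-04-21 LBNL,fp-metrology-lbnl.csv,raytrace-tan2fp.json,single-lens-fvc2fp.json
    """

    def get_config(name):
        """Return the row of the master table for the requested configuration name."""
        rows = csv.DictReader(io.StringIO(MASTER_CSV.replace("    ", "")))
        for row in rows:
            if row['config'] == name:
                return row
        raise KeyError("unknown configuration '{}'".format(name))

    # Example: get_config('lbnl_2020-04-21')['petal_alignments'] -> '2020-04-21 LBNL'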
