Added isd_generate command line program. #437
Conversation
Looks good, just needs a test. You should be able to just mock out ale.loads and check the logic in the entry point.
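For illustration, a minimal sketch of such a test, assuming the entry point lives at `ale.isd_generate` with an argv-style `main()` and writes `<input stem>.json` next to the input (the module path, signature, and output naming are assumptions, not necessarily the PR's actual layout):

```python
from unittest import mock

from ale import isd_generate  # assumed module path for the new entry point


def test_main_writes_isd(tmp_path):
    # The mocked ale.loads never reads this file, so any stand-in will do.
    cube = tmp_path / "image.cub"
    cube.write_text("fake cube")

    with mock.patch("ale.loads", return_value='{"fake": "isd"}') as m_loads:
        isd_generate.main([str(cube)])  # assumed argv-style main()

    m_loads.assert_called_once()
    # Assumes the tool writes <input stem>.json next to the input file.
    assert (tmp_path / "image.json").read_text() == '{"fake": "isd"}'
```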
try:
    file_to_isd(args.input[0], args.out, log_level=log_level)
except Exception as err:
    # Seriously, this just throws a generic Exception?
This should probably be an issue
fixes #436
Not sure what you mean by "needs a test"? I have verified that this works by setting up a conda development environment based on environment.yml, and then done a …

If you are asking me to verify that …

I'll also note that the semantics that this module uses for …
I mean this needs a regression test in the pytest suite. For the exception, I'm going to make an issue so that we can get that modified to be a more specific exception.
@rbeyer Thanks for the PR - it was awesome to see this extension to the code and to get to test it! I just tested this on some attached Kaguya data and wonder how this might be made to support that use case. Everything below might be a product of my own misunderstanding of how best to accomplish getting an ISD from an ISIS cube file. I am having to use both the lbl and the ISIS cube in order to get a valid ISD generated, like so:
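Roughly, the pattern being described looks like this (a minimal sketch, assuming ale's `util.generate_kernels_from_cube` helper and the `props` argument to `ale.loads`; filenames are placeholders, not the attached data):

```python
import ale
from ale import util

cube = "kaguya_tc.cub"   # spiceinit'd ISIS cube (placeholder name)
label = "kaguya_tc.lbl"  # original PDS3 label (placeholder name)

# Harvest the kernel set that spiceinit attached to the cube.
kernels = util.generate_kernels_from_cube(cube, expand=True)

# Generate the ISD from the label, using those kernels.
isd_string = ale.loads(label, props={"kernels": kernels})

with open("kaguya_tc.json", "w") as f:
    f.write(isd_string)
```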
So, this requires that I pass a few more args than this PR supports passing to the ale.loads func. @jessemapel, is what I am doing necessary to be able to generate an ISD from an ISIS cube? And if so, how could @rbeyer extend the bin script so that it is usable in this use case? (I am assuming my use case is pretty standard - I have an L1 ISIS cube and I would like a CSM ISD to accompany it.) It does strike me as odd that I am having to pass both the cube and the lbl file.
My understanding of ale is minimal, so I can't speak to how typical or not your situation is. I only tested this on LROC NAC images, and …
🤷‍♂️ Not 100% sure here either. I absolutely LOVE the entry point and the ease of use of this, so if it covers the majority of the use cases, I think it's great. If we can extend it slightly without making it a gnarly Swiss Army knife, that would be cool. I suspect we will just have to wait for @jessemapel or @Kelvinrr or @acpaquette.
@jlaura's use case is a little unique in that what you are doing is explicitly setting the kernels you want to use for your ISD generation. So, you are using the cube to determine the kernel set and then generating the ISD from the label and those kernels. I'm not sure of a good way to handle this case within the entry point because you can also do stuff like pass a list of kernels to use directly. For example, generate ISDs from all of these images using a single specific meta kernel.

For Kaguya TC this is largely a workaround for the fact that we don't have a driver to generate ISDs directly from the spiceinit'd cube. All we have is a driver to generate ISDs from a PDS3 label and SPICE kernels. LRO NAC, on the other hand, has drivers for: PDS3 label w/ SPICE kernels, ISIS label w/ SPICE kernels, and a spiceinit'd cube.
I don't see any reason why this can't be accommodated. Surely, there are more complicated ways to use the library, but it seems like it should be simple to offer a "-kernel" argument that will either take an ISIS cube from which to harvest kernels (via …

We would run …
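A minimal sketch of how such an option could hang together, assuming an argparse-based script and ale's `util.generate_kernels_from_cube`; the flag name, the cube detection, and the output naming are illustrative, not necessarily what this PR settles on:

```python
import argparse

import ale
from ale import util

parser = argparse.ArgumentParser()
parser.add_argument("-k", "--kernel",
                    help="ISIS cube to harvest kernels from, or a SPICE "
                         "meta-kernel to use directly.")
parser.add_argument("input", nargs="+", help="Files to convert to ISDs.")
args = parser.parse_args()

props = {}
if args.kernel:
    if args.kernel.endswith(".cub"):
        # Harvest the kernel set that spiceinit attached to this cube.
        props["kernels"] = util.generate_kernels_from_cube(args.kernel, expand=True)
    else:
        # Otherwise assume it is a meta-kernel (or single kernel) path.
        props["kernels"] = [args.kernel]

for f in args.input:
    with open(f.rsplit(".", 1)[0] + ".json", "w") as out:
        out.write(ale.loads(f, props=props))
```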
That approach works well for a single image, but with multiple images it's harder to determine what to do. The best solution right now is probably to just accept a meta-kernel and hope that it has enough kernel coverage to cover all of the required images.

For parsing the kernels off a spiceinit'd cube, I am concerned about having adequate kernel coverage for all of the input images. For some missions, you can get away with all the images within a month or even a year, but that's not guaranteed. We really need a per-image kernel set. @Kelvinrr has been working on that with SpiceQL and it's getting close to ready. I suspect it could solve this problem for us.
I was playing around trying to see if I could make a --kernel option work, but when I try and run this on the data that Jay attached, I get a …
Such a tool will have to take into account that various datasets need various flavors of the code, as Jesse says above. So far I have catalogued three "subspecies" at: https://stereopipeline.readthedocs.io/en/latest/examples.html#creation-of-csm-camera-files

Hopefully at some point a single program can quietly take care of all of these scenarios, assuming an input .IMG file is present (a .cub can be created, and an .LBL can be fetched if needed for some cases). The user should not have to know that different strategies are used for different sensors.
Oleg, I wrote this based off of the scripts you provided in the ASP docs, and I thought it was crazy that there were a bunch of them, and we were relying on users to copy'n'paste code, and figured that this was something that should just be part of ALE. For ease of reference, I'll call those three "subspecies" the dawn case, the ctx case (although the URL says lronac, the instructions above that section are for CTX images), and the mrf case. The code in this PR covers all three cases via these command line patterns and successfully generates JSON files:
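For concreteness, the invocations sketched below are roughly what those patterns look like, assuming the -k/--kernel option discussed in this thread and placeholder filenames (check isd_generate --help for the exact interface):

```sh
# "Simple" case (CTX, LRO NAC): a spiceinit'd cube is all that is needed.
isd_generate ctx.cub

# Dawn- and MiniRF-style cases: point -k at a spiceinit'd cube to harvest
# kernels, and generate the ISD from the image's label.
isd_generate -k dawn.cub dawn.IMG
isd_generate -k mrf.cub mrf.lbl
```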
FYI, I originally did test this on LRO NAC images, and the use case for them is the "simple" case, identical to the CTX case. In fact, it is super-fun in a directory of LROC spiceinit'ed cube files to just run …

If I try the "simple" case with dawn.cub or mrf.cub, I get a "No Such Driver for Label" error. I don't know enough about MiniRF, Dawn, or ALE--quite honestly--to know why you need to separately run …

If there's guidance to give in the documentation here, I'd be happy to include it. But at the very least this entry point can handle those cases (if you know how to provide the right arguments under the right circumstances).

That being said, the example files that Jay provided still yield an unhelpful "No Such Driver for Label" error, so I still can't quite test that use case. It doesn't even work using his exact code. Maybe because the cube files were made on another machine, and I don't have the Kaguya kernels on my machine? Not sure.

Anything else holding up this PR? It's just kind of been sitting here for months.
Nuts, I realized I hadn't pushed the most recent changes to this fork/branch (with the -k option ability) and I may have screwed the history up. I'll see if I can repair it here and make this clean. Apologies.
There we go (I think).
Aha! Jay's Kaguya example was the "your-labels-are-not-on-my-machine" problem. Once I downloaded them, and the original PDS data:
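Roughly (placeholder filenames, using the -k option from this PR):

```sh
isd_generate -k kaguya_tc.cub kaguya_tc.lbl
```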
And this follows the dawn pattern--which I don't understand--but at least it allows those images to be processed, too. As long as you understand why you need to call it like this (which I don't), which ultimately comes down to why you need to call …
The short answer is that under the hood each sensor and its data has its own specifics. Upon starting, there must be a check whether, for a given sensor, the user provided what it expects. I pointed that out here: #451
I will be modifying the repo history as part of #440 and this PR still needs tests. If you can get the tests added by the end of the day Friday, May 26, I can work on getting this merged. Otherwise, it will need to be rebased onto the new master branch. I can perform the rebase if you allow maintainers permission to modify your PR.
Can you not see the tests?
Sorry, bit of a fever right now. I think this is good to go in light of #440. The discussion about how to use this with different kernel selection can be addressed in a different PR.
* feat(isd_generate.py): added entry point for command line program to generate ISDs.
* feat(isd_generate.py): Added --version option which indicates the ale version.
* feat(isd_generate.py): Added the ability to provide a kernel file.
This adds isd_generate as a Python console script entry point that will be made available to users to easily convert images or files to a corresponding JSON ISD file.
It could certainly be significantly simpler, but this has some multi-processing fanciness for when a user wants to generate a whole batch of ISD files, and then also some logging (which ended up being complicated because of the multiprocessing, etc.).
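The batch path is conceptually along these lines (a simplified sketch, assuming a file_to_isd helper like the one visible in the review excerpt above; the real script also threads logging through the worker processes):

```python
import concurrent.futures
from pathlib import Path

import ale


def file_to_isd(path, out=None):
    # Simplified stand-in for the script's helper: generate the ISD and
    # write it next to the input file as <stem>.json.
    out = out or str(Path(path).with_suffix(".json"))
    Path(out).write_text(ale.loads(path))
    return out


def main(paths, max_workers=None):
    # Fan the conversions out across a pool of worker processes.
    with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as pool:
        for result in pool.map(file_to_isd, paths):
            print(f"Wrote {result}")
```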
I noticed that there was no AUTHORS file for me to contribute to or fancy PR template, but I declare that I am dedicating any and all copyright interest in this software to the public domain. I make this dedication for the benefit of the public at large and to the detriment of my heirs and successors. I intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.