Integrate Model Variable Renaming Sprint changes in to GDASApp yamls and templates #1362
Started from g-w PR #2992 with
Changes to yamls (templates) thus far include
Using the updated hashes, I am puzzled by the current failure in the variational analysis job.
A check of the failing job suggested a local change.
With this local change in place, the init job ran to completion. The var job successfully ran 3dvar, assimilating only sondes. The job failed the reference check since the reference state assimilates amsua_n19 and sondes. Has the default behavior for radiance data assimilation changed? Do we now require numerous surface fields to be available? This makes sense if one wants to accurately compute surface emissivity. Surface conditions can also be used for data filtering and QC. This is a change from previous JEDI hashes.
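A minimal sketch of such a check, assuming illustrative field names and a hypothetical file name (the real list of required fields should come from the failing yaml or the traceback, not this snippet):

```python
import netCDF4  # assumes the netCDF4-python package is available

# Illustrative surface fields only; the actual list requested by ufo/CRTM
# after the renaming sprint should be taken from the failing yaml/traceback.
candidate_fields = ["tmpsfc", "land", "icec", "snod", "veg", "sotyp"]

# Hypothetical file name; substitute the actual surface background file.
with netCDF4.Dataset("gdas.t00z.sfcf006.nc") as nc:
    missing = [f for f in candidate_fields if f not in nc.variables]

print("missing surface fields:", missing or "none")
```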
@RussTreadon-NOAA Is the failure in Jedi.remove_redundant()? Just so I know how I can fix #2992.
@DavidNew-NOAA Yes, the traceback mentions
If you can fix this in g-w PR #2992, great!
@DavidNew-NOAA : Updated working copy of
@RussTreadon-NOAA That newest commit didn't have a fix yet for this ob issue. I will work on it this morning.
@RussTreadon-NOAA Actually, I just committed the changes you suggested. There's really no reason to mess with
Forgot a line. Make sure it's commit 7ac6ccb2bbf88b25fb533185c5d481cd328415ee (latest).
Thank you @DavidNew-NOAA.
@danholdaway, @ADCollard, and @emilyhcliu: When I update GDASApp JEDI hashes in
Updating the JEDI hashes brings in changes from the Model Variable Renaming Sprint. What changed in fv3-jedi, ufo, or vader that now requires the variables listed on the
This is failing because this if statement is not true when it should be, likely because a variable is not being recognized as present. Can you point me to your GDASApp and jcb-gdas code?
@danholdaway: Here are the key directories and the job log file (all on Hercules):
Prints added to
There is no
Does the cube history file contain all the information we need to define surface characteristics for radiance assimilation?
In jcb-gdas you changed surface_geopotential_height to hgtsfc and tsea to tmpsfc. Perhaps try changing them as:
Switching from the old short name to the IO name may have resulted in crossed wires.
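A quick way to see where the old short names still appear is a scan like the sketch below (the rename pairs come from this thread; the template directory path is an assumption):

```python
from pathlib import Path

# Old short name -> new IO name (pairs mentioned in this thread).
renames = {
    "surface_geopotential_height": "hgtsfc",
    "tsea": "tmpsfc",
}

template_dir = Path("parm/jcb-gdas")  # assumed location of the templates

for yaml_file in sorted(template_dir.rglob("*.yaml*")):
    text = yaml_file.read_text()
    for old, new in renames.items():
        if old in text:
            print(f"{yaml_file}: still references '{old}' (renamed to '{new}')")
```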
I think the sst change is because of https://github.com/JCSDA-internal/fv3-jedi/pull/1258 rather than variable naming conventions.
Thank you @danholdaway for pointing me at fv3-jedi PR #1258. I see there was confusion over the name used for the skin temperature. This confusion remains when I examine our files. Our cube sphere surface history files contain the following fields
Neither is present. Our tiled surface restart files contain the following fields starting with
The restart surface tiles contain
Our atmospheric variational and local ensemble yamls now use
The restart tiles have what appear to be fields for temperature over various surface types
Which temperature, or combination of temperatures, should we pass to CRTM? I sidestepped this question and did a simple test. I renamed
I can replace the variable name
Tagging @emilyhcliu, @ADCollard, @CoryMartin-NOAA, and @DavidNew-NOAA. Two questions:
The response to question 1 can be captured in this issue. Resolution of question 2 likely needs a new issue.
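For reference, a minimal sketch (file names are hypothetical and netCDF4-python is assumed) of how to list the temperature-like fields present in a cube sphere history file versus a restart tile:

```python
import netCDF4  # assumes the netCDF4-python package is available

def vars_starting_with_t(path):
    """List variable names beginning with 't' (rough skin/soil temperature scan)."""
    with netCDF4.Dataset(path) as nc:
        return sorted(v for v in nc.variables if v.startswith("t"))

# Hypothetical file names; substitute the actual cube sphere history file
# and surface restart tile from the experiment directory.
print("history :", vars_starting_with_t("gdas.t00z.sfcf006.nc"))
print("restart :", vars_starting_with_t("20211221.000000.sfc_data.tile1.nc"))
```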
@RussTreadon-NOAA the issue might be in the mapping between tmpsfc and the long name in the FieldMetadata file. Do you know where that is coming from? It might be a fix file I guess.
@danholdaway, you are right. I spent the morning wading through code, yamls, parm files, and fix files. I found the spot to make the correct linkage between fv3-jedi source code and our gfs cube sphere history files. With the change in place, the variational and local ensemble DA jobs passed. The increment jobs failed. I still need to update the yamls for these jobs.
The file I modified is
with
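For anyone making a similar change, a rough way to locate the relevant entries is a text scan of the field metadata fix file (a sketch only; the path is an assumption and should be replaced with the actual file noted above):

```python
from pathlib import Path

# Assumed path to the fv3-jedi field metadata fix file used by GDASApp;
# substitute the actual file referenced by the variational yaml.
fieldmeta = Path("fix/fv3jedi/fieldmetadata/gfs-history.yaml")

for lineno, line in enumerate(fieldmeta.read_text().splitlines(), start=1):
    if any(name in line for name in ("tmpsfc", "tsea", "skin")):
        print(f"{fieldmeta.name}:{lineno}: {line.strip()}")
```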
Thanks @RussTreadon-NOAA, really nice work digging through. If that fix file came directly from fv3-jedi (and was used in the fv3-jedi tests), there wouldn't have been any work to do, so perhaps we should look into doing that.
Agreed! We've been bitten by this disconnect more than once.
Hercules test
The
@apchoiCMD, do you have a branch with changes that allow these tests to pass? The
@guillaumevernieres, do you know where or what needs to be changed in yamls or fix files to get the marinevar test to pass? The log file for the failed job is
g-w CI for DA
Successfully ran the C96C48_ufs_hybatmDA g-w CI on Hercules. C96C48_hybatmaerosnowDA and C48mx500_3DVarAOWCDA fail. The C48mx500_3DVarAOWCDA failure is expected given the ctest failures. The C96C48_hybatmaerosnowDA failure is in the 20211220 18Z
It is not clear from the traceback what the actual error is. Since this installation of GDASApp includes JEDI hashes with changes from the Model Variable Renaming Sprint, one or more yaml keywords or fix file keywords most likely need to be updated. @jiaruidong2017, @ClaraDraper-NOAA: Any ideas what we need to change in JEDI snow DA when moving to JEDI hashes that include changes from the Model Variable Renaming Sprint? The log file for the failed job is
test_gdasapp update
Installed g-w PR #2992 on Hercules. Specifically, g-w branch
Log files for failed marine 3DVar jobs are in
This appears to be a model variable renaming issue. Correcting the bmat job may allow the subsequent marine jobs to successfully run to completion. The log file for the failed marine hyb job contains
This error may indicate that it is premature to run the marine letkf ctest. This test may need updates from g-w PR #3401. If true, this again highlights the problem we face with GDASApp getting several development cycles ahead of g-w. Tagging @guillaumevernieres, @AndrewEichmann-NOAA, and @apchoiCMD for help in debugging the marine DA and bufr2ioda_insitu failures.
g-w CI update
Installed g-w PR #2992 on Hercules. Specifically, g-w branch
The following g-w DA CI was configured and run:
prgsi (1) and prjedi (2) successfully ran to completion
praero (3) and prwcda (4) encountered DEAD jobs that halted each parallel.
The log files for the DEAD jobs are
Tagging @jiaruidong2017, @ClaraDraper-NOAA, and @CoryMartin-NOAA for help with
Executable
@guillaumevernieres & @AndrewEichmann-NOAA: Is the failure of
I checked the log file. The weird thing to me was that fregrid was working for the background conversion but failed at the increment conversion. I guess the errors come from the generated increment file. I will continue to investigate it. @CoryMartin-NOAA @ClaraDraper-NOAA Any suggestions?
It's possible the increment variable name changed? Regardless,
@CoryMartin-NOAA, thanks for the update. Are the changes you refer to those in GDASApp issue #1324 and g-w issue #3002? Do we have branches with these changes to see if they work with the updated GDASApp JEDI hashes? Turning off g-w CI C96C48_hybatmaerosnowDA on all machines is certainly an option. Doing so also removes aerosol DA from g-w CI.
I compared the background and increment files as below.
The increment file:
Part of the background file:
The big difference is the dimension variables
Therefore, it seems to me that all variables, including the dimension variables, should be included in the netCDF data file under the new requirements. Any suggestions?
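A quick way to spot this difference programmatically (a minimal sketch; the file names are placeholders and netCDF4-python is assumed) is to list the dimensions in each file that have no same-named coordinate variable:

```python
import netCDF4  # assumes the netCDF4-python package is available

def dims_without_coord_vars(path):
    """Return dimensions that lack a same-named (coordinate) variable."""
    with netCDF4.Dataset(path) as nc:
        return sorted(d for d in nc.dimensions if d not in nc.variables)

# Placeholder file names; point these at the actual background and
# increment files from the failing snow DA cycle.
for label, path in [("background", "bkg.nc"), ("increment", "inc.nc")]:
    missing = dims_without_coord_vars(path)
    print(f"{label}: dimensions without coordinate variables -> {missing or 'none'}")
```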
@DavidNew-NOAA any insights here?
@CoryMartin-NOAA @jiaruidong2017 FV3-JEDI PR #1289 modified the FMS2 IO interface to ensure that for each dimension, a dimension variable is written. Which FV3-JEDI hash was used to generate this increment?
@DavidNew-NOAA: The most recent tests reported in this issue use GDASApp
Hmm, this is strange. I need to study the FMS2 IO code a bit and figure out why the block that writes dimension variables may not be activated.
I'm also confused as to why variable renaming would have caused this feature to fail.
The Model Variable Renaming Sprint may not be responsible for the
Updating JEDI hashes brings in a lot of other changes. For example, I think the
I will build GW with the GDAS hash you mentioned and dig into this.
marinebmat failure
With these changes in place, marinebmat got further but eventually died with
This is a confusing error message. I am unable to examine
Not sure where to go from here. Any suggestions @guillaumevernieres or @AndrewEichmann-NOAA?
@apchoiCMD, the
This is not correct. The GDASApp build directory contains
I manually changed the
GDASApp
Why does
@RussTreadon-NOAA Thanks for letting us know. I will let @givelberg know what is going on inside of
Had a quick chat with @givelberg, and I expect that he will work on it.
I'll try to start on this before the end of the week, @RussTreadon-NOAA. The issue you report above is coming from code that needs to be updated in the GDASApp.
Thank you @guillaumevernieres. I never thought to look at GDASApp code. I now see that
@RussTreadon-NOAA I ran
I realized that I introduced a bug in FV3-JEDI PR #1289, which is why the attribute names are so strange. I just created FV3-JEDI PR #1304 to fix it, and I get the following output:
Either way, I'm getting variables associated with each axis. I'm not sure why your run is missing these variables. Perhaps you can double check your hash. However,
With changes from fv3-jedi PR #1304, job
Thanks @DavidNew-NOAA!
Several JEDI repositories have been updated with changes from the Model Variable Renaming Sprint. Updating JEDI hashes in
sorc/
requires changes in GDASApp and jcb-gdas yamls and templates. This issue is opened to document these changes.