alternative way to adjust area issues in FATES twostream #1310
base: main
Conversation
This looks good. I tested this PR with one of the cases that encountered this error. Without this fix, the case fails with
219: forrtl: severe (408): fort: (2): Subscript #3 of the array PARPROF_PFT_DIR_Z has value 12 which is greater than the upper bound of 11
but with this fix it runs successfully well past that point.
This looks good, although I have one question about a possibly necessary check against a negative area.
! call endrun(msg=errMsg(sourcefile, __LINE__))
! Test out a simpler way to correct area errors
if(do_simple_area_correct) then
   twostr%scelg(ican,icolmax)%area = twostr%scelg(ican,icolmax)%area - (canopy_frac(ican)-1._r8)
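For context, here is a minimal standalone sketch (not FATES code) of the arithmetic this correction performs: when the element areas of a canopy layer sum to slightly more than 1, subtracting the excess from the largest element restores the sum to exactly 1. The names canopy_frac and icolmax mirror the excerpt above; everything else is invented for illustration.

program simple_area_correct_demo
  implicit none
  integer, parameter :: r8 = selected_real_kind(12)
  real(r8) :: area(3), canopy_frac
  integer  :: icolmax

  area = [0.50_r8, 0.30_r8, 0.20_r8 + 1.0e-9_r8]   ! layer is overfull by 1e-9
  canopy_frac = sum(area)
  icolmax = maxloc(area, dim=1)                    ! index of the largest element

  ! the correction from the excerpt: remove the excess from the largest element
  area(icolmax) = area(icolmax) - (canopy_frac - 1._r8)

  print *, 'corrected layer area sum = ', sum(area) ! should print 1.0, up to roundoff
end program simple_area_correct_demo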
The canopy_frac(ican) here is going to be greater than 1 at this point in the code, correct? Is it possible that the value of canopy_frac(ican)-1._r8 could be greater than area? If so, we might end up with a negative final area and should have a limiter against this.
The area being used here is from the element that has the largest area of the group, and the group sums to 1. In a worst-case scenario, let's say we had 1000 cohorts all sharing one layer; then the area of the largest element would be no less than 0.001, so the error would have to be larger than 0.001 to drive it negative. Therefore, I don't think this should be a problem, but it shouldn't hurt to add a check and complain/fail gracefully if this does happen.
I added in a check and put it behind a debug flag for now. I can run some tests to see whether it triggers, and then set the debug flags back to false so that they don't slow down the run. Note that we really should not have layers that are that overfull, because we try to make the cohorts fit correctly during the PPA demotion/promotion scheme. These methods here just make the area convergence tolerance even tighter, so that it doesn't introduce energy balance errors into the model.
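For readers following along, the guard could look roughly like the sketch below. This is an illustration written in the style of the excerpt earlier in the thread, not the actual committed code; debug, fates_log, endrun, errMsg, and sourcefile are assumed to be the usual FATES utilities.

if(do_simple_area_correct) then
   ! refuse to continue if the correction would make the largest element's area negative
   if(debug .and. (canopy_frac(ican)-1._r8) > twostr%scelg(ican,icolmax)%area) then
      write(fates_log(),*) 'simple area correction would drive element area negative'
      write(fates_log(),*) 'excess area: ',canopy_frac(ican)-1._r8
      call endrun(msg=errMsg(sourcefile, __LINE__))
   end if
   twostr%scelg(ican,icolmax)%area = twostr%scelg(ican,icolmax)%area - (canopy_frac(ican)-1._r8)
end if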
Description:
It's possible that the cohorts take up more horizontal area than there is ground, which is simply a result of numerical imprecision elsewhere in FATES. If this happens, we need to force the total area of the two-stream scattering elements to be exactly 1 rather than slightly more. One way is to chop some area off the largest scattering element (the simple way); this is simple, but it does not conserve total scattering depth. The other way (which we already allow) is to chop off that area but also increase the LAI+SAI. The latter is conservative, but it creates indexing problems when the LAI and SAI are increased enough that they use more depth bins than FATES anticipated.
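To make the trade-off concrete, here is a small standalone sketch (not FATES code; all names are invented for the example) comparing the two corrections. The conservative form preserves the product area*(LAI+SAI), which is exactly what can push LAI+SAI into more depth bins than were allocated:

program area_correction_contrast
  implicit none
  integer, parameter :: r8 = selected_real_kind(12)
  real(r8) :: area, lai_sai, excess, scale

  area    = 0.40_r8        ! largest scattering element of an overfull canopy layer
  lai_sai = 2.5_r8         ! its LAI + SAI
  excess  = 1.0e-3_r8      ! amount by which the layer's element areas exceed 1

  print *, 'original scattering depth:', area * lai_sai

  ! simple correction: trim the area only; area*(lai+sai) shrinks slightly
  print *, 'simple correction        :', (area - excess) * lai_sai

  ! conservative correction: trim the area but scale lai+sai up so that
  ! area*(lai+sai) is unchanged; the larger lai+sai is what can spill into
  ! more depth bins than FATES allocated
  scale = area / (area - excess)
  print *, 'conservative correction  :', (area - excess) * (lai_sai * scale)
end program area_correction_contrast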
Collaborators:
@jennykowalcz
Expectation of Answer Changes:
Good chance there will be very slight differences in two-stream tests. There should be no scientifically meaningful impacts to results.
Checklist
If this is your first time contributing, please read the CONTRIBUTING document.
All checklist items must be checked to enable merging this pull request:
Contributor
Integrator
Test Results:
TBD
CTSM (or) E3SM (specify which) test hash-tag:
CTSM (or) E3SM (specify which) baseline hash-tag:
FATES baseline hash-tag:
Test Output: