Reviewer 1 response #74
Comments
Hmmm, yeah it looks a bit messy. What about just putting maps of T/S bias in the supplementary?
Yep. I agree. Looks like crap :). Stash in supplementary. I am curious to see how 5d looks ... e.g. is CDW outside of upwelling range in obs?
Re: Fig. 7 sea ice evaluation relative to obs. Below, the black contour is the annual mean sea ice extent (conc = 0.15) in the RYF control (green) and the 1985-1995 mean from NOAA/G02202_V3 obs data (cyan). We would need to extend the map to ~55S to show the full contour, at the expense of the clarity of the thickness and vectors. Also, others may want to see the extent change in the perturbation. Thoughts?
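For reference, a minimal sketch of the kind of overlay described above (not the actual Fig. 7 code): compute annual-mean concentration for the control and the 1985-1995 NOAA/G02202_V3 mean, then draw the 0.15 contour from each. The file paths and variable names here are placeholders and would need to match the real datasets, which may also be on curvilinear grids.

```python
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Hypothetical file/variable names -- adjust to the actual control output
# and the NOAA/G02202_V3 product.
ctrl = xr.open_dataset("ryf_control_iceconc_monthly.nc")["aice"].mean("time")
obs = (xr.open_dataset("noaa_g02202_v3_south_monthly.nc")["seaice_conc"]
       .sel(time=slice("1985", "1995")).mean("time"))

ax = plt.axes(projection=ccrs.SouthPolarStereo())
ax.set_extent([-180, 180, -90, -55], ccrs.PlateCarree())
ax.coastlines()

# 0.15 concentration contour is the conventional "extent" edge
ax.contour(ctrl["lon"], ctrl["lat"], ctrl, levels=[0.15],
           colors="green", transform=ccrs.PlateCarree())
ax.contour(obs["lon"], obs["lat"], obs, levels=[0.15],
           colors="cyan", transform=ccrs.PlateCarree())
plt.show()
```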
Yeah, I think my vote is for keeping these figures clean and putting the validation in the supplementary again?
@PaulSpence for the WOA comparison plots above, sorry to be picky, but it's maybe not so good to compare with the model initial conditions, because these use January data in the upper ocean, so they will be biased compared with a model annual average. There are monthly averages of WOA data on the model grid here:
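A quick sketch of what that comparison could look like, assuming the monthly WOA climatology has already been regridded to the model grid (the file paths and variable names below are placeholders, not the actual files):

```python
import xarray as xr

# Hypothetical paths: monthly WOA climatology on the model grid, plus an
# annual-mean temperature field from the RYF control.
woa = xr.open_mfdataset("woa_monthly_on_model_grid_*.nc", combine="by_coords")
woa_temp_annual = woa["temp"].mean("time")   # average over all 12 climatological months

model_temp = xr.open_dataset("ryf_control_temp_annual_mean.nc")["temp"]
bias = model_temp - woa_temp_annual          # model-minus-obs temperature bias
```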
Also, the bottom T/S from WOA looks really wacky. Why is it so cold in the southern Amundsen / Bellingshausen? And Totten and Vincennes Bay are ridiculously warm! For comparison, here is Schmidtko: I reckon we should compare to Schmidtko instead. There is code here for loading and plotting Schmidtko data.
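A minimal sketch of loading and plotting the Schmidtko bottom data (this is not the linked code; the file name and column order below are assumptions and should be checked against the actual data file):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed column order -- check against the actual Schmidtko file header.
cols = ["lon", "lat", "bottom_temp", "bottom_salt"]
sch = pd.read_csv("schmidtko_bottom_TS.txt", sep=r"\s+",
                  names=cols, comment="#")

sc = plt.scatter(sch["lon"], sch["lat"], c=sch["bottom_temp"],
                 s=2, cmap="RdBu_r")
plt.colorbar(sc, label="Bottom temperature (°C)")
plt.title("Schmidtko bottom temperature (note the East Antarctic gap)")
plt.show()
```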
Downside of the Schmidtko data is that there are none in East Antarctica (~90-130E).
Probably that means there's not enough bottom data there for it to be worth a comparison then? The Pauthenet climatology only goes down to 400 m for similar reasons. We are also comparing to WOA on the 1000 m isobath, so we still capture some of the bottom outflow properties compared to WOA obs. The WOA just looks crazy to me on the shelf.
Reviewer 1: Lines 103-115: I am aware of Kiss et al., 2020. However, I still recommend authors show (1) the difference in their CONTROL compared to Kiss et al., 2020 and (2) model evaluation compared to observations. I recommend showing simulated T and S for coastal regions, comparison of sea ice extent, sea ice formation rate, AABW production, etc. Even if it is similar to Kiss et al., model evaluation is crucial for readers to understand the meanings of sensitivity experiments. If authors use the same run compared to Kiss et al., I would like authors to repeat this information for new readers and cite papers including figure numbers.
Reply: Paul: We use the RYF, which isn't in Kiss et al. Is there another paper that evaluates the RYF in this region that we can cite, e.g. Stewart et al.? If not, I suggest: add observational temp, salt and rho contour lines to Fig. 5a-c, and add an obs contour line to Fig. 5d. Which obs T/S do we use? Just the model initial conditions? Also add sea ice extent for the control and obs to Fig. 7.