The following five steps will guide you through the process of cloning, building and running the GODAS workflow.
The instructions below are for csh and bash.
During this process, the following variables and directories will be defined:
- CLONE_DIR: The directory where the system is cloned; user-defined path.
- MACHINE_ID: The name of the HPC on which the system is installed; currently supported: hera and orion.
- BUILD_COMPILER: The compiler that you would like to use; the options are intel18 (Hera) or intel19 (Orion), depending on the machine.
- EXPROOT: The directory where the EXPDIR is created, storing the workflow configuration files; user-defined path.
- COMROOT: The directory where the input and output of jobs are stored; user-defined path.
- DATAROOT: The directory where each job runs; user-defined path.
- RUNCDATE: The directory where the system runs; optionally defined by the user.
- setenv CLONE_DIR PATH/OF/YOUR/CHOICE
  or export CLONE_DIR=PATH/OF/YOUR/CHOICE
- setenv MACHINE_ID hera
  or export MACHINE_ID=orion
  setenv BUILD_COMPILER intel18
  or export BUILD_COMPILER=intel19
- git clone https://github.com/NOAA-EMC/godas.git $CLONE_DIR
If an automatic system build/test is preferred, see the instructions here. Otherwise, the steps to manually set up GODAS and the test cases are as follows:
- cd $CLONE_DIR
- git submodule update --init --recursive
- Check out, link, and build the DATM-MOM6-CICE5 model:
  cd $CLONE_DIR/src
  sh checkout.sh godas
  sh link.sh godas $MACHINE_ID
  sh build_DATM-MOM6-CICE5.sh
- Build SOCA. The src/soca-bundle directory contains the bundle of repositories necessary to build SOCA.
- Create the build directory for SOCA
mkdir -p $CLONE_DIR/build
cd $CLONE_DIR/build
- Load the JEDI modules
module purge
source $CLONE_DIR/modulefiles/$MACHINE_ID.$BUILD_COMPILER
source $CLONE_DIR/modulefiles/$MACHINE_ID.setenv
module list
- Clone all the necessary repositories to build SOCA
Hera: ecbuild --build=release -DMPIEXEC=$MPIEXEC -DMPIEXEC_EXECUTABLE=$MPIEXEC -DBUILD_ECKIT=YES ../src/soca-bundle
Orion: ecbuild -DBUILD_ECKIT=ON -DBUILD_METIS=ON -DBUILD_CRTM=ON ../src/soca-bundle
make -j12
- Unit test the build
salloc --ntasks 12 --qos=debug --time=00:30:00 --account=marine-cpu
ctest
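If you only want to run a subset of the SOCA tests, or want more detail on failures, the standard ctest options can be used; the soca name filter below is just an illustration and should be adjusted to the tests you care about:
ctest --output-on-failure     # run all tests, printing the log of any failure
ctest -R soca                 # run only tests whose names match "soca" (illustrative filter)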
- Change the soca-config branch
  The yaml files that configure the DA experiments live inside the soca-config repository. For example, to check out the develop branch:
  cd $CLONE_DIR/src/soca-bundle/soca-config
  git checkout develop
  Alternatively, check out your own branch or the branch you need to test with.
- Build the LETKF
  For detailed instructions on how to install the LETKF on any machine, see the LETKF repository. For GODAS, just run the following script:
  sh $CLONE_DIR/src/letkf_build.sh
- Copy the mom6-tools plotting scripts into the build bin directory:
  cp $CLONE_DIR/src/mom6-tools.plot/*.py $CLONE_DIR/build/bin/
- Create the directory where the workflow will be deployed (the path you chose for EXPROOT):
  mkdir -p EXPROOT
- cd $CLONE_DIR/workflow
- Create/edit user.yaml based on user.yaml.default:
  cp user.yaml.default user.yaml
  edit user.yaml
  Update the following fields in user.yaml and save the file:
  EXPROOT: !error Please select a project directory.
  FIX_SCRUB: True
  COMROOT: !error Please select your COMROOT directory
  DATAROOT: !error Please select your DATAROOT directory
  user_email: none
  cpu_project: !error Please select your cpu project
  hpss_project: !error Please select your hpss project
If the variable FIX_SCRUB is true, the RUNCDATE directory will be created in COMROOT. Otherwise, RUNCDATE is created automatically under the user's stmpX directory.
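For reference, a filled-in user.yaml might look like the minimal sketch below; every path, e-mail address, and project name here is a placeholder and must be replaced with your own values (the heredoc is just one convenient way to write the file):
cat > user.yaml <<'EOF'
EXPROOT: /path/of/your/choice/expdir        # placeholder: directory where EXPDIR is created
FIX_SCRUB: True                             # keep RUNCDATE under COMROOT
COMROOT: /path/of/your/choice/comroot       # placeholder: job input/output directory
DATAROOT: /path/of/your/choice/dataroot     # placeholder: job run directory
user_email: first.last@example.gov          # placeholder e-mail (or none)
cpu_project: your-cpu-project               # placeholder cpu project name
hpss_project: your-hpss-project             # placeholder hpss project name
EOF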
- cd $CLONE_DIR/workflow/CROW
- Set up the workflow:
  Select the machine name in upper case, e.g. HERA, a name for the workflow path, e.g. workflowtest001, and a case, e.g. the 3DVAR:
  ./setup_case.sh -p HERA ../cases/3dvar.yaml workflowtest001
  This will set up the workflow in workflowtest001 for the 3DVAR case on Hera. Available cases:
- 3dvar.yaml
- letkf_only_exp.yaml
- fcst_only.yaml
- hofx3d_only.yaml
Note: Each case file points to a corresponding layout file in $CLONE_DIR/workflow/layout.
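Any of the case files listed above can be passed to setup_case.sh in the same way. For example, a HofX-only experiment could be set up as sketched below; the workflow name hofxtest001 is just an arbitrary example:
./setup_case.sh -p HERA ../cases/hofx3d_only.yaml hofxtest001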
- Read the output and run the suggested command, which should be similar to:
  ./make_rocoto_xml_for.sh EXPROOT/workflowtest001
Assumption: all the subsystems have been compiled. The workflow can be run interactively, as shown in the steps below, or as a cron job; a minimal crontab sketch is given after this paragraph.
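If you prefer the cron route, an entry along the lines of the sketch below can be added with crontab -e so that rocotorun keeps advancing the workflow; the 5-minute interval and the workflowtest001 path are assumptions to adjust for your experiment, and the cron environment may also need rocoto available in its PATH:
# advance the workflow every 5 minutes (adjust the path and interval to your experiment)
*/5 * * * * cd EXPROOT/workflowtest001 && rocotorun -w workflow.xml -d workflow.db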
- Go into the test directory:
  cd EXPROOT/workflowtest001
- Load the rocoto module:
  module load rocoto
- Start rocoto:
  rocotorun -w workflow.xml -d workflow.db
- Check the status:
  rocotorun -w workflow.xml -d workflow.db && rocotostat -v 10 -w workflow.xml -d workflow.db
  Or you could use rocoto_viewer.py; your terminal window needs to be wider than 125 characters:
  rocotorun -w workflow.xml -d workflow.db
  python rocoto_viewer.py -w workflow.xml -d workflow.db
- Repeat step 4 (Check the status) until all jobs are completed; a simple polling loop is sketched below.
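As an alternative to rerunning the commands by hand, a minimal polling loop such as the one below keeps the workflow moving; the 5-minute sleep is arbitrary, and the loop should be stopped with Ctrl-C once all jobs are completed:
# advance the workflow and print its status every 5 minutes
while true; do
  rocotorun -w workflow.xml -d workflow.db
  rocotostat -w workflow.xml -d workflow.db
  sleep 300
done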
resource_sum.yaml inside the EXPDIR serves as the central place for resource settings. Changing the values (PET count, wall time) inside it and rerunning CROW with the -f option updates the resource settings for this experiment. For example:
./setup_case.sh -p HERA ../cases/3dvar.yaml test3dvar
./make_rocoto_xml_for.sh /scratch1/NCEPDEV/global/Jian.Kuang/expdir/test3dvar
There will be a resource_sum.yaml in the EXPDIR named test3dv. Change the resource allocation values (time, npe) there and rerun CROW:
./setup_case.sh -p HERA -f ../cases/3dvar.yaml test3dv
./make_rocoto_xml_for.sh /scratch1/NCEPDEV/global/Jian.Kuang/expdir/test3dvar
You will see the resources updated in workflow.xml as well as in the config files.
- The log files of your experiment are in EXPROOT/workflowtest001/log/
- The setup files and outputs of the experiment are in ${RUNCDATE}
The user can change a limited set of parameters in the DA cases available under .../cases/.
The yaml snippet below shows the test example for the 3DVAR given in ./cases/3dvar.yaml:
case:
  settings:
    SDATE: 2011-10-01t12:00:00   # Start date of the experiment
    EDATE: 2011-10-02t12:00:00   # End date of the experiment
    godas_cyc: 1                 # Selection of the godas DA window: 1 (default) - 24h; 2 - 12h; 4 - 6h
                                 # NOTE: ONLY OPTION 1 IS CURRENTLY SUPPORTED.
    resolution: Ocean025deg      # Other options: Ocean3deg, Ocean1deg, Ocean025deg
    forcing: CFSR                # CFSR or GEFS. Any forcing that satisfies the DATM-MOM6-CICE5 model and its setup is supported
  da_settings:
    FCSTMODEL: MOM6CICE5         # Forecast model; the other option is MOM6solo
    NINNER: 5                    # Number of inner iterations in the conjugate gradient solver
    # Observation switches
    DA_SST: True                 # Sea surface temperature
    DA_TS: True                  # T & S in-situ profiles
    DA_ADT: True                 # Absolute dynamic topography
    DA_ICEC: True                # Sea ice fraction
    NO_ENS_MBR: 2                # Size of the ensemble, e.g. two members
    GROUPS: 2                    # Number of groups to run the ensemble; here, each member runs in a different group
  places:
    workflow_file: layout/3dvar_only.yaml
The default is 1 member to 1 group (deterministic run).
The yaml snippet below shows the test example for HofX given in ./cases/hofx.yaml:
case:
  settings:
    SDATE: 2011-10-01t12:00:00   # Start date of the experiment
    EDATE: 2011-10-02t12:00:00   # End date of the experiment
    godas_cyc: 1                 # Selection of the godas DA window: 1 (default) - 24h; 2 - 12h; 4 - 6h
                                 # NOTE: ONLY OPTION 1 IS CURRENTLY SUPPORTED.
    resolution: Ocean025deg      # Other options: Ocean3deg, Ocean025deg
    FCSTMODEL: MOM6CICE5         # Forecast model; the other option is MOM6solo
  da_settings:
    # Observation switches
    DA_SST: False                # Sea surface temperature
    DA_TS: False                 # T & S in-situ profiles
    DA_ADT: True                 # Absolute dynamic topography
    DA_ICEC: False               # Sea ice fraction
  places:
    workflow_file: layout/hofx_only.yaml
  SOCA_ANALYSIS: "PATH/to/ANALYSIS"   # Specifies the path to a completed 24h forecast run
- SOCA_ANALYSIS is the run directory of a finished experiment.
- The HofX output will be in $RUNCDATE/Data.
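Once the HofX jobs have finished, a quick sanity check is simply to list that directory; the exact file names depend on which observation types were switched on:
ls -lh $RUNCDATE/Data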
| Cases | Forecast | 3DVAR | 3DHyb-EnVAR | LETKF |
|---|---|---|---|---|
| 3° | ✔️ MOM6solo | ✔️ MOM6solo | ✔️ MOM6solo | 🔜 MOM6solo |
| 1° | ✔️ MOM6solo | ✔️ MOM6solo | ✔️ MOM6solo | 🔜 MOM6solo |
| 0.25° | ❌ MOM6solo ✔️ M6-C5 | ❌ MOM6solo ✔️ M6-C5 | ❌ MOM6solo 🔜 M6-C5 | ❌ MOM6solo 🔜 M6-C5 |
- ✔️ Should work
- ❌ No implementation planned
- 🔜 Work in progress
- scripts/post_plot.sh has been updated and is included in the rocoto workflow. During the post-processing run, sea ice fraction, SSH, SST, and time_mean figures are created in the $RUNCDATE/Figures directory. Additional offline post-processing and plotting tools are available in the src/mom6-tools directory; see the instructions here.