Running the MPAS framework on the cluster
The Model for Prediction Across Scales (MPAS) is a collaborative project for developing atmosphere, ocean and other earth-system simulation components for use in climate, regional climate and weather studies.
Compilation
First of all define a folder ${WORK} on the /work or the /scratch filesystem (somewhere where you have write permissions):
export WORK=/work/FAC/...
mkdir -p ${WORK}
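Note that ${WORK} only lives in the current shell; re-export it (with the same path) whenever you start a new session, and check it with:
echo ${WORK}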
Load the following modules:
module load gcc/10.4.0
module load mvapich2/2.3.7
module load parallel-netcdf/1.12.2
module load parallelio/2.5.9-mpi
export PIO=$PARALLELIO_ROOT
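As a quick sanity check (the parallelio module provides PARALLELIO_ROOT, which the MPAS Makefile reads through the PIO variable), verify that the modules are loaded and that the variable points to the installation:
module list
echo ${PIO}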
Download the MPAS framework:
cd ${WORK}
git clone https://github.com/MPAS-Dev/MPAS-Model
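The clone checks out the default development branch; for a reproducible build you may prefer to pin a release, for example by listing the available tags and checking one out (the tag name below is a placeholder):
cd ${WORK}/MPAS-Model
git tag --list 'v*'
git checkout <chosen_tag>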
Patch the MPAS Makefile:
sed -i 's/ffree-form/ffree-form -fallow-argument-mismatch/' ${WORK}/MPAS-Model/Makefile
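You can confirm that the patch was applied; the grep should now print Makefile lines containing both flags:
grep -n 'fallow-argument-mismatch' ${WORK}/MPAS-Model/Makefile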
Compile:
cd ${WORK}/MPAS-Model
make gfortran CORE=init_atmosphere AUTOCLEAN=true PRECISION=single OPENMP=true USE_PIO2=true
make gfortran CORE=atmosphere AUTOCLEAN=true PRECISION=single OPENMP=true USE_PIO2=true
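If both builds succeed, the two executables linked into the run folder below should now be present at the top of the source tree:
ls -l ${WORK}/MPAS-Model/init_atmosphere_model ${WORK}/MPAS-Model/atmosphere_model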
Running a basic global simulation
Here we aim at running a basic global simulation, just to test that the framework runs. We need to proceed in three steps:
- Process time-invariant fields, which will be interpolated onto a given mesh; this step produces a "static" file
- Interpolate time-varying meteorological and land-surface fields from intermediate files (produced by the ungrib component of the WRF Pre-processing System); this step produces an "init" file
- Run the basic simulation
Create the run folder and link to the binary files
cd ${WORK}
mkdir -p run
cd run
ln -s ${WORK}/MPAS-Model/init_atmosphere_model
ln -s ${WORK}/MPAS-Model/atmosphere_model
Get the mesh files
cd ${WORK}
wget https://www2.mmm.ucar.edu/projects/mpas/atmosphere_meshes/x1.40962.tar.gz
cd run
tar xvzf ../x1.40962.tar.gz
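A quick listing should show the extracted mesh files, in particular the x1.40962.grid.nc file referenced in the streams configuration below (the archive may also ship graph partition files used for multi-task runs):
ls -l x1.40962.*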
Create the configuration files for the "static" run
The namelist.init_atmosphere file:
cat << EOF > ${WORK}/run/namelist.init_atmosphere
&nhyd_model
config_init_case = 7
/
&data_sources
config_geog_data_path = '${WORK}/WPS_GEOG/'
config_landuse_data = 'MODIFIED_IGBP_MODIS_NOAH'
config_topo_data = 'GMTED2010'
config_vegfrac_data = 'MODIS'
config_albedo_data = 'MODIS'
config_maxsnowalbedo_data = 'MODIS'
/
&preproc_stages
config_static_interp = true
config_native_gwd_static = true
config_vertical_grid = false
config_met_interp = false
config_input_sst = false
config_frac_seaice = false
/
EOF
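The static interpolation reads the WPS geographical dataset from config_geog_data_path, which is not installed by the steps above. If you do not already have it under ${WORK}/WPS_GEOG, one option (an assumption, not part of the original procedure) is to download the standard high-resolution mandatory archive distributed for WPS, which unpacks into a WPS_GEOG/ folder and needs roughly 30 GB of space:
cd ${WORK}
wget https://www2.mmm.ucar.edu/wrf/src/wps_files/geog_high_res_mandatory.tar.gz
tar xvzf geog_high_res_mandatory.tar.gz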
The streams.init_atmosphere file:
cat << EOF > ${WORK}/run/streams.init_atmosphere
<streams>
<immutable_stream name="input"
type="input"
precision="single"
filename_template="x1.40962.grid.nc"
input_interval="initial_only" />
<immutable_stream name="output"
type="output"
filename_template="x1.40962.static.nc"
packages="initial_conds"
output_interval="initial_only" />
</streams>
EOF
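The streams file is plain XML, so a quick well-formedness check (assuming xmllint is available on the login node) can catch typos before submitting anything:
xmllint --noout ${WORK}/run/streams.init_atmosphere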
Proceed to the "static" run
First create a start_mpas_static.sbatch file. Carefully replace ACCOUNT_NAME with your actual project name, and put your e-mail address on the --mail-user line (or double-comment that line with an additional # if you don't wish to receive job notifications):
cat << EOF > ${WORK}/run/start_mpas_static.sbatch
#!/bin/bash -l
#SBATCH --account ACCOUNT_NAME
#SBATCH --mail-type ALL
#SBATCH --mail-user <first.lastname>@unil.ch
#SBATCH --chdir ${WORK}/run
#SBATCH --job-name mpas_static
#SBATCH --output=mpas_static.job.%j
#SBATCH --partition cpu
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 16
#SBATCH --mem 8G
#SBATCH --time 00:29:59
#SBATCH --export ALL
module load gcc/10.4.0
module load mvapich2/2.3.7
module load parallel-netcdf/1.12.2
module load parallelio/2.5.9-mpi
export PIO=$PARALLELIO_ROOT
srun ./init_atmosphere_model
EOF
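Because the here-document delimiter (EOF) is unquoted, ${WORK} and $PARALLELIO_ROOT are expanded when the file is written, so make sure WORK is exported and the modules listed above are loaded in your current shell before running the cat command. You can check the expanded result with:
grep -E 'chdir|PIO' ${WORK}/run/start_mpas_static.sbatch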
Then you can simply start the job:
sbatch ${WORK}/run/start_mpas_static.sbatch
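Once the job has completed, the run folder should contain the static file requested in streams.init_atmosphere, and the MPAS log file (named log.init_atmosphere.0000.out in recent releases) can be inspected for errors:
ls -l ${WORK}/run/x1.40962.static.nc
tail ${WORK}/run/log.init_atmosphere.0000.out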