
Running the MPAS framework on the cluster

The Model for Prediction Across Scales (MPAS) is a collaborative project for developing atmosphere, ocean and other earth-system simulation components for use in climate, regional climate and weather studies.

Compilation

First of all, define a folder ${WORK} on the /work or /scratch filesystem (somewhere you have write permissions):

export WORK=/work/FAC/...
mkdir -p ${WORK}

Load the following relevant modules:

module load gcc/10.4.0
module load mvapich2/2.3.7
module load parallel-netcdf/1.12.2
module load parallelio/2.5.9-mpi

export PIO=$PARALLELIO_ROOT
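
As a quick sanity check, you can verify that the modules are loaded and that PIO points to the parallelio installation provided by the module (the exact path depends on the software stack):

module list
echo ${PIO}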

Download the MPAS framework:

cd ${WORK}
git clone https://github.com/MPAS-Dev/MPAS-Model
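
By default this checks out the development branch. For a reproducible build you can check out a tagged release instead (the tag below is only an example; list the available tags first and pick the release you want):

cd ${WORK}/MPAS-Model
git tag --list
git checkout v7.3    # example tag, adjust to the release you want to build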

Patch the MPAS Makefile (gfortran 10 and later treat Fortran argument mismatches as errors, so the -fallow-argument-mismatch flag is needed):

sed -i 's/-ffree-form/-ffree-form -fallow-argument-mismatch/' ${WORK}/MPAS-Model/Makefile
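
You can check that the flag was actually added (optional):

grep -- '-fallow-argument-mismatch' ${WORK}/MPAS-Model/Makefile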

Compile:

cd ${WORK}/MPAS-Model

make gfortran CORE=init_atmosphere AUTOCLEAN=true PRECISION=single OPENMP=true USE_PIO2=true
make gfortran CORE=atmosphere AUTOCLEAN=true PRECISION=single OPENMP=true USE_PIO2=true
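
If the build succeeds, the two executables should appear at the top of the MPAS-Model directory:

ls -l ${WORK}/MPAS-Model/init_atmosphere_model ${WORK}/MPAS-Model/atmosphere_model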

Running a basic global simulation

Here we aim to run a basic global simulation, simply to check that the framework works. We need to proceed in three steps:

  1. Process the time-invariant fields, which are interpolated onto a given mesh; this step produces a "static" file
  2. Interpolate the time-varying meteorological and land-surface fields from intermediate files (produced by the
    ungrib component of the WRF Pre-processing System); this step produces an "init" file
  3. Run the basic simulation

Create the run folder and link to the binary files

cd ${WORK}
mkdir -p run
cd run
ln -s ${WORK}/MPAS-Model/init_atmosphere_model
ln -s ${WORK}/MPAS-Model/atmosphere_model

Get the mesh files

cd ${WORK}
wget https://www2.mmm.ucar.edu/projects/mpas/atmosphere_meshes/x1.40962.tar.gz
cd run
tar xvzf ../x1.40962.tar.gz
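
The archive should contain, among other files, the x1.40962.grid.nc mesh referenced in the streams file below (the exact file list may vary with the mesh release):

ls ${WORK}/run/x1.40962.*
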
Create the configuration files for the "static" run

The namelist.init_atmosphere file:

cat << EOF > ${WORK}/run/namelist.init_atmosphere
&nhyd_model
config_init_case = 7
/
&data_sources
config_geog_data_path = '${WORK}/WPS_GEOG/'
config_landuse_data = 'MODIFIED_IGBP_MODIS_NOAH'
config_topo_data = 'GMTED2010'
config_vegfrac_data = 'MODIS'
config_albedo_data = 'MODIS'
config_maxsnowalbedo_data = 'MODIS'
/
&preproc_stages
config_static_interp = true
config_native_gwd_static = true
config_vertical_grid = false
config_met_interp = false
config_input_sst = false
config_frac_seaice = false
/
EOF
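
Note that config_geog_data_path must point to a local copy of the WRF/WPS static geographical data (WPS_GEOG), which is not shipped with MPAS. A minimal sketch, assuming the mandatory high-resolution dataset from the WRF users' download page and that its URL has not changed (the extracted data takes several tens of GB):

cd ${WORK}
wget https://www2.mmm.ucar.edu/wrf/src/wps_files/geog_high_res_mandatory.tar.gz
tar xvzf geog_high_res_mandatory.tar.gz    # should create the WPS_GEOG/ directory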

The streams.init_atmosphere file:

cat << EOF > ${WORK}/run/streams.init_atmosphere
<streams>
<immutable_stream name="input"
                  type="input"
                  precision="single"
                  filename_template="x1.40962.grid.nc"
                  input_interval="initial_only" />

<immutable_stream name="output"
                  type="output"
                  filename_template="x1.40962.static.nc"
                  packages="initial_conds"
                  output_interval="initial_only" />
</streams>
EOF

Proceed to the "static" run

First create a start_mpas_static.sbatch file (carefully replace ACCOUNT_NAME on line 3 with your actual project name and, on line 5, type your e-mail address, or double-comment that line with an additional # if you don't wish to receive job notifications):

cat << EOF > ${WORK}/run/start_mpas_static.sbatch
#!/bin/bash -l

#SBATCH --account ACCOUNT_NAME
#SBATCH --mail-type ALL 
#SBATCH --mail-user <first.lastname>@unil.ch

#SBATCH --chdir ${WORK}/run
#SBATCH --job-name mpas_static
#SBATCH --output=mpas_static.job.%j

#SBATCH --partition cpu

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 16
#SBATCH --mem 8G
#SBATCH --time 00:29:59
#SBATCH --export ALL

module load gcc/10.4.0
module load mvapich2/2.3.7
module load parallel-netcdf/1.12.2
module load parallelio/2.5.9-mpi

export PIO=$PARALLELIO_ROOT

srun ./init_atmosphere_model
EOF
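
Submit the static run from the run folder; once the job has finished, the x1.40962.static.nc file declared in the output stream should have been created:

cd ${WORK}/run
sbatch start_mpas_static.sbatch
squeue -u $USER                # monitor the job
ls -lh x1.40962.static.nc      # available once the job has completed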

Compiling and running the Held-Suarez dynamical core test case

Compilation takes place automatically at runtime. After logging in to the cluster, create a SLURM script file start.sbatch with the following contents:

#!/bin/bash -l

#SBATCH --account ACCOUNT_NAME
#SBATCH --mail-type ALL 
#SBATCH --mail-user <first.lastname>@unil.ch

#SBATCH --chdir ${WORK}
#SBATCH --job-name isca_held-suarez
#SBATCH --output=isca_held-suarez.job.%j

#SBATCH --partition cpu

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 16
#SBATCH --mem 8G
#SBATCH --time 00:29:59
#SBATCH --export ALL

module load gcc/10.4.0
module load mvapich2/2.3.7
module load netcdf-c/4.8.1-mpi
module load netcdf-fortran/4.5.4

WORK=$(pwd)

export GFDL_BASE=${WORK}/Isca
export GFDL_ENV=curnagl-gfortran
export GFDL_WORK=${WORK}/isca_work
export GFDL_DATA=${WORK}/isca_gfdl_data

export C_INCLUDE_PATH=${NETCDF_C_ROOT}/include
export LIBRARY_PATH=${NETCDF_C_ROOT}/lib

sed -i "s/^NCORES =.*$/NCORES = $(echo ${SLURM_CPUS_PER_TASK:-1})/" ${GFDL_BASE}/exp/test_cases/held_suarez/held_suarez_test_case.py

${WORK}/isca_venv/bin/python $GFDL_BASE/exp/test_cases/held_suarez/held_suarez_test_case.py

You need to carefully replace, at the beginning of the file, the following elements:

  • On line 3: ACCOUNT_NAME with the project id that was attributed to your PI for the given project
  • On line 5: <first.lastname>@unil.ch with your e-mail address (or double-comment that line with an additional '#' if you don't wish to receive e-mail notifications about the status of the job)
  • On line 7: ${WORK} must be replaced with the absolute path (e.g. /work/FAC/.../isca) to the folder you created during the installation steps
  • On lines 15-17: you can adjust the number of CPUs, the memory and the time for the job (the present values are appropriate for the default Held-Suarez example)

Then you can simply start the job:

sbatch start.sbatch
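
You can then follow the job with the usual Slurm commands; its output is written to the isca_held-suarez.job.<jobid> file in the job's working directory (the job id below is a placeholder):

squeue -u $USER
tail -f isca_held-suarez.job.<jobid>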