
CMAQ Tutorial

Downloading CMAQ

  1. Create a user account and log on to the CMAS Center website
  2. Download code and test case using a web browser
    1. Hover over the “DOWNLOAD CENTER” on the left
    2. Select “Software”
    3. Select CMAQ from software family drop down and click submit
    4. Choose CMAQ 5-0-1, Linux PC, and GNU Compilers, then submit
    5. Click “View the script”
    6. Save the script to a working folder (e.g., /scratch/lfs/<user>/CMAQ/) as download_CMAQv5.0.1.csh
  3. Navigate to it in a terminal
  4. Edit download_CMAQv5.0.1.csh to comment out the last two lines (comments start with #)
  5. Run tcsh download_CMAQv5.0.1.csh
  6. Untar code and data 
    1.  tar xzf CMAQv5.0.1.tar.gz 
    2.  tar xzf DATA.CMAQv5.0.1.tar.gz 
    3.  tar xzf DATA_REF.CMAQv5.0.1.tar.gz


In your working directory, you should now have a directory structure like the following:


Make another directory lib at the same level as data. Now you have:


Configuring CMAQ

The first step is to let CMAQ know where it is and how you want it made. These settings are all in ./CMAQv5.0.1/scripts/config.cmaq.

  1. Set M3HOME to the absolute path to CMAQv5.0.1 in your working directory.
  2. Add the following commands at the beginning:

    module purge
    module load intel/2012
    module load hdf5/1.8.9
    module load netcdf/4.2
    module load ioapi/3.1
  3. Uncomment the compiler settings for your compiler
    1. Use the Intel compiler for the HPC
    2. Use the gfortran compiler for an Ubuntu machine
  4. Run the following to create a bash-compatible version of the script:

    sed -nE -e "/module/p" -e "s/setenv[ ]+([^ ]+)[ ]+/export \1=/p" config.cmaq > config.cmaq.bash
  5. Use the script to set your environment
    1. If you use the bash shell, run source config.cmaq.bash
    2. If you use csh or tcsh, run source config.cmaq
    3. If you get the error /bin/uname: No such file or directory or /bin/uname: Command not found, replace /bin/uname with uname anywhere it appears
Anytime you log in to the server you must re-run the source command above. On HPC, you should be using bash.
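The sed rewrite above simply keeps the module lines and converts each csh setenv line into a bash export. A quick way to see what it does (the M3HOME path below is a made-up example value):

```shell
# Show the setenv -> export conversion performed when generating
# config.cmaq.bash; the path is a hypothetical example value.
echo 'setenv M3HOME /scratch/lfs/myuser/CMAQ/CMAQv5.0.1' \
  | sed -nE -e "/module/p" -e "s/setenv[ ]+([^ ]+)[ ]+/export \1=/p"
# prints: export M3HOME=/scratch/lfs/myuser/CMAQ/CMAQv5.0.1
```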

The simple tutorial that was available in version 4.7 is no longer available in version 5, but the process is quite similar. The steps from the 4.7 tutorial have been copied here and edited for UF.

  1. Create (mkdir) the subdirectory $M3LIB and the following subdirectories under $M3LIB:

    mkdir -p $M3LIB
    mkdir -p $M3LIB/build/
    mkdir -p $M3LIB/ioapi/
    mkdir -p $M3LIB/netCDF/
    mkdir -p $M3LIB/pario/
    mkdir -p $M3LIB/stenex/
  2. Checkpoint

    ls $M3LIB
    should return:

    build  ioapi  netCDF  pario  stenex

    netCDF and IOAPI

    NetCDF and IOAPI must already exist on the system. This tutorial only covers linking them where CMAQ expects.


    The CMAQ build and run scripts assume that netCDF resides in the $M3LIB path as $M3LIB/netCDF/`uname -s``uname -r | cut -d. -f1`_`uname -i`$compiler (e.g., Linux2_x86_64ifort). If netCDF is installed elsewhere on your system, create a symbolic link in $M3LIB/netCDF/`uname -s``uname -r | cut -d. -f1`_`uname -i`$compiler to the existing netCDF (see CVS_NETCDF).

    Example for the UF HPC Linux cluster:
    mkdir -p $M3LIB/netCDF/Linux2_x86_64ifort
    cd $M3LIB/netCDF/Linux2_x86_64ifort
    ln -s /apps/netcdf/4.2-intel/lib/libnetcdf* .
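The platform suffix in these paths comes from uname, so you can print the directory name your own machine maps to before creating any links. This is a sketch; compiler=ifort is an assumption matching the Intel example above, and the result depends on your kernel and architecture:

```shell
# Build the platform string used in the $M3LIB subdirectory names.
# compiler=ifort is an assumption; use the suffix for your own compiler.
compiler=ifort
platform="$(uname -s)$(uname -r | cut -d. -f1)_$(uname -i)${compiler}"
echo "$platform"
```

On a 2.x-series Linux kernel with a 64-bit Intel build this yields Linux2_x86_64ifort, matching the examples above.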


    The CMAQ build and run scripts assume that IOAPI resides in the $M3LIB path as $M3LIB/ioapi_3.1/Linux2_${system}${compiler}.

    Example for the UF HPC Linux cluster:

    mkdir -p $M3LIB/ioapi_3.1/Linux2_x86_64ifort
    cd $M3LIB/ioapi_3.1/Linux2_x86_64ifort
    ln -s /apps/intel/2012/ioapi/3.1/lib/*.a .
    ln -s /apps/intel/2012/ioapi/3.1/lib/*.mod .
    ln -s /apps/intel/2012/ioapi/3.1/include/fixed_src ./
    IMPORTANT NOTE: For the next steps, make sure all of the bldit paths point to the correct netCDF and IOAPI libraries.
  3. In $M3HOME gunzip and untar the models archive tar file, M3MODELS.CMAQv5.0.1.tar.gz. This will produce the following subdirectories:

  4. Make a working directory (NOT in any of the $M3MODEL, $M3LIB, or $M3DATA trees), cd there, and gunzip and untar M3SCRIPTS.CMAQv5.0.1.tar.gz. This will produce the following subdirectories, which contain “bldit” and “run” C-shell scripts and a GRIDDESC file (see “Other details” below):

    pario/ procan/ stenex/

    This is not necessary, but for the sake of further discussion, create an environment variable, $WORK, for the “scripts” working directory.

  5. Next create the stencil exchange library required for parallel processing (se_snl) and serial processing (sef90_noop):

    cd $WORK/stenex
    ./bldit.se_snl.Linux
    ./bldit.se_noop.Linux
  6. For parallel CCTM operation create the parallel I/O library (pario):

    cd $WORK/pario
    ./bldit.pario.Linux
  7. Create m3bld, the tool required to build the executables for the CMAQ processors, model and tools.

    cd $WORK/build
    ./bldit.m3bld

    Note: Although m3bld is really a tool, we put it in with the “libraries.”
  8. Now create the model executables: JPROC is created and run only once for the benchmark; ICON and BCON need to be compiled and run separately for profile data (coarse grid) and for nest data (fine grid); CCTM is compiled only once.

    Generally, you will need to get the MCIP3 code and run it to create met data from MM5 or WRF for CCTM. MCIP3 is packaged with this distribution. Optionally, it can be downloaded from the same site as this distribution package as a stand-alone installation. And of course, you will need “model-ready” emissions data - presumably from SMOKE. See the SMOKE readme file included with this package. For this release we have provided the model-ready emissions and met data.

    Start with JPROC (cd to $WORK/jproc). Invoke ./bldit.jproc.Linux. There will be a lot of text displayed to standard out (which you can capture of course, by redirecting to a file). The process should end with a JPROC executable, which is invoked in the second script, ./run.jproc, producing output data files. These data files will be inserted into the path predefined in the run script, $M3DATA/jproc.

    Note: It’s always a good idea to capture in a log file the text written to standard out when running these models. In each “run” script, near the top, is a suggested method (e.g. for JPROC):

    ./run.jproc >& jproc.log &
  9. Check the JPROC log file to ensure complete and correct execution. Then cd to $WORK/icon and follow the same procedure; invoke ./bldit.icon.Linux, followed by ./run.icon >& icon.log &. This will produce the first (profile) dataset for the first run of CCTM on the coarse domain. After CCTM finishes, you will need to generate a nest dataset for the fine domain.

  10. Follow this procedure for BCON and CCTM.

  11. Finishing with CCTM, you should have a complete collection of datasets, which you can compare with the distribution datasets in DATA_REF.CMAQv5.0.1.tar.gz. Unless you modify the run scripts, the output data from all the models will reside in the following (automatically generated) paths:

  12. Concerning parallel CCTM operation: We have tested the “bldit” script for both serial and parallel compilation. The source code is the same for both. Only some libraries are different as well as the run scripts. The “stenex” library for parallel is different than for serial; “pario” is needed only for parallel. This release was set up and tested for a “standard” MPICH linux cluster, requiring the addition of a C code that distributes the run time environment from the node that launches the run to the other participating nodes. Thanks to Bo Wang and Zion Wang of CERT-UC-Riverside, who developed and tested this code. Also, see the PARALLEL_NOTES readme file. (Note: The initial concentrations pre-processor, ICON can also be executed in parallel, but we have not tested this for Linux clusters.)

  13. Concerning parallel CCTM operation (to run the model on multiple processors):

    Modify the bldit.cctm linux script as follows and build the multiple processor version of CMAQ:

    <  set APPL  = e1a
    >  set APPL  = e3a
    < #set ParOpt             # set for multiple PE's; comment out for single PE
    >  set ParOpt             # set for multiple PE's; comment out for single PE

    Then modify the run.cctm script as follows:

    < # Usage: run.cctm >&! cctm_e1a.log &                                            
    > # Usage: run.cctm >&! cctm_e3a.log &                                  
    <  set APPL     = e1a
    <  set CFG      = Linux2_x86_64pg
    >  set APPL     = e3a
    >  set CFG      = Linux2_x86_64pg
    <  setenv NPCOL_NPROW "1 1"; set NPROCS   = 1 # single processor setting
    < #setenv NPCOL_NPROW "4 2"; set NPROCS   = 8
    > #setenv NPCOL_NPROW "1 1"; set NPROCS   = 1 # single processor setting
    >  setenv NPCOL_NPROW "4 2"; set NPROCS   = 8
    < #set GC_ICfile = CCTM_e1aCGRID.d1b
    > #set GC_ICfile = CCTM_e3aCGRID.d1b
    <   time  $BASE/$EXEC
    > #time  $BASE/$EXEC
    < #set MPIRUN = /share/linux/bin/mpich-ch_p4/bin/mpirun
    < #set TASKMAP = $BASE/machines8
    < #cat $TASKMAP
    < #time $MPIRUN -v -machinefile $TASKMAP -np $NPROCS $BASE/$EXEC
    >  set MPIRUN = /share/linux/bin/mpich-ch_p4/bin/mpirun
    >  set TASKMAP = $BASE/machines8
    >  cat $TASKMAP
    >  time $MPIRUN -v -machinefile $TASKMAP -np $NPROCS $BASE/$EXEC
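    In the run script, NPROCS must equal the product of the column and row counts in NPCOL_NPROW, so the "4 2" decomposition above requires 8 processors. A minimal consistency check, using those same values:

```shell
# Verify that NPROCS matches the NPCOL x NPROW domain decomposition,
# using the "4 2" / 8 values from the run script excerpt above.
NPCOL_NPROW="4 2"
NPROCS=8
set -- $NPCOL_NPROW           # $1 = NPCOL, $2 = NPROW
if [ $(( $1 * $2 )) -eq "$NPROCS" ]; then
    echo "decomposition OK"
else
    echo "NPROCS must equal NPCOL * NPROW"
fi
# prints: decomposition OK
```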

    Note: You can change the default script by using the Unix “patch” utility. Cut the indented section listed above into a file, say “mod.” Then type “patch run.cctm mod.”
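    The patch workflow can be tried on a throwaway file first. This sketch uses hypothetical file names (orig.csh, wanted.csh, mod) and generates the diff with diff(1) rather than cutting it from this page:

```shell
# Sketch of the patch(1) workflow from the note above, on throwaway files.
printf 'set APPL = e1a\n' > orig.csh
printf 'set APPL = e3a\n' > wanted.csh
diff orig.csh wanted.csh > mod || true   # diff exits 1 when files differ
patch orig.csh mod                       # apply the changes in "mod"
cat orig.csh
# prints: set APPL = e3a
```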

  14. Other details:

    1. You can check output ioapi file headers (and data) using the netCDF utility ncdump. This utility is located in the same place as the netCDF libraries linked above.

    2. The GRIDDESC file contains horizontal projection and grid domain definitions that are required input for many CMAQ models. The run scripts for ICON, BCON, and CCTM contain environment variables that point to the GRIDDESC file.

      The horizontal grid definition can be set to window from the met and emissions input files. However, the window must be a “proper subset” (i.e., a subset from the interior of the domain and not including boundaries). Note: The domains represented by the met and emissions data must be the same.

    3. Running CCTM for a windowed domain or a higher resolution nested domain from larger or coarser met and emissions datasets requires creating initial and boundary data for the target domain using ICON and BCON.

  15. Comparing with References

    The fastest way to compare is using the NetCDF Operators (NCO) and the PseudoNetCDF python library. The commands below will display the minimum and maximum ratio for each variable in the dataset; NEW.nc, REF.nc, and RATIO.nc are placeholders for your new output file, the reference output file, and the ratio file to create. Add the -v <VARNAME> option to the pncdump commands or ncbo command to apply to just one variable.

    ncbo --op_typ="/" NEW.nc REF.nc RATIO.nc
    pncdump -r ROW,min -r COL,min -r LAY,min -r TIME,min RATIO.nc
    pncdump -r ROW,max -r COL,max -r LAY,max -r TIME,max RATIO.nc
    Or make a figure of fractional bias as a function of concentration (fill in the paths to your new and reference CCTM files in the Dataset calls):

    from netCDF4 import Dataset
    import numpy as np
    newf = Dataset('')  # path to the new CCTM output file
    oldf = Dataset('')  # path to the reference CCTM output file
    newo3 = newf.variables['O3'][:] * 1000
    oldo3 = oldf.variables['O3'][:] * 1000
    fbo3 = 2 * (newo3 - oldo3) / (newo3 + oldo3)
    dc = 5
    lbs = np.arange(0.0, 120, dc)
    ubs = lbs + dc
    ubs[-1] = np.inf
    mbs = lbs + dc / 2.
    fbs = []
    for lb, ub in zip(lbs, ubs):
        thisfb = np.ma.masked_where(np.logical_or(oldo3 > ub, oldo3 < lb), fbo3).compressed()
        fbs.append(thisfb)
    from matplotlib import use; use('Agg')
    import pylab as pl
    pl.boxplot(fbs, positions=mbs, whis=np.inf)
    pl.xticks(lbs.tolist() + [lbs.max() + dc], ['%.0f' % x for x in lbs.tolist() + [ubs.max()]])
    pl.ylim(-2, 2)
    pl.setp(pl.gca().get_xticklabels(), rotation=90)
    pl.savefig('fracbias_o3.png')
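    The fractional bias used above, 2*(new - old)/(new + old), is bounded by ±2 and is 0 when the two runs agree exactly. A quick arithmetic check with awk (the 60 and 50 ppb values are made up):

```shell
# Fractional bias 2*(new-old)/(new+old) for a hypothetical pair of
# ozone values; identical values would give 0.
awk 'BEGIN { new = 60; old = 50; printf "%.4f\n", 2 * (new - old) / (new + old) }'
# prints: 0.1818
```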