CMAQ Tutorial

Downloading CMAQ

    1. Create a user account and log on to cmascenter.org
    2. Download code and test case using a web browser
        1. Hover over the “DOWNLOAD CENTER” on the left
        2. Select “Software”
        3. Select CMAQ from software family drop down and click submit
        4. Choose CMAQ 5-0-1, Linux PC, and GNU Compilers, then submit
        5. Click “View the script”
        6. Save the script to a working folder (e.g., /scratch/lfs/<user>/CMAQ/) as download_CMAQv5.0.1.csh
    3. Navigate to the working folder in a terminal
    4. Edit download_CMAQv5.0.1.csh to comment out the last two lines (comments start with #)
    5. Run tcsh download_CMAQv5.0.1.csh
    6. Untar the code and data:
        tar xzf CMAQv5.0.1.tar.gz
        tar xzf DATA.CMAQv5.0.1.tar.gz
        tar xzf DATA_REF.CMAQv5.0.1.tar.gz

Checkpoint

In your working directory, you should now have a directory structure like the following:

./CMAQv5.0.1/models ./CMAQv5.0.1/scripts ./CMAQv5.0.1/data 

Make another directory lib at the same level as data. Now you have:

./CMAQv5.0.1/lib ./CMAQv5.0.1/models ./CMAQv5.0.1/scripts ./CMAQv5.0.1/data 
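For example, from the working folder:

    mkdir ./CMAQv5.0.1/lib
    ls ./CMAQv5.0.1        # should list: data  lib  models  scripts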

Configuring CMAQ

The first step is to tell CMAQ where it is installed and how you want it built. These settings are all in ./CMAQv5.0.1/scripts/config.cmaq.

    1. Set M3HOME to the absolute path to CMAQv5.0.1 in your working directory.
    2. Add the following commands at the beginning:
        module purge
        module load intel/2012
        module load hdf5/1.8.9
        module load netcdf/4.2
        module load ioapi/3.1
    3. Uncomment the compiler settings for your compiler
        1. Use the Intel compiler for the HPC
        2. Use the gfortran compiler for an Ubuntu machine
    4. Run the following to generate a bash version of the configuration (see the example after this list):
        sed -nE -e "/module/p" -e "s/setenv[ ]+([^ ]+)[ ]+/export \1=/p" config.cmaq > config.cmaq.bash
    5. Use the script to set your environment
        1. If you use the bash shell, run source config.cmaq.bash
        2. If you use csh or tcsh, run source config.cmaq
        3. If you get an error such as /bin/uname: No such file or directory or /bin/uname: Command not found, replace /bin/uname with uname everywhere it appears
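    Example: the sed command in step 4 copies the module lines through unchanged and rewrites csh setenv lines as bash export lines. A minimal sketch (the M3HOME path is illustrative):

        # in config.cmaq (csh)
        setenv M3HOME /scratch/lfs/<user>/CMAQ/CMAQv5.0.1
        # in config.cmaq.bash (generated by the sed command)
        export M3HOME=/scratch/lfs/<user>/CMAQ/CMAQv5.0.1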

Any time you log in to the server, you must re-run the source command from step 5. On the HPC, you should be using bash.

The simple tutorial that was available in version 4.7 is no longer included in version 5, but the process is quite similar. The steps from the 4.7 tutorial (https://www.cmascenter.org/cmaq/documentation/4.7/README.txt) have been copied here and edited for UF.

    1. Create (mkdir) the subdirectory $M3LIB and the following subdirectories under $M3LIB:
        mkdir -p $M3LIB
        mkdir -p $M3LIB/build/
        mkdir -p $M3LIB/ioapi/
        mkdir -p $M3LIB/netCDF/
        mkdir -p $M3LIB/pario/
        mkdir -p $M3LIB/stenex/
    2. Checkpoint
        1. ls $M3LIB should return:
            build/ ioapi/ netCDF/ pario/ stenex/
    3. netCDF and IOAPI: NetCDF and IOAPI must already exist on the system. This tutorial only covers linking them where CMAQ expects.
    4. netCDF
    5. The CMAQ build and run scripts assume that netCDF resides in the $M3LIB path as $M3LIB/netCDF/`uname -s``uname -r | cut -d. -f1`_`uname -i`_$compiler. If netCDF is installed elsewhere on your system, create a symbolic link in $M3LIB/netCDF/`uname -s``uname -r | cut -d. -f1`_`uname -i`_$compiler to the existing netCDF (see CVS_NETCDF).
    6. Example for the UF HPC Linux cluster:
        mkdir -p $M3LIB/netCDF/Linux2_x86_64ifort
        cd $M3LIB/netCDF/Linux2_x86_64ifort
        ln -s /apps/netcdf/4.2-intel/lib/libnetcdf* .
    7. IOAPI
    8. The CMAQ build and run scripts assume that IOAPI resides in the $M3LIB path as $M3LIB/ioapi_3.1/Linux2_${system}${compiler}.
    9. Example for the UF HPC Linux cluster:
        mkdir -p $M3LIB/ioapi_3.1/Linux2_x86_64ifort
        cd $M3LIB/ioapi_3.1/Linux2_x86_64ifort
        ln -s /apps/intel/2012/ioapi/3.1/lib/*.a .
        ln -s /apps/intel/2012/ioapi/3.1/lib/*.mod .
        ln -s /apps/intel/2012/ioapi/3.1/include/fixed_src ./
    10. IMPORTANT NOTE 1: For the next steps, make sure all of the bldit paths point to the correct netCDF and IOAPI libraries.
    11. In $M3HOME gunzip and untar the models archive tar file, M3MODELS.CMAQv5.0.1.tar.gz. This will produce the following subdirectories:
      1. models/ BCON/ BUILD/ CCTM/ ICON/ JPROC/ PARIO/ PROCAN/ STENEX/ TOOLS/ include/
    12. Make a working directory (NOT in either the $M3MODEL, $M3LIB or $M3DATA trees), cd there, and gunzip and untar M3SCRIPTS.CMAQv5.0.1.tar.gz. This will produce the following subdirectories, which contain “bldit” and “run” C-shell scripts and a GRIDDESC file (see “other details” below):
      1. scripts/ GRIDDESC1 bcon/ build/ cctm/ icon/ jproc/ mcip4/ pario/ procan/ stenex/
      2. Not necessary, but for the sake of further discussion, create an environment variable, $WORK, for the “scripts” working directory.
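      A minimal sketch (the path is illustrative; point $WORK at wherever you untarred the scripts):

          setenv WORK /scratch/lfs/<user>/CMAQ/CMAQv5.0.1/scripts    # csh/tcsh
          export WORK=/scratch/lfs/<user>/CMAQ/CMAQv5.0.1/scripts    # bash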
    13. Next create the stencil exchange library required for parallel processing (se_snl) and serial processing (sef90_noop):
        cd $WORK/stenex
        ./bldit.se.Linux
        ./bldit.se_noop.Linux
    14. For parallel CCTM operation create the parallel I/O library (pario):
        cd $WORK/pario
        ./bldit.pario.Linux
    15. Create m3bld, the tool required to build the executables for the CMAQ processors, model and tools.
        cd $WORK/build
        ./bldit.m3bld
    16. Note: Although m3bld is really a tool, we put it in with the “libraries.”
    17. Now create the model executables: JPROC is created and run only once for the benchmark; ICON and BCON need to be compiled and run separately for profile data (coarse grid) and for nest data (fine grid); CCTM is compiled only once.
      1. Generally, you will need to get the MCIP3 code and run it to create met data from MM5 or WRF for CCTM. MCIP3 is packaged with this distribution. Optionally, it can be downloaded from the same site as this distribution package as a stand-alone installation. And of course, you will need “model-ready” emissions data - presumably from SMOKE. See the SMOKE readme file included with this package. For this release we have provided the model-ready emissions and met data.
      2. Start with JPROC (cd to $WORK/jproc). Invoke ./bldit.jproc.Linux. There will be a lot of text displayed to standard out (which you can capture of course, by redirecting to a file). The process should end with a JPROC executable, which is invoked in the second script, ./run.jproc, producing output data files. These data files will be inserted into the path predefined in the run script, $M3DATA/jproc.
      3. Note: It’s always a good idea to capture in a log file the text written to standard out when running these models. In each “run” script, near the top, is a suggested method (e.g. for JPROC):
        1. ./run.jproc >& jproc.log &
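        Putting the JPROC build and run steps together, a minimal sketch (the log file names are illustrative):

            cd $WORK/jproc
            ./bldit.jproc.Linux >& bldit.jproc.log     # build the JPROC executable; capture the build output
            ./run.jproc >& jproc.log &                 # run JPROC in the background; output goes to $M3DATA/jproc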
    18. Check the JPROC log file to ensure complete and correct execution. Then cd to $WORK/icon and follow the same procedure; invoke ./bldit.icon.Linux, followed by ./run.icon >& icon.log &. This will produce the first (profile) dataset for the first run of CCTM on the coarse domain. After CCTM finishes, you will need to generate a nest dataset for the fine domain.
    19. Follow this procedure for BCON and CCTM.
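        A minimal sketch for BCON and CCTM, assuming their bldit and run scripts follow the same naming pattern as the other processors (check the actual filenames in $WORK/bcon and $WORK/cctm before running; the log file names are illustrative):

            cd $WORK/bcon
            ./bldit.bcon.Linux                 # build BCON
            ./run.bcon >& bcon.log &           # run BCON and capture the log
            cd $WORK/cctm
            ./bldit.cctm                       # build CCTM (see the parallel notes below before enabling multiple processors)
            ./run.cctm >& cctm.log &           # run CCTM and capture the log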
    20. Finishing with CCTM, you should have a complete collection of datasets, which you can compare with the distribution datasets in DATA_REF.CMAQv5.0.1.tar.gz. Unless you modify the run scripts, the output data from all the models will reside in the following (automatically generated) paths:
      1. $M3DATA/ bcon/ cctm/ icon/ jproc/
    21. Concerning parallel CCTM operation: We have tested the “bldit” script for both serial and parallel compilation. The source code is the same for both; only some of the libraries and the run scripts differ. The “stenex” library for parallel is different from the serial one; “pario” is needed only for parallel. This release was set up and tested for a “standard” MPICH Linux cluster, requiring the addition of a C code that distributes the run-time environment from the node that launches the run to the other participating nodes. Thanks to Bo Wang and Zion Wang of CERT-UC-Riverside, who developed and tested this code. Also, see the PARALLEL_NOTES readme file. (Note: The initial concentrations pre-processor, ICON, can also be executed in parallel, but we have not tested this for Linux clusters.)
    22. Concerning parallel CCTM operation (to run the model on multiple processors):
      1. Modify the bldit.cctm linux script as follows and build the multiple processor version of CMAQ:
        40c40
        < set APPL = e1a
        ---
        > set APPL = e3a
        49c49
        < #set ParOpt # set for multiple PE's; comment out for single PE
        ---
        > set ParOpt # set for multiple PE's; comment out for single PE
      2. Then modify the run.cctm script as follows:
        4c4
        < # Usage: run.cctm >&! cctm_e1a.log &
        ---
        > # Usage: run.cctm >&! cctm_e3a.log &
        19,20c19,20
        < set APPL = e1a
        < set CFG = Linux2_x86_64pg
        ---
        > set APPL = e3a
        > set CFG = Linux2_x86_64pg
        28,29c28,29
        < setenv NPCOL_NPROW "1 1"; set NPROCS = 1 # single processor setting
        < #setenv NPCOL_NPROW "4 2"; set NPROCS = 8
        ---
        > #setenv NPCOL_NPROW "1 1"; set NPROCS = 1 # single processor setting
        > setenv NPCOL_NPROW "4 2"; set NPROCS = 8
        121c121
        < #set GC_ICfile = CCTM_e1aCGRID.d1b
        ---
        > #set GC_ICfile = CCTM_e3aCGRID.d1b
        195c195
        < time $BASE/$EXEC
        ---
        > #time $BASE/$EXEC
        198,201c198,201
        < #set MPIRUN = /share/linux/bin/mpich-ch_p4/bin/mpirun
        < #set TASKMAP = $BASE/machines8
        < #cat $TASKMAP
        < #time $MPIRUN -v -machinefile $TASKMAP -np $NPROCS $BASE/$EXEC
        ---
        > set MPIRUN = /share/linux/bin/mpich-ch_p4/bin/mpirun
        > set TASKMAP = $BASE/machines8
        > cat $TASKMAP
        > time $MPIRUN -v -machinefile $TASKMAP -np $NPROCS $BASE/$EXEC
      3. Note: You can change the default script by using the Unix “patch” utility. Cut the indented section listed above into a file, say “mod”. Then type “patch run.cctm mod”.
    23. Other details:
        1. You can check output ioapi file headers (and data) using the netCDF utility ncdump. This utility will be located in the same place as netCDF, mentioned in the netCDF section above (see the example after this list).
        2. The GRIDDESC file contains horizontal projection and grid domain definitions that are required input for many CMAQ models. The run scripts for ICON, BCON, and CCTM contain environment variables that point to the GRIDDESC file.
          1. The horizontal grid definition can be set to window from the met and emissions input files. However, the window must be a “proper subset” (i.e., a subset from the interior of the domain and not including boundaries). Note: The domains represented by the met and emissions data must be the same.
        3. Running CCTM for a windowed domain or a higher resolution nested domain from larger or coarser met and emissions datasets requires creating initial and boundary data for the target domain using ICON and BCON.
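        For example, to inspect just the header of a CCTM output file (replace <output file> with an actual file name in $M3DATA/cctm):

            ncdump -h $M3DATA/cctm/<output file> | less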
    24. Comparing with References
      1. The fastest way to compare is using the NetCDF Operators (http://nco.sourceforge.net) and the PseudoNetCDF python library (http://pseudonetcdf.googlecode.com). The commands below will display the minimum and maximum ratio for each variable in the dataset. Add the -v <VARNAME> option to the pncdump commands or ncbo command to apply to just one variable.
        ncbo --op_typ="/" newfile.nc oldfile.nc ratiofile.nc
        pncdump -r ROW,min -r COL,min -r LAY,min -r TIME,min ratiofile.nc
        pncdump -r ROW,max -r COL,max -r LAY,max -r TIME,max ratiofile.nc
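        For example, to restrict the comparison to ozone only (the output file name ratio_o3.nc is illustrative):

            ncbo --op_typ="/" -v O3 newfile.nc oldfile.nc ratio_o3.nc
            pncdump -r ROW,min -r COL,min -r LAY,min -r TIME,min ratio_o3.nc
            pncdump -r ROW,max -r COL,max -r LAY,max -r TIME,max ratio_o3.nc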
    25. Or make a figure of fractional bias as a function of concentration
        1. Save the following as a Python script (e.g., fracbias_o3.py) and run it with python fracbias_o3.py:

            from netCDF4 import Dataset
            import numpy as np
            from matplotlib import use
            use('Agg')  # select a non-interactive backend before importing pylab
            import pylab as pl

            # read ozone from both files and scale by 1000 (ppm to ppb)
            newf = Dataset('newfile.nc')
            oldf = Dataset('oldfile.nc')
            newo3 = newf.variables['O3'][:] * 1000
            oldo3 = oldf.variables['O3'][:] * 1000

            # fractional bias of the new run relative to the old run
            fbo3 = 2 * (newo3 - oldo3) / (newo3 + oldo3)

            # bin the fractional bias by the old-run concentration in 5 ppb bins
            dc = 5
            lbs = np.arange(0.0, 120, dc)
            ubs = lbs + dc
            ubs[-1] = np.inf
            mbs = lbs + dc / 2.
            fbs = []
            for lb, ub in zip(lbs, ubs):
                thisfb = np.ma.masked_where(np.logical_or(oldo3 > ub, oldo3 < lb), fbo3).compressed()
                fbs.append(thisfb)

            # box-and-whisker plot of fractional bias for each concentration bin
            pl.boxplot(fbs, positions=mbs, whis=np.inf)
            pl.xticks(lbs.tolist() + [lbs.max() + dc], ['%.0f' % x for x in lbs.tolist() + [ubs.max()]])
            pl.ylim(-2, 2)
            pl.setp(pl.gca().get_xticklabels(), rotation=90)
            pl.savefig('fracbias_o3.png')