This is an updated edition of an informal NCOM user's guide for NCOM 1.3. If you have any suggestions for improvements (e.g., something that could be clearer) or anything that you can add (e.g., how to run ncom on a specific computer system or fix a problem) to help other users, please send them to me. Thanks.

-- Paul Martin
   email: martin@nrlssc.navy.mil
   phone: 228-688-5447

&& NCOM USER'S GUIDE

Date of this revision: 10-20-2000.

This revision is for the current version of ncom, which has been modified by Alan Wallcraft to allow running on different numbers of processors without the need for recompiling (though switching between one processor and multiple processors does require a recompile), and by Paul Martin to improve the nesting capability and to allow the use of higher-order finite differencing.

The major differences from NCOM 1.0 are (1) the file structure for the model code has been significantly changed, and (2) the Fortran code for the model has been changed to fix some bugs, to allow the optional use of higher-order finite differences (on an experimental/trial basis), and to provide better support for nesting. The setup routines have been modified to provide better support for adjusting the bathymetry and for setting up nested grids. The file structure for the model code is more complicated than before, but users should not have to concern themselves with most of it, and the task of porting the model to different machines should be a bit clearer and easier than with version 1.0.

Note that the input files have not been changed except for OPARM_1.D, the input parameter file. This file has 7 additional input parameters. The first 6 follow parameter "indrag" and are numerical option flags that allow selecting different finite difference schemes for particular calculations. The 7th new parameter is a nesting flag (nsto(7)) that indicates whether or not to use feedback of the T and S fields from the fine grid to the coarse grid when doing nesting.

Because of some bugs, river inflows and open boundary conditions were not working properly when running on multiple processors with the previous version of the code (sorry about this). They should work now; however, see the section on "Testing by users" below. Note that nesting is not yet running on multiple processors - though it should be soon.

&& Testing by Users: IMPORTANT! READ THIS! IMPORTANT! READ THIS!

We are trying to make ncom as robust as possible. However, for the time being, every new application is a new "test" for the model code, especially as regards running on multiple processors. It is difficult for us to test the model on all the combinations of domains, options, computers, and numbers of processors that might be used. Eventually, most of the bugs should be gotten out. In the meantime, however, it is important to test your domain to see that it gives exactly the same answers (on the same machine) for:

1. Running on multiple processors vs running on 1 processor.
2. Using shrinkwrapping vs not using shrinkwrapping.
3. A restart.

The easiest way to see if the model is giving identical answers is to check the residual "rrtol" that is printed out every timestep by the cgssor solver used for the free surface. This residual is very sensitive to any change in the model solution. Any change in the last digits of "rrtol" indicates that the solution is NOT bit-for-bit identical. If the rrtol values are identical after 10-20 iterations, the solutions are probably identical. You can run longer to be sure.
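For example, if the printed output of each run is captured in a log file (the log file names here are just illustrative, and this assumes the residual lines contain the string "rrtol"), the residuals can be extracted and compared directly:

  grep rrtol run_1cpu.log > rrtol_1cpu.txt
  grep rrtol run_4cpu.log > rrtol_4cpu.txt
  diff rrtol_1cpu.txt rrtol_4cpu.txt

Any difference reported by diff, even in the trailing digits, means the two solutions are not bit-for-bit identical.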
If you are not getting identical answers, check your setup and try to see if everything is set up OK. If you can't find out why the solutions are different, contact Paul Martin and we will try to find the problem.

To check the restart, set the model to write a restart at some interval, say 24 hrs. Then run 36 hrs. Then restart at 24 hrs and compare the rrtol values at 36 hrs with those of the original run at the same time.

&& General ncom info:

NCOM is set up so that the main model program should not require any (or at least very many) changes to run a particular simulation. To make this possible, almost everything needed for a model simulation is passed in via input files. A setup program (ncom_setup) is needed to generate the input files. This program is considered to be in the domain of the user, i.e., the setup program must be modified by the user to set up a particular simulation. It is recommended that the user modify one of the existing setup programs that are available. Contact Paul Martin if some assistance with this is needed. There may be an existing setup program similar to your needs.

Most of the model input files are read and written by the subroutines in file ncom1rwio.F. The same subroutine is used to read and write a particular file so that the code for reading and writing the file is in the same place and can more easily be kept consistent. The input/output files are either IEEE binary or ascii files and should be fully portable to the different computers that we normally use.

Note that all that is required to "run" a particular simulation are the model executable, the model run script, and the model input files. Hence, it may be convenient to set up the input files and look at the model output on a computer other than the one on which the model itself is being run (e.g., where interactive plotting is available to make it easier to inspect the fields).

&& General directory structure:

The main model directory (as currently set up) contains subdirectories for (1) the model code(s) and (2) individual applications (simulations). A listing of the subdirectories on the main model directory might typically show something like:

  ncom_1.2  - subdirectory for code for ncom version 1.2
  ncom_1.3  - subdirectory for code for ncom version 1.3
  run_glb   - subdirectory for global simulation
  run_med   - subdirectory for Mediterranean simulation
  run_gom   - subdirectory for Gulf of Mexico simulation

In this example, there are two subdirectories containing different versions of the model, and three subdirectories containing the files for specific model applications.

&& Structure of model code subdirectory.

The model code subdirectory (ncom_*) contains all the files needed to generate the ncom executable. The structure of this directory may seem a bit complicated, but the user should not have to deal with most of it.

  Makefile                - top-level Makefile for NCOM.
  Makefile_ncom_t3e_one   - log file from making an ncom executable.
  Makefile_ncom_t3e_shmem - log file from making an ncom executable.
  READMEs/        - directory of readme documentation/explanation files:
    README.config   - description of config files.
    README.include  - description of include files.
    README.macros   - description of macros.
    README.make     - description of makefiles.
    README.xmc      - description of Alan's communication routines.
  bin/            - directory for ncom executable(s).
  config/         - configuration files used for compiling ncom code. Each
                    file is set up for a specific machine architecture (ARCH)
                    and a specific parallelization strategy (TYPE).
  include/        - all ncom include files that are included via cpp (note
                    that these now use suffix *.h rather than *.inc):
    CAF.h            - Co-Array Fortran I/O.
    COMMON.h         - common blocks for NCOM.
    HEADER_MPI.h     - MPI header on generic machine.
    HEADER_MPI_AIX.h - MPI header on IBM SP.
    HEADER_MPI_T3E.h - MPI header on Cray T3E.
    MACROS.h         - macros for customizing NCOM.
    NCOMPAR.h        - common blocks for NCOM subroutine omodel.
    PARAM.h          - compile-time constants for NCOM.
    README.include   - help file for includes.
    README.macros    - help file for macros in MACROS.h.
  lib/            - directory for ncom compiled libraries:
    libncom_one.a    - compiled library of main ncom subroutines.
    libnone_one.a    - compiled library of dummy computer-specific routines.
    libpdum_one.a    - compiled library of dummy NCAR plotting subroutines.
    libutil_one.a    - compiled library of Alan's communication subroutines.
  libsrc/         - directory of all ncom Fortran subroutine files.
    Makefile        - make compiled libraries containing collections of ncom
                      Fortran files and put the libraries on /lib.
    ncom/           - directory of ncom main Fortran subroutines.
      Makefile        - make compiled library (libncom_*.a) of ncom main routines.
      ncom1.F         - all of old ncom1.F except the driver module (now in
                        ncom.F on directory src/ncom/).
      ncom1baro.F     - free surface calculations.
      ncom1init.F     - initialization routines.
      ncom1nest.F     - interpolate bc for, and provide feedback from, nested grids.
      ncom1obc.F      - open boundary conditions.
      ncom1plib.F     - generic routines from Paul Martin's library "plib".
      ncom1rwio.F     - read and write input files.
      ncom1sbc.F      - surface forcing routines.
      ncom1tide.F     - tidal calculation routines.
      ncom1updt.F     - main update routines for u, v, T, S.
      ncom1util.F     - utility routines used for testing, etc.
      ncom1vmix.F     - vertical mixing.
    none/           - directory for dummy computer-specific routines.
      Makefile        - make library (libnone_*.a).
      kinergy.F       - subroutine to calculate volume-averaged kinetic energy.
      ncom1none.F     - dummy subroutine used to simplify makefile logic - used
                        for computers that use no specific Fortran utility routines.
    pdum/           -
      Makefile        -
      ncom1pdum.F     - dummy plotting routines for ncom when interactive NCAR
                        graphics are not available.
    r10k/           - Fortran routines specific to the SGI Origin 2000.
      Makefile        -
      wtime.c         - Alan's ncom routine to calc wall time on SGIs.
      zunder.c        - Alan's ncom routine to flush underflows to zero on SGIs.
    setup/          - general routines for doing a setup for a simulation.
      Makefile        -
      ncom_setup_plib.F - general routines for setting up a simulation.
      ncom_setup_spln.F - spline interpolation routines from D. S. Ko.
    sunw/           - Fortran routines specific to Sun Ultra 2 workstations.
      Makefile        -
      wtime.c         - Alan's ncom routine to calc wall time on Sun systems.
    util/           - directory of Alan's communication routines, both for
                      shared-memory (sm) and multi-processor (mp) computing.
      Makefile        - make compiled library (libutil_*.a) of Alan's com routines.
      README.xmc      -
      xmc.F           -
      xmc_mp.F        -
      xmc_sm.F        -
      za.F            -
      za_mp.F         -
      za_sm.F         -
  src/            -
    ncom/           - directory for ncom driver and makefile to make executable.
      Makefile        - compile ncom.F, link the executable, and put it on /bin.
      ncom.F          - main driver routine for ncom.
    test_xca        -

&& Structure of simulation subdirectory.

The simulation or application directory contains the files specific to a particular simulation. Note that not all the files listed here are needed if the model input files already exist (i.e., if the model input files do not have to be created by running program ncom_setup).
All that is needed to run a simulation are a run script, the model input files, and a model executable (the model executables are stored on the model directory). The other files here are involved in making and running the ncom_setup program to generate the model input files.

  Makefile        - make executable for program ncom_setup.
  README.make     - readme file that discusses how to run the model setup
                    program to generate the model input files.
  clean.u         - script to delete libraries, executables, and soft links
                    to the model directory.
  del_links       - script to delete soft links to the model directory.
  make.u          - script to run ncom_setup.com.
  make_links      - script to make soft links to the model directory.
  ncom_setup.com  - script to make the executable for the model setup.
  ncom1_sunU2_one.com    - run script to run an ncom simulation.
  ncom1_o2k_one.com      - run script to run an ncom simulation.
  ncom1_o2k_shmem_04.com - run script to run an ncom simulation.
  input/          - directory of input files for the particular simulation.
  output/         - directory of model output files.
  restart/        - directory of model restart files.
  src/            - directory for source code for a particular setup.
    Makefile        - intermediate makefile for making the ncom_setup executable.
    setup/          - directory for source code for ncom_setup.
      Makefile        - makefile to make ncom_setup.
      ftn.com         - script to run program "ftnchek" to check the
                        ncom_setup Fortran. ftnchek is available on Suns for
                        checking Fortran code.
      ftn.log         - output from ftnchek.
      ncom_setup.F    - main ncom_setup program. This contains setup routines
                        that are specific to a particular model setup.

Note that general model setup routines are stored on the model directory on subdirectory libsrc/setup/.

&& General procedure for running ncom:

1. Make the ncom executable (on the model code subdirectory).
2. Set up a particular simulation (on the simulation subdirectory).
3. Run the simulation (on the original simulation subdirectory or some other).

&& 1. Making the ncom executable:

In the following discussion, ncom_* will refer to the subdirectory containing the model code. We have tried to provide here the basic information needed to make an ncom executable. There are help files on directory ncom_*/READMEs that provide more discussion of the various aspects of making the ncom executable. Before making the ncom executable, some parameters in files MACROS.h and PARAM.h on the ncom_*/include subdirectory need to be checked and/or set.

Set the halo width "nmh" in file ncom_*/include/PARAM.h. Note - the discussion here regarding values of nmh reflects "planned" changes. For now, ALWAYS compile with nmh=2!

The halo width generally ranges from 0 to 2. The halo width must be wide enough for the particular model code and application used. Having a halo wider than needed should not cause any problems, but has the (small) penalty that the model takes up slightly more memory than needed. The halo width requirements are:

  2nd-order code, no periodic boundaries, single processor:    nmh = 0.
  2nd-order code, no periodic boundaries, multiple processors: nmh = 1.
  2nd-order code, periodic boundaries:                         nmh = 1.
  4th-order code, no periodic boundaries, single processor:    nmh = 1-2.
  4th-order code, no periodic boundaries, multiple processors: nmh = 2.
  4th-order code, periodic boundaries:                         nmh = 2.

Set the maximum allowed dimensions in file ncom_*/include/PARAM.h. These variables are used to dimension some scratch arrays in the model code. The model does not use a lot of these scratch arrays, so the penalty in memory usage for setting these values larger than necessary is small.
  nrmx    - maximum allowed number of scalar fields (nr), usually =2 (T & S).
  nqmx    - maximum allowed number of turbulence fields (nq), usually =2.
  nobmx   - maximum allowed number of open boundary points.
  ntcmx   - maximum allowed number of tidal constituents.
  nrivmx  - maximum allowed number of river inflow points.
  mxgrdso - maximum allowed number of ocean model grids (nests).

See file ncom_*/READMEs/README.include for more discussion.

Set the MACRO values in file ncom_*/include/MACROS.h:

  MXPROC - maximum number of processors (<= MX1PRC**2).
  MX1PRC - maximum number of processors in either direction (x or y).
  MN1PRC - minimum number of processors in either direction (x or y).
  NMXA   - maximum whole array dimension in either direction (x or y).
  LMX    - maximum vertical array dimension (max number of layers + 1).

Again, setting dimensions NMXA and LMX larger than needed only incurs a small penalty in terms of memory requirements. It is recommended that all the maximum dimensions be set to the largest values needed to avoid needing to recompile. The other MACRO variables in MACROS.h are either for particular tests or applications or do not generally need to be modified by the user. See file ncom_*/READMEs/README.macros for more discussion.

Make the ncom executable file. On the model code subdirectory (ncom_*) type, for example,

  make ncom ARCH=t3e TYPE=shmem

or type

  make ncom ARCH=t3e TYPE=shmem >& Make_ncom_t3e_shmem &

if you want a log file containing a record of the compilation. Here, ARCH defines which machine architecture the make is running on (E10K, o2k, sp3, sunU2, t3e) and TYPE is the parallelization strategy (one, auto, mpi, mpisr, shmem, caf). See file ncom_*/READMEs/README.config for the available ARCH and TYPE combinations, and for how to create new configurations. The ncom libraries that are created are stored on ncom_*/lib, and the ncom executable that is generated is stored on ncom_*/bin.

To delete the libraries and executables for a particular TYPE, use:

  make clean ARCH=t3e TYPE=shmem

To delete all libraries and executables, use:

  make clean_all

&& 2. Setting up (generating input files for) a particular simulation.

The steps here are:

Set up a subdirectory for a particular model simulation (call it run_sim for the purpose of this discussion). The easiest way to do this is to copy an existing model simulation directory, along with its subdirectories (preferably for a similar type of simulation), and then modify the files that need to be changed for the particular simulation. The directory structure for the simulation directory is discussed above.

Modify the model input parameter file "OPARM_1.D" on the "input" subdirectory for the particular simulation to be run. There is a discussion of the input parameters in file OPARM_1.D below. The other files on the input subdirectory will be replaced when the setup program is run.

Modify the setup program "ncom_setup.F" on subdirectory "src/setup" for the particular simulation to be done.

Edit script "make.u" on directory run_sim to set the computer architecture (ARCH) being used for the setup and the model directory (i.e., model version) being used. Check model directory ncom_*/config for the presence of a configuration file $(ARCH)_setup for the particular computer (ARCH) on which the setup program is being run. This file can easily be created if it does not exist - see the discussion just below. Type "make.u" to make the ncom_setup executable.
Edit the ncom_setup run script "ncom_setup.com" to make sure the input and output files are defined and that the script is appropriate for the computer being used. Type "ncom_setup.com" to run the ncom_setup program and generate the model input files. See file run_sim/README.make for some additional discussion.

Note that general setup subroutines (i.e., not specific to a particular simulation) are stored on the model directory on subdirectory libsrc/setup. The file ncom_setup_plib.F on this subdirectory contains a number of general subroutines that can be used to help set up a particular simulation. When ncom_setup is made, the general libraries that are needed are compiled and put on the model subdirectory ./lib. The names of these libraries all end in _setup.a.

The setup program can only be run on a single processor. Hence, all the routines used for the model setup are compiled to run on a single processor. The compilation of the setup routines uses a special configuration file on the model directory on subdirectory config which has the suffix _setup. If such a file does not exist for the computer being used, it can be generated from the single-processor configuration file for that computer by making only a single change: the macro option -DSETUP must be added to the list of CPPFLAGS.

&& 3. Running a particular simulation.

For this discussion, the simulation directory will be called run_sim. The simulation can be run on the original model simulation subdirectory or another one, e.g., set up on another computer or on a scratch directory. All that is required to run a simulation are (i) a run script, (ii) the model executable, and (iii) the model input files.

Some examples of run scripts are on directory run_scripts. The run scripts for the new model configuration are about the same as the old ones. A difference is that an additional input file, spmd.D_n (where n is the number of processors being used), is needed (an illustrative spmd.D file is sketched below). The parameters defined in spmd.D_n are:

  ipr    - number of processors used in the x dimension of the model.
  jpr    - number of processors used in the y dimension of the model.
  jqr    - total number of processors used = ipr*jpr.
  iprsum - number of subdomains in x used for summing values over the grid.
           iprsum must be a multiple of ipr, and the x-dimension of the model
           must be a multiple of iprsum.
  jprsum - number of subdomains in y used for summing values over the grid.
           jprsum must be a multiple of jpr, and the y-dimension of the model
           must be a multiple of jprsum.

The simplest thing is to set iprsum=ipr and jprsum=jpr. This is the most efficient. Larger values of iprsum and jprsum are used when checking bit-for-bit reproducibility of results on different numbers of processors, since changing the values of iprsum and jprsum will change the summing interval used, which may change the roundoff error and hence the results. The spmd.D_n files are generally located with the other model input files since the values defined in them are somewhat grid-dependent.

The model input files are by default located on directory run_sim/input, but another location can be pointed to in the run script. Similarly, the output and restart files are by default put on directories run_sim/output and run_sim/restart, but can be put elsewhere as defined in the run script. The model executable is copied into the simulation directory from wherever it resides. Note that the executable is specific to the computer being used and to the type of communication being used if running on more than one processor.
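As an illustration, an spmd.D_4 file for 4 processors in a 2 x 2 decomposition, with the summing subdomains matching the processor decomposition, would define values like the following (the values, and the exact layout of the file, are illustrative only - see an existing spmd.D_n file for the actual format):

  ipr    = 2
  jpr    = 2
  jqr    = 4
  iprsum = 2
  jprsum = 2

With these values, the x- and y-dimensions of the model grid must be multiples of iprsum=2 and jprsum=2, respectively.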
Note also that the executable is specific to the choices made in defining the parameters in the MACROS.h and PARAM.h files prior to the compile.

&& Model input files:

Special routines are provided to read and write 2-D and 3-D arrays for use in multi-processor (MP) environments with distributed memory. These parcel out input arrays to the appropriate processors, and gather output from the processors into a single output file. Some specific filename extensions are used to denote the type of file:

  *.A - contains 2-D and 3-D array data that is parcelled out to the processors.
  *.B - scalar data associated with the *.A file, e.g., the date and time.
  *.D - data files that are read in full by each processor.

The *.A and *.B files are unformatted binary. The *.D files may be binary or formatted ascii. The input files also have a numerical designation to denote the model grid (nest) for the data, e.g., file OPARM_1.D is the input parameter file for grid number 1 (the main grid). The data for each grid (nest) is handled separately, so each grid or nest must have a complete set of input files and will produce its own set of output files.

There are different input files for different inputs. In general, if an option to use a particular type of input data (e.g., surface forcing, tides, open boundary values, or river inflow) is not used, the input file for that data will not be read and does not need to exist. If a particular input data file is not used (e.g., open boundary data, tidal data, river inflow data), the flag corresponding to that input data in the input parameter file (OPARM_n.D) may need to be set to the appropriate value (usually =0) so that the program does not try to read that data from an input file.

  OPARM_n.D - input parameters and options.
  odimens.D - grid and array dimensions for all the grids (nests).
  ohgrd_n.A - array data for horizontal grid.
  ohgrd_n.B - scalar data for horizontal grid.
  oinit_n.A - array data for initial conditions.
  oinit_n.B - scalar data for initial conditions.
  opnbc_n.D - data for open boundaries.
  orivs_n.D - river inflow data.
  osflx_n.A - array data for surface forcing fields.
  osflx_n.B - scalar data for surface forcing fields.
  otide_n.D - input tidal data.
  otscl_n.A - array data for T-S climatology.
  otscl_n.B - scalar data for T-S climatology.
  otsss_n.A - array data for specified surface T and S values.
  otsss_n.B - scalar data for specified surface T and S values.
  otsza_n.A - array data for horizontally averaged T and S fields.
  otsza_n.B - scalar data for horizontally averaged T and S fields.
  ovgrd_n.D - data specifying the vertical grid spacing.
  stop.D    - stop file, used to pause an interactively run simulation to
              allow inspection of model fields.
  spmd.D_n  - parameters dependent on the number of processors being used
              (see the discussion above).

Input files that have time-varying input, such as surface forcing fields, are labeled with an 8-digit integer date of the form YYYYMMDD and an 8-digit integer time of the form HHMMSSCC, where CC denotes hundredths of a second. If the data in a time-varying type of input data file is provided only at a single time (i.e., is fixed in time), set the date and time integers to zero. For seasonal or monthly varying climatological data, set the year (YYYY) to zero and set the month and day according to the time of year of the field. The model will automatically cycle through time-varying input fields and interpolate the fields linearly in time to the current time of the model.
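For example (the dates here are just illustrative), monthly climatological surface forcing fields centered at mid-month would carry the dates 00000115, 00000215, ..., 00001215, each with time 00000000; at a model time of January 20, the model would interpolate between the 00000115 and 00000215 records.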
Real-time input fields can cover a greater period of time than that covered by the model run, but must at least cover the time period of the model run. Climatological fields will automatically be cycled around at the end of the year. Input data specified only at a single time will remain fixed.

&& Model output files:

Some model output data files are provided, e.g., for surface fields and for 3-D fields. The model output files are written from subroutine "output". The user may wish to modify the model output for their own needs. The model subroutines in file ncom1rwio.F for reading/writing output files can be modified, or the user can output fields in a format of their own liking. For MP use, array data from the different processors must be gathered; hence, the user may wish to modify the existing output subroutines in ncom1rwio.F that do this. The output files currently provided are:

  out3d_n.A - array data for 3-D output fields.
  out3d_n.B - scalar data for 3-D output fields.
  outsf_n.A - array data for 2-D surface output fields.
  outsf_n.B - scalar data for 2-D surface output fields.

Post-processing programs can use the subroutines in ncom1rwio.F to read the output files by linking in ncom1rwio.o and a couple of support libraries. See file ncom_images.u for an example.

The time that is included with the output fields is the elapsed time in days since the start of the model run. This can be combined with the model start time (in OPARM_n.D) to obtain the actual time and date of the fields, e.g., an elapsed time of 36.5 days with start date idate=19980101 and start time itime=00000000 corresponds to 1200 on Feb. 6, 1998.

&& Input parameters in file OPARM_n.D

  model  - name of the model (NCOM1) being used.
  expt   - name of the experiment.
  domain - name of the domain.

Note that these three character variables are provided for the user's convenience (e.g., for labeling output) and are not currently used for anything in particular within the model code.

Run control...................................................................
  idate  - start date for the run, of the form YYYYMMDD.
  itime  - start time for the run, of the form HHMMSSCC, where CC denotes
           hundredths of a second.
  rstart - logical flag to denote a restart.
  batch  - logical flag to denote a batch run (no interactive I/O).
  tothrs - total length of the run in hrs.

Output control................................................................
  outo(1) - frequency for output of restart file (hrs).
  outo(2) - frequency for output of 3D fields (hrs).
  outo(3) - frequency for output of surface fields (hrs).
  outo(4) - frequency for output of x-y sections (hrs).
  outo(5) - frequency for output of x-z sections (hrs).
  outo(6) - frequency for output of y-z sections (hrs).

Note that output parameters 4-6 are not currently used; they are provided for the user to use in modifying the model output.

Physical options..............................................................
  mode    - mode of the ocean model calculation (as defined in POM):
    =1 Run as a local mixed-layer model. This option is not implemented, but
       ncom can be run at a single horizontal pt, i.e., as a local mixed-layer
       model, by setting the horizontal grid dimensions to 1 and by setting
       parameter indcyc below to "6" to provide periodic bndys in both x and y.
    =2 Run as a barotropic, single-layer model. This option is not
       implemented, but the full model will run with a single layer by
       setting the vertical grid dimensions l=ls=2.
    =3 Run the full, prognostic, 3-D model.
    =4 Run the model in diagnostic mode. With this option, all scalar fields
       are kept at their initial values.
  indcor  - controls the setting of the Coriolis parameter:
    =0 Set the Coriolis parameter to zero.
    =1 Set to a constant over the whole domain corresponding to the mean
       latitude.
    =2 Spatially variable based on the local latitude (this is the normal
       setting).
  indden  - formula used for the density calculation:
    =1 Friedrich-Levitus (1972) polynomial formula. For the sake of
       efficiency, this is not implemented as a function of pressure within
       the sigma layers. It was derived for seawater, so its accuracy may be
       low in fresh water. It is fast. Probably a better fast density
       calculation should be found.
    =3 The UNESCO formula as adapted by Mellor (1991) for POM. This is
       expensive (50+ operations per pt) and adds about 18% to the model run
       time, but it should have good accuracy.
  indadv  - turn momentum advection off (=0) or on (=1).
  indadvr - turn scalar advection off (=0) or on (=1).
       Note that on the sigma grid, since the layer depths change because of
       the free surface, the advection is an integral component for
       maintaining conservation, i.e., if advection is turned off, the field
       in question will not be exactly conserved on the sigma grid if the
       surface elevation changes.
  indxk   - type of horizontal diffusion to be used:
    =0 None.
    =1 Laplacian mixing using a specified grid-cell Reynolds number criterion
       (xkre) to scale the diffusion. The minimum mixing coefficients in x
       and y are set by xkmin and ykmin, respectively.
    =2 Smagorinsky Laplacian mixing as used in POM, except that the cross
       terms for momentum diffusion are not included and the horizontal
       averaging used for momentum diffusion is slightly different. This is
       scaled by parameter "smag" (referred to as "horcon" in POM), which is
       set below. A minimum mixing coefficient is set by xkmin below (ykmin
       is not used). Note that use of Smagorinsky with smag=0.1 tends to give
       results similar to the grid-cell-Re scheme with a value of xkre of
       ~ 15-20.
  indzk   - type of vertical mixing to be used:
    =1 Constant values (values set by zkmmin and zkhmin below).
    =2 Mellor-Yamada Level 2 scheme.
    =3 Mellor-Yamada Level 2.5 scheme.
    =4 Mellor-Yamada Level 2.5 scheme with the option of the Craig and Banner
       (1994) treatment of the surface TKE flux and length scale (see
       parameter indtkes just below).
       Note that use of the Mellor-Yamada Level 2.5 scheme (indzk = 3 or 4)
       requires the use of Mellor's routine for calculating density
       (indden=3).
  indtkes - choice of surface TKE bc when indzk=4:
    =1 Specify the surface value of TKE (scaled as [surface friction
       velocity]**2).
    =2 Specify the surface TKE flux (scaled as [surface friction
       velocity]**3).
  indlxts - relax T and S at depth to the input climate fields. The timescale
       is given by parameter rlax_ts below. The depth scale is hardwired in
       subroutine relax_ts to 500 m (the relaxation rate increases with depth
       so as to have reduced impact in the dynamically active upper layers).
    =0 no.
    =1 yes.
  indext  - solar extinction:
    =0 None.
    =1 Use Jerlov optical water types as indicated by parameter indtype.
  indtype - Jerlov optical water type to be used:
    =1 type I.
    =2 type IA.
    =3 type IB.
    =4 type II.
    =5 type III.
       Note that the solar extinction within ncom is stored in a 3-D array
       (ext) so that full spatial and temporal variability could be
       accommodated. I/O for such variability is not currently provided, but
       could be set up by the user. All solar radiation penetrating the upper
       surface of the bottom layer is absorbed within the bottom layer.
  indbio  - call the biological model.
       This requires increasing the number of scalar fields (nr) to include
       the biological constituents, and providing initialization, surface
       fluxes (the default is zero surface flux, which may be appropriate in
       most cases), and boundary conditions for these fields. A simple,
       4-constituent biological model is provided in subroutine biology, but
       this could easily be upgraded. Ask Paul Martin for more info about
       running this option.
  bclinic - logical flag to turn on/off the horizontal baroclinic pressure
       gradients.
  curved  - logical flag to turn on/off the advective curvature correction
       term for a curvilinear grid. This should be set "true" when a
       curvilinear grid (i.e., a non-cartesian grid such as a
       longitude-latitude grid) is used and should be set "false" when a
       cartesian (non-curved) grid is used.
  noslip  - flag for momentum boundary conditions at land-sea boundaries:
    =0 use free slip.
    =1 use no slip.
  sigdif  - logical flag to subtract climate values from the scalar fields
       when performing horizontal diffusion along sigma layers, i.e., to
       diffuse the "anomaly" relative to the climate value. This prevents
       much of the effective vertical diffusion of scalar fields that can
       occur along sloping sigma layers, though it can cause other problems
       when the local values of the scalar field are much different from the
       climate values provided, e.g., downwelling along a coast will tend to
       cause warming of the water on the edge of (e.g., inshore of) the
       downwelling area because of the warm "anomaly" in the downwelling
       area. Generally, however, the latter problem is less than the former,
       and sigdif is usually set = .true. These problems are reduced as grid
       resolution is increased and horizontal mixing is reduced and/or the
       bottom slopes are reduced.
  largmix - logical flag to use the Large et al. (199?)
       Richardson-number-dependent background mixing for Richardson numbers
       above critical but less than 0.7. Currently only implemented when
       using Mellor-Yamada Level 2 mixing (indzk=2).
  wetdry  - logical flag for wetting and drying (NOT IMPLEMENTED).
  tidpot  - logical flag for forcing with the local tidal potential. This is
       needed when the tides are calculated in moderately large (> 300 ? km)
       and larger domains. If this is used, model subroutine tidepot must be
       modified to provide the proper tidal potential calculation for the
       tidal constituents used. We have not yet gotten around to writing a
       general tidal potential routine.

Physical parameters...........................................................
  rho0    - reference density for seawater (kg/m3).
  g       - gravitational constant (m/s2).
  cp      - specific heat for seawater ( ).
  ramphrs - length of the ramp function for ramping up the baroclinic
            pressure gradients, surface forcing, boundary forcing, etc. (hrs).
  xkmin   - minimum horizontal momentum mixing coef in x for the grid-cell
            Reynolds number mixing scheme (m2/s), and also the minimum
            horizontal mixing coefficient for the Smagorinsky scheme.
  ykmin   - minimum horizontal momentum mixing coef in y for the grid-cell
            Reynolds number mixing scheme (m2/s), usually set the same as
            xkmin.
  xkre    - maximum horizontal grid-cell Reynolds number for the grid-cell
            Reynolds number mixing scheme.
  smag    - scaling constant for the Smagorinsky horizontal mixing scheme
            (same as parameter "horcon" in POM).
  prnxi   - inverse Prandtl number for horizontal mixing (a value smaller
            (larger) than 1.0 will reduce (increase) the mixing for scalar
            fields proportionally).
  zkmmin  - minimum vertical diffusion coef for momentum (m2/s).
  zkhmin  - minimum vertical diffusion coef for scalar fields (m2/s).
  zkre    - maximum vertical grid-cell Reynolds number (only used with MYL2
            mixing, i.e., with indzk=2).
  cbmin   - minimum value for the bottom drag coefficient.
  botruf1 - bottom roughness (m) if a constant value is used. This is used to
            calculate the bottom drag coefficient and assumes a logarithmic
            bottom boundary layer (same as POM).
  rlax_ts - timescale for relaxation of the deep T and S fields to the input
            climate values (hrs).

Numerical options.............................................................
  itermom - number of iterations of the baroclinic momentum and free surface
       equations. These can be iterated (with itermom > 1) to eliminate the
       small inconsistency that exists in the calculation when itermom = 1,
       but this is time consuming and not usually necessary.
  indbaro - method of solution used for the free surface mode:
    =1 Fully explicit; the model is run with a single (very small) explicit
       timestep. This is not usually done since it is very time consuming.
    =2 Semi-implicit. This is the usual choice.
    =3 Split-explicit. NOT IMPLEMENTED (this is the free surface scheme used
       by POM).
  indsolv - solver used to calculate the semi-implicit free surface:
    =1 cgssor conjugate-gradient solver (the usual choice and the most
       efficient).
    =2 SOR solver (Gauss-Seidel) (not yet parallelized).
    =3 SOR solver (Jacobi) (not parallelized); gives slow convergence, but
       symmetric solutions, which are useful (or necessary) for some model
       tests.
  indrag  - bottom drag calculation, used only when the model is run with a
       single layer (i.e., with l=ls=2):
    =1 explicit bottom drag.
    =2 pseudo-implicit.
    =3 fully implicit (best choice).
  ifdadrh - hor adv of scalars:   =2 2nd order, =3 upw3 (quasi-3rd-order upwind).
  ifdadrv - vert adv of scalars:  =2 2nd order, =3 upw3.
  ifdaduh - hor adv of momentum:  =2 2nd order, =3 upw3.
  ifdaduv - vert adv of momentum: =2 2nd order, =3 upw3.
  ifdpgrd - hor baroclinic pressure gradient:     =2 2nd order, =4 4th order.
  ifdcor  - hor interpolation of Coriolis terms:  =2 2nd order, =4 4th order.
  botrun  - truncate the bottom grid cell to the bathymetry rather than
       rounding the bathymetry to the nearest z-level (planned but NOT
       IMPLEMENTED).
  forward - use a forward scheme on the 1st timestep of a cold start (this is
       the usual choice - use of leapfrog on the 1st timestep would require a
       special initialization of the model fields at the previous time
       level). For now, always set forward = true. Note that for restarts,
       fields at two adjacent timesteps are stored, and restarts
       automatically always use a leapfrog 1st timestep.
  vector  - use vectorizable subroutines. Always set vector = true.
  shrnkwp - use shrinkwrapping. Shrinkwrapping can save considerable time on
       a single processor, depending on the configuration of the domain, but
       may not be of much use for MP (it is not currently allowed for MP).
       Results with and without shrinkwrapping should be identical.

Numerical parameters..........................................................
  dti     - internal model timestep (s). The maximum timestep is governed by
       the maximum advective and internal wave speeds. For numerical
       stability, a parcel of fluid cannot be advected, and a wave cannot
       propagate, more than one grid distance in one "leapfrog timestep"
       (= 2*dti). A rough estimate of an allowable timestep is to take the
       horizontal grid spacing in km and multiply by 120, e.g., for a 1-km
       grid, use dti = 120 s. This timestep should be stable for most
       internal wave and horizontal advective speeds encountered in the
       ocean. However, with high vertical resolution with thin layers,
       vertical advection may limit the timestep.
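As a worked example (the numbers are illustrative), a grid with 5-km horizontal spacing would start with dti = 5*120 = 600 s; with a maximum current of 1 m/s, the horizontal advective CFL value would then be 2*600*1.0/5000 = 0.24, comfortably below the stability limit of 1.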
       The advective CFL values (2*dti*velocity/grid_spacing) are printed
       periodically by the model to allow them to be monitored.
  dte     - external timestep (s) for the split-explicit scheme (NOT USED).
  asf     - Asselin filter coefficient for temporal filtering to prevent
       time-splitting with the leapfrog temporal integration scheme. A
       typically used value is asf=0.05.
  eg1,eg2 - temporal weights for the surface elevation gradient in the
       barotropic momentum equations at the new and central time levels.
       Usually use eg1=0.5 and eg2=0.0, which gives an even weighting between
       the new and old time levels (i.e., zero weighting for the central time
       level).
  vg1,vg2 - same as eg1 and eg2, but temporal weights for the divergence
       terms in the depth-averaged continuity equation. eg1=vg1=0.5 and
       eg2=vg2=0 are the normally used values, and these provide the minimum
       numerical damping of surface waves.

Surface forcing options and parameters........................................
  indsbc  - flag to turn all surface forcing on or off:
    =0 no surface forcing; all the surface flux arrays are set to zero. If
       indsbc=0, this overrides all the other surface forcing flags.
    =1 surface forcing is provided as specified by the other surface forcing
       flags listed immediately below.
  indatp  - surface atmospheric pressure forcing.
  indtau  - surface wind stress forcing.
  indsft  - temperature surface flux.
  indsfs  - salinity surface flux.
  indsol  - solar radiation flux.
       Each of the surface forcing fields indicated by the above parameters
       can be obtained from a coupled atmospheric model (if ncom is coupled
       with an atmospheric model) or from input data. The value to which each
       parameter is set determines which:
    =0 neither; no forcing of this type is applied.
    =1 surface forcing provided from a coupled atmospheric model.
    =2 surface forcing provided from an input data file.
       If there are additional scalar fields besides T and S, their surface
       fluxes are by default set to zero. If particular fluxes for additional
       scalar fields are desired, some modification of the ocean model code
       (subroutine osurfbc) will be required.
  indcld  - not used at this time, but to be used to define whether the solar
       forcing is to be calculated from cloud cover data:
    =0 the solar data is strictly the solar radiation absorbed by the ocean
       in units of degC-m/s.
    =1 the solar data is cloud cover, and the solar radiation must be
       calculated from the cloud cover values.
    =2 the solar data is cloud factor (the ratio of the actual solar
       radiation to the value expected for clear skies), and the solar
       radiation must be calculated from the cloud factor values and an
       appropriate formula for solar radiation for clear skies.
  indsst  - sst relaxed to a specified value via a correction to the surface
       flux, with the rate defined by parameter rlaxsst (defined below):
    =0 sst relaxation is not used.
    =1 sst relaxation is used.
  indsss  - sea-surface salinity (sss) relaxed to a specified value via a
       correction to the surface flux, with the rate defined by parameter
       rlaxsss (defined below):
    =0 sss relaxation is not used.
    =1 sss relaxation is used.
  indsruf - surface roughness calculation:
    =0 set the surface roughness to zero.
    =1 use an input data file (not implemented, but easy to add).
    =2 parameterize from the surface windstress using the Charnock relation.

Note that the fluxes calculated to relax the SST and SSS towards the specified SST and SSS values are added to the specified temperature and salinity surface fluxes obtained from a coupled atmospheric model or from input data within the model.
If these latter fluxes are not specified, then the total flux will just be the "relaxation" flux (if any). In general, even if one is relaxing to specified SST values, it is best to provide the most accurate surface heat flux and solar radiation values that are available, since (a) this will remove some of the burden of determining the surface heat flux from the SST relaxation, and (b) the solar radiation penetrates below the surface and its effect is not fully accounted for by a surface heat flux.

  rlaxsst - rate of relaxation of sst to the specified value in m/d.
  rlaxsss - rate of relaxation of sss to the specified value in m/d.
       These parameters specify the rate of relaxation of the model SST and
       SSS toward the specified values. When multiplied by the difference
       between the model value and the specified value, they have the units
       of a surface flux (e.g., for typical values rho0 ~ 1025 kg/m3 and
       cp ~ 3990 J/kg/degC, rlaxsst = 1 m/d applied to a 1-degC SST error
       corresponds to a heat flux of about 47 W/m2). An advantage of using a
       flux to relax the SST and SSS (rather than "nudging" the surface layer
       values in the model) is that the flux provides a timescale for the
       relaxation that is independent of the ocean model's surface layer
       thickness, which can vary significantly.
  charnok - Charnock constant used to calculate the surface roughness from
       the surface windstress (surruf = charnok*ustar**2/g).

Lateral boundary options......................................................
  indcyc  - flag to denote periodic (cyclic) boundaries:
    =0 no periodic boundaries.
    =4 periodic in x.
    =5 periodic in y.
    =6 periodic in x and y.
       Periodic boundaries are very useful for test problems and also allow
       the model to be run in pseudo-2-D and pseudo-1-D modes. Periodic
       boundaries used to be implemented using a 3-pt overlap; however, they
       are now implemented using halos around the boundary of the model grid.
       With this change, the values of indcyc that denote periodic boundaries
       have changed from 1-3 to 4-6.
  indtide - include specified tidal forcing at the open boundaries. This
       requires an input file of tidal data for the open boundary pts
       consisting of the amplitudes and phases of the surface elevation and
       the normal and tangential velocities. Ask Paul Martin about
       implementing tidal forcing.
  indobc  - flag to denote the source of the open boundary condition data:
    =0 no open bndys; all open boundary points are set to land for the model
       run. This provides a means to test run the model in a domain with all
       the open boundaries closed.
    =1 use the initial values of elevation, velocity, T, and S at the
       boundary points for the full length of the model run.
    =2 use boundary data from an input file.
    =3 use boundary data from the grid in which nested.
  indobe  - open boundary condition for the surface elevation:
    =1 clamped (elevation specified). This is generally used only when the
       only forcing is tidal forcing at the boundary. In this case, only
       tidal elevation data are needed at the boundary; no tidal velocity
       data are needed.
    =2 Flather radiation condition. This is the usual open boundary condition
       used, since it can handle radiation of barotropic disturbances out of
       the model domain. When this boundary condition is used, both the
       elevation and the normal velocity (the depth-averaged normal velocity
       times the local depth) data at the boundary are needed.
  indobvb - open boundary condition for the depth-averaged tangential velocity:
    =1 zero gradient, i.e., set the same as the interior value.
    =2 Orlanski radiation for outward-propagating signals, and relax to the
       specified value (depth-averaged tangential velocity times depth) for
       incoming signals.
    =3 advective boundary condition used by Ko.
  indobu  - open boundary condition for the normal baroclinic velocity:
    =1 calculated internally by the model. This calculation includes all the
       momentum terms except advection and horizontal mixing.
    =2 Orlanski radiation for outgoing signals, and relax to the specified
       value for incoming signals.
    =3 model-calculated value as in (1) plus upwind horizontal advection,
       using the interior values to calculate the advection for outflow and
       the external (specified) values to calculate the advection for inflow.
       This may generally be the best bc for the normal velocity.
    =4 advective boundary condition used by Ko.
  indobv  - open boundary condition for the tangential baroclinic velocity:
    =1 zero gradient, i.e., set the same as the interior value.
    =2 Orlanski radiation for outgoing signals, and relax to the specified
       value for incoming signals.
    =3 advective boundary condition from Ko.
  indobr  - open boundary condition for the scalar fields (T and S):
    =1 set directly from the specified value (used with 2-way nesting when
       there is feedback of the T and S fields to the larger grid).
    =2 Orlanski radiation for outgoing signals, and relax to the specified
       value for incoming signals (generally the best bc for scalars).
    =3 advective boundary condition (not generally a good choice).
    =4 advective boundary condition from Ko.

Lateral boundary parameters...................................................
These are timescales used to relax to the specified values when the signal is incoming for the Orlanski radiation boundary condition. They are specified as an e-folding time in model timesteps (this has the disadvantage that the result will be timestep-dependent if the values are not changed when the timestep is changed - might have to change this).
  rlxobvb - relaxation_timescale/dti for the depth-averaged velocity.
  rlxobv  - relaxation_timescale/dti for the baroclinic velocity.
  rlxobr  - relaxation_timescale/dti for the scalar fields.

River inflow parameters.......................................................
  indriv  - governs river inflows:
    =0 no river inflows (all river inflows, if any, are turned off).
    =1 river inflows (if any) are on.
  indrivr - defines which scalar river inflow values are to be specified by
       input data (note that the data may be input (available) but not used):
    =0 none are specified by input data; the local model value is used for
       the river input for T, and S and any other scalar fields are input
       with a value of zero.
    =1 T is specified, and S and any other scalar fields are set to zero.
    =2 T and S are both specified, and any other scalar fields are set to
       zero.
    >2 all scalar values are input up to scalar field number "indrivr".
       The specified scalar inflow values can be specified at each river
       inflow pt with a single, depth-independent value, or with values
       specified at each depth. This is governed by parameter indrivz, which
       is not defined in the model input file OPARM_*.D, but is read from the
       river inflow data file, and its value is retained internally within
       the ocean model for reference.

Grid nesting parameters.......................................................
  nsto(1) - grid number (each grid or nest is given an integer value,
            starting with "1" for the main grid).
  nsto(2) - grid number of the grid in which nested.
  nsto(3) - grid nesting ratio; must be an integer value.
  nsto(4) - timestep ratio; must be an integer value.
  nsto(5) - i-index of the lower-left corner of the grid in which nested.
  nsto(6) - j-index of the lower-left corner of the grid in which nested.
            These indexes (nsto(5) and nsto(6)) refer to the coarse mesh grid
            point marked + in the schematic below, which shows the lower-left
            corner of the nested grid. The grid points on the nested grid are
            denoted by *, and the dashed lines show the edges of the coarse
            mesh grid cell in question.

              *  |  *     *
              *  |  *     +
              *  |  *     *
              -------
              *     *     *

  nsto(7) - integer flag to denote feedback from the fine mesh (FM) grid to
            the coarse mesh (CM) grid:
    =0 no feedback.
    =1 replace T and S on the CM with averaged values from the FM.

Diagnostics parameters........................................................
  indiag  - level of diagnostic printouts:
    =0 no diagnostics printed.
    =1 print just the number of iterations of the implicit free surface
       solver.
    >1 more diagnostics printed for higher values. Warning - there is a lot
       of printout for large grids, which may render this cumbersome to plow
       through.
  locate  - print out the names of subroutines entered (not complete).

Note that the model input parameters are read by subroutine "define" in file ncom1init.F.

&& Parameters in file PARAM.h

  nmh - halo width. The requirements for setting this variable are discussed
        above. Halos are needed for communication when running on multiple
        processors and when running with periodic boundary conditions. Halos
        are also required when using 4th-order finite differences because
        mask values are needed at the halo locations, and array values are
        accessed at the halo locations (even though they are masked to zero).
        When compiling the ncom_setup program, nmh needs to be zero (the
        setup program does not use halos). This should soon be taken care of
        automatically.

The maximum allowed array dimensions listed below are used to allocate space for scratch arrays. The use of these maximum allowed dimensions (rather than dynamic allocation of the scratch array space) allows the use of a Fortran 77 compiler for the ocean model. This would seem to circumvent the use of dynamic memory allocation somewhat; however, the scratch array space dimensioned by these parameters is relatively small (most of the scratch arrays needed are dynamically allocated "up front" and are passed through the subroutine argument lists to the subroutines where they are needed), and so fairly large maximum values can be allotted to the parameters below without adding much to the memory requirements of the model.

  nmxa    - maximum allowed horizontal grid dimension for latitude and
            longitude.
  nmx     - maximum allowed horizontal grid dimension on a single processor.
  lmx     - maximum allowed vertical grid dimension (l).
  nrmx    - maximum allowed number of scalar fields (nr).
  nqmx    - maximum allowed number of turbulence fields (nq).
  nobmx   - maximum allowed number of open boundary points.
  ntcmx   - maximum allowed number of tidal constituents.
  nrivmx  - maximum allowed number of river inflow points.
  mxgrdso - maximum allowed number of ocean model grids (nests).

The other parameters in this file should not be changed. Note that nmxa, nmx, and lmx are now set by variables defined in file MACROS.h, i.e.,

  nmxa = NMXA
  nmx  = nmxa/MN1PRC
  lmx  = LMX

where NMXA, MN1PRC, and LMX are defined in MACROS.h.

&& Parameters in file MACROS.h

MACROS.h contains parameters for customizing the compilation of NCOM. The ncom code is run through a preprocessor before being compiled, and macros defined in MACROS.h can be used to set certain options that modify the code that is compiled.
The user-defined macros in MACROS.h are:

  MXPROC - maximum number of processors (<= MX1PRC**2).
  MX1PRC - maximum number of processors in either direction.
  MN1PRC - minimum number of processors in either direction.
  NMXA   - maximum whole array dimension in either direction.
  LMX    - maximum vertical array dimension.
  ARCTIC - use Arctic overlap; used for the global domain only.
  SYM4   - provides a check for 4-fold symmetry in the domain; used for
           testing only.
  SYM8   - provides a check for 8-fold symmetry in the domain; used for
           testing only.

Other macros defined in MACROS.h are not normally modified. See the readme file "README.macros" for more info.

&& Interactive printing and plotting.

Both ncom_setup and ncom1 are set up to be able to do interactive printing and plotting of fields on NRL-SSC's Sun workstations. On other computers, the plotting routines needed may not be available, though fields can still be printed so that they can be inspected. The plotting routines that are used are NCAR graphics routines. With a small amount of work, NCAR graphics available on other computers could probably be made to work, or, with a bit more work, other graphics software could be substituted for the NCAR graphics calls.

The linking of the plotting software is handled by the environment variable PLOTFILES that is set in the computer configuration files on subdirectory config. In the configuration file for our NRL Sun Ultra 2 workstations (sunU2_one), PLOTFILES is set to several files used to do NCAR plotting. In the configuration files for other computers, PLOTFILES is set to ../../lib/libpdum_$(TYPE).a to link in the "dummy" plotting routines that are in file libsrc/pdum/ncom1pdum.F. These routines allow printing out model fields, but not plotting.

&& Doing a model restart:

Restarts should be seamless, i.e., results after a restart should be bit-for-bit identical to results obtained without a restart. Note that, because of the leapfrog time-stepping scheme used by the model, a seamless restart uses model fields saved in the restart file at two successive timesteps.

There was previously a restriction that the model timestep could not be changed on a restart. One reason was that the model elapsed time, which is computed as (number of iterations since the start of the run)*(timestep), would be incorrect. This restriction has been lifted by putting a recalculation of the number of iterations "iter" just after the call to read the restart file in subroutine omodel. iter is redefined according to the new timestep as iter = times/dti. Note, however, that changing the timestep will apply a slight "jolt" to the model when it restarts, because the two time levels of fields that are saved for the restart are separated by a different timestep than the one now being used. If the change in the timestep is fairly small (? we have not experimented much with this), the jolt to the model should not be too bad, and the solution should soon adjust to the new timestep. Obviously, if the timestep is changed, the restart will not be seamless.

&& Checking for bit-for-bit reproducibility of model results.

Bit-for-bit agreement is the best check for a number of aspects of running the model, e.g., adding halos, running on multiple cpu's, shrinkwrapping, and restarts. An easy way to check the bit-for-bit reproducibility of the model results is to look at the residual printed out by the free-surface solver (e.g., for cgssor the residual is "rrtol"). This residual is normally printed out by the model every timestep.
This residual is very sensitive to any changes in the model results. If it is identical between two model runs after a few timesteps, the model results are probably identical.

&& Using 4th-order finite differences.

We have been looking at using 4th-order finite differences to try to improve the numerical accuracy of the model and to reduce noise and advective overshoots. However, there are some strong cautions regarding the use of the higher-order differences that are being tried in ncom, which are discussed below. Note especially that the higher-order terms have not been found to work well with sigma coordinates.

The higher-order differences that have been tried are (i) 3rd-order-upwind leapfrog advection (UPW3) for both momentum and scalars in both the horizontal and vertical directions, (ii) 4th-order horizontal interpolations for the Coriolis terms, and (iii) 4th-order interpolations and differences for the horizontal baroclinic pressure gradient. In all cases, the higher-order terms are reduced to 2nd order next to land-sea boundaries and open boundaries.

The 3rd-order-upwind (UPW3) leapfrog advection scheme was discussed by Holland et al. (1998, J. of Climate, V11, p1487), who applied the scheme to the NCAR Ocean Model (z-level). Some form of 3rd-order upwind advection is also an option in ROMS (which is sigma-coordinate), but I haven't seen any details of that implementation.

The UPW3 scheme discussed by Holland et al. and used in NCOM consists of a 4th-order interpolation of the model variable being advected to the grid cell boundaries (centered in time), plus a biharmonic mixing term (lagged in time) scaled by the absolute value of the advection velocity. The scaling of the biharmonic mixing term is such that the downwind part of the advection and biharmonic mixing terms approximately cancel, which approximately gives a 3rd-order upwind advection term (a one-dimensional sketch of this construction is given below). The cancellation isn't perfect, since the advection and biharmonic mixing terms are at different time levels (Holland et al. reported that calculating both terms at the central time level seemed to work OK (?), but time-centering the mixing term is inherently unstable). The UPW3 scheme can also be thought of as just what it explicitly is, i.e., a 4th-order interpolation for the advection term plus a biharmonic mixing term scaled by the advection velocity. The performance of this scheme in idealized advection tests is quite good, i.e., low noise, low overshoots, and fairly low diffusion/dissipation of fronts; and it is easy and cheap to implement.

Near boundaries, the UPW3 advection scheme in NCOM is reduced to 2nd-order advection plus Laplacian mixing scaled by the absolute value of the advection velocity to yield a grid-cell Reynolds number of 6. This is a relatively low (i.e., diffusive) grid-cell Re, but it acts to keep the flow smooth near boundaries, where the 2nd-order terms are likely to generate noise.

For efficiency, none of the high-order interpolations in NCOM account for spatial changes in the grid spacing. In the horizontal, our policy in setting up NCOM has been to keep grid stretching minimal, i.e., just what is needed to adapt to various map projections. Hence, local changes in the grid spacing are generally very small and the interpolation errors should be very small. Grid stretching used in the vertical tends to be larger (10% - 20%), and hence the interpolation errors there will tend to be larger.
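To make the construction described above concrete, here is a minimal one-dimensional sketch of a UPW3 face flux on a uniform grid with a constant advection velocity. This is illustrative only - it is not NCOM's actual code, the names are invented, and the time-lagging of the mixing term and the treatment of boundaries and masking are omitted:

      real function upw3fx(u, tm1, t0, tp1, tp2)
c
c --- UPW3 flux through the cell face between points i and i+1 on a
c --- uniform 1-D grid.  tm1, t0, tp1, tp2 are the advected field at
c --- points i-1, i, i+1, i+2, and u is the advection velocity at the
c --- face.  Illustrative sketch only - not NCOM's actual code.
c
      real u, tm1, t0, tp1, tp2, t4, d3
c --- 4th-order interpolation of the field to the cell face.
      t4 = (-tm1 + 7.0*t0 + 7.0*tp1 - tp2)/12.0
c --- third difference at the face; differencing these face values
c --- gives a biharmonic mixing term, here scaled by abs(u).
      d3 = (tp2 - 3.0*tp1 + 3.0*t0 - tm1)/12.0
c --- for u > 0 the two terms combine to the 3rd-order upwind flux
c --- u*(-tm1 + 5.0*t0 + 2.0*tp1)/6.0, i.e., the downwind part of the
c --- centered interpolation is cancelled by the mixing term.
      upw3fx = u*t4 + abs(u)*d3
      return
      end

Note that this sketch assumes uniform grid spacing; as noted above, NCOM's high-order interpolations likewise do not account for spatial changes in the grid spacing.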
&& Checking for bit-for-bit reproducibility of model results.

Bit-for-bit agreement is the best check for a number of aspects of running the model, including adding halos, running on multiple cpu's, shrinkwrapping, and restarts. An easy way to check the bit-for-bit reproducibility of the model results is to look at the residual printed out by the free-surface solver (e.g., for cgssor the residual is "rrtol"). This residual is normally printed out by the model every timestep. This residual is very sensitive to any changes in the model results. If it is identical between two model runs after a few timesteps, the model results are probably identical.

&& Using 4th-order finite differences.

We have been looking at using 4th-order finite differences to try to improve the numerical accuracy of the model and reduce noise and advective overshoots. However, there are some strong cautions regarding the use of the higher-order differences that are being tried in ncom, which are discussed below. Note especially that the higher-order terms have not been found to work well with sigma coordinates.

The higher-order differences that have been tried are (i) 3rd-order-upwind leapfrog advection (UPW3) for both momentum and scalars in both the horizontal and vertical directions, (ii) 4th-order horizontal interpolations for the Coriolis terms, and (iii) 4th-order interpolations and differences for the horizontal baroclinic pressure gradient. In all cases, the higher-order terms are reduced to 2nd order next to land-sea boundaries and open boundaries.

The 3rd-order-upwind (UPW3) leapfrog advection scheme was discussed by Holland et al. (1998, J. of Climate, V11, p1487), who applied it to the NCAR Ocean Model (z-level). Some form of 3rd-order upwind advection is also an option in ROMS (which is sigma-coordinate), but I haven't seen any details of that implementation. The UPW3 scheme discussed by Holland et al. and used in NCOM consists of 4th-order interpolation of the model variable being advected to the grid-cell boundaries (centered in time), plus a biharmonic mixing term (lagged in time) scaled by the absolute value of the advection velocity. The scaling of the biharmonic mixing term is such that the downwind part of the advection and biharmonic mixing terms approximately cancel, which gives a 3rd-order upwind advection term. The cancellation isn't perfect since the advection and biharmonic mixing terms are at different time levels (Holland et al. reported that calculating both terms at the central time level seemed to work OK, but time-centering the mixing term is inherently unstable). The UPW3 scheme can also be thought of as just what it explicitly is, i.e., 4th-order interpolation for the advection term plus a biharmonic mixing term scaled by the advection velocity. The performance of this scheme in idealized advection tests is quite good, i.e., low noise, low overshoots, and fairly low diffusion/dissipation of fronts; and it is easy and cheap to implement.

Near boundaries, the UPW3 advection scheme in NCOM is reduced to 2nd-order advection plus Laplacian mixing scaled by the absolute value of the advection velocity to yield a grid-cell Reynolds number of 6. This is a relatively low (i.e., diffusive) grid-cell Re, but it acts to keep the flow smooth near boundaries, where the 2nd-order terms are likely to generate noise.
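To make the construction concrete, here is a minimal 1-D sketch of a UPW3-type flux at a cell face on a uniform grid: a 4th-order interpolation of the advected quantity plus a third-difference term scaled by |u|. This is only a schematic of the idea (the function name and the uniform-grid assumption are illustrative; it is not the NCOM routine, which also handles the time-lagging of the mixing term and the boundary reductions described above):

    ! Schematic 1-D UPW3 flux at the face between cells i and i+1 on a
    ! uniform grid (hypothetical helper, not the NCOM routine):
    ! 4th-order centered interpolation plus an |u|-scaled third
    ! difference; the sum reduces to 3rd-order upwind advection.
    real function upw3_flux(u, qm1, q0, qp1, qp2)
      implicit none
      real, intent(in) :: u                  ! advection velocity at the face
      real, intent(in) :: qm1, q0, qp1, qp2  ! q(i-1), q(i), q(i+1), q(i+2)
      upw3_flux = u      * (-qm1 + 7.0*q0 + 7.0*qp1 - qp2)/12.0 &
                + abs(u) * (-qm1 + 3.0*q0 - 3.0*qp1 + qp2)/12.0
    end function upw3_flux

For u > 0 this collapses to u*(-q(i-1) + 5*q(i) + 2*q(i+1))/6, the standard 3rd-order upwind flux; differencing the |u|-scaled term across the cell faces yields the velocity-scaled biharmonic mixing discussed above.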
For efficiency, none of the high-order interpolations in NCOM account for spatial changes in grid spacing. In the horizontal, our policy in setting up NCOM has been to keep grid stretching minimal, i.e., just what is needed to adapt to various map projections. Hence, local changes in grid spacing are generally very small and interpolation errors should be very small. Grid stretching used in the vertical tends to be larger (10% - 20%), and hence, interpolation errors will tend to be larger. A consideration here is that ocean model fields (T, S, and current profiles) tend to have increasing vertical scales with depth, so that direct interpolation on a grid that increases with depth can be more accurate than interpolation that assumes more linear variation.

Pros of the higher-order schemes: The higher-order terms significantly reduce noise and advective overshoots, increase the smoothness of the flow, and can increase the number and strength of eddies where the eddies are not well resolved. The cost of the high-order differences that are being tried in NCOM is fairly small, i.e., a very small increase in memory requirements (for wider halos) and about a 10-15% increase in cpu time for the full set of higher-order terms. On multi-processor computers, the use of higher-order terms requires a double-wide halo, and some extra inter-processor communication is needed to set double-row halos instead of single-row halos for some of the fields. This will probably increase inter-processor communication costs by about 60%.

Cons of the higher-order schemes: The old, 2nd-order differences had some nice conservation properties that the higher-order schemes lack. The argument for the higher-order schemes is that so much dissipation is required to reduce the noise of the 2nd-order schemes that their nice conservation properties tend to go for naught. However, there is concern about the conservation properties of the higher-order schemes and potential spurious sources/sinks of energy. Some users of high-order schemes have reported eddies spinning up to unrealistic levels or otherwise unrealistic eddy formation (part of this problem may be the use of advection schemes that tend to strengthen gradients rather than weaken them, which should not be a problem with the UPW3 scheme). A potential fix for this is to implement a certain level of Laplacian mixing for the momentum equations to dissipate some energy (or reduce gradients) to keep the flow in check.

The higher-order terms have shown spurious effects in sigma-coordinate simulations with ncom in regions of steep slopes. For example, sigma-coordinate simulations of the N. Pacific at 1/4 deg with 4th-order terms show circulation features in the SSH tied to topographic features, as well as other spurious circulation features. These features were not seen in sigma-coordinate 2nd-order simulations, z-level simulations, or altimeter SSH analyses. This problem was not unexpected and is likely due to the fact that the larger stencils of the 4th-order interpolations along sigma layers bridge larger distances in the vertical than the 2nd-order terms in areas of steep slopes. Hence, implementation of higher-order terms with sigma coordinates probably needs a more sophisticated approach than what has been tried in ncom to date.

Alternatively, D. S. Ko suggested subtracting off the mean (climate) field for the 4th-order correction for horizontal scalar advection when using sigma coordinates. NCOM currently provides for subtracting off the mean field for both the 4th-order correction to the advection term and for the biharmonic mixing term when calculating UPW3 advection of scalars (set sigdif=T). This seems to help reduce the problems of the high-order advection of scalar fields considerably, but it also allows for possible new spurious effects when the scalar field deviates significantly from the "mean" state. No fixes have yet been found for problems with the other 4th-order terms with sigma coordinates in steep-slope regions.

Hence, the conservative approach to using high-order differences with ncom is to use mainly z-levels and to include a Laplacian mixing term in the momentum equations. The higher-order terms might work satisfactorily with sigma coordinates if the bottom slopes are well resolved. The use of higher-order finite differences has a lot of potential benefits and is an area of active interest. Hence, any input on this topic would be welcome.

&& Using nested grids.

NCOM's code structure was designed to facilitate the use of nesting, i.e., with almost all model variables passed through subroutine argument lists so that the same code can be used for different grids/nests. However, in spite of this, NCOM's support for nesting to date has been limited, i.e., (i) nesting only within sigma coordinates, (ii) 1-way nesting only, (iii) nesting on single processors only, and (iv) nesting with identical vertical grids only. Recent changes now allow (a) nesting with sigma or z-level or mixed vertical coordinates, although the vertical layers/levels must still match, and (b) feedback of the T and S fields from the fine-mesh (FM) grid to the coarse-mesh (CM) grid. Nesting currently can only be used on a single processor, but nesting on multi-processors will soon be implemented.

The nesting methodology used in NCOM is that each nest/grid has its own separate set of input and output files. This is not always the most convenient method, but it is simple and very flexible, i.e., parameters and forcing can be specified differently for different grids/nests, and the input and output files for the main grid and the nests all have the same format.

The setup program has been modified to make setting up nests easier. The main grid is set up by a call to subroutine "mainset", which generates all the input files needed for the main grid. Then subroutine "nestset" is called for each nested grid. Subroutine nestset can read the input files that have been generated for the "coarse mesh" (CM) that the "fine mesh" (FM) is embedded in to allow interpolation of the FM fields from the values on the CM grid. Alternatively, fields can be interpolated to the FM from another source, e.g., the original data interpolated to the CM or another data source. The bathymetry for the FM can be made to be fully consistent with that on the CM, or can be refined in the interior of the FM, e.g., interpolated from the original data base or a higher-resolution data base. Note that the bathymetry for the FM must be consistent with that on the CM near the open boundary of the FM, where boundary conditions are taken from the CM. Subroutine nestset provides for this by "blending" a bathymetry consistent with the CM bathymetry with a refined bathymetry so that the FM bathymetry is consistent with the CM bathymetry near its open bndy.
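For illustration, here is a 1-D sketch of this kind of blending, assuming a simple linear ramp over a chosen number of points from the open boundary. The subroutine name, argument list, and linear weight are assumptions of this sketch, not nestset's actual scheme:

    ! Schematic blend of a CM-consistent bathymetry with a refined FM
    ! bathymetry near the FM open boundary (illustrative only; the
    ! weighting actually used by nestset may differ).
    subroutine blend_bathy(h_cm, h_fine, h_out, n, nblend)
      implicit none
      integer, intent(in)  :: n, nblend  ! grid size, blending width (pts)
      real,    intent(in)  :: h_cm(n)    ! bathymetry interpolated from the CM
      real,    intent(in)  :: h_fine(n)  ! refined (e.g., database) bathymetry
      real,    intent(out) :: h_out(n)   ! blended FM bathymetry
      integer :: i
      real    :: w
      do i = 1, n
        ! w = 1 at the open boundary (i=1), ramping to 0 in the interior.
        w = max(0.0, real(nblend - i + 1)/real(nblend))
        h_out(i) = w*h_cm(i) + (1.0 - w)*h_fine(i)
      end do
    end subroutine blend_bathy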
Integration of the main grid and the nests, interpolation of bc for the FM from the CM, and feedback of the FM T and S fields to the CM are handled automatically during the model run by NCOM. Note that, theoretically, any number of nests can be used, nests can be embedded within other nests, and any number of nests can be embedded within a given grid. The parameter file for each nest (OPARM_*.D) contains the number of the grid that the nest is embedded in, the location of the nest within that grid, and the nesting ratio. There is no explicit restriction on the grid-nesting ratio or time-step ratio between the FM and CM, and the grid-nesting ratio and time-step ratio do not have to be the same. When using feedback, however, grid-nesting ratios should probably be limited to about 3:1 so that features on the FM can be supported on the CM to some extent.

As noted above, all the grids/nests use similar sets of input and output files. Of course, the nested grids don't need input data for open bc, since these will be obtained from the grid in which they are nested. However, specified barotropic tidal forcing can be applied to the open boundary of a nest. This tidal forcing will be superimposed on the boundary conditions taken from the CM. This allows tidal forcing to be used on a nested grid when it is not used on the CM, or to add tidal constituents on the nest that are not being forced on the CM.

&& Horizontal and vertical grid layout and indexing.

The model uses a staggered "C" type grid with elevation and scalar (T and S) fields defined at grid-cell centers and velocities defined at the centers of the grid-cell faces. Turbulence fields for the MYL2.5 mixing model and the vertical eddy coefficients are defined at grid-cell centers in the horizontal but at the cell boundaries in the vertical. This is the same staggered grid as is used in POM. The indexing scheme is that the top and the SW corner of a grid cell have the same indexes as the grid-cell center. This is the same convention as used in POM, but is different from that used by NLOM, which assigns the NE corner of a grid cell the same indexes as the center.

The vertical indexing starts at the surface and proceeds downward. Hence, the surface and the center of the upper layer have vertical index k=1. The index of the bottom interface of the sigma grid is k=ls and the index of the bottom interface of the model grid is k=l. The index of the center of the bottom sigma layer is k=ls-1 and the index of the center of the bottom layer is k=l-1.

Disregarding halos, horizontal indexes run from 1 to n in x and from 1 to m in y. At an exterior boundary (i.e., at the outside edge of the overall model domain), there is currently a single boundary row, and the interior pts run from 2 to n-1 in x and from 2 to m-1 in y. However, this will be changed in the near future to accommodate a boundary row of arbitrary width at exterior boundaries and to provide more flexibility in setting the size of the model domain in MP applications, where n and m may be constrained by the need for them to be divisible by the number of processors being used in the x and y dimensions, respectively. For interior boundaries, which occur at periodic boundaries or (if running on multiple processors) on the interior boundaries of the part of the overall model domain running on a given processor, the interior pts run from 1 to n in x and from 1 to m in y. Halos around the outside of this domain are used to hold values needed for updating the values within the domain. Halos are typically one or two pts wide. For a halo of width "nmh", arrays are horizontally dimensioned as a(nm0:n+nmh,nm0:m+nmh), where nm0=1-nmh.
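For example, the dimensioning just described might be declared as follows (the values of n, m, and nmh are examples only):

    program halo_dims
      implicit none
      integer, parameter :: n = 128, m = 64  ! interior dimensions (examples)
      integer, parameter :: nmh = 2          ! halo width (2 for higher-order terms)
      integer, parameter :: nm0 = 1 - nmh    ! lowest index including the halo
      real :: a(nm0:n+nmh, nm0:m+nmh)        ! interior pts are a(1:n,1:m)
      a = 0.0
      print *, 'lower bounds:', lbound(a), ' upper bounds:', ubound(a)
    end program halo_dims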
&& List of main variables in model Fortran code:

NCOM VARIABLES

Units used within the model are mks (meters, kilograms, seconds).

General prefix/suffix naming rules/conventions (mostly followed, but not 100%):
-r  appended to name to indicate reciprocal.
-   (no designation) indicates centered at t-pt.
-m  depth variable centered at grid-cell mid-pt.
-u  indicates centered at u-pt.
-v  indicates centered at v-pt.
-w  indicates centered at w-pt.
-x  indicates x-direction.
-y  indicates y-direction.
-z  indicates z-direction.

Main input dimensions: Note that these may have an "o" prefixed to them in some of the initial routines to try to distinguish them from the atmospheric model variables in COAMPS when the models are coupled.

n,m     horizontal grid dimensions in x and y. If the simulation is to be run in a multi-processor environment, n and m need to be evenly divisible by the x and y dimensions of the processor array to be used so that the domain can be subdivided into uniformly sized subdomains (tiles). In general, it's good if n and m are divisible by some moderately high power of 2, e.g., 16, 32, 64, etc., so that various sizes of processor arrays can be accommodated (see the sketch following this list).
l       total number of vertical layers (or levels) + 1.
ls      total number of sigma layers + 1.
nr      number of scalar model prognostic variables.
nq      number of prognostic turbulence variables.
ntyp    number of solar extinction profile types (not much used at this point, but available to facilitate implementation of spatially variable solar extinction).
ntc     number of tidal constituents being forced at open bndy.
nobmax  max number of open bndy pts.
nrivo   maximum number of rivers.
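As a concrete check of the divisibility requirement noted for n and m above, a small sketch (the names npx and npy for the processor-array dimensions are hypothetical, not NCOM variables):

    ! Check that n and m tile evenly over an npx-by-npy processor array
    ! (npx, npy, and the values shown are illustrative only).
    program check_tiling
      implicit none
      integer :: n, m, npx, npy
      n = 128; m = 64; npx = 4; npy = 2  ! example values
      if (mod(n,npx) /= 0 .or. mod(m,npy) /= 0) then
        print *, 'n or m not divisible by the processor array dimensions'
      else
        print *, 'tile size:', n/npx, ' x ', m/npy  ! 32 x 32 here
      end if
    end program check_tiling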
Input parameters: (see list of input parameters in file OPARM_n.D above).

Defined constants:
pi      3.1415926535
raddeg  pi/180.
degrad  180./pi
small   a small number = 1.0e-8

Calculated constants:
amsk(n,m,l)  land-sea mask at t-pts.
umsk(n,m,l)  land-sea mask at u-pts.
vmsk(n,m,l)  land-sea mask at v-pts.
cbu(n,m)     coefficient of bottom friction at u-pt.
cbv(n,m)     coefficient of bottom friction at v-pt.

Input values for vertical grid:
zw(l)  static depth at w-pts on the z-level grid (defined positive upward, i.e., values below z=0 are negative). These are used to calculate fractional depths on the sigma-coordinate grid.

Input values for horizontal grid:
h(n,m)     static depth at t-pt relative to reference depth at z=0. Bottom depths below z=0 are negative, values above are positive. z=0 is ~ the position of the equilibrium sea surface.
elon(n,m)  longitude at t-pt (deg E).
alat(n,m)  latitude at t-pt (deg N).
ang(n,m)   angle between local latitude line and x-axis at t-pt. For counterclockwise rotation of grid wrt lat-long, ang > 0.
dx(n,m)    grid spacing in x at t-pt (+).
dy(n,m)    grid spacing in y at t-pt (+).

Calculated grid-related constants:
dxu(n,m)   grid spacing in x at u-pt.
dyu(n,m)   grid spacing in y at u-pt.
dxv(n,m)   grid spacing in x at v-pt.
dyv(n,m)   grid spacing in y at v-pt.
dxr(n,m)   1/dx
dyr(n,m)   1/dy
dxur(n,m)  1/dxu
dyur(n,m)  1/dyu
dxvr(n,m)  1/dxv
dyvr(n,m)  1/dyv
da(n,m)    horizontal area of grid cell at t-pt (dx*dy).
dau(n,m)   horizontal area of grid cell at u-pt.
dav(n,m)   horizontal area of grid cell at v-pt.
dar(n,m)   1/da(n,m)
daur(n,m)  1/dau(n,m)
davr(n,m)  1/dav(n,m)
hu(n,m)    static depth at u-pt (depths below z=0 are neg).
hv(n,m)    static depth at v-pt (depths below z=0 are neg).
h1(n,m)    static depth to bottom of sigma levels at t-pt (depths below z=0 are neg).
h1u(n,m)   static depth to bottom of sigma levels at u-pt. Don't need??
h1v(n,m)   static depth to bottom of sigma levels at v-pt.
sw(l)      fractional sigma depth at w-pt (-).
sm(l)      fractional sigma depth at t-pt (-).
dsw(l)     fractional sigma grid spacing at w-pt (+).
dsm(l)     fractional sigma grid spacing at t-pt (+).
dswr(l)    1/dsw
dsmr(l)    1/dsm
zw(l)      static depth at w-pt for z-levels (values below z=0 are neg).
zm(lm1)    static depth at t-pt for z-levels (values below z=0 are neg).
dzw(l)     vertical grid spacing at w-pt (+).
dzm(lm1)   vertical grid spacing at t-pt (+).
dzwr(l)    1/dzw
dzmr(l)    1/dzm
fda(n,m)   "modified" Coriolis parameter, defined at t-pts (= f*da*0.25).

Time variables:
iter   temporal iteration number. On a restart, the model starts where it left off.
iterx  iteration number for barotropic mode (not currently used).
times  elapsed time in seconds since the start of the run.
timed  elapsed time in days.

Grid index variables:
i1,i2,i3       temporal indices for 3 saved baroclinic time levels.
ib1,ib2,ib3    temporal indices for 3 saved barotropic time levels (not used).
j1,j2          temporal indices for 2 saved baroclinic time levels.
kb(n,m)        index of bottom layer at t-pt.
kbu(n,m)       index of bottom layer at u-pt.
kbv(n,m)       index of bottom layer at v-pt.
is(m),ie(m)    i-loop start and stop indices for shrinkwrapping.
ism(m),iem(m)  i-loop start and stop indices at v points (minimum).
isp(m),iep(m)  i-loop start and stop indices at v points (maximum).
js,je          j-loop start and stop indices.
ke(m)          max value of kb in an i-row.
iec(8)         1st 4 values denote exterior (=1) or interior (=0) tile edge for W,E,S,N sides (needed when running MP). Values 5 to 8 are set to one minus the values for 1 to 4.
ifx1,ifx2      temporal indices for surface fluxes from input file.
iat1,iat2      temporal indices for surface fluxes from coupled atmospheric model.
iss1,iss2      temporal indices for specified SST and SSS.
iob1,iob2      temporal indices for open boundary data.
irv1,irv2      temporal indices for river inflow data.
i,j,k          indices used when do-looping in x, y, and z.
ir             index used when do-looping through different scalar fields.
iq             index used when do-looping through different turbulence fields.

Grid-related variables:
d(n,m,3)    total depth at e-pt (e - h, >=0).
du(n,m,3)   total depth at u-pt.
dv(n,m,3)   total depth at v-pt.
d1(n,m,3)   total depth to bottom of sigma layers at e-pt (e - h1, >=0).
d1u(n,m,3)  total depth to bottom of sigma layers at u-pt.
d1v(n,m,3)  total depth to bottom of sigma layers at v-pt.

Main prognostic variables:
e(n,m,3)         surface elevation.
udb(n,m,3)       barotropic transport (ub*d) at u-pt.
vdb(n,m,3)       barotropic transport (vb*d) at v-pt.
u(n,m,lm1,3)     velocity in x at u-pt.
v(n,m,lm1,3)     velocity in y at v-pt.
r(n,m,lm1,2,nr)  scalar variables (t, s, ...) at t-pt.
q(n,m,l,2,3)     tke and tke*(turbulent length scale) at w-pt.
e2(n,m,3)        depth-averaged e at u-pt for explicit barotropic calc.
ub2(n,m,3)       depth-averaged u at u-pt for explicit barotropic calc.
vb2(n,m,3)       depth-averaged v at v-pt for explicit barotropic calc.
(e2, ub2, and vb2 are not currently used.)

Other variables:
dti2          timestep for leapfrog (usually 2*dti, but may be dti on 1st iteration).
ramp          current value of ramp for gradual spinup of forcing, bc, etc.
ub(n,m)       depth-averaged (barotropic) velocity in x at u-pt.
vb(n,m)       depth-averaged (barotropic) velocity in y at v-pt.
w(n,m,l)      velocity in z at w-pt (+ upwards).
rho(n,m,lm1)  in-situ density minus reference density rho0.
sos(n,m,lm1)  speed of sound. Used to calculate stability with Mellor's eqn of state if density includes effect of pressure.
sor(n,m,lm1)  source volume flux at each grid pt (m3/s).
sorb(n,m)     vertical integral of sor.
rmean(n,m,ls-1,nr+1)  climate or mean values of scalar fields and horizontal mean values of density (density is stored at ir=nr+1).
fu(n,m)       vertically integrated forcing for barotropic u velocity.
fv(n,m)       vertically integrated forcing for barotropic v velocity.
aax(n,m)      coefficient used for implicit free-surface solver.
aay(n,m)      coefficient used for implicit free-surface solver.
xk(n,m,lm1)   (horizontal viscosity or diffusivity in x at u-pt)*dyu*dzm.
yk(n,m,lm1)   (horizontal viscosity or diffusivity in y at v-pt)*dxv*dzm.
zkm(n,m,l)    vertical turbulent viscosity at w-pt.
zkh(n,m,l)    vertical turbulent diffusivity at w-pt.
ext(n,m,l)    solar extinction profiles at each horizontal pt, defined at w-pts.
tl(n,m,l)     turbulence length scale, defined at w-pt.
wubot(n,m)    bottom stress at u-pt.
wvbot(n,m)    bottom stress at v-pt.

Surface forcing variables:
patm(n,m)         surface atmospheric pressure (m).
usflx(n,m)        surface wind stress in x at e-pt (m2/s2).
vsflx(n,m)        surface wind stress in y at e-pt (m2/s2).
rsflx(n,m,nr)     surface fluxes for scalar variables at e-pt (units-m/s).
solar(n,m)        solar flux penetrating surface at e-pt (degC-m/s).
surruf(n,m)       surface roughness (e.g., from waves) (m).
patm2(n,m,2)      surface atmospheric pressure (m) stored at 2 times.
usflx2(n,m,2)     surface wind stress in x at e-pt stored at 2 times.
vsflx2(n,m,2)     surface wind stress in y at e-pt stored at 2 times.
rsflx2(n,m,nr,2)  surface fluxes for scalar variables at e-pt at 2 times.
solar2(n,m,2)     solar or cloud data at e-pt at 2 times.

Open boundary variables:
nob             total number of open boundary pts.
neob(2,4)       index limits for elev pts along each (W E S N) bndy.
nuob(2,4)       index limits for normal velocity pts along each bndy.
nvob(2,4)       index limits for tangent velocity pts along each bndy.
iob(nob)        grid index of boundary pt at e-pt in x direction.
job(nob)        grid index of boundary pt at e-pt in y direction.
iobi(nob)       grid index of interior pt adjoining the boundary pt in x.
jobi(nob)       grid index of interior pt adjoining the boundary pt in y.
ivob(nob)       grid index of bndy pt at tangent velocity pt in x.
jvob(nob)       grid index of bndy pt at tangent velocity pt in y.
eob(nob,2)      surface elevation at boundary (at two times).
ubob(nob,2)     normal transport at boundary.
vbob(nob,2)     tangent transport at boundary.
uob(l-1,nob,2)  baroclinic normal velocity at bndy.
vob(l-1,nob,2)  baroclinic tangent velocity at bndy.
cgwb(nob,2)     external and internal (1st mode) gravity wave speed at bndy.
etab(ntc,nob)   tidal elevation amplitude at boundary (for each constituent).
etpb(ntc,nob)   tidal phase at boundary (in radians).
utab(ntc,nob)   amplitude of tidal normal velocity at boundary.
utpb(ntc,nob)   phase of tidal normal velocity at boundary (radians).
vtab(ntc,nob)   amplitude of tidal tangential velocity at boundary.
vtpb(ntc,nob)   phase of tidal tangent velocity at boundary (radians).
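As an illustration of how the tidal amplitude and phase arrays above relate to the superimposed tidal forcing described in the nesting section, here is a sketch that evaluates the tidal elevation at a single boundary pt (a 1-D slice of etab/etpb). The constituent frequency array "omega" and the cosine sign convention are assumptions of this sketch, not taken from the NCOM code:

    ! Illustrative synthesis of tidal elevation from amplitudes and
    ! phases at one boundary pt ("omega" and the sign convention are
    ! assumed for this sketch, not taken from the NCOM source).
    real function tide_elev(ntc, etab, etpb, omega, t)
      implicit none
      integer, intent(in) :: ntc         ! number of tidal constituents
      real,    intent(in) :: etab(ntc)   ! elevation amplitude (m)
      real,    intent(in) :: etpb(ntc)   ! phase (radians)
      real,    intent(in) :: omega(ntc)  ! constituent frequency (rad/s)
      real,    intent(in) :: t           ! elapsed time (s)
      integer :: k
      tide_elev = 0.0
      do k = 1, ntc
        tide_elev = tide_elev + etab(k)*cos(omega(k)*t - etpb(k))
      end do
    end function tide_elev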
Some of the temporary variables:
ua(n,lm1)    transport in x at u-pt (0.5*u*dyu*dz, for advection).
va(n,lm1)    transport in y at v-pt (0.5*v*dxv*dz).
wa(n,l)      transport in z at w-pt (0.5*w*dx*dy).
xk(n,m,lm1)  (mixing coefficient in x direction)*dyu*dz.
yk(n,m,lm1)  (mixing coefficient in y direction)*dxv*dz.

&& References.

Mellor, G. L., An equation of state for numerical models of oceans and estuaries, J. Atmos. Oceanic Tech., 8, 609-611, 1991.
Mellor, G. L., and T. Yamada, A hierarchy of turbulence closure models for planetary boundary layers, J. Atmos. Sci., 31, 1791-1806, 1974.

&& Acronyms used.

d     : day
e-pt  : elevation grid point
lm1   : l-1, the total number of vertical layers or levels.
m     : meter
pt    : point
t-pt  : temperature grid point
u-pt  : u-component of current grid point
v-pt  : v-component of current grid point