Amber is a molecular dynamics program. Local support is not available. The version on the clusters generally lags behind the current release.
ANSYS is a finite-element-based, general-purpose solver, mostly used for engineering applications. It is available for general use for teaching applications. Researchers must purchase a separate license. Local support is minimal; users should make an account at the ANSYS web page to get technical support directly from the vendor.
To direct your jobs to your research group's license server, you must first edit a configuration file, license_preferences.xml, to indicate what your group has purchased. Create a ~/.ansys/<version>/licensing directory, where <version> can be obtained from the module version. Then download the sample preferences file, edit it appropriately, and store it in that licensing directory.
You need only do this once, but each script you run must include lines to set an environment variable for the license server.
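A minimal sketch of those lines, assuming your group's server uses the standard ANSYS FlexNet licensing variable; the host name and port below are placeholders you must replace with your group's actual license server:

```shell
# Add to each ANSYS job script. ANSYSLMD_LICENSE_FILE takes the form
# port@host; 1055 is the common ANSYS default port, and the host name
# here is a placeholder for your group's license server.
export ANSYSLMD_LICENSE_FILE=1055@license.mygroup.example.edu
echo "$ANSYSLMD_LICENSE_FILE"
```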
NCBI BLAST is a suite of programs for genetic analysis. To use the programs on the clusters, load the ncbi-blast module in your SLURM job script. Users must maintain their own search databases.
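A hedged sketch of such a job script; the resource requests, query file, and database path are placeholders (recall that users maintain their own databases):

```shell
#!/bin/bash
#SBATCH --job-name=blast
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

module load ncbi-blast

# Query file and database path are placeholders; the database must have
# been built beforehand (e.g. with makeblastdb) in your own space.
blastn -query my_seqs.fasta \
       -db "$HOME/blastdb/my_db" \
       -num_threads "$SLURM_CPUS_PER_TASK" \
       -outfmt 6 \
       -out results.tsv
```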
Gaussian is an electronic structure modeling program. Local support is not available; please see the Gaussian web site for help. The version on the clusters generally lags behind the current release. To run Gaussian on the clusters, a user must be in the Gaussian group. To request to be added to the group, please use our contact form.
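A sketch of a batch job script for Gaussian, assuming group membership has been granted; the executable name depends on the installed version (g09, g16, ...), and the input file name is a placeholder:

```shell
#!/bin/bash
#SBATCH --job-name=gaussian
#SBATCH --cpus-per-task=8
#SBATCH --time=04:00:00

module load gaussian

# Executable name varies by version (g09, g16, ...); check the loaded module.
# Only members of the Gaussian group can run this.
g16 < input.com > output.log
```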
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. Local support is not available; please see the GROMACS web site for help.
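A sketch of a parallel GROMACS job script; the binary name (gmx vs. gmx_mpi) depends on the local build, and the run input file topol.tpr is assumed to have been prepared beforehand with gmx grompp:

```shell
#!/bin/bash
#SBATCH --job-name=gromacs
#SBATCH --ntasks=16
#SBATCH --time=12:00:00

module load gromacs

# Binary name (gmx, gmx_mpi) and input names are assumptions; topol.tpr
# must be generated first with gmx grompp from your topology and coordinates.
srun gmx_mpi mdrun -deffnm topol
```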
The IMSL libraries are a comprehensive set of mathematical and statistical functions that programmers can embed into their software applications. IMSL provides high-performance computing software and expertise needed to develop and execute sophisticated numerical analysis applications. These libraries free you from developing your own code by providing pre-written mathematical and statistical algorithms that you can embed into your C/C++ and Fortran applications.
The numerical algorithms of the IMSL Fortran Library can be accessed using Fortran 77 or using Fortran 90 language constructs. Some of the Fortran 90 implementations of IMSL routines let users take advantage of parallel computing through the library's underlying use of the Message Passing Interface (MPI) libraries if their environment supports it (e.g. the HPC cluster). We also license the IMSL C library, CIMSL. The library uses the Intel icc or icpc compilers, and the routines can be called from either C or C++. It does not have MPI capability built in, but it does work with OpenMP, and some of the C implementations of IMSL routines can take advantage of threaded parallel computing through the library's underlying use of the POSIX threads (pthreads) libraries.
Compiling and linking Fortran library applications. Before using the IMSL Libraries you must define certain environment variables. On the HPC clusters, use the module command to set up the IMSL environment: module add imsl or module load imsl.
The modules function (or the cttsetup.* scripts) sets many environment variables and shell aliases/functions. The following is a list of what is useful to the Fortran IMSL Library user. Several other variables are set that are used internally by IMSL products. Environment variables for the C libraries are similar.
|$F90||Fortran 90 compiler|
|$F90FLAGS||Fortran 90 compiler options|
|$MPIF90||MPI Fortran 90 compiler|
|$FC||Fortran 77 compiler|
|$FFLAGS||Fortran 77 compiler options|
|$LINK_F90_SHARED||Link options required to link with the shared Fortran 90 MP library (does not require MPI library, uses scalar error handler)|
|$LINK_F90||By default set to $LINK_F90_SHARED|
|$LINK_FNL_SHARED||Link options required to link with the shared Fortran Numerical Libraries (does not require MPI library, uses scalar error handler)|
|$LINK_FNL||By default, set to $LINK_FNL_SHARED|
|$LINK_MPI||Link options required to link with the static Fortran 90 MP Library (requires MPI library). This LINK environment variable uses the parallel IMSL error handler. The parallel IMSL error handler is designed to behave correctly in an MPI environment.|
|$VNI_LICENSE_NUMBER||Contains your license number.|
Note: The $F90FLAGS and $FFLAGS variables do not include any optimization or debugging options; add the normal compiler flags as you would for any program. For Fortran 90 applications that do not use MPI-based subroutines, the following command will compile and link an application program imsl_prog.f90: $F90 -o imsl_prog $F90FLAGS imsl_prog.f90 $LINK_F90. For Fortran 90 applications that do use MPI-based subroutines: $MPIF90 -o imsl_prog $F90FLAGS imsl_prog.f90 $LINK_MPI. To use flags more appropriate for Fortran 77, the following command will compile and link an application program imsl_prog.f: $FC -o imsl_prog $FFLAGS imsl_prog.f $LINK_FNL. Note that in many cases $FC is still the Fortran 90/95 compiler, with appropriate options for fixed-form source.
Compiling and linking C/C++ applications. On the HPC clusters, use the modules command to set up the IMSL environment as follows: module add cimsl or module load cimsl.
The modules function sets many environment variables and shell aliases/functions. The following is a list of what is useful to the CIMSL Library user. Several other variables are set that are used internally by the IMSL products.
|$LINK_CNL||Link the C libraries|
|$LINK_CNL_SMP||Link the threaded C libraries|
|$VNI_LICENSE_NUMBER||Contains your license number|
The $CFLAGS variable does not include any optimization or debugging options. The module also does not set the C++ compiler; on Linux, use icpc. The following command will compile and link an application program imsl_prog.c: $CC -o imsl_prog $CFLAGS -O imsl_prog.c $LINK_CNL. For C++, use the explicit name of the correct compiler: icpc -o imsl_prog $CFLAGS -O imsl_prog.c $LINK_CNL.
Using Makefiles. Compiling and linking codes with IMSL requires the addition of include paths for compilation and libraries for linking. This task can be greatly simplified by use of the Unix make command. A specific example of a Makefile that can compile and link an IMSL example program, imslmp.f, is shown below:
FCFLAGS = $(FFLAGS) -O
LIBS = $(LINK_FNL)
LDR = $(FC)
LDFLAGS =
OBJS = imslmp.o

.SUFFIXES: .o .f
.f.o:
	$(FC) -c $(FCFLAGS) $<

imslmp: $(OBJS)
	$(LDR) $(LDFLAGS) -o imslmp $(OBJS) $(LIBS)
This Makefile requires that the IMSL module be loaded. The make program is aware of environment variables that have been set in the shell from which it is invoked.
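Because make inherits the shell's environment, a build session is just a module load followed by make; a minimal sketch, assuming the Makefile above sits in the current directory alongside imslmp.f:

```shell
# Load IMSL first so FC, FFLAGS, and LINK_FNL are defined in the
# environment that make inherits, then build and run the example.
module load imsl
make imslmp
./imslmp
```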
Finding the Right Routine. The modules command will set an environment variable that will enable you to find examples to use as templates for writing your own programs. For instance, the directory $FNL_EXAMPLES/manual contains the examples documented in the IMSL Fortran Library User's Guides. Refer to the README file located in the manual directory for details on how to run these examples. Users interested in the IMSL Library's parallel capability should look through the directory $FNL_EXAMPLES/mpi_manual, which contains the MPI examples documented in the IMSL Fortran Library User's Guide. These examples make use of the subroutines which can take advantage of MPI. Refer to the README file located in the mpi_manual directory for details on how to run these examples. Similarly, examples for the C library can be found in $CNL_EXAMPLES. The most useful examples are in $CNL_EXAMPLES/validate.
IMSL documentation is available from the vendor's web site.
LAMMPS is a classical molecular dynamics code and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. It runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
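A sketch of a parallel LAMMPS job script; the executable name (lmp, lmp_mpi, ...) varies by build, and the input script name is a placeholder:

```shell
#!/bin/bash
#SBATCH --job-name=lammps
#SBATCH --ntasks=32
#SBATCH --time=08:00:00

module load lammps

# Executable name varies by build (lmp, lmp_mpi, ...); in.melt is a
# placeholder for your LAMMPS input script.
srun lmp -in in.melt
```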
SAS is a statistical analysis program. SAS scripts may be run on the clusters through the SLURM queueing system in batch mode, but production interactive jobs on the frontend are not permitted.
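A sketch of a SAS batch job submitted through SLURM; the script name is a placeholder, and on Unix systems invoking sas on a .sas file runs it in batch mode, writing matching .log and .lst files:

```shell
#!/bin/bash
#SBATCH --job-name=sas
#SBATCH --time=02:00:00

module load sas

# Batch mode: reads analysis.sas and writes analysis.log and analysis.lst
# in the current directory. analysis.sas is a placeholder name.
sas analysis.sas
```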
Stata is a statistical analysis program. The cluster version of Stata is limited.