Useful Makefiles
This page is a place to stick Makefiles or other config files for common codes that you've got working on the local compute servers.
Tardis
cp2k with intel compilers
Setting the FORT_C_NAME variable to 'intel' helps cp2k's build system select the right compiler.
You need the mkl, mpi/mvapich/intel, blacs/mvapich/intel64, and scalapack/intel modules loaded.
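For example, a minimal sketch (assuming a bash-like shell and the usual module command; adjust to your setup):

module add mkl mpi/mvapich/intel blacs/mvapich/intel64 scalapack/intel
export FORT_C_NAME=intel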
# The following settings worked for:
# - AMD64 Opteron cluster
# - SUSE Linux 10.0 (x86_64)
# - Intel(R) Fortran Compiler for Intel(R) EM64T-based applications, Version 9.1.037
# - Intel(R) Cluster Math Kernel Library v7.2 for Linux
# - MVAPICH
# - BLACS and ScaLAPACK compiled for Intel
#
PERL     = perl
CC       = cc
CPP      = cpp
FC       = mpif90 -FR
LD       = mpif90
AR       = ar -r
DFLAGS   = -D__INTEL -D__FFTSG\
           -D__parallel -D__BLACS -D__SCALAPACK\
           -Dfftwnd_f77=fftwnd_f77_\
           -Dfftwnd_f77_one=fftwnd_f77_one_\
           -Dfftw3d_f77_create_plan=fftw3d_f77_create_plan_\
           -Dfftw2d_f77_create_plan=fftw2d_f77_create_plan_\
           -Dfftwnd_f77_destroy_plan=fftwnd_f77_destroy_plan_\
           -Dfftw_f77_create_plan=fftw_f77_create_plan_\
           -Dfftw_f77=fftw_f77_\
           -Dfftw_f77_destroy_plan=fftw_f77_destroy_plan_
CPPFLAGS = -traditional -C $(DFLAGS) -P
FCFLAGS  = $(DFLAGS) -O2
MKLPATH  = /usr/local/Cluster-Apps/intel/mkl/8.0/lib/em64t
LDFLAGS  = $(FCFLAGS) -i-static
LIBS     = \
           -L/usr/local/Cluster-Apps/scalapack/intel/lib64 -lscalapack \
           $(MKLPATH)/libmkl_lapack.a \
           -L/usr/local/Cluster-Apps/blacs/mvapich/intel/lib64 -lblacsF77init -lblacs \
           $(MKLPATH)/libmkl_em64t.a \
           $(MKLPATH)/libguide.a \
           -lpthread
OBJECTS_ARCHITECTURE = machine_intel.o
# -D__FFTW\
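With an arch file like that in place, the build is driven from cp2k's makefiles directory. The arch name and VERSION below are assumptions; match them to whatever you called the file in your cp2k tree:

cd cp2k/makefiles
make ARCH=Linux-x86-64-intel VERSION=popt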
Jochen's CPMD with Portland compilers and MVAPICH
You need the mpi/mvapich/pgi module but ACML comes in automatically with PGI.
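That is, something like (assuming the usual module command):

module add mpi/mvapich/pgi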
#----------------------------------------------------------------------------
# Makefile for cpmd.x (plane wave electronic calculation)
# Configuration: PGI-AMD64-MPI
# Creation of Makefile: Dec 5 2006
# on Linux tardis 2.6.15.1-clustervision-128_cvos #1 SMP Mon Sep 25 12:05:46 CEST 2006 x86_64 x86_64 x86_64 GNU/Linux
# Author: jb376
#----------------------------------------------------------------------------
#
SHELL = /bin/sh
#
#--------------- Default Configuration for PGI-AMD64-MPI ---------------
SRC  = .
DEST = .
BIN  = .
#QMMM_FLAGS = -D__QMECHCOUPL
#QMMM_LIBS = -L. -lmm
FFLAGS = -r8 -pc=64 -Msignextend
#LFLAGS = -Bstatic -L. -latlas_x86-64 $(QMMM_LIBS)
#LFLAGS = -Bstatic -L. -latlas_x86_64 $(QMMM_LIBS)
LFLAGS = -lacml $(QMMM_LIBS)
CFLAGS =
CPP = /lib/cpp -P -C -traditional
#CPPFLAGS = -D__Linux -D__PGI -DLAPACK -DFFT_DEFAULT -DPOINTER8 -D__pgf90 \
#           -DPARALLEL -DMP_LIBRARY=__MPI -DMYRINET
CPPFLAGS = -D__Linux -D__PGI -DLAPACK -DFFT_DEFAULT -DPOINTER8 -D__pgf90 \
           -DPARALLEL -DMP_LIBRARY=__MPI
NOOPT_FLAG =
CC = mpicc -O2 -Wall -m64
FC = mpif77 -c -fastsse -tp k8-64
LD = mpif77 -fastsse -tp k8-64
AR =
#----------------------------------------------------------------------------
I had problems compiling CPMD v3.11.1 using mpif77: the compiler complained about some valid Fortran statements (e.g. append and cycle). Using mpif90 instead resolved this.--james 11:56, 8 August 2007 (BST)
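A minimal sketch of that change to the Makefile above (assuming the same flags carry straight over to mpif90):

FC = mpif90 -c -fastsse -tp k8-64
LD = mpif90 -fastsse -tp k8-64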
The recent upgrade to tardis has changed how some modules work. In particular, mpicc now points to pgcc rather than gcc if the portland environment module is loaded. This is, Catherine and I think, the sane approach. The above CC options cause make to barf, as pgcc uses different flags from gcc. Change "-O2 -Wall -m64" to "-O2 -Minform=inform -pc=64" to give pgcc the equivalent options.--james 19:37, 7 March 2008 (GMT)
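In other words, the CC line in the Makefile above becomes:

CC = mpicc -O2 -Minform=inform -pc=64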
NAMD2 with Intel compilers
This one requires the openmpi/intel64 module.
Step 1: charm++
./build charm++ mpi-linux-amd64 icc ifort
my src/arch/mpi-linux-amd64/conv-mach.sh:
CMK_REAL_COMPILER=`mpiCC -show 2>/dev/null | cut -d' ' -f1 `
case "$CMK_REAL_COMPILER" in
g++) CMK_AMD64="-m64 -fPIC" ;;
esac
CMK_CPP_CHARM="/lib/cpp -P"
CMK_CPP_C="mpicc -E"
CMK_CC="mpicc $CMK_AMD64 "
CMK_CXX="mpiCC $CMK_AMD64 "
CMK_CXXPP="mpiCC -E $CMK_AMD64 "
#CMK_SYSLIBS="-lmpich"
# -lmpich is not needed as we replace 'icc' with 'mpicc' in the cc-icc.sh file
CMK_SYSLIBS=" "
CMK_LIBS="-lckqt $CMK_SYSLIBS "
CMK_LD_LIBRARY_PATH="-Wl,-rpath,$CHARMLIBSO/"
CMK_NATIVE_CC="gcc $CMK_AMD64 "
CMK_NATIVE_LD="gcc $CMK_AMD64 "
CMK_NATIVE_CXX="g++ $CMK_AMD64 "
CMK_NATIVE_LDXX="g++ $CMK_AMD64 "
CMK_NATIVE_LIBS=""
# fortran compiler
CMK_CF77="f77"
CMK_CF90="f90"
CMK_F90LIBS=" "
CMK_F77LIBS=" "
CMK_MOD_NAME_ALLCAPS=1
CMK_MOD_EXT="mod"
CMK_F90_USE_MODDIR=1
CMK_F90_MODINC="-p"
CMK_QT='generic64'
CMK_RANLIB="ranlib"
and src/arch/common/cc-icc.sh:
# Changed all the C/C++ compilers and linkers to the mpi compiler version:
#   s/icc/mpicc/
#   s/icpc/mpiCC/
CMK_CPP_C='mpicc -E '
CMK_CC="mpicc -fpic "
CMK_CXX="mpiCC -fpic "
CMK_CXXPP='mpiCC -E '
CMK_LD='mpicc -i_dynamic '
CMK_LDXX='mpiCC -i_dynamic '
CMK_LD_LIBRARY_PATH="-Wl,-rpath,$CHARMLIBSO/"
# The F90 needed changing to ifort and -fPIC adding
CMK_CF90='ifort -auto -fPIC '
CMK_CF90_FIXED="$CMK_CF90 -132 -FI "
CMK_NATIVE_F90="$CMK_CF90"
CMK_NATIVE_CC="$CMK_CC"
CMK_NATIVE_CXX="$CMK_CXX"
CMK_NATIVE_LD="$CMK_LD"
CMK_NATIVE_LDXX="$CMK_LDXX"
# I removed a bunch of bogus -L options pointing to an ancient and nonexistent ifc installation
CMK_F90LIBS='-lintrins -lIEPCF90 -lPEPCF90 -lF90 -lintrins -limf '
CMK_MOD_NAME_ALLCAPS=1
CMK_MOD_EXT="mod"
CMK_F90_USE_MODDIR=""
Since tardis's IB stack was updated I can't get the charm++ built with the Intel compilers to pass all the tests any more. One built with gcc seems to do fine, though. Load the environment/64-bit/openmpi/gnu64 module, unpack a fresh charm++ source tree, and cd into it.
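For example (the tarball name here is hypothetical; use whichever charm++ release you have):

module add environment/64-bit/openmpi/gnu64
tar xzf charm-5.9.tar.gz
cd charm-5.9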
Edit src/arch/mpi-linux-amd64/conv-mach.sh and change the
CMK_SYSLIBS="-lmpich"
to
CMK_SYSLIBS=" "
then
./build charm++ mpi-linux-amd64 --no-shared -O -DCMK_OPTIMIZE
The best test (according to the NAMD people) is to cd to tests/charm++/megatest and build and run that one, as sketched below. This does pass.
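Something like the following (the make invocation and process count are assumptions; check the megatest Makefile):

cd tests/charm++/megatest
make
mpirun -np 4 ./pgm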
Step 2: NAMD
Load the environment/64-bit/openmpi/intel64 module.
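That is:

module add environment/64-bit/openmpi/intel64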
./config tcl fftw Linux-amd64-MPI-icc
cd Linux-amd64-MPI-icc
my arch/Linux-amd64.tcl:
TCLDIR=/usr
TCLINCL=-I$(TCLDIR)/include
TCLLIB=-L$(TCLDIR)/lib -ltcl8.4 -ldl
TCLFLAGS=-DNAMD_TCL -DUSE_NON_CONST
TCL=$(TCLINCL) $(TCLFLAGS)
my arch/Linux-amd64.fftw:
FFTDIR=/usr/local/fftw2/intel/64/2.1.5
FFTINCL=-I$(FFTDIR)/include -I$(HOME)/fftw/include
FFTLIB=-L$(FFTDIR)/lib -L$(HOME)/fftw/lib -lsrfftw -lsfftw
FFTFLAGS=-DNAMD_FFTW
FFT=$(FFTINCL) $(FFTFLAGS)
my Make.charm:
CHARMBASE = /usr/local/charm++/charm-5.9-openmpi-gcc
my Linux-amd64-MPI-icc.arch, which owes a great deal to Jochen's below:
NAMD_ARCH = Linux-amd64
CHARMARCH = mpi-linux-amd64
FLOATOPTS = -ip -fno-rtti
CXX = mpiCC
CXXOPTS = -i-static -static-libcxa -O2 -unroll $(FLOATOPTS)
CXXNOALIASOPTS = -O2 -unroll -fno-alias $(FLOATOPTS)
CC = mpicc
COPTS = -i-static -static-libcxa -O2 $(FLOATOPTS)
now
make
Test it by doing an interactive qsub and
mpirun ./namd2 src/alanin
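For example (the queue request and whether mpirun needs an explicit -np under your scheduler are assumptions about the local setup):

qsub -I -l nodes=2:ppn=4
mpirun -np 8 ./namd2 src/alanin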
Jochen's Linux-amd64-MPI-icc.arch (vastly improves performance over the defaults):
NAMD_ARCH = Linux-amd64
# If using the gcc compiled charm++ you want to uncomment the line immediately below this
# CHARMARCH = mpi-linux-amd64
# and comment out the one below this - CEN
CHARMARCH = mpi-linux-amd64-icc
FLOATOPTS = -fno-rtti
CXX = /usr/local/Cluster-Apps/ofed/1.0/mpi/intel/mvapich-0.9.7-mlx2.1.0/bin/mpicxx
# This is a little odd as -tpp6 is a Portland option - CEN
CXXOPTS = -tpp6 -pc64 -i-static -static-libcxa -O2 -unroll $(FLOATOPTS)
CXXNOALIASOPTS = -O2 -unroll -fno-alias $(FLOATOPTS)
CC = /usr/local/Cluster-Apps/ofed/1.0/mpi/intel/mvapich-0.9.7-mlx2.1.0/bin/mpicc
COPTS = -i-static -static-libcxa -O2 $(FLOATOPTS)
FFTW 2.1.5 with MVAPICH
This is not really well documented elsewhere. FFTW 2.1.5 is used in (e.g.) CPMD as an alternative to the default FFT engine supplied. It is trivial to compile in serial (FFTW 3 is even easier, but sadly its parallel version is in alpha and incompatible with the widely used FFTW 2). On tardis, in the fftw-2.1.5 directory formed by extracting the tarball:
- env CC=mpicc F77=mpif90 ./configure --prefix=`pwd` --enable-mpi --enable-sse
- make
- make install
Having to set the C compiler for parallel compilation is not mentioned in the docs... This should also work with openMPI.
BLACS and scaLAPACK with openMPI
BLACS and scaLAPACK are the parallel equivalents of the BLAS and LAPACK libraries; note that they require BLAS and LAPACK to be available.
I didn't have any luck using the BLACS and scaLAPACK provided on tardis (either the intel modules or via MKL), so I compiled my own. The BLACS and scaLAPACK documentation on using openMPI is woeful; fortunately the openMPI people are lovely and tell us how to do it in their FAQ.
For BLACS:
- Download mpiblacs.tgz and mpiblacs-patch03.tgz and extract. Extract the patch second, to apply it.
- Copy the relevant template make include file to the BLACS home directory:
cp BMAKES/BMake.MPI-LINUX BMake.inc
- Edit Bmake.inc according to the openMPI FAQ (except I used mpif90 rather than mpif77).
- Set INTFACE=-DADD_ (this is crucial if you ever want to link it to something!).
- Compile the libraries:
make mpi
- Compile the tests:
make tester
The libraries reside in BLACS/LIB/*.a and the test executables in BLACS/TESTING/EXE/x*. The tests need to be run using mpirun. All the tests pass (note that the final test is an abort call, which gives a stack trace; this is the correct behaviour, not an error).
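For example (the executable name below is a guess based on the MPI-LINUX template's naming; check BLACS/TESTING/EXE for what was actually built):

cd BLACS/TESTING/EXE
mpirun -np 4 ./xFbtest_MPI-LINUX-0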
For scaLAPACK:
- Download scalapack.tgz and extract.
- Copy SLmake.inc.example to SLmake.inc and edit it according to the openMPI FAQ (again, I used mpif90 rather than mpif77).
- Specify the locations of your BLAS and LAPACK libraries.
- Compile:
make
- You can also make tests:
make exe
The library is libscalapack.a in the scaLAPACK home directory and the tests are in the TESTING subdirectory. The tests either pass or warn about input values, apart from xcqr (which tests the single-precision complex QR factorisation routines: not something I worry about).
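A hypothetical link line for a program using the self-built libraries; the paths, the exact BLACS library names (the BLACS build may append a suffix such as _MPI-LINUX-0), and the repeated F77 init library (for the circular BLACS dependencies) are all assumptions, so check BLACS/LIB for what you actually have:

mpif90 -o myprog myprog.f90 \
  -L$HOME/scalapack -lscalapack \
  -L$HOME/BLACS/LIB -lblacsF77init -lblacs -lblacsF77init \
  -llapack -lblas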
mek-quake
GAMESS-US
This will probably work on clust and nimbus too, as they are very similar machines.
module add pgi64/7.1-6
You need to edit the scripts comp, compall, lked, and ddi/compddi. In all of these, set TARGET to linux64, and in the appropriate section set the Fortran compiler (FORTRAN) to pgf77. Leave the C compiler (CCOMP) as gcc. The build will automatically link in the Portland copy of ACML, so you get a fast BLAS library. We did not need to change any other options.
In the DDI compilation, set COMM to sockets.
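A hedged sketch of the edits (the GAMESS build scripts are csh; these assignments follow the stock scripts as far as I recall, so treat the exact lines as assumptions):

# in comp, compall and lked:
set TARGET=linux64
set FORTRAN=pgf77    # in the appropriate (linux64) section; leave CCOMP as gcc
# in ddi/compddi:
set COMM=sockets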