<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dw34</id>
	<title>Docswiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dw34"/>
	<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php/Special:Contributions/Dw34"/>
	<updated>2026-04-13T08:08:53Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.7</generator>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1826</id>
		<title>VMD</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1826"/>
		<updated>2026-04-12T07:53:40Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VMD is a molecular visualization program installed on all workstations and clusters. The official documentation can be found [http://www.ks.uiuc.edu/Research/vmd/current/ug/ug.html here], with some tutorials [http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html/index.html here]. As with gnuplot, however, the wealth of options means it can take a long time to find the one command you need, so below are some useful basic settings and tips for using VMD. For producing graphics for publication, [[Pymol]] is probably a better option as it has a built-in ray-tracing routine, but for general visualization VMD is much quicker.&lt;br /&gt;
&lt;br /&gt;
It is possible to load most files using command-line flags, which makes it easy to load many frames into molecules with different topology files.  The -f flag indicates that all subsequent files (until the next -f flag or the end of the command) should be loaded into a single molecule.  There are also flags for selecting different file types (the default is .pdb): most commonly -parm7 for topology files generated by Amber and -rst7 for restart files generated by Amber.  mdcrd files are denoted -crd, and periodic mdcrd files -crdbox.&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
        vmd -f first_mol.pdb \&lt;br /&gt;
            -f -parm7 second_mol.prmtop -rst7 second_mol.rst \&lt;br /&gt;
            -f -parm7 third_mol.prmtop -crdbox third_mol_1st_frames.crd -crdbox third_mol_2nd_frames&lt;br /&gt;
&lt;br /&gt;
== Rendering Molecules with a Transparent Background ==&lt;br /&gt;
&lt;br /&gt;
Dumping selected frames from a movie, or a single image, can be achieved using the Render option in the VMD GUI.&lt;br /&gt;
Choosing the POV-Ray option should produce a vmdscene.pov file. This .pov file can then be converted to a png with a&lt;br /&gt;
transparent background using povray (the +UA flag below), even if the background is white in VMD:&lt;br /&gt;
&lt;br /&gt;
povray +W829 +H771 -Ivmdscene.pov -Ovmdscene.png +Q11 +J +A +FN +UA&lt;br /&gt;
&lt;br /&gt;
The +W and +H values must be taken from the vmdscene.pov file; they should be given in a comment statement at the top.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Movie Making Tips ==&lt;br /&gt;
To load all frames in one go, select the file type in the &amp;quot;Determine file type&amp;quot; box; the &amp;quot;load all at once&amp;quot; button&lt;br /&gt;
will then no longer be greyed out, so you can select it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
VMD movie making does not seem to work properly with step sizes different from one: the last frame is repeated many times. Instead, the&lt;br /&gt;
desired frames can be extracted first using sed:&lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~66087,+62939d&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+62939d deletes lines 1 to 62940, deleting 20 frames &lt;br /&gt;
&lt;br /&gt;
The ~66087 repeats the deletion every 66087 lines, i.e. every 21 frames. The counter operates on the original line numbers.&lt;br /&gt;
&lt;br /&gt;
This example is for a system with 3145 atoms, so each frame is 3147 lines including the two-line xyz header.&lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~Y,+Xd&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+Xd deletes lines 1 to X+1, so to delete n frames and keep every (n+1)th, for frames of m lines each, you need&lt;br /&gt;
&lt;br /&gt;
X=n*m-1 &lt;br /&gt;
&lt;br /&gt;
and Y=(n+1)*m &lt;br /&gt;
&lt;br /&gt;
m is the number of atoms plus two. If the total number of frames is not a multiple of n+1,&lt;br /&gt;
then some frames will be lost at the end.&lt;br /&gt;
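These formulas can be checked with a short shell sketch (values taken from the worked example above; GNU sed syntax assumed; path.xyz is not read, the command is only printed):&lt;br /&gt;

```shell
# Compute the sed offsets for keeping one frame in every n+1 of an
# xyz trajectory; natoms and n match the worked example above.
natoms=3145                  # atoms per frame
n=20                         # frames deleted between kept frames
m=$((natoms + 2))            # lines per frame, including the 2-line xyz header
X=$((n * m - 1))             # '1,+Xd' deletes lines 1..X+1, i.e. n frames
Y=$(((n + 1) * m))           # '1~Y' repeats the deletion every n+1 frames
echo "sed -e '1~${Y},+${X}d' path.xyz > temp"
```

This reproduces the command above, with Y=66087 and X=62939.&lt;br /&gt;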
&lt;br /&gt;
To make the movie pause at the start and finish, just duplicate these end points sufficiently many times. If there are&lt;br /&gt;
slow portions around local minima, try adjusting the energy difference parameter on the PATH line in the OPTIM odata file for the initial&lt;br /&gt;
generation of path.xyz.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
* [[using VMD to display and manipulate &#039;.pdb&#039; files]]&lt;br /&gt;
* [[loading coordinate files into VMD with the help of an AMBER topology file]] e.g. to visualise the results of a GMIN run using AMBER9&lt;br /&gt;
* making movies from a &#039;.pdb&#039; file containing multiple structures. &#039;&#039;This is dealt with in the OPTIM section as part of the tutorial on making a movie of a path&#039;&#039;&lt;br /&gt;
* [[visualising normal modes using VMD and OPTIM]]&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1825</id>
		<title>VMD</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1825"/>
		<updated>2026-04-12T07:45:39Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VMD is a molecular visualization program installed on all workstations and clusters. The official documentation can be found [http://www.ks.uiuc.edu/Research/vmd/current/ug/ug.html here], with some tutorials [http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html/index.html here]. As with gnuplot, however, the wealth of options means it can take a long time to find the one command you need, so below are some useful basic settings and tips for using VMD. For producing graphics for publication, [[Pymol]] is probably a better option as it has a built-in ray-tracing routine, but for general visualization VMD is much quicker.&lt;br /&gt;
&lt;br /&gt;
It is possible to load most files using command-line flags, which makes it easy to load many frames into molecules with different topology files.  The -f flag indicates that all subsequent files (until the next -f flag or the end of the command) should be loaded into a single molecule.  There are also flags for selecting different file types (the default is .pdb): most commonly -parm7 for topology files generated by Amber and -rst7 for restart files generated by Amber.  mdcrd files are denoted -crd, and periodic mdcrd files -crdbox.&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
        vmd -f first_mol.pdb \&lt;br /&gt;
            -f -parm7 second_mol.prmtop -rst7 second_mol.rst \&lt;br /&gt;
            -f -parm7 third_mol.prmtop -crdbox third_mol_1st_frames.crd -crdbox third_mol_2nd_frames&lt;br /&gt;
&lt;br /&gt;
== Rendering Molecules with a Transparent Background ==&lt;br /&gt;
&lt;br /&gt;
Dumping selected frames from a movie, or a single image, can be achieved using the Render option in the VMD GUI.&lt;br /&gt;
Choosing the POV-Ray option should produce a vmdscene.pov file. This .pov file can then be converted to a png with a&lt;br /&gt;
transparent background using povray (the +UA flag below), even if the background is white in VMD:&lt;br /&gt;
&lt;br /&gt;
povray +W829 +H771 -Ivmdscene.pov -Ovmdscene.png +Q11 +J +A +FN +UA&lt;br /&gt;
&lt;br /&gt;
The +W and +H values must be taken from the vmdscene.pov file; they should be given in a comment statement at the top.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Movie Making Tips ==&lt;br /&gt;
To load all frames in one go, select the file type in the &amp;quot;Determine file type&amp;quot; box; the &amp;quot;load all at once&amp;quot; button&lt;br /&gt;
will then no longer be greyed out, so you can select it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
VMD movie making does not seem to work properly with step sizes different from one: the last frame is repeated many times. Instead, the&lt;br /&gt;
desired frames can be extracted first using sed:&lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~66087,+62939d&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+62939d deletes lines 1 to 62940, deleting 20 frames &lt;br /&gt;
&lt;br /&gt;
The ~66087 repeats the deletion every 66087 lines, i.e. every 21 frames. The counter operates on the original line numbers.&lt;br /&gt;
&lt;br /&gt;
This example is for a system with 3145 atoms, so each frame is 3147 lines including the two-line xyz header.&lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~Y,+Xd&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+Xd deletes lines 1 to X+1, so to delete n frames and keep every (n+1)th, for frames of m lines each, you need&lt;br /&gt;
&lt;br /&gt;
X=n*m-1 &lt;br /&gt;
&lt;br /&gt;
and Y=(n+1)*m &lt;br /&gt;
&lt;br /&gt;
m is the number of atoms plus two. The total number of frames needs to be a multiple of n+1.&lt;br /&gt;
If necessary, pad the original file with some extra copies of the last frame.&lt;br /&gt;
&lt;br /&gt;
To make the movie pause at the start and finish, just duplicate these end points sufficiently many times. If there are&lt;br /&gt;
slow portions around local minima, try adjusting the energy difference parameter on the PATH line in the OPTIM odata file for the initial&lt;br /&gt;
generation of path.xyz.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
* [[using VMD to display and manipulate &#039;.pdb&#039; files]]&lt;br /&gt;
* [[loading coordinate files into VMD with the help of an AMBER topology file]] e.g. to visualise the results of a GMIN run using AMBER9&lt;br /&gt;
* making movies from a &#039;.pdb&#039; file containing multiple structures. &#039;&#039;This is dealt with in the OPTIM section as part of the tutorial on making a movie of a path&#039;&#039;&lt;br /&gt;
* [[visualising normal modes using VMD and OPTIM]]&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Building_tleap&amp;diff=1824</id>
		<title>Building tleap</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Building_tleap&amp;diff=1824"/>
		<updated>2026-04-07T14:30:29Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start from the tarball:&lt;br /&gt;
&lt;br /&gt;
bunzip2 AmberTools20.tar.bz2&lt;br /&gt;
tar -xvf AmberTools20.tar&lt;br /&gt;
cd amber20_src&lt;br /&gt;
&lt;br /&gt;
export AMBERHOME=`pwd`&lt;br /&gt;
&lt;br /&gt;
If packages are missing, you may need to install them:&lt;br /&gt;
&lt;br /&gt;
apt-get install csh flex gfortran g++ xorg-dev zlib1g-dev libbz2-dev patch python-tk python-matplotlib&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To set up the environment variables, use&lt;br /&gt;
source /home/wales/ambertools2/amber20_src/amber.sh&lt;br /&gt;
which could go in ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, in ~/ambertools2/amber20_src/AmberTools/src/leap, run&lt;br /&gt;
&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
&lt;br /&gt;
Now tleap and teLeap are in&lt;br /&gt;
&lt;br /&gt;
~/ambertools2/amber20_src/bin&lt;br /&gt;
&lt;br /&gt;
which tleap&lt;br /&gt;
&lt;br /&gt;
~/ambertools2/amber20_src/bin/tleap&lt;br /&gt;
&lt;br /&gt;
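tleap needs an input script; a minimal leap.in might look like the following sketch (the force field and file names are purely illustrative, not taken from this page):&lt;br /&gt;

```shell
# Write a hypothetical minimal leap.in: load a force field and a PDB,
# then save the topology and coordinate files. All names are examples.
printf '%s\n' \
  'source leaprc.protein.ff14SB' \
  'mol = loadpdb input.pdb' \
  'saveamberparm mol mol.prmtop mol.inpcrd' \
  'quit' > leap.in
```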
Now it should be possible to run&lt;br /&gt;
&lt;br /&gt;
tleap -f leap.in&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Building_tleap&amp;diff=1823</id>
		<title>Building tleap</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Building_tleap&amp;diff=1823"/>
		<updated>2026-04-07T14:28:11Z</updated>

		<summary type="html">&lt;p&gt;Dw34: Created page with &amp;quot;source /home/wales/ambertools2/amber20_src/amber.sh   in ~/ambertools2/amber20_src/AmberTools/src/leap  make make install  now tleap and teleap are in   ~/ambertools2/amber20_src/bin  which tleap  ~/ambertools2/amber20_src/bin/tleap  now it should be possible to do  tleap -f leap.in&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;source /home/wales/ambertools2/amber20_src/amber.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
in ~/ambertools2/amber20_src/AmberTools/src/leap&lt;br /&gt;
&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
&lt;br /&gt;
Now tleap and teLeap are in&lt;br /&gt;
&lt;br /&gt;
~/ambertools2/amber20_src/bin&lt;br /&gt;
&lt;br /&gt;
which tleap&lt;br /&gt;
&lt;br /&gt;
~/ambertools2/amber20_src/bin/tleap&lt;br /&gt;
&lt;br /&gt;
Now it should be possible to run&lt;br /&gt;
&lt;br /&gt;
tleap -f leap.in&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=AMBER&amp;diff=1822</id>
		<title>AMBER</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=AMBER&amp;diff=1822"/>
		<updated>2026-04-07T14:22:58Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[Notes on AMBER 12 interface]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Image:Amberpic.jpg|thumb|&amp;quot;The bugs have magically disappeared!&amp;quot;|200px|right]]&lt;br /&gt;
[http://amber.scripps.edu/ &amp;quot;AMBER&amp;quot;] (Assisted Model Building with Energy Refinement) refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos. We mainly use the MM force fields interfaced with other group software, i.e. [[GMIN]] or [[OPTIM]]. The included programmes, such as &#039;&#039;sander&#039;&#039; and &#039;&#039;antechamber&#039;&#039;, are however extremely useful in some circumstances! The full user manual for AMBER9 can be found in PDF format [http://amber.scripps.edu/doc9/amber9.pdf here].&lt;br /&gt;
&lt;br /&gt;
As of July 2009, the SVN repository also contains AMBER Tools, the stand-alone suite of programs that generate AMBER input files and allow you to analyse output. You can find a manual within the repository: look in AMBERTOOLS/doc.&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
&lt;br /&gt;
* [[Using AMBER 14 on the GPU and compute clusters]]&lt;br /&gt;
* [http://amber.scripps.edu/tutorials/ Ross Walker&#039;s AMBER9 tutorials] - recommended reading for &#039;&#039;&#039;ANYONE&#039;&#039;&#039; using AMBER!&lt;br /&gt;
* [[Generating parameters using AMBER&#039;s built in General Forcefield (gaff)]]&lt;br /&gt;
* [[Generating parameters using RESP charges from GAMESS-US]]&lt;br /&gt;
* [[Building tleap]] &lt;br /&gt;
* [[Simple scripts for LEaP to create topology and coordinate files]] &lt;br /&gt;
* [[Preparing an AMBER topology file for a protein system]] - step by step guide&lt;br /&gt;
* [[Setting up]] - step by step guide to prepare and then symmetrise a simple (protein-only) system&lt;br /&gt;
* [[Preparing an AMBER topology file for a protein plus ligand system]] - step by step guide&lt;br /&gt;
* [[Symmetrising AMBER topology files]] - step by step guide for symmetrising a complex protein+ligand system&lt;br /&gt;
* [[Producing a PDB from a coordinates and topology file]] - using &#039;&#039;ambpdb&#039;&#039;&lt;br /&gt;
* [[Running GMIN with MD move steps AMBER]]&lt;br /&gt;
* [[Evaluating different components of AMBER energy function with SANDER]]&lt;br /&gt;
* [[Running MD with AMBER]]&lt;br /&gt;
* [[Running MD on GPUS with pmemd_cuda]]&lt;br /&gt;
* [[REMD with AMBER]]&lt;br /&gt;
* [[Performing a hydrogen-bond analysis]]&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Compiling_Wales_Group_codes_using_cmake&amp;diff=1818</id>
		<title>Compiling Wales Group codes using cmake</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Compiling_Wales_Group_codes_using_cmake&amp;diff=1818"/>
		<updated>2025-01-17T21:27:59Z</updated>

		<summary type="html">&lt;p&gt;Dw34: /* Compiling with MPI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.cmake.org/ CMake] (Cross-platform Make) provides a simple, platform independent way for us to compile and test the group codebase. Dependencies are handled automatically, compilation can proceed in parallel to avoid long waits while testing changes and builds are done entirely outside of the source directory. It also enables us to use the [[Jenkins CI]] &#039;build bot&#039; system to automatically compile and test the code on a nightly basis - helping us catch troublesome commits before they affect other users. &lt;br /&gt;
&lt;br /&gt;
Although everything below refers to compiling [[GMIN]] with the Intel &#039;&#039;ifort&#039;&#039; compiler and AMBER9, the exact same procedure works for [[OPTIM]] and [[PATHSAMPLE]].&lt;br /&gt;
&lt;br /&gt;
Note that not every option for our codes is expected to actually compile with every compiler, for example, anything using CHARMM35/36 will not compile with &#039;&#039;nagfor&#039;&#039; or &#039;&#039;gfortran&#039;&#039;. This is nothing to do with our code - it&#039;s a CHARMM issue. You can get an idea for what should work by looking at the automated [[Jenkins CI]] builds.&lt;br /&gt;
&lt;br /&gt;
==Preparing to compile==&lt;br /&gt;
Before you get started, you need to ensure that the machine you are planning to compile on has cmake 2.8 or higher installed. You can check the current version like so:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmake --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The clusters have a module for cmake 3.0 (cmake 3.6.2 on Nest), which you can load using the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also need to create a directory to build the code in. We suggest that you create a directory for the compiler you are using within the program directory, under a subdirectory called &#039;builds&#039; - for example for compiling GMIN with ifort, you would make a directory here:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort&lt;br /&gt;
cd ~/softwarewales/GMIN/builds/ifort&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can call these directories whatever you like - but make sure it is clear to you what they contain! You might also want to check which version of the compiler you have loaded. This is important as the different clusters and workstations may have different default versions loaded, some of which might not work properly. You can check the compiler version currently loaded using the same &#039;--version&#039; flag we used for &#039;&#039;cmake&#039;&#039; above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ifort --version&lt;br /&gt;
ifort (IFORT) 12.1.3 20120212&lt;br /&gt;
Copyright (C) 1985-2012 Intel Corporation.  All rights reserved.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load a different compiler, you can use the &#039;&#039;module load&#039;&#039; or &#039;&#039;module swap&#039;&#039; commands. A list of all available modules can be accessed using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module av&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are having problems compiling, one of the first things to check is whether it works with a different version of the compiler!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: When compiling GMIN, if you are getting the error that there is no implicit type for ERFC in ewald.f90, try using a newer version of your compiler. This should be the built-in complementary error function.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Compiling using the ccmake GUI interface to set options==&lt;br /&gt;
[[Image:Ccmake.png|thumb|ccmake set up to compile A9GMIN|200px|right]]&lt;br /&gt;
&lt;br /&gt;
One advantage cmake has over make is that we can use the simple ccmake GUI. This interface lets us set options like compiling with AMBER9 or CHARMM35, toggle between &#039;Release&#039; and &#039;Debug&#039; builds (see below), and examine and alter the flags being used for the compilation if we wish. Before we can run ccmake, we need to specify the compiler and run cmake in our build directory (e.g. softwarewales/GMIN/builds/ifort). We specify the &#039;&#039;&#039;F&#039;&#039;&#039;ortran &#039;&#039;&#039;C&#039;&#039;&#039;ompiler by setting the &#039;&#039;&#039;$FC&#039;&#039;&#039; environment variable (in this case the Intel Fortran compiler, ifort), and then run &#039;&#039;cmake&#039;&#039; (on the command line), passing it the relative location of the [[GMIN]] source directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake ../../source&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you run &#039;&#039;ls&#039;&#039;, you will see some cmake files have been generated:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ls&lt;br /&gt;
CMakeCache.txt  CMakeFiles  cmake_install.cmake  Makefile  modules&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now run &#039;&#039;ccmake&#039;&#039; to open the GUI:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ccmake .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To navigate between options, use the arrow keys. Options can be toggled by pressing Return. To compile [[GMIN]] with AMBER9 (A9GMIN), we need to toggle the &#039;&#039;WITH_AMBER&#039;&#039; option &#039;&#039;ON&#039;&#039;. Once you have done this, you need to configure and generate appropriate cmake info. This is done by pressing &#039;c&#039; to configure, &#039;e&#039; to exit and then &#039;g&#039; to generate.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: for some builds (CHARMM with DFTB and CUDAGMIN), you might need to configure, exit and generate twice to set all necessary options&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can now compile A9GMIN in parallel as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;-j8&#039; flag here tells make to use up to 8 &#039;threads&#039; when building. For optimal performance, keep this slightly greater than the number of cores (CPUs) on the node you are working on. If all goes well, you should now have an A9GMIN binary in your build directory - congratulations!&lt;br /&gt;
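Following that advice, the thread count can be derived from the core count rather than hard-coded (a sketch assuming GNU coreutils for nproc; it only prints the command):&lt;br /&gt;

```shell
# Choose a make thread count slightly above the number of cores
# on the current node, as suggested above.
jobs=$(($(nproc) + 1))
echo "make -j${jobs}"
```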
&amp;lt;pre&amp;gt;&lt;br /&gt;
Linking Fortran executable A9GMIN&lt;br /&gt;
[100%] Built target A9GMIN&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------------------- 15:23:45&lt;br /&gt;
&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ls&lt;br /&gt;
A9GMIN          cmake_install.cmake   libcudadummylib.a  libmylapack.a  NAB&lt;br /&gt;
AMBER           display_version.f90   libdummylib.a      Makefile       nab_binaries_built&lt;br /&gt;
CMakeCache.txt  GMIN                  libgminlib.a       modules        porfuncs.f90&lt;br /&gt;
CMakeFiles      libamber12dummylib.a  libmyblas.a        n&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Plain [[GMIN]] is also built at the same time should you need it. You can move this into your ~/bin directory if you like, or anywhere else in your &#039;&#039;$PATH&#039;&#039; to make running it simple.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: If you want to use OPTIM with the new C++ implementation of the NEB routine, you will need to obtain the source code for that separately. See [https://wikis.ch.cam.ac.uk/wales/wiki/index.php/OPTIM here] for instructions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Compiling by setting options on the command line==&lt;br /&gt;
If you know the options you&#039;d like to set already (you can see them all in ccmake), you can save some time by passing them directly to &#039;&#039;cmake&#039;&#039; on the command line, bypassing the need for &#039;&#039;ccmake&#039;&#039;. For example, to compile A9GMIN (GMIN with the AMBER9 interface) using the Intel ifort compiler, you would run the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &#039;../../source&#039; is the relative location of the GMIN source directory. You can find some more examples of compiling from the command line below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: Sometimes you may get an error&#039;&#039;&#039; (for example, Fatal Error: Can&#039;t open module file &#039;someModule.mod&#039; for reading at (1): No such file or directory) when following this procedure. In that case there are three things you could try: make sure you are building in a new directory; if that does not help, run `make VERBOSE=1` instead of `make -j8`; or simply switch to using ccmake.&lt;br /&gt;
&lt;br /&gt;
==Compiling with MPI==&lt;br /&gt;
To compile with MPI support add the following flags when running cmake on the command line for ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=mpiifort cmake ~/softwarewales/GMIN/source/ -DWITH_MPI=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On nest this command line can be used to build A20GMIN for BHPT runs with AMBER20, provided WITH_AMBER20&lt;br /&gt;
is also enabled, either via ccmake or with -DWITH_AMBER20=yes. Modules that work are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gcc/7.5.0 cmake/3.23.2 ifort/64/2020/4/304 mpi/intel/2023.1.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding gfortran build on nest requires:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=mpifort CC=mpicc CXX=mpicxx cmake ~/softwarewales/GMIN/source/  -DBLAS_LIBRARIES=/lib64/libopenblas.so.0 -DCOMPILER_SWITCH=gfortran&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gcc/7.5.0 cmake/3.23.2 mpi/openmpi/gnu7/4.1.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above examples with ifort and gfortran should work with MYBLAS and MYLAPACK turned off, and the system blas library&lt;br /&gt;
is significantly faster.&lt;br /&gt;
&lt;br /&gt;
An older pgi build used:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=mpif90 CC=mpicc cmake ../source -DCOMPILER_SWITCH=pgi -DWITH_MPI=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here -DCOMPILER_SWITCH=pgi assumes you&#039;re using the Portland &#039;&#039;pgi&#039;&#039; compiler. Make sure you have the correct modules loaded (in this case &#039;&#039;pgi&#039;&#039; and &#039;&#039;mpi-pgi&#039;&#039;), and that the particular mpi you want (in this case &#039;&#039;mpi-pgi&#039;&#039;) is listed before any other mpi&#039;s loaded (so that it has the highest priority). The modules can be loaded by typing:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load pgi/64/&lt;br /&gt;
module load mpi/openmpi/pgi/64/1.6.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and you can check which modules are loaded and in which order/priority by the &#039;&#039;module list&#039;&#039; command. You may need to &#039;&#039;module unload &amp;lt;name&amp;gt;&#039;&#039; any other mpi&#039;s that are higher up in the list than the one you want. You can of course set the COMPILER_SWITCH and WITH_MPI flags in &#039;&#039;ccmake&#039;&#039; if you prefer.&lt;br /&gt;
&lt;br /&gt;
Note: It has been observed that pgi/64/15.1 leads to compilation errors, and for now, it is best to use pgi/64/14.9&lt;br /&gt;
&lt;br /&gt;
==Advanced mode - changing compiler flags with ccmake==&lt;br /&gt;
[[Image:Ccmakeadvanced.png|thumb|ccmake advanced mode|200px|right]]&lt;br /&gt;
&lt;br /&gt;
Although initially the &#039;&#039;ccmake&#039;&#039; GUI looks very simple, there is a lot going on under the hood. By pressing &#039;t&#039; you can enter &#039;Advanced mode&#039;, which will show you all of the hidden options, for example the compiler flags that are being passed to &#039;&#039;make&#039;&#039; when you compile the code. You can also make changes to the flags here, for example if you would like to add &#039;-p&#039; for profiling.&lt;br /&gt;
&lt;br /&gt;
As with changing the build type, you simply select the field you&#039;d like to change using the arrow keys, press Return, make your changes and press Return again to save them. When you subsequently configure and generate as above, those altered flags will be used for the compilation.&lt;br /&gt;
&lt;br /&gt;
Note that these changes only apply in the build directory in which you make them.&lt;br /&gt;
&lt;br /&gt;
==Debugging runtime problems using gdb or valgrind==&lt;br /&gt;
If you are getting a segmentation fault, crash or other unexpected behaviour, you might want to run your job through a debugger like [http://www.gnu.org/software/gdb/ gdb] or [http://valgrind.org/ valgrind]. In order to maximise your chances of getting useful output, you should build a &#039;Debug&#039; version of the program you are having trouble with. To do this, you can either change the &#039;&#039;CMAKE_BUILD_TYPE&#039;&#039; in &#039;&#039;ccmake&#039;&#039; to &#039;Debug&#039; (press Return, change &#039;Release&#039; to &#039;Debug&#039; and press Return again), or on the command line like so for GMIN with AMBER 9 using the Intel ifort compiler:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_AMBER=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then run the binary &#039;&#039;through&#039;&#039; gdb or valgrind as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gdb A9GMIN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
valgrind A9GMIN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I won&#039;t cover debugging with these tools here as it&#039;s a science in itself! Do some Googling and ask for help as needed :)&lt;br /&gt;
&lt;br /&gt;
==Debugging compilation problems==&lt;br /&gt;
There are many ways to try and track down why your code is not compiling. Before you start changing compilers, building a &#039;Debug&#039; version or changing machines, you might want to try running make again with the &#039;&#039;VERBOSE&#039;&#039; option enabled. This will dump a lot of potentially useful output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VERBOSE=1 make&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One possible gotcha: all .f and .f90 files in the relevant source directories will be compiled and added to a library. This is quite different from the old Makefile way of doing things, where source files were explicitly specified for compilation (via their corresponding .o file). So, if you are testing something by, for instance, copying code.f90 to code.myhack.f90 and code.orig.f90, then slightly editing a line or two of code.myhack.f90 and copying it back to code.f90 for use, this will probably cause linking problems due to multiply-defined subroutines (from all three files). The solution, if you must have alternative versions of the same file hanging around, is to differentiate the filenames with a suffix AFTER the .f[90].&lt;br /&gt;
 &lt;br /&gt;
Another occasional issue is the unexplained compiler bug - a problem with the version of the compiler you happen to be using. You can get an idea of which compiler versions we expect to work by checking the Jenkins build-bot output, as described in the &#039;Seeing console output&#039; section of the [[Jenkins CI]] page. If you are using a different version of the compiler in question, consider swapping to the version Jenkins is using with &#039;module swap&#039;.&lt;br /&gt;
&lt;br /&gt;
If the error message you are getting doesn&#039;t make sense to you after some Googling, go and ask someone - we all have these problems. Things you can try first include a different compiler version, or an entirely different compiler, e.g. pgi rather than ifort. Bear in mind that, as mentioned above, not all versions of each code will compile with every compiler. Make sure you&#039;re not trying to build something that isn&#039;t expected to work.&lt;br /&gt;
&lt;br /&gt;
To build the executables with the QUIP interface, it may be necessary to run &#039;&#039;make clean&#039;&#039; in the QUIP directory.&lt;br /&gt;
&lt;br /&gt;
==Extra command line build examples==&lt;br /&gt;
The commands below are by no means an exhaustive list, but should give you an idea of what is possible. You can use &#039;&#039;ccmake&#039;&#039; as described above to discover which variables (e.g. WITH_AMBER) can be manipulated on the command line like this. All of these examples assume your git repository is set up in &#039;&#039;/home/CRSID/softwarewales&#039;&#039; - make the appropriate modifications if you have it elsewhere.&lt;br /&gt;
&lt;br /&gt;
===GMIN===&lt;br /&gt;
&#039;&#039;&#039;A12GMIN&#039;&#039;&#039; (GMIN with AMBER12) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_amber12&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER12=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;C35GMIN&#039;&#039;&#039; (GMIN with CHARMM 35) using pgi:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/pgi_charmm35&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake -DWITH_CHARMM35=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CUDAGMIN&#039;&#039;&#039; (GMIN leveraging GPU minimisation via the AMBER 12 interface) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/5.5&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_cuda&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_CUDA=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will only work on machines with specific NVIDIA GPUs, for example when submitting jobs on the pat cluster. There is some additional information on the [[Using GMIN with GPUs]] page.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DFTBGMIN&#039;&#039;&#039; (GMIN with DFTBP) using Intel on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mkl/64/2022/0/0 cmake/3.23.2 ifort/64/2020/4/304&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc CXX=icc cmake ../../source -DWITH_DFTBP=yes&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or using GCC on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.23.2 gcc/12.2.0&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/gfortran_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
cmake ../../source -DWITH_DFTBP=yes -DWITH_MYBLAS=no -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===OPTIM===&lt;br /&gt;
&#039;&#039;&#039;A9OPTIM&#039;&#039;&#039; (OPTIM with AMBER9) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_amber&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER9=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;C35OPTIM&#039;&#039;&#039; (OPTIM with CHARMM 35) using pgi:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/pgi_charmm35&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake -DWITH_CHARMM35=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CUDAOPTIM&#039;&#039;&#039; (OPTIM leveraging GPU via the AMBER 12 interface) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/5.5&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_cuda5.5&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc cmake -DWITH_CUDA=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will only work on machines with specific NVIDIA GPUs, for example when submitting jobs on the pat cluster. There is some additional information on the [[Using GMIN and OPTIM with GPUs]] page.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DFTBOPTIM&#039;&#039;&#039; (OPTIM with DFTBP) using Intel on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mkl/64/2022/0/0 cmake/3.23.2 ifort/64/2020/4/304&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc CXX=icc cmake ../../source -DWITH_DFTBP=yes&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or using GCC on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.23.2 gcc/12.2.0&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/gfortran_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
cmake ../../source -DWITH_DFTBP=yes -DWITH_MYBLAS=no -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===PATHSAMPLE===&lt;br /&gt;
There are very few options for [[PATHSAMPLE]] as we don&#039;t need to worry about interfacing with a particular potential. As a result, every binary is simply called &#039;&#039;PATHSAMPLE&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Using nagfor (the NAG fortran compiler - check you have the module loaded - very strict!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/PATHSAMPLE/builds/nagfor&lt;br /&gt;
cd !$&lt;br /&gt;
FC=nagfor cmake ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using pgi (much more generous with coding slips/non-standard uses):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/PATHSAMPLE/builds/pgi&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring defaults - for developers==&lt;br /&gt;
&lt;br /&gt;
Fortran compilers and their corresponding default settings are all controlled by the file $SVN/CMakeModules/FindFORTRANCOMPILER.cmake ($SVN is your svn root directory). In particular, we may wish to edit the flags used for each set of compilers and build type. These are contained in the following block:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if(NOT COMPILER_FLAGS_WERE_SET)&lt;br /&gt;
   message(&amp;quot;Setting initial values for compiler flags&amp;quot;)&lt;br /&gt;
   if(COMPILER_SWITCH MATCHES &amp;quot;pgi&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-Mextend&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3 -Munroll -Mnoframe&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-Mextend -C -g -gopt -Mbounds -Mchkfpstk -Mchkptr -Mchkstk -Mcoff -Mdwarf1 -Mdwarf2 -Melf -Mpgicoff -traceback&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-Mextend -C -g -gopt -Mbounds -Mchkfpstk -Mchkptr -Mchkstk -Mcoff -Mdwarf1 -Mdwarf2 -Mdwarf3 -Melf -Mpgicoff -traceback&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-Mfree&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;gfortran&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-ffixed-line-length-200 -ffree-line-length-0&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
#      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -fbounds-check -Wuninitialized -O -ftrapv -fimplicit-none -fno-automatic&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -fbounds-check -Wuninitialized -O -ftrapv -fno-automatic&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;${CMAKE_Fortran_FLAGS_DEBUG} -fimplicit-none&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-ffree-form&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;nag&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-132 -kind=byte -maxcontin=3000&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-mismatch_all -O4&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -mismatch_all -ieee=stop&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-C=all -mtrace=all -gline -g -mismatch_all -ieee=stop&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-free&amp;quot; CACHE TYPE STRING) # js850&amp;gt; is this ever used?&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;ifort&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-132 -heap-arrays -assume byterecl&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
# Warnings about temporary argument creation and edit descriptor widths are disabled with the final flags.&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-C -g -traceback -debug full -check all,noarg_temp_created -diag-disable 8290,8291&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-debug all -check all -implicitnone -warn unused -fp-stack-check -ftrapuv -check pointers -check bounds&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-free&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   else()&lt;br /&gt;
      message(FATAL_ERROR &amp;quot;unknown comiler switch: ${COMPILER_SWITCH}&amp;quot;)&lt;br /&gt;
   endif()&lt;br /&gt;
    SET(COMPILER_FLAGS_WERE_SET yes CACHE TYPE INTERNAL)&lt;br /&gt;
endif(NOT COMPILER_FLAGS_WERE_SET)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main if/elseif blocks correspond to compiler switches. Inside these are the default flags for each of our build types (release, debug and debug_slow), which are configured using ccmake. These can be edited if we wish to change the default behaviour (e.g. the recent addition of -check all,noarg_temp_created -diag-disable 8290,8291 to suppress annoying warning messages for ifort).&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Compiling_Wales_Group_codes_using_cmake&amp;diff=1817</id>
		<title>Compiling Wales Group codes using cmake</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Compiling_Wales_Group_codes_using_cmake&amp;diff=1817"/>
		<updated>2025-01-17T21:26:08Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.cmake.org/ CMake] (Cross-platform Make) provides a simple, platform independent way for us to compile and test the group codebase. Dependencies are handled automatically, compilation can proceed in parallel to avoid long waits while testing changes and builds are done entirely outside of the source directory. It also enables us to use the [[Jenkins CI]] &#039;build bot&#039; system to automatically compile and test the code on a nightly basis - helping us catch troublesome commits before they affect other users. &lt;br /&gt;
&lt;br /&gt;
Although everything below refers to compiling [[GMIN]] with the Intel &#039;&#039;ifort&#039;&#039; compiler and AMBER9, the exact same procedure works for [[OPTIM]] and [[PATHSAMPLE]].&lt;br /&gt;
&lt;br /&gt;
Note that not every option for our codes is expected to compile with every compiler: for example, anything using CHARMM35/36 will not compile with &#039;&#039;nagfor&#039;&#039; or &#039;&#039;gfortran&#039;&#039;. This has nothing to do with our code - it&#039;s a CHARMM issue. You can get an idea of what should work by looking at the automated [[Jenkins CI]] builds.&lt;br /&gt;
&lt;br /&gt;
==Preparing to compile==&lt;br /&gt;
Before you get started, you need to ensure that the machine you are planning to compile on has cmake 2.8 or higher installed. You can check the current version like so:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmake --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The clusters have a module for cmake 3.0 (cmake 3.6.2 on Nest), which you can load using the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also need to create a directory to build the code in. We suggest that you create a directory for the compiler you are using within the program directory, under a subdirectory called &#039;builds&#039; - for example for compiling GMIN with ifort, you would make a directory here:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort&lt;br /&gt;
cd ~/softwarewales/GMIN/builds/ifort&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can call these directories whatever you like - but make sure it is clear to you what they contain! You might also want to check which version of the compiler you have loaded. This is important as the different clusters and workstations may have different default versions loaded, some of which might not work properly. You can check the compiler version currently loaded using the same &#039;--version&#039; flag we used for &#039;&#039;cmake&#039;&#039; above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ifort --version&lt;br /&gt;
ifort (IFORT) 12.1.3 20120212&lt;br /&gt;
Copyright (C) 1985-2012 Intel Corporation.  All rights reserved.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load a different compiler, you can use the &#039;&#039;module load&#039;&#039; or &#039;&#039;module swap&#039;&#039; commands. A list of all available modules can be accessed using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module av&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
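For example, to move to a specific compiler version (the module names here are illustrative - check &#039;&#039;module av&#039;&#039; for the ones on your machine):&lt;br /&gt;

```shell
# Illustrative module names - list the available versions first:
module av ifort
# then swap out whichever ifort is currently loaded for a specific one:
module swap ifort ifort/64/2020/4/304
```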
If you are having problems compiling, one of the first things to check is whether it works with a different version of the compiler!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: When compiling GMIN, if you are getting the error that there is no implicit type for ERFC in ewald.f90, try using a newer version of your compiler. This should be the built-in complementary error function.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Compiling using the ccmake GUI interface to set options==&lt;br /&gt;
[[Image:Ccmake.png|thumb|ccmake set up to compile A9GMIN|200px|right]]&lt;br /&gt;
&lt;br /&gt;
One advantage cmake has over make is that we can use the simple ccmake GUI. This interface lets us set options such as compiling with AMBER9 or CHARMM35, toggle between &#039;Release&#039; and &#039;Debug&#039; builds (see below), and examine and alter the flags being used for the compilation if we wish. Before we can run ccmake, we need to specify the compiler and run cmake in our build directory (e.g. softwarewales/GMIN/builds/ifort). We specify the &#039;&#039;&#039;F&#039;&#039;&#039;ortran &#039;&#039;&#039;C&#039;&#039;&#039;ompiler by setting the &#039;&#039;&#039;$FC&#039;&#039;&#039; environment variable (in this case the Intel Fortran compiler, ifort), and then run &#039;&#039;cmake&#039;&#039; (on the command line), passing it the relative location of the [[GMIN]] source directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake ../../source&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you run &#039;&#039;ls&#039;&#039;, you will see some cmake files have been generated:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ls&lt;br /&gt;
CMakeCache.txt  CMakeFiles  cmake_install.cmake  Makefile  modules&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now run &#039;&#039;ccmake&#039;&#039; to open the GUI:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ccmake .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To navigate between options, use the arrow keys. Options can be toggled by pressing Return. To compile [[GMIN]] with AMBER9 (A9GMIN), we need to toggle the &#039;&#039;WITH_AMBER&#039;&#039; option &#039;&#039;ON&#039;&#039;. Once you have done this, you need to configure and generate appropriate cmake info. This is done by pressing &#039;c&#039; to configure, &#039;e&#039; to exit and then &#039;g&#039; to generate.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: for some builds (CHARMM with DFTB and CUDAGMIN), you might need to configure, exit and generate twice to set all necessary options&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can now compile A9GMIN in parallel as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;-j8&#039; flag here tells make to use up to 8 &#039;threads&#039; when building. For optimal performance, you should keep this slightly greater than the number of cores (CPUs) on the node you are working on. If all goes well, you should now have an A9GMIN binary in your build directory - congratulations! &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Linking Fortran executable A9GMIN&lt;br /&gt;
[100%] Built target A9GMIN&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------------------- 15:23:45&lt;br /&gt;
&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ls&lt;br /&gt;
A9GMIN          cmake_install.cmake   libcudadummylib.a  libmylapack.a  NAB&lt;br /&gt;
AMBER           display_version.f90   libdummylib.a      Makefile       nab_binaries_built&lt;br /&gt;
CMakeCache.txt  GMIN                  libgminlib.a       modules        porfuncs.f90&lt;br /&gt;
CMakeFiles      libamber12dummylib.a  libmyblas.a        n&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Plain [[GMIN]] is also built at the same time should you need it. You can move this into your ~/bin directory if you like, or anywhere else in your &#039;&#039;$PATH&#039;&#039; to make running it simple.&lt;br /&gt;
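Rather than hard-coding the &#039;-j&#039; value, you can ask the node how many cores it has (&#039;&#039;nproc&#039;&#039; is a standard Linux utility; this is just a convenience, not a requirement):&lt;br /&gt;

```shell
# nproc reports the number of available cores; use it to size make's -j flag.
JOBS=$(nproc)
echo "make -j${JOBS}"
```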
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: If you want to use OPTIM with the new C++ implementation of the NEB routine, you will need to obtain the source code for that separately. See [https://wikis.ch.cam.ac.uk/wales/wiki/index.php/OPTIM here] for instructions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Compiling by setting options on the command line==&lt;br /&gt;
If you know the options you&#039;d like to set already (you can see them all in ccmake), you can save some time by passing them directly to &#039;&#039;cmake&#039;&#039; on the command line, bypassing the need for &#039;&#039;ccmake&#039;&#039;. For example, to compile A9GMIN (GMIN with the AMBER9 interface) using the Intel ifort compiler, you would run the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &#039;../../source&#039; is the relative location of the GMIN source directory. You can find some more examples of compiling from the command line below.&lt;br /&gt;
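If you want to see the available variables without opening &#039;&#039;ccmake&#039;&#039;, cmake itself can print the cache along with help text. This is run from a build directory where cmake has already been run once:&lt;br /&gt;

```shell
# Print cached cmake variables with their help strings; the WITH_* options
# for the various potentials appear in this list.
cmake -LH .
```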
&lt;br /&gt;
&#039;&#039;&#039;Note: Sometimes you may get an error&#039;&#039;&#039; (for example, Fatal Error: Can&#039;t open module file &#039;someModule.mod&#039; for reading at (1): No such file or directory) when following this procedure. In that case there are three things you could try: make sure you are building in a new directory; if that does not help, run `make VERBOSE=1` instead of `make -j8`; or simply switch to using ccmake.&lt;br /&gt;
&lt;br /&gt;
==Compiling with MPI==&lt;br /&gt;
To compile with MPI support add the following flags when running cmake on the command line for ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=mpiifort cmake ~/softwarewales/GMIN/source/ -DWITH_MPI=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
On nest, this command line can be used to build A20GMIN for BHPT runs with AMBER20, with WITH_AMBER20&lt;br /&gt;
enabled either via ccmake or with -DWITH_AMBER20=yes. Modules that work are:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gcc/7.5.0             cmake/3.23.2          ifort/64/2020/4/304   mpi/intel/2023.1.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The corresponding gfortran build on nest requires:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=mpifort CC=mpicc CXX=mpicxx cmake ~/softwarewales/GMIN/source/ -DBLAS_LIBRARIES=/lib64/libopenblas.so.0 -DCOMPILER_SWITCH=gfortran&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load gcc/7.5.0             cmake/3.23.2          mpi/openmpi/gnu7/4.1.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An older pgi build used:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=mpif90 CC=mpicc cmake ../source -DCOMPILER_SWITCH=pgi -DWITH_MPI=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here -DCOMPILER_SWITCH=pgi assumes you&#039;re using the Portland &#039;&#039;pgi&#039;&#039; compiler. Make sure you have the correct modules loaded (in this case &#039;&#039;pgi&#039;&#039; and &#039;&#039;mpi-pgi&#039;&#039;), and that the particular MPI you want (in this case &#039;&#039;mpi-pgi&#039;&#039;) is listed before any other MPI modules loaded (so that it has the highest priority). The modules can be loaded by typing:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load pgi/64/&lt;br /&gt;
module load mpi/openmpi/pgi/64/1.6.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and you can check which modules are loaded, and in which order of priority, with the &#039;&#039;module list&#039;&#039; command. You may need to &#039;&#039;module unload &amp;lt;name&amp;gt;&#039;&#039; any other MPI modules that are higher up in the list than the one you want. You can of course set the COMPILER_SWITCH and WITH_MPI flags in &#039;&#039;ccmake&#039;&#039; if you prefer.&lt;br /&gt;
&lt;br /&gt;
Note: It has been observed that pgi/64/15.1 leads to compilation errors, and for now, it is best to use pgi/64/14.9&lt;br /&gt;
&lt;br /&gt;
==Advanced mode - changing compiler flags with ccmake==&lt;br /&gt;
[[Image:Ccmakeadvanced.png|thumb|ccmake advanced mode|200px|right]]&lt;br /&gt;
&lt;br /&gt;
Although initially the &#039;&#039;ccmake&#039;&#039; GUI looks very simple, there is a lot going on under the hood. By pressing &#039;t&#039; you can enter &#039;Advanced mode&#039;, which will show you all of the hidden options, for example the compiler flags that are being passed to &#039;&#039;make&#039;&#039; when you compile the code. You can also make changes to the flags here, for example if you would like to add &#039;-p&#039; to enable profiling. &lt;br /&gt;
&lt;br /&gt;
As with changing the build type, you simply select the field you&#039;d like to change using the arrow keys, press Return, make your changes and press Return again to save them. When you subsequently configure and generate as above, the altered flags will be used for the compilation.&lt;br /&gt;
&lt;br /&gt;
Note that these changes only apply in the build directory in which you make them.&lt;br /&gt;
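The same changes can also be made non-interactively: any variable visible in advanced mode can be set with -D when re-running cmake in the build directory. A sketch, using the &#039;-p&#039; profiling flag mentioned above as the example:&lt;br /&gt;

```shell
# Re-run cmake in the build directory, appending -p to the release flags;
# equivalent to editing CMAKE_Fortran_FLAGS_RELEASE in ccmake's advanced mode.
FC=ifort cmake -DCMAKE_Fortran_FLAGS_RELEASE="-O3 -p" ../../source
make -j8
```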
&lt;br /&gt;
==Debugging runtime problems using gdb or valgrind==&lt;br /&gt;
If you are getting a segmentation fault, crash or other unexpected behaviour, you might want to run your job through a debugger like [http://www.gnu.org/software/gdb/ gdb] or [http://valgrind.org/ valgrind]. To maximise your chances of getting useful output, you should build a &#039;Debug&#039; version of the program you are having trouble with. To do this, you can either change the &#039;&#039;CMAKE_BUILD_TYPE&#039;&#039; in &#039;&#039;ccmake&#039;&#039; to &#039;Debug&#039; (press Return, change &#039;Release&#039; to &#039;Debug&#039; and press Return again), or set it on the command line, like so for GMIN with AMBER 9 using the Intel ifort compiler:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_AMBER=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then run the binary &#039;&#039;through&#039;&#039; gdb or valgrind as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gdb A9GMIN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
valgrind A9GMIN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
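As a minimal sketch (assuming a &#039;Debug&#039; build of A9GMIN and its input files sit in the current directory), a first session usually just reproduces the crash and asks for a backtrace:&lt;br /&gt;

```shell
# Minimal sketch - assumes a Debug build of A9GMIN in the current directory.
# gdb runs the program non-interactively and prints a backtrace on a crash:
gdb -batch -ex run -ex bt ./A9GMIN

# valgrind with these flags additionally reports where uninitialised values
# and leaked memory were first created:
valgrind --track-origins=yes --leak-check=full ./A9GMIN
```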
&lt;br /&gt;
I won&#039;t cover debugging with these tools here as it&#039;s a science in itself! Do some Googling and ask for help as needed :)&lt;br /&gt;
&lt;br /&gt;
==Debugging compilation problems==&lt;br /&gt;
There are many ways to try and track down why your code is not compiling. Before you start changing compilers, building a &#039;Debug&#039; version or changing machines, you might want to try running make again with the &#039;&#039;VERBOSE&#039;&#039; option enabled. This will dump a lot of potentially useful output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VERBOSE=1 make&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One possible gotcha: all .f and .f90 files in the relevant source directories will be compiled and added to a library. This is quite different from the old Makefile way of doing things, where source files were explicitly specified for compilation (via their corresponding .o file). So, if you are testing something by, for instance, copying code.f90 to code.myhack.f90 and code.orig.f90, then slightly editing a line or two of code.myhack.f90 and copying it back to code.f90 for use, you will probably cause linking problems due to multiply-defined subroutines (from all three files). The solution, if you must have alternative versions of the same file hanging around, is to differentiate the filenames by a suffix AFTER the .f[90].&lt;br /&gt;
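You can check which backup naming schemes are safe by imitating the build system&#039;s glob in the shell (illustration only - the directory and file names here are made up):&lt;br /&gt;

```shell
# Illustration with invented names: cmake collects every *.f and *.f90 in
# the source directories, so a backup called code.orig.f90 still matches
# the glob and gets compiled, while code.f90.orig is safely ignored.
mkdir -p /tmp/glob_demo && cd /tmp/glob_demo
touch code.f90 code.orig.f90 code.f90.orig
ls *.f90    # lists code.f90 and code.orig.f90, but not code.f90.orig
```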
 &lt;br /&gt;
Another occasional issue is the unexplained compiler bug - a problem with the version of the compiler you happen to be using. You can get an idea of which compiler versions we expect to work by checking the Jenkins build-bot output, as described in the &#039;Seeing console output&#039; section of the [[Jenkins CI]] page. If you are using a different version of the compiler in question, consider swapping to the version Jenkins is using with &#039;module swap&#039;.&lt;br /&gt;
&lt;br /&gt;
If the error message you are getting doesn&#039;t make sense to you after some Googling, go and ask someone - we all have these problems. Things you can try first include a different compiler version, or an entirely different compiler, e.g. pgi rather than ifort. Bear in mind that, as mentioned above, not all versions of each code will compile with every compiler. Make sure you&#039;re not trying to build something that isn&#039;t expected to work.&lt;br /&gt;
&lt;br /&gt;
To build the executables with the QUIP interface, it may be necessary to run &#039;&#039;make clean&#039;&#039; in the QUIP directory.&lt;br /&gt;
&lt;br /&gt;
==Extra command line build examples==&lt;br /&gt;
The commands below are by no means an exhaustive list, but should give you an idea of what is possible. You can use &#039;&#039;ccmake&#039;&#039; as described above to discover which variables (e.g. WITH_AMBER) can be manipulated on the command line like this. All of these examples assume your git repository is set up in &#039;&#039;/home/CRSID/softwarewales&#039;&#039; - make the appropriate modifications if you have it elsewhere.&lt;br /&gt;
&lt;br /&gt;
===GMIN===&lt;br /&gt;
&#039;&#039;&#039;A12GMIN&#039;&#039;&#039; (GMIN with AMBER12) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_amber12&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER12=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;C35GMIN&#039;&#039;&#039; (GMIN with CHARMM 35) using pgi:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/pgi_charmm35&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake -DWITH_CHARMM35=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CUDAGMIN&#039;&#039;&#039; (GMIN leveraging GPU minimisation via the AMBER 12 interface) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/5.5&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_cuda&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_CUDA=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will only work on machines with specific NVIDIA GPUs, for example when submitting jobs on the pat cluster. There is some additional information on the [[Using GMIN with GPUs]] page.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DFTBGMIN&#039;&#039;&#039; (GMIN with DFTBP) using Intel on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mkl/64/2022/0/0 cmake/3.23.2 ifort/64/2020/4/304&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc CXX=icc cmake ../../source -DWITH_DFTBP=yes&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or using GCC on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.23.2 gcc/12.2.0&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/gfortran_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
cmake ../../source -DWITH_DFTBP=yes -DWITH_MYBLAS=no -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===OPTIM===&lt;br /&gt;
&#039;&#039;&#039;A9OPTIM&#039;&#039;&#039; (OPTIM with AMBER9) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_amber&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER9=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;C35OPTIM&#039;&#039;&#039; (OPTIM with CHARMM 35) using pgi:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/pgi_charmm35&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake -DWITH_CHARMM35=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CUDAOPTIM&#039;&#039;&#039; (OPTIM leveraging GPU via the AMBER 12 interface) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/5.5&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_cuda5.5&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc cmake -DWITH_CUDA=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will only work on machines with specific NVIDIA GPUs, for example when submitting jobs on the pat cluster. There is some additional information on the [[Using GMIN and OPTIM with GPUs]] page.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DFTBOPTIM&#039;&#039;&#039; (OPTIM with DFTBP) using Intel on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mkl/64/2022/0/0 cmake/3.23.2 ifort/64/2020/4/304&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc CXX=icc cmake ../../source -DWITH_DFTBP=yes&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or using GCC on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.23.2 gcc/12.2.0&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/gfortran_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
cmake ../../source -DWITH_DFTBP=yes -DWITH_MYBLAS=no -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===PATHSAMPLE===&lt;br /&gt;
There are very few options for [[PATHSAMPLE]] as we don&#039;t need to worry about interfacing with a particular potential. As a result, every binary is simply called &#039;&#039;PATHSAMPLE&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Using nagfor (the NAG Fortran compiler - check you have the module loaded - very strict!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/PATHSAMPLE/builds/nagfor&lt;br /&gt;
cd !$&lt;br /&gt;
FC=nagfor cmake ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using pgi (much more generous with coding slips/non-standard uses):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/PATHSAMPLE/builds/pgi&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring defaults - for developers==&lt;br /&gt;
&lt;br /&gt;
Fortran compilers and their corresponding default settings are all controlled by the file $SVN/CMakeModules/FindFORTRANCOMPILER.cmake ($SVN is your svn root directory). In particular, we may wish to edit the flags used for each set of compilers and build type. These are contained in the following block:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if(NOT COMPILER_FLAGS_WERE_SET)&lt;br /&gt;
   message(&amp;quot;Setting initial values for compiler flags&amp;quot;)&lt;br /&gt;
   if(COMPILER_SWITCH MATCHES &amp;quot;pgi&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-Mextend&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3 -Munroll -Mnoframe&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-Mextend -C -g -gopt -Mbounds -Mchkfpstk -Mchkptr -Mchkstk -Mcoff -Mdwarf1 -Mdwarf2 -Melf -Mpgicoff -traceback&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-Mextend -C -g -gopt -Mbounds -Mchkfpstk -Mchkptr -Mchkstk -Mcoff -Mdwarf1 -Mdwarf2 -Mdwarf3 -Melf -Mpgicoff -traceback&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-Mfree&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;gfortran&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-ffixed-line-length-200 -ffree-line-length-0&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
#      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -fbounds-check -Wuninitialized -O -ftrapv -fimplicit-none -fno-automatic&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -fbounds-check -Wuninitialized -O -ftrapv -fno-automatic&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;${CMAKE_Fortran_FLAGS_DEBUG} -fimplicit-none&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-ffree-form&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;nag&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-132 -kind=byte -maxcontin=3000&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-mismatch_all -O4&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -mismatch_all -ieee=stop&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-C=all -mtrace=all -gline -g -mismatch_all -ieee=stop&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-free&amp;quot; CACHE TYPE STRING) # js850&amp;gt; is this ever used?&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;ifort&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-132 -heap-arrays -assume byterecl&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
# Warnings about temporary argument creation and edit descriptor widths are disabled with the final flags.&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-C -g -traceback -debug full -check all,noarg_temp_created -diag-disable 8290,8291&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-debug all -check all -implicitnone -warn unused -fp-stack-check -ftrapuv -check pointers -check bounds&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-free&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   else()&lt;br /&gt;
      message(FATAL_ERROR &amp;quot;unknown compiler switch: ${COMPILER_SWITCH}&amp;quot;)&lt;br /&gt;
   endif()&lt;br /&gt;
    SET(COMPILER_FLAGS_WERE_SET yes CACHE TYPE INTERNAL)&lt;br /&gt;
endif(NOT COMPILER_FLAGS_WERE_SET)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
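The guard-variable pattern in that block can be reduced to a minimal sketch (the variable names here are illustrative, not taken from the real file):

```cmake
# On the very first configure, the guard is unset, so the defaults are cached;
# FORCE overwrites any stale cached value. On later configures the guard is
# already cached, so edits made by the user via ccmake survive.
if(NOT MY_FLAGS_WERE_SET)
   set(CMAKE_Fortran_FLAGS_RELEASE "-O3" CACHE STRING "Release Fortran flags" FORCE)
   set(MY_FLAGS_WERE_SET yes CACHE INTERNAL "compiler flags guard")
endif()
```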
The main if/elseif blocks correspond to compiler switches. Inside these, there are the default flags for each of our build types (release, debug and debug_slow), which are configured using ccmake. These can be edited, if we wish to change the default behaviour (e.g. a recent addition of -check all,noarg_temp_created -diag-disable 8290,8291 to disable annoying warning messages for ifort).&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Git_Workflow&amp;diff=1816</id>
		<title>Git Workflow</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Git_Workflow&amp;diff=1816"/>
		<updated>2024-07-22T20:24:21Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Group software and papers in the process of being written are stored on the University&#039;s GitLab repositories [https://gitlab.developers.cam.ac.uk/]. You should be able to log in via Raven, but someone with privileges will need to add you to the two projects. This page describes a typical workflow for retrieving, modifying and updating the repositories. It is not, however, a comprehensive guide to Git. For that, consult your favourite web search engine.&lt;br /&gt;
&lt;br /&gt;
==Setting up SSH access==&lt;br /&gt;
&lt;br /&gt;
To smoothly access Gitlab without having to type your user name and password the whole time, set up an SSH key. In a terminal on your desktop, type&lt;br /&gt;
&lt;br /&gt;
  $ ssh-keygen -t ed25519 -C &amp;quot;GitLab&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If it complains about overwriting, then you have already done this step and you probably don&#039;t want to overwrite. It will ask you where to save the key: the default location should be fine. Now you are prompted for a passphrase. You can press ENTER to leave it blank, although that does mean that anyone who breaks into your computer can get access to GitLab with no further effort. You now have a file (default location is ~/.ssh/id_ed25519.pub) that contains your public key. Copy the entire contents of this file. &lt;br /&gt;
&lt;br /&gt;
On the GitLab website, click on your user icon in the top right and select &#039;Settings&#039; and then &#039;SSH Keys&#039; from the left menu. Paste the contents of your public key file into the box. Put something useful in the title, like the name of your desktop (yes, you should probably do this for each machine you want to access GitLab from, rather than copying keys between machines). Optionally, you can insert an expiry date, such as the date your funding runs out. Click the &#039;Add key&#039; button.&lt;br /&gt;
&lt;br /&gt;
To test that you now have access, in a terminal type&lt;br /&gt;
&lt;br /&gt;
  $ ssh -T git@gitlab.developers.cam.ac.uk&lt;br /&gt;
&lt;br /&gt;
After accepting the host key fingerprint, you should see a welcome message and then the connection will close.&lt;br /&gt;
&lt;br /&gt;
Some users have reported problems with using ed25519. RSA is available as an alternative. Generate an RSA key with&lt;br /&gt;
&lt;br /&gt;
  $ ssh-keygen -t rsa -C &amp;quot;GitLab&amp;quot;&lt;br /&gt;
&lt;br /&gt;
which will be saved by default at ~/.ssh/id_rsa.pub. Follow the same steps as above to add your public key to Gitlab.&lt;br /&gt;
&lt;br /&gt;
==Installing git LFS==&lt;br /&gt;
&lt;br /&gt;
Our software repository (git@gitlab.developers.cam.ac.uk:ch/wales/softwarewales.git) uses git Large File Storage (LFS) to manage some of the larger files. You must have the git LFS addon installed and initialised before cloning the software repository. If you do not, the clone will appear to succeed, but you will be missing some files. If you are using a department managed workstation, or any cluster other than rogue or nest, you will need to load a newer version of git:&lt;br /&gt;
&lt;br /&gt;
  $ module load git/2.0.0&lt;br /&gt;
&lt;br /&gt;
Replace 2.0.0 with whatever the newest available version of git is. Or, on your personal Ubuntu machine, run&lt;br /&gt;
&lt;br /&gt;
  $ sudo apt-get install git-lfs&lt;br /&gt;
&lt;br /&gt;
to install the necessary package. Whatever computer you are using, then run&lt;br /&gt;
&lt;br /&gt;
  $ git lfs install&lt;br /&gt;
&lt;br /&gt;
to inform git about the new LFS addon. This command only needs to be run once on each machine you intend to clone the software repository on. If you get the error message&lt;br /&gt;
&lt;br /&gt;
  Error: failed to call git rev-parse --git-dir: exit status 128 : fatal: .git&lt;br /&gt;
&lt;br /&gt;
then try&lt;br /&gt;
&lt;br /&gt;
  $ git lfs install --skip-repo&lt;br /&gt;
&lt;br /&gt;
instead. After you have cloned the repository, you should inspect the LFS files to make sure the clone worked correctly. You can see a list of the files in .gitattributes in the repository root directory. A good file to check might be THESES/PHD/ChrisWhittlestonPhD.pdf. If this file is a PDF of several megabytes, then the LFS clone succeeded. If it is a small plaintext file containing a URL, it did not.&lt;br /&gt;
&lt;br /&gt;
If you have already cloned the repository before installing the LFS addon, you will need to clone again (a pull will not suffice).&lt;br /&gt;
&lt;br /&gt;
==Initial Checkout==&lt;br /&gt;
&lt;br /&gt;
You need to fetch the repository. In Git terms, this is called cloning. You should only need to carry out this step once for each repository. In your favourite web browser, navigate to the project page on GitLab, e.g. [https://gitlab.developers.cam.ac.uk/ch/wales/softwarewales software]. Spot the blue button on the right labelled &#039;Clone&#039; and click on it. Copy the link under &#039;Clone with SSH&#039; to the clipboard (don&#039;t use the &#039;Clone with HTTPS&#039; link, or you will have to type your username and password every time). In a terminal, choose a suitable location, like your home directory, and change into it. Now type&lt;br /&gt;
&lt;br /&gt;
  $ git clone git@gitlab.developers.cam.ac.uk:ch/wales/softwarewales.git&lt;br /&gt;
&lt;br /&gt;
replacing the address with what you just copied. Git will download the repository. Once it has finished, check that you now have lots of new directories with the contents of the repository.&lt;br /&gt;
&lt;br /&gt;
You should also tell Git your name and email address. Git will record these in the commit logs so other users will know who to complain to when a commit breaks everything. Run&lt;br /&gt;
&lt;br /&gt;
  $ git config --global user.name &amp;quot;An Other&amp;quot;&lt;br /&gt;
  $ git config --global user.email &amp;quot;ao123@cam.ac.uk&amp;quot;&lt;br /&gt;
&lt;br /&gt;
replacing the name and email address in quotes as appropriate.&lt;br /&gt;
&lt;br /&gt;
==Submodules==&lt;br /&gt;
&lt;br /&gt;
Our software repository has become quite large as external potentials have been added. Submodules offer a way to compartmentalise the repository and speed up clones and updates. Self-contained third-party potentials are good candidates for splitting off into submodules. We will use GDML (Gradient-Domain Machine Learning) as an example. On an initial checkout, you will see a GDML/ directory in the root of the repository, but the directory will be empty. Most users do not need GDML, so will not care, or even be particularly aware, that GDML has not been cloned.&lt;br /&gt;
&lt;br /&gt;
Let us suppose that you actually need GDML. You can tell git that it is required by running&lt;br /&gt;
&lt;br /&gt;
  $ git submodule init GDML&lt;br /&gt;
&lt;br /&gt;
Then the next time you run&lt;br /&gt;
&lt;br /&gt;
  $ git submodule update&lt;br /&gt;
&lt;br /&gt;
the contents of GDML will be checked out. If GDML is subsequently updated, run the update command again to get the newest version. If at some later point you decide that you have finished with GDML and no longer need it checked out, run&lt;br /&gt;
&lt;br /&gt;
  $ git submodule deinit GDML&lt;br /&gt;
&lt;br /&gt;
and the GDML directory will be emptied. If the submodule you are interested in has its own submodules, add the --recursive flag to the commands. If you decide that you want all the submodules run&lt;br /&gt;
&lt;br /&gt;
  $ git submodule update --init --recursive&lt;br /&gt;
&lt;br /&gt;
and all submodules will be checked out. This is not generally recommended; make sure you have a good reason for needing all the submodules before running it.&lt;br /&gt;
&lt;br /&gt;
===Creating a new submodule===&lt;br /&gt;
&lt;br /&gt;
If you are creating an interface to a new large external potential, it may be appropriate to add the external potential as a submodule. It is appropriate if the potential is quite large (more than a few MB), is being placed in the root of the repository, and is not likely to require much in the way of changes after the interface is set up. Instead of adding the external potential to the softwarewales repository, create a new repository under Wales Group on Gitlab and place the files there. Within softwarewales run&lt;br /&gt;
&lt;br /&gt;
  $ git submodule add git@gitlab.developers.cam.ac.uk:ch/wales/my_new_repository.git&lt;br /&gt;
&lt;br /&gt;
where my_new_repository is replaced with whatever you named the new repository. You can also get the appropriate URL by going to the new repository on GitLab, clicking the &#039;Clone&#039; button and copying. This command makes the necessary changes to the .gitmodules file, which will then need to be committed and pushed.&lt;br /&gt;
&lt;br /&gt;
Turning an existing subdirectory into a submodule is also possible, but is slightly more complicated and is considered an advanced topic. Google is your friend. Do not mess with the Gitlab repository until you are satisfied you have made the correct adjustments locally.&lt;br /&gt;
&lt;br /&gt;
==Basic Workflow==&lt;br /&gt;
&lt;br /&gt;
Details for specific cases are below, but first, we mention the most important commands that you&#039;ll be running all the time. Imagine you&#039;ve just arrived in the morning and it&#039;s time to start working on  myfile.f90. The first command to run is&lt;br /&gt;
&lt;br /&gt;
  $ git pull&lt;br /&gt;
&lt;br /&gt;
This command contacts the remote repository on GitLab and fetches any commits that people may have made. Run this command frequently, and at least before every commit you make. One notable difference from updating in svn is that git will not merge other people&#039;s changes with files you have changed since your last commit. If other people have changed files you are working on, the pull will fail with an informative message. In this case, run&lt;br /&gt;
&lt;br /&gt;
  $ git stash&lt;br /&gt;
&lt;br /&gt;
which sets aside your local changes. Try the pull again, which should now succeed. Then run&lt;br /&gt;
&lt;br /&gt;
  $ git stash pop&lt;br /&gt;
&lt;br /&gt;
to reapply your local changes to the updated repository. The merge will usually happen automatically, but sometimes you will need to resolve conflicts yourself.&lt;br /&gt;
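The stash cycle can be sketched end to end in a throwaway repository (all paths, names and file contents below are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "ao123@cam.ac.uk"
git config user.name "An Other"
echo "original line" > myfile.f90
git add myfile.f90 && git commit -qm "Initial commit."
echo "local edit" >> myfile.f90   # uncommitted work in progress
git stash > /dev/null             # set local changes aside (a pull would go here)
git stash pop > /dev/null         # reapply them on top of the updated tree
grep -c "local edit" myfile.f90   # prints 1: the edit survived the cycle
```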
&lt;br /&gt;
Now you edit myfile.f90 and want to commit your changes. Run&lt;br /&gt;
&lt;br /&gt;
  $ git add myfile.f90&lt;br /&gt;
&lt;br /&gt;
Now the file is, in Git terminology, staged for commit. You haven&#039;t committed anything yet. You can add other files to the staging area too. Once your commit is ready, run&lt;br /&gt;
&lt;br /&gt;
  $ git commit -m &amp;quot;Informative message.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Replace &#039;Informative message&#039; with a brief message describing what changes are in your commit. At this point, you have updated your local repository and entered a commit in the permanent record. However, the commit hasn&#039;t gone to GitLab yet. To send it to GitLab (called the remote by Git), run&lt;br /&gt;
&lt;br /&gt;
  $ git push&lt;br /&gt;
&lt;br /&gt;
You can send multiple commits at once. This workflow should encourage you to commit often. Maybe you write a new function. Put in a commit. Then you add some stuff to keywords.f90 for the new functionality. Do another commit. Next you find a bug and fix it. Do another commit.&lt;br /&gt;
&lt;br /&gt;
Note that the remote can function as the backup of your work. Therefore you should probably push any new commits at least as often as the end of each day.&lt;br /&gt;
&lt;br /&gt;
==Checking out an older version of the master branch==&lt;br /&gt;
&lt;br /&gt;
There are all sorts of useful git tools for checking how individual files and branches have changed.&lt;br /&gt;
See the documentation for git diff for example. To check out the master branch current at a given date you can use:&lt;br /&gt;
&lt;br /&gt;
  $ git checkout `git rev-list -1 --before=&amp;quot;Feb 11 2024&amp;quot; master`&lt;br /&gt;
&lt;br /&gt;
==Writing a Paper==&lt;br /&gt;
&lt;br /&gt;
Writing a paper is slightly simpler than editing the group code (Discuss...) because we aren&#039;t worrying about multiple branches. Each paper is a separate repository. To start a new paper, go to GitLab and create a new repository by clicking the blue &#039;New Project&#039; button on your home screen. Create a blank project and make sure the project URL indicates it is under ch/wales rather than in your user space. Check out the new repository and start writing the paper in the blank directory. Each session of editing should involve&lt;br /&gt;
&lt;br /&gt;
# git pull&lt;br /&gt;
# make some edits&lt;br /&gt;
# git pull&lt;br /&gt;
# git add all the edited files&lt;br /&gt;
# git commit with a helpful message&lt;br /&gt;
# git push&lt;br /&gt;
&lt;br /&gt;
Simples. All authors will be editing the same branch (the master branch), so you&#039;ll see other authors&#039; updates straight away. This approach keeps things easy, but if two authors are working at exactly the same time, there may be some merging to do. Reduce the amount of merging by committing, pulling and pushing often.&lt;br /&gt;
&lt;br /&gt;
Do not add intermediate LaTeX files (.aux, .log, etc.) to the repository. Do not add your .dvi/.ps/.pdf documents either (except perhaps for proofs created by the journal). When it comes to revisions and resubmissions, do not create a new subdirectory for the new version. Git keeps the whole history so it is always possible to revert to a previous version.&lt;br /&gt;
&lt;br /&gt;
You will also want to checkout the [https://gitlab.developers.cam.ac.uk/ch/wales/bib bibliography repository] but you only need one copy of this, not one for each paper. A suitable directory structure might be to have a papers/ directory under your home directory that contains a directory for the bibliography repository and a directory for each paper you are currently working on.&lt;br /&gt;
&lt;br /&gt;
==Working on the group code==&lt;br /&gt;
&lt;br /&gt;
You&#039;ve just been talking to David and you&#039;ve come up with an exciting new feature to add to GMIN. It&#039;s going to take several days of coding, during which you&#039;ll want to back up your work on the remote, but you don&#039;t want to interfere with other people using GMIN. The solution is to create a new branch. A branch is your own version controlled copy of the code that you can edit at will without messing GMIN up for anyone else. All development should occur on branches. To create a new branch, run&lt;br /&gt;
&lt;br /&gt;
  $ git checkout -b exciting_feature&lt;br /&gt;
&lt;br /&gt;
This command both creates the branch and switches your working copy to it. Initially, your new branch is the same as the master branch you cut it from. However, the branch does not yet exist on GitLab. To create it, run&lt;br /&gt;
&lt;br /&gt;
  $ git push --set-upstream origin exciting_feature&lt;br /&gt;
&lt;br /&gt;
Now go ahead and edit files, making commits and pushing them to GitLab frequently.&lt;br /&gt;
&lt;br /&gt;
When your feature is complete and you have checked it works and that you haven&#039;t broken anything else, it&#039;s time to get it into the master branch. Several steps are required. Firstly, it&#039;s quite likely that other people have changed master since you cut off your branch. You need to test that your changes function with the new changes to master, so first you need to merge in master:&lt;br /&gt;
&lt;br /&gt;
  $ git checkout master&lt;br /&gt;
  $ git pull&lt;br /&gt;
  $ git checkout exciting_feature&lt;br /&gt;
  $ git merge master&lt;br /&gt;
  $ git push&lt;br /&gt;
&lt;br /&gt;
The pull command makes sure that your local copy of the repository is up to date. The merge command merges changes that have been made to the master branch into your branch. This creates a merge commit, which you then push to the remote copy of your branch.&lt;br /&gt;
&lt;br /&gt;
Most users do not have permission to edit the master branch. To get your new feature in, you have to create a merge request. Go to the project page on GitLab. From the drop-down menu of branches, select exciting_feature. Click the blue &#039;Create merge request&#039; button in the top right. You now have a page in which you can give your merge request a title and description. Assign the request to yourself and anyone else who has worked on this branch, and choose the person who is going to review it for you. Make sure you tick the boxes to delete the feature branch after the request is accepted. These options help keep the remote repository and history clean. You can edit the commit message for the one commit that will be created: by default it will be the name of your branch.&lt;br /&gt;
&lt;br /&gt;
The person you select as reviewer will get a notification and a copy of your changes. They will look through your changes to make sure they follow the group coding standards ([https://wikis.ch.cam.ac.uk/wales/wiki/index.php/Wales_Group_Fortran_conventions_for_group_software here]) and that you haven&#039;t broken anything. If there are any issues, they may request that you make some changes, which you can then commit to the exciting_feature branch. The merge request will be updated and the reviewer will get a notification. However, the reviewer will not be doing extensive testing, so it remains your responsibility to follow the coding standards and make sure everything works. You should make sure your changes compile with nagfor before submitting the merge request, as that is the most particular compiler.&lt;br /&gt;
&lt;br /&gt;
Once your code has passed the review, the reviewer will click the Merge button and your branch will be merged into master. An &#039;Approve&#039; from the reviewer alone won&#039;t usually be enough, because you probably don&#039;t have permission to write to master and hence cannot merge your new branch in yourself.&lt;br /&gt;
Once it has been merged, you can clean up your repository with the following commands&lt;br /&gt;
&lt;br /&gt;
  $ git checkout master&lt;br /&gt;
  $ git pull&lt;br /&gt;
  $ git branch -d exciting_feature&lt;br /&gt;
&lt;br /&gt;
These commands switch your working copy back to master, update your local copy of the repository, and delete the branch you made. You might like to check that your new feature is in the master branch files before deleting your branch. If something has gone wrong and you delete your branch before the changes are in master, it is possible to recover, as the commits won&#039;t actually be deleted from the remote for a few weeks, but the recovery is an advanced topic that is best avoided.&lt;br /&gt;
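Run locally in a scratch repository, the whole branch lifecycle looks like this (the merge step stands in for the reviewer pressing the Merge button; all names and file contents are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "ao123@cam.ac.uk"
git config user.name "An Other"
default=$(git symbolic-ref --short HEAD)   # usually master or main
echo "base code" > gmin.f90
git add gmin.f90 && git commit -qm "Initial commit."
git checkout -qb exciting_feature          # create the branch and switch to it
echo "new feature" >> gmin.f90
git commit -qam "Add exciting feature."
git checkout -q "$default"
git merge -q exciting_feature              # locally, what the Merge button does
git branch -d exciting_feature             # clean up once the merge is in
grep -c "new feature" gmin.f90             # prints 1: the feature is in master
```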
&lt;br /&gt;
===Large Files===&lt;br /&gt;
&lt;br /&gt;
If any new file you are adding is large (&amp;gt;10MB), it should be stored with git LFS rather than as a normal file. Fortunately, this is easy to do for new files. If you have already committed a large file and would now like to convert it, a much more involved process lies ahead. The best instructions the author could find when doing this during the initial repository migration were in the question at https://stackoverflow.com/questions/60995429/gradually-converting-a-repo-to-use-git-lfs-for-certain-files-one-file-at-a-time with the caveat that the bfg utility did not work and it was necessary to use git filter-branch as described at https://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository instead. Note that this process deletes the history of the existing file, which will then only appear from the most recent commit onwards. It may be possible to adjust this with a git rebase, but the author has not investigated.&lt;br /&gt;
&lt;br /&gt;
Anyway, if you haven&#039;t yet committed your large file, you simply need to run&lt;br /&gt;
&lt;br /&gt;
  $ git lfs track &amp;quot;&amp;lt;path-to-file&amp;gt;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
which informs git that this file is to be uploaded using LFS. This command edits the file .gitattributes, which will also need to be added to your commit. If you make a mistake, you can edit .gitattributes manually. You can see some examples as well as the current list of files that are uploaded with LFS in .gitattributes. As you can see from inspecting the file, it is also possible to use wildcards (&#039;*&#039;) to specify multiple files at once. Be careful with your rules though: the rule is applied over all files, so if you add a rule for &#039;myfile.f90&#039; and somewhere else in the repository there is another file with the same name, it will now also be uploaded with LFS. This can be useful for specifying, for example, all .mp4 files in the whole repository, with &amp;quot;*.mp4&amp;quot;. However, if you really want just a specific file, specify the path from the repository root, for example &amp;quot;OPTIM/source/myfile.f90&amp;quot;.&lt;br /&gt;
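For illustration, the entries that git lfs track writes into .gitattributes look like this (the paths here are made up):

```text
*.mp4 filter=lfs diff=lfs merge=lfs -text
OPTIM/source/mydata.bin filter=lfs diff=lfs merge=lfs -text
```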
&lt;br /&gt;
==Useful commands to know==&lt;br /&gt;
&lt;br /&gt;
  $ git status&lt;br /&gt;
&lt;br /&gt;
At any point, this command will show you what branch you are on, what files you have modified and staged and your local position compared to the remote. Use it often.&lt;br /&gt;
&lt;br /&gt;
  $ git branch&lt;br /&gt;
&lt;br /&gt;
Display a list of all the current branches.&lt;br /&gt;
&lt;br /&gt;
  $ git diff&lt;br /&gt;
&lt;br /&gt;
Show the differences between your working copy and the last commit, for all files. Add a file name to show only the differences for a specific file.&lt;br /&gt;
&lt;br /&gt;
  $ git log&lt;br /&gt;
&lt;br /&gt;
Display the commit history. Add a file or directory name afterwards to only show the commits that affected that file, or any file in the directory.&lt;br /&gt;
&lt;br /&gt;
  $ git reset HEAD myfile.f90&lt;br /&gt;
&lt;br /&gt;
Unstage myfile.f90 that you accidentally staged for the next commit, but actually don&#039;t want to commit just yet. The working copy of the file is not altered.&lt;br /&gt;
&lt;br /&gt;
  $ git checkout -- myfile.f90&lt;br /&gt;
&lt;br /&gt;
Revert myfile.f90 that you&#039;ve completely messed up to what it was at the last commit. Changes to your working copy are lost.&lt;br /&gt;
&lt;br /&gt;
  $ git reset --hard&lt;br /&gt;
&lt;br /&gt;
Throw away all working and staged changes, reverting the current state to the last commit.&lt;br /&gt;
&lt;br /&gt;
  $ git reset --hard 909a3cac63ae8782b258ebb8c27af361b555bff6&lt;br /&gt;
&lt;br /&gt;
Throw away all working and staged changes, reverting the current state to that of the commit specified. The long hex number is a commit hash. It is not human readable, but you can copy the relevant one from the commit log.&lt;br /&gt;
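A sketch of reverting to an earlier commit in a scratch repository (the hash is captured with git rev-parse rather than copied from the log by hand; all names are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "ao123@cam.ac.uk"
git config user.name "An Other"
echo "version 1" > myfile.f90
git add myfile.f90 && git commit -qm "First commit."
first=$(git rev-parse HEAD)     # the commit hash, as shown by git log
echo "version 2" > myfile.f90
git commit -qam "Second commit."
git reset --hard -q "$first"    # discard everything after the first commit
cat myfile.f90                  # prints: version 1
```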
&lt;br /&gt;
  $ git clean -f&lt;br /&gt;
&lt;br /&gt;
Throw away all untracked files. They will be deleted. Run with -n rather than -f to see which files would be deleted, but without actually doing anything.&lt;br /&gt;
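The difference between the dry run and the real thing, sketched in a scratch repository (file names are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "ao123@cam.ac.uk"
git config user.name "An Other"
echo "tracked" > keep.f90
git add keep.f90 && git commit -qm "Initial commit."
echo "scratch output" > junk.tmp          # untracked file
git clean -n                              # dry run: reports what would be removed
test -f junk.tmp && echo "dry run deleted nothing"
git clean -f > /dev/null                  # actually delete untracked files
test ! -f junk.tmp && echo "junk.tmp removed, keep.f90 untouched"
```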
&lt;br /&gt;
  $ git fetch -p &amp;amp;&amp;amp; for branch in $(git branch -vv | grep &#039;: gone]&#039; | awk &#039;{print $1}&#039;); do git branch -D $branch; done&lt;br /&gt;
&lt;br /&gt;
Delete all local branches that do not exist on GitLab. This command is useful to periodically clean up local branches after they have been merged and deleted on GitLab. Warning: do not run this command if you&#039;ve created a new local branch and not yet pushed it to GitLab.&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Allowing_read_access_to_your_directories&amp;diff=1814</id>
		<title>Allowing read access to your directories</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Allowing_read_access_to_your_directories&amp;diff=1814"/>
		<updated>2024-03-16T16:03:49Z</updated>

		<summary type="html">&lt;p&gt;Dw34: Created page with &amp;quot;Some of the clusters have permissions set on home directories that do not allow group read access. To change the access recursively use:  chmod -R g+r /home/&amp;lt;your crsid&amp;gt;  To m...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some of the clusters have permissions set on home directories that do not allow group read access.&lt;br /&gt;
To change the access recursively use:&lt;br /&gt;
&lt;br /&gt;
chmod -R g+r /home/&amp;lt;your crsid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To make all the subfolders accessible, you also need to add the executable bit. To do this for directories only, use&lt;br /&gt;
&lt;br /&gt;
find /home/&amp;lt;your crsid&amp;gt; -type d -exec chmod g+x {} \;&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Compiling_Wales_Group_codes_using_cmake&amp;diff=1797</id>
		<title>Compiling Wales Group codes using cmake</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Compiling_Wales_Group_codes_using_cmake&amp;diff=1797"/>
		<updated>2022-10-31T20:38:48Z</updated>

		<summary type="html">&lt;p&gt;Dw34: /* Debugging compilation problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.cmake.org/ CMake] (Cross-platform Make) provides a simple, platform independent way for us to compile and test the group codebase. Dependencies are handled automatically, compilation can proceed in parallel to avoid long waits while testing changes and builds are done entirely outside of the source directory. It also enables us to use the [[Jenkins CI]] &#039;build bot&#039; system to automatically compile and test the code on a nightly basis - helping us catch troublesome commits before they affect other users. &lt;br /&gt;
&lt;br /&gt;
Although everything below refers to compiling [[GMIN]] with the Intel &#039;&#039;ifort&#039;&#039; compiler and AMBER9 - the exact same procedure works for [[OPTIM]] and [[PATHSAMPLE]].&lt;br /&gt;
&lt;br /&gt;
Note that not every option for our codes is expected to actually compile with every compiler; for example, anything using CHARMM35/36 will not compile with &#039;&#039;nagfor&#039;&#039; or &#039;&#039;gfortran&#039;&#039;. This is nothing to do with our code - it&#039;s a CHARMM issue. You can get an idea of what should work by looking at the automated [[Jenkins CI]] builds.&lt;br /&gt;
&lt;br /&gt;
==Preparing to compile==&lt;br /&gt;
Before you get started, you need to ensure that the machine you are planning to compile on has cmake 2.8 or higher installed. You can check the current version like so:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmake --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The clusters have a module for cmake 3.0, which you can load using the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also need to create a directory to build the code in. We suggest that you create a directory for the compiler you are using within the program directory, under a subdirectory called &#039;builds&#039; - for example for compiling GMIN with ifort, you would make a directory here:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort&lt;br /&gt;
cd ~/softwarewales/GMIN/builds/ifort&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can call these directories whatever you like - but make sure it is clear to you what they contain! You might also want to check which version of the compiler you have loaded. This is important as the different clusters and workstations may have different default versions loaded, some of which might not work properly. You can check the compiler version currently loaded using the same &#039;--version&#039; flag we used for &#039;&#039;cmake&#039;&#039; above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ifort --version&lt;br /&gt;
ifort (IFORT) 12.1.3 20120212&lt;br /&gt;
Copyright (C) 1985-2012 Intel Corporation.  All rights reserved.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load a different compiler, you can use the &#039;&#039;module load&#039;&#039; or &#039;&#039;module swap&#039;&#039; commands. A list of all available modules can be accessed using:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module av&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are having problems compiling, one of the first things to check is whether it works with a different version of the compiler!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: When compiling GMIN, if you are getting the error that there is no implicit type for ERFC in ewald.f90, try using a newer version of your compiler. This should be the built-in complementary error function.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Compiling using the ccmake GUI interface to set options==&lt;br /&gt;
[[Image:Ccmake.png|thumb|ccmake set up to compile A9GMIN|200px|right]]&lt;br /&gt;
&lt;br /&gt;
One advantage using cmake has over make is that we can use the simple ccmake GUI. This interface lets us set options like compiling with AMBER9 or CHARMM35, toggle between &#039;Release&#039; and &#039;Debug&#039; builds (see below), and examine and alter the flags being used for the compilation if we wish. Before we can run ccmake, we need to specify the compiler and run cmake in our build directory (e.g. softwarewales/GMIN/builds/ifort). We specify the &#039;&#039;&#039;F&#039;&#039;&#039;ortran &#039;&#039;&#039;C&#039;&#039;&#039;ompiler by setting the &#039;&#039;&#039;$FC&#039;&#039;&#039; environment variable (in this case the Intel Fortran compiler, ifort), and then run &#039;&#039;cmake&#039;&#039; (on the command line), passing it the relative location of the [[GMIN]] source directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake ../../source&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you run &#039;&#039;ls&#039;&#039;, you will see some cmake files have been generated:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ls&lt;br /&gt;
CMakeCache.txt  CMakeFiles  cmake_install.cmake  Makefile  modules&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now run &#039;&#039;ccmake&#039;&#039; to open the GUI:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ccmake .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To navigate between options, use the arrow keys. Options can be toggled by pressing Return. To compile [[GMIN]] with AMBER9 (A9GMIN), we need to toggle the &#039;&#039;WITH_AMBER&#039;&#039; option &#039;&#039;ON&#039;&#039;. Once you have done this, you need to configure and generate appropriate cmake info. This is done by pressing &#039;c&#039; to configure, &#039;e&#039; to exit and then &#039;g&#039; to generate.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: for some builds (CHARMM with DFTB and CUDAGMIN), you might need to configure, exit and generate twice to set all necessary options&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can now compile A9GMIN in parallel as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;-j8&#039; flag here tells make to use up to 8 &#039;threads&#039; when building. For optimal performance, you should keep this slightly greater than the number of cores (CPUs) on the node you are working on. If all goes well, you should now have an A9GMIN binary in your build directory - congratulations! &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Linking Fortran executable A9GMIN&lt;br /&gt;
[100%] Built target A9GMIN&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------------------- 15:23:45&lt;br /&gt;
&lt;br /&gt;
csw34@sinister:~/softwarewales/GMIN/builds/ifort&amp;gt; ls&lt;br /&gt;
A9GMIN          cmake_install.cmake   libcudadummylib.a  libmylapack.a  NAB&lt;br /&gt;
AMBER           display_version.f90   libdummylib.a      Makefile       nab_binaries_built&lt;br /&gt;
CMakeCache.txt  GMIN                  libgminlib.a       modules        porfuncs.f90&lt;br /&gt;
CMakeFiles      libamber12dummylib.a  libmyblas.a        n&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Plain [[GMIN]] is also built at the same time should you need it. You can move this into your ~/bin directory if you like, or anywhere else in your &#039;&#039;$PATH&#039;&#039; to make running it simple.&lt;br /&gt;
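Rather than hard-coding the &#039;-j8&#039; value, you can derive it from the node you are on. A minimal sketch, assuming a GNU/Linux machine where &#039;&#039;nproc&#039;&#039; is available:

```shell
# Pick a parallel build level slightly above the core count of the
# current node, as recommended above. nproc reports available CPUs.
JOBS=$(( $(nproc) + 1 ))
echo "building with ${JOBS} make jobs"
# make -j"${JOBS}"    # run this inside a configured build directory
```

The make invocation itself is left commented out, since it only makes sense after cmake has generated a Makefile in the build directory.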
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: If you want to use OPTIM with the new C++ implementation of the NEB routine, you will need to obtain the source code for that separately. See [https://wikis.ch.cam.ac.uk/wales/wiki/index.php/OPTIM here] for instructions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Compiling by setting options on the command line==&lt;br /&gt;
If you know the options you&#039;d like to set already (you can see them all in ccmake), you can save some time by passing them directly to &#039;&#039;cmake&#039;&#039; on the command line, bypassing the need for &#039;&#039;ccmake&#039;&#039;. For example, to compile A9GMIN (GMIN with the AMBER9 interface) using the Intel ifort compiler, you would run the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &#039;../../source&#039; is the relative location of the GMIN source directory. You can find some more examples of compiling from the command line below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: Sometimes you may get an error&#039;&#039;&#039; (for example, Fatal Error: Can&#039;t open module file &#039;someModule.mod&#039; for reading at (1): No such file or directory) when following this procedure. In that case there are three things you could try: make sure you are building in a fresh directory; if that does not help, run `make VERBOSE=1` instead of `make -j8`; or simply switch to using ccmake.&lt;br /&gt;
&lt;br /&gt;
==Compiling with MPI==&lt;br /&gt;
To compile with MPI support add the following flags when running cmake on the command line:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=mpif90 CC=mpicc cmake ../source -DCOMPILER_SWITCH=pgi -DWITH_MPI=yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here -DCOMPILER_SWITCH=pgi assumes you&#039;re using the Portland &#039;&#039;pgi&#039;&#039; compiler. Make sure you have the correct modules loaded (in this case &#039;&#039;pgi&#039;&#039; and &#039;&#039;mpi-pgi&#039;&#039;), and that the particular mpi you want (in this case &#039;&#039;mpi-pgi&#039;&#039;) is listed before any other mpi&#039;s loaded (so that it has the highest priority). The modules can be loaded by typing:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load pgi/64/&lt;br /&gt;
module load mpi/openmpi/pgi/64/1.6.3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and you can check which modules are loaded and in which order/priority by the &#039;&#039;module list&#039;&#039; command. You may need to &#039;&#039;module unload &amp;lt;name&amp;gt;&#039;&#039; any other mpi&#039;s that are higher up in the list than the one you want. You can of course set the COMPILER_SWITCH and WITH_MPI flags in &#039;&#039;ccmake&#039;&#039; if you prefer.&lt;br /&gt;
&lt;br /&gt;
Note: It has been observed that pgi/64/15.1 leads to compilation errors, and for now, it is best to use pgi/64/14.9&lt;br /&gt;
&lt;br /&gt;
==Advanced mode - changing compiler flags with ccmake==&lt;br /&gt;
[[Image:Ccmakeadvanced.png|thumb|ccmake advanced mode|200px|right]]&lt;br /&gt;
&lt;br /&gt;
Although initially the &#039;&#039;ccmake&#039;&#039; GUI looks very simple, there is a lot going on under the hood. By pressing &#039;t&#039; you can enter &#039;Advanced mode&#039; which will show you all of the hidden options, for example the compiler flags that are being passed to &#039;&#039;make&#039;&#039; when you compile the code. You can also make changes to the flags here, for example if you would like to add &#039;-p&#039; to do  profiling. &lt;br /&gt;
&lt;br /&gt;
As with changing the build type, you simply select the field you&#039;d like to change using the arrow keys, press Return, make your changes and press Return again to save them. When you subsequently configure and generate as above, those altered flags will be used for the compilation.&lt;br /&gt;
&lt;br /&gt;
Note that these changes only apply in the build directory in which you make them.&lt;br /&gt;
&lt;br /&gt;
==Debugging runtime problems using gdb or valgrind==&lt;br /&gt;
If you are getting a segmentation fault, crash or other unexpected behaviour, you might want to run your job through a debugger like [http://www.gnu.org/software/gdb/ gdb] or [http://valgrind.org/ valgrind]. In order to maximise your chances of getting useful output, you should build a &#039;Debug&#039; version of the program you are having trouble with. To do this, you can either change the &#039;&#039;CMAKE_BUILD_TYPE&#039;&#039; in &#039;&#039;ccmake&#039;&#039; to &#039;Debug&#039; (press Return, change &#039;Release&#039; to &#039;Debug&#039; and press Return again), or set it on the command line, like so for GMIN with AMBER 9 using the Intel ifort compiler:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FC=ifort cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_AMBER=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then run the binary &#039;&#039;through&#039;&#039; gdb or valgrind as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gdb A9GMIN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
valgrind A9GMIN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I won&#039;t cover debugging with these tools here as it&#039;s a science in itself! Do some Googling and ask for help as needed :)&lt;br /&gt;
&lt;br /&gt;
==Debugging compilation problems==&lt;br /&gt;
There are many ways to try and track down why your code is not compiling. Before you start changing compilers, building a &#039;Debug&#039; version or changing machines, you might want to try running make again with the &#039;&#039;VERBOSE&#039;&#039; option enabled. This will dump a lot of potentially useful output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
VERBOSE=1 make&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One possible gotcha: all .f and .f90 files in the relevant source directories will be compiled and added to a library. This is quite different from the old Makefile way of doing things, where source files were explicitly specified for compilation (via their corresponding .o file). So, if you are testing something by, for instance, copying code.f90 to code.myhack.f90 and code.orig.f90, then slightly editing a line or two of code.myhack.f90 and copying it back to code.f90 for use, this will probably cause linking problems due to multiply-defined subroutines (from all three files). The solution, if you must have alternative versions of the same file hanging around, is to differentiate the filenames with a suffix AFTER the .f[90] extension.&lt;br /&gt;
 &lt;br /&gt;
Another occasional issue is the unexplained compiler bug - a problem with the version of the compiler you happen to be using. You can get an idea of which compiler versions we expect to work by checking the Jenkins build-bot output, as described in the &#039;Seeing console output&#039; section of the [[Jenkins CI]] page. If you are using a different version of the compiler in question, consider swapping to the version Jenkins is using with &#039;module swap&#039;.&lt;br /&gt;
&lt;br /&gt;
If the error message you are getting doesn&#039;t make sense to you after some Googling, go and ask someone - we all have these problems. Things you can try first include a different compiler version, or an entirely different compiler, e.g. pgi rather than ifort. You should bear in mind that, as mentioned above, not all versions of each code will compile with every compiler. Make sure you&#039;re not trying to build something that isn&#039;t expected to work.&lt;br /&gt;
&lt;br /&gt;
To build the executables with the QUIP interface, it may be necessary to run &#039;&#039;make clean&#039;&#039; in the QUIP directory first.&lt;br /&gt;
&lt;br /&gt;
==Extra command line build examples==&lt;br /&gt;
The commands below are by no means an exhaustive list, but should give you an idea of what is possible. You can use &#039;&#039;ccmake&#039;&#039; as described above to discover which variables (e.g. WITH_AMBER) can be manipulated on the command line like this. All of these examples assume your git repository is set up in &#039;&#039;/home/CRSID/softwarewales&#039;&#039; - make the appropriate modifications if you have it elsewhere.&lt;br /&gt;
&lt;br /&gt;
===GMIN===&lt;br /&gt;
&#039;&#039;&#039;A12GMIN&#039;&#039;&#039; (GMIN with AMBER12) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_amber12&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER12=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
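A note on the &#039;cd !$&#039; used in these recipes: it is interactive shell history expansion (the last argument of the previous command) and will not work inside a script. A minimal script-safe sketch, using a throwaway path purely for illustration:

```shell
# '!$' relies on interactive history expansion; in a script, hold the
# build path in a variable instead. The directory name here is a
# hypothetical stand-in for your real build directory.
BUILD_DIR=/tmp/softwarewales_demo/GMIN/builds/ifort_amber12
mkdir -p "$BUILD_DIR"
cd "$BUILD_DIR"
pwd
```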
&lt;br /&gt;
&#039;&#039;&#039;C35GMIN&#039;&#039;&#039; (GMIN with CHARMM 35) using pgi:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/pgi_charmm35&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake -DWITH_CHARMM35=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CUDAGMIN&#039;&#039;&#039; (GMIN leveraging GPU minimisation via the AMBER 12 interface) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/5.5&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_cuda&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_CUDA=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will only work on machines with specific NVIDIA GPUs, for example when submitting jobs on the pat cluster. There is some additional information on the [[Using GMIN with GPUs]] page.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DFTBGMIN&#039;&#039;&#039; (GMIN with DFTBP) using Intel on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mkl/64/2022/0/0 cmake/3.23.2 ifort/64/2020/4/304&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/ifort_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc CXX=icc cmake ../../source -DWITH_DFTBP=yes&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or using GCC on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.23.2 gcc/12.2.0&lt;br /&gt;
mkdir -p ~/softwarewales/GMIN/builds/gfortran_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
cmake ../../source -DWITH_DFTBP=yes -DWITH_MYBLAS=no -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===OPTIM===&lt;br /&gt;
&#039;&#039;&#039;A9OPTIM&#039;&#039;&#039; (OPTIM with AMBER9) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_amber&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort cmake -DWITH_AMBER9=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;C35OPTIM&#039;&#039;&#039; (OPTIM with CHARMM 35) using pgi:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/pgi_charmm35&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake -DWITH_CHARMM35=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;CUDAOPTIM&#039;&#039;&#039; (OPTIM leveraging GPU via the AMBER 12 interface) using ifort:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cuda/5.5&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_cuda5.5&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc cmake -DWITH_CUDA=1 ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will only work on machines with specific NVIDIA GPUs, for example when submitting jobs on the pat cluster. There is some additional information on the [[Using GMIN and OPTIM with GPUs]] page.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DFTBOPTIM&#039;&#039;&#039; (OPTIM with DFTBP) using Intel on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load mkl/64/2022/0/0 cmake/3.23.2 ifort/64/2020/4/304&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/ifort_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
FC=ifort CC=icc CXX=icc cmake ../../source -DWITH_DFTBP=yes&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or using GCC on nest:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load cmake/3.23.2 gcc/12.2.0&lt;br /&gt;
mkdir -p ~/softwarewales/OPTIM/builds/gfortran_dftbp&lt;br /&gt;
cd !$&lt;br /&gt;
cmake ../../source -DWITH_DFTBP=yes -DWITH_MYBLAS=no -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===PATHSAMPLE===&lt;br /&gt;
There are very few options for [[PATHSAMPLE]] as we don&#039;t need to worry about interfacing with a particular potential. As a result, every binary is simply called &#039;&#039;PATHSAMPLE&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Using nagfor (the NAG fortran compiler - check you have the module loaded - very strict!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/PATHSAMPLE/builds/nagfor&lt;br /&gt;
cd !$&lt;br /&gt;
FC=nagfor cmake ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using pgi (much more generous with coding slips/non-standard uses):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p ~/softwarewales/PATHSAMPLE/builds/pgi&lt;br /&gt;
cd !$&lt;br /&gt;
FC=pgf90 cmake ../../source&lt;br /&gt;
make -j8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Configuring defaults - for developers==&lt;br /&gt;
&lt;br /&gt;
Fortran compilers and their corresponding default settings are all controlled by the file $SVN/CMakeModules/FindFORTRANCOMPILER.cmake ($SVN is your svn root directory). In particular, we may wish to edit the flags used for each set of compilers and build type. These are contained in the following block:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if(NOT COMPILER_FLAGS_WERE_SET)&lt;br /&gt;
   message(&amp;quot;Setting initial values for compiler flags&amp;quot;)&lt;br /&gt;
   if(COMPILER_SWITCH MATCHES &amp;quot;pgi&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-Mextend&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3 -Munroll -Mnoframe&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-Mextend -C -g -gopt -Mbounds -Mchkfpstk -Mchkptr -Mchkstk -Mcoff -Mdwarf1 -Mdwarf2 -Melf -Mpgicoff -traceback&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-Mextend -C -g -gopt -Mbounds -Mchkfpstk -Mchkptr -Mchkstk -Mcoff -Mdwarf1 -Mdwarf2 -Mdwarf3 -Melf -Mpgicoff -traceback&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-Mfree&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;gfortran&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-ffixed-line-length-200 -ffree-line-length-0&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
#      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -fbounds-check -Wuninitialized -O -ftrapv -fimplicit-none -fno-automatic&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -fbounds-check -Wuninitialized -O -ftrapv -fno-automatic&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;${CMAKE_Fortran_FLAGS_DEBUG} -fimplicit-none&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-ffree-form&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;nag&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-132 -kind=byte -maxcontin=3000&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-mismatch_all -O4&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-g -mismatch_all -ieee=stop&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-C=all -mtrace=all -gline -g -mismatch_all -ieee=stop&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-free&amp;quot; CACHE TYPE STRING) # js850&amp;gt; is this ever used?&lt;br /&gt;
   elseif(COMPILER_SWITCH MATCHES &amp;quot;ifort&amp;quot;)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS &amp;quot;-132 -heap-arrays -assume byterecl&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_RELEASE &amp;quot;-O3&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
# Warnings about temporary argument creation and edit descriptor widths are disabled with the final flags.&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG &amp;quot;-C -g -traceback -debug full -check all,noarg_temp_created -diag-disable 8290,8291&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (CMAKE_Fortran_FLAGS_DEBUG_SLOW &amp;quot;-debug all -check all -implicitnone -warn unused -fp-stack-check -ftrapuv -check pointers -check bounds&amp;quot; CACHE TYPE STRING FORCE)&lt;br /&gt;
      set (FORTRAN_FREEFORM_FLAG &amp;quot;-free&amp;quot; CACHE TYPE STRING)&lt;br /&gt;
   else()&lt;br /&gt;
      message(FATAL_ERROR &amp;quot;unknown compiler switch: ${COMPILER_SWITCH}&amp;quot;)&lt;br /&gt;
   endif()&lt;br /&gt;
    SET(COMPILER_FLAGS_WERE_SET yes CACHE TYPE INTERNAL)&lt;br /&gt;
endif(NOT COMPILER_FLAGS_WERE_SET)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main if/elseif blocks correspond to compiler switches. Inside these, there are the default flags for each of our build types (release, debug and debug_slow), which are configured using ccmake. These can be edited, if we wish to change the default behaviour (e.g. a recent addition of -check all,noarg_temp_created -diag-disable 8290,8291 to disable annoying warning messages for ifort).&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1788</id>
		<title>VMD</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1788"/>
		<updated>2021-10-23T07:33:28Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VMD is a molecular visualization program installed on all workstations and clusters. The official documentation can be found [http://www.ks.uiuc.edu/Research/vmd/current/ug/ug.html here] with some tutorials [http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html/index.html here]. As with gnuplot, however, the wealth of options means that it often takes a long time to find the one command you need, so below you will find some useful basic settings/info for using VMD. For producing graphics for publication, [[Pymol]] is probably a better option as it has a built-in ray-tracing routine, but for general visualization VMD is much quicker.&lt;br /&gt;
&lt;br /&gt;
It is possible to load most files using command line flags, making loading many frames into different topology files easy.  The -f flag indicates that all subsequent files (until the next -f flag or the end) should be loaded into a single molecule.  There are also flags for selecting different file types (default is .pdb), most commonly parm7 for topology files generated by Amber and rst7 for restart files generated by Amber.  mdcrd files are denoted -crd and periodic mdcrd files -crdbox.&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
        vmd -f first_mol.pdb \&lt;br /&gt;
            -f -parm7 second_mol.prmtop -rst7 second_mol.rst \&lt;br /&gt;
            -f -parm7 third_mol.prmtop -crdbox third_mol_1st_frames.crd -crdbox third_mol_2nd_frames&lt;br /&gt;
&lt;br /&gt;
== Rendering Molecules with a Transparent Background ==&lt;br /&gt;
&lt;br /&gt;
Dumping a single image, or selected frames from a movie, can be achieved using the render option in the vmd gui.&lt;br /&gt;
Choosing the povray option should produce a vmdscene.pov file. If vmd itself only offers a solid (e.g. white)&lt;br /&gt;
background, povray can convert this pov file to a png with a transparent background:&lt;br /&gt;
&lt;br /&gt;
povray +W829 +H771 -Ivmdscene.pov -Ovmdscene.pov.tga -Otest +Q11 +J +A +FN +UA&lt;br /&gt;
&lt;br /&gt;
The +W and +H values must be taken from the vmdscene.pov file; they should be given in a comment statement at the top.&lt;br /&gt;
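Reading the +W/+H values out of the file by eye can also be scripted. The sketch below assumes the comment format VMD writes near the top of the .pov file (a two-line sample header is faked here so the example is self-contained):

```shell
# Extract +W/+H from a .pov header automatically. The exact comment
# format is an assumption based on the header VMD writes; a sample
# file is generated here so the sketch runs on its own.
printf '// POV 3.x input script : vmdscene.pov\n// try povray +W829 +H771 -Ivmdscene.pov\n' > header_sample.pov
W=$(grep -o '+W[0-9]*' header_sample.pov | head -n1 | cut -c3-)
H=$(grep -o '+H[0-9]*' header_sample.pov | head -n1 | cut -c3-)
echo "render size: ${W}x${H}"
# povray +W"$W" +H"$H" -Ivmdscene.pov +Q11 +J +A +FN +UA
```

The povray invocation is commented out, since it needs a real vmdscene.pov; substitute your own file name.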
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Movie Making Tips ==&lt;br /&gt;
To load all frames in one go, select the file type in the &amp;quot;Determine file type&amp;quot; box, and then the button &amp;quot;load all at once&amp;quot;&lt;br /&gt;
will not be greyed out, so you can select it. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vmd movie making seems not to work properly with step sizes different from one: the last frame is repeated many times. Instead, the&lt;br /&gt;
frames can be extracted first using sed: &lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~66087,+62939d&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+62939d deletes lines 1 to 62940, deleting 20 frames &lt;br /&gt;
&lt;br /&gt;
The ~66087 repeats the action every 21 frames. The counter operates on the original line numbers. &lt;br /&gt;
&lt;br /&gt;
This example is for a&lt;br /&gt;
system with 3145 atoms, so each frame is 3147 lines with the xyz header. &lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~Y,+Xd&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+Xd deletes lines 1 to X+1, so to keep one frame in every n+1 (deleting n frames at a time) for frames of length m you need &lt;br /&gt;
&lt;br /&gt;
X=n*m-1 &lt;br /&gt;
&lt;br /&gt;
and Y=(n+1)*m &lt;br /&gt;
&lt;br /&gt;
m is the number of atoms plus two.&lt;br /&gt;
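To sanity-check these formulae, here is a worked sketch on a throwaway fake trajectory: 3 atoms per frame, so m = 5, and with n = 2 the formulae give X = 9 and Y = 15, which keeps one frame in every three (GNU sed is assumed for the &#039;~&#039; address):

```shell
# Fake path.xyz: 6 frames of 3 atoms, so m = 5 lines per frame
# (atom count + comment + 3 atom lines).
for i in 1 2 3 4 5 6; do
  printf '3\nframe %s\nA 0 0 0\nB 0 0 0\nC 0 0 0\n' "$i"
done > path.xyz

# n = 2, m = 5: X = n*m - 1 = 9, Y = (n+1)*m = 15.
sed -e '1~15,+9d' path.xyz > temp
grep '^frame' temp    # frames 3 and 6 survive
```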
&lt;br /&gt;
To make the movie pause at the start and finish, just duplicate these end points sufficiently. If there are&lt;br /&gt;
slow portions around local minima, try adjusting the energy difference parameter on the PATH line in the OPTIM odata file for initial&lt;br /&gt;
generation of path.xyz.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
* [[using VMD to display and manipulate &#039;.pdb&#039; files]]&lt;br /&gt;
* [[loading coordinate files into VMD with the help of an AMBER topology file]] e.g. to visualise the results of a GMIN run using AMBER9&lt;br /&gt;
* making movies from a &#039;.pdb&#039; file containing multiple structures. &#039;&#039;This is dealt with in the OPTIM section as part of the tutorial on making a movie of a path&#039;&#039;&lt;br /&gt;
* [[visualising normal modes using VMD and OPTIM]]&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1763</id>
		<title>VMD</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1763"/>
		<updated>2021-08-10T08:53:30Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VMD is a molecular visualization program installed on all workstations and clusters. The official documentation can be found [http://www.ks.uiuc.edu/Research/vmd/current/ug/ug.html here] with some tutorials [http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html/index.html here]. Like gnuplot however, the wealth of options means that often it takes a long time to find the one command you need to use so below you will find some useful basic settings/info for using VMD. For producing graphics for publication, [[Pymol]] is probably a better option as it has a built in ray-tracing routine but for general visualization, VMD is much quicker.&lt;br /&gt;
&lt;br /&gt;
It is possible to load most files using command line flags, making loading many frames into different topology files easy.  The -f flag indicates that all subsequent files (until the next -f flag or the end) should be loaded into a single molecule.  There are also flags for selecting different file types (default is .pdb), most commonly parm7 for topology files generated by Amber and rst7 for restart files generated by Amber.  mdcrd files are denoted -crd and periodic mdcrd files -crdbox.&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
        vmd -f first_mol.pdb \&lt;br /&gt;
            -f -parm7 second_mol.prmtop -rst7 second_mol.rst \&lt;br /&gt;
            -f -parm7 third_mol.prmtop -crdbox third_mol_1st_frames.crd -crdbox third_mol_2nd_frames&lt;br /&gt;
&lt;br /&gt;
== Movie Making Tips ==&lt;br /&gt;
To load all frames in one go, select the file type in the &amp;quot;Determine file type&amp;quot; box, and then the button &amp;quot;load all at once&amp;quot;&lt;br /&gt;
will not be greyed out, so you can select it. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vmd movie making seems not to work properly with step sizes different from one: the last frame is repeated many times. Instead, the&lt;br /&gt;
frames can be extracted first using sed: &lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~66087,+62939d&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+62939d deletes lines 1 to 62940, i.e. the first 20 frames. &lt;br /&gt;
&lt;br /&gt;
The ~66087 repeats the deletion every 66087 lines, i.e. every 21 frames. The counter operates on the original line numbers. &lt;br /&gt;
&lt;br /&gt;
This example is for a&lt;br /&gt;
system with 3145 atoms, so each frame is 3147 lines including the two-line xyz header. &lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~Y,+Xd&#039; path.xyz &amp;gt; temp &lt;br /&gt;
&lt;br /&gt;
1,+Xd deletes lines 1 to X+1, so to keep one frame in every n+1 (deleting n consecutive frames each cycle) for frames of length m you need &lt;br /&gt;
&lt;br /&gt;
X=n*m-1 &lt;br /&gt;
&lt;br /&gt;
and Y=(n+1)*m &lt;br /&gt;
&lt;br /&gt;
where m is the number of atoms plus two (one line for the atom count and one for the comment).&lt;br /&gt;
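The same arithmetic can be expressed as a short script. Below is a minimal Python sketch (the function name keep_every is hypothetical, not part of any group tool) that keeps one frame in every n+1 of an xyz trajectory, mirroring the sed recipe:

```python
def keep_every(frames_text, natoms, n):
    """Keep one frame in every n+1 from an xyz trajectory string.

    Each frame is m = natoms + 2 lines (atom count line, comment
    line, then one line per atom), matching the sed recipe with
    X = n*m - 1 and Y = (n+1)*m.
    """
    m = natoms + 2
    lines = frames_text.splitlines()
    kept = []
    # walk the file in blocks of n+1 frames, i.e. Y = (n+1)*m lines
    for i in range(0, len(lines), m * (n + 1)):
        # skip the n deleted frames, keep the (n+1)-th
        start = i + m * n
        kept.extend(lines[start:start + m])
    return "\n".join(kept)
```

For the example above, keep_every applied to the contents of path.xyz with natoms=3145 and n=20 should reproduce the output of sed -e '1~66087,+62939d'.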
&lt;br /&gt;
To make the movie pause at the start and finish, just duplicate these end points sufficiently many times. If there are&lt;br /&gt;
slow portions around local minima, try adjusting the energy difference parameter on the PATH line in the OPTIM odata file for the initial&lt;br /&gt;
generation of path.xyz.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
* [[using VMD to display and manipulate &#039;.pdb&#039; files]]&lt;br /&gt;
* [[loading coordinate files into VMD with the help of an AMBER topology file]] e.g. to visualise the results of a GMIN run using AMBER9&lt;br /&gt;
* making movies from a &#039;.pdb&#039; file containing multiple structures. &#039;&#039;This is dealt with in the OPTIM section as part of the tutorial on making a movie of a path&#039;&#039;&lt;br /&gt;
* [[visualising normal modes using VMD and OPTIM]]&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1762</id>
		<title>VMD</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=VMD&amp;diff=1762"/>
		<updated>2021-08-10T08:42:27Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;VMD is a molecular visualization program installed on all workstations and clusters. The official documentation can be found [http://www.ks.uiuc.edu/Research/vmd/current/ug/ug.html here], with some tutorials [http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html/index.html here]. As with gnuplot, however, the wealth of options means it can take a long time to find the one command you need, so below are some useful basic settings and information for using VMD. For producing graphics for publication, [[Pymol]] is probably a better option, as it has a built-in ray-tracing routine, but for general visualization VMD is much quicker.&lt;br /&gt;
&lt;br /&gt;
It is possible to load most files using command-line flags, which makes it easy to load many frames into molecules with different topologies.  The -f flag indicates that all subsequent files (until the next -f flag or the end of the command line) should be loaded into a single molecule.  There are also flags for selecting different file types (the default is .pdb), most commonly -parm7 for topology files and -rst7 for restart files generated by Amber.  mdcrd files are denoted -crd and periodic mdcrd files -crdbox.&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
        vmd -f first_mol.pdb \&lt;br /&gt;
            -f -parm7 second_mol.prmtop -rst7 second_mol.rst \&lt;br /&gt;
            -f -parm7 third_mol.prmtop -crdbox third_mol_1st_frames.crd -crdbox third_mol_2nd_frames&lt;br /&gt;
&lt;br /&gt;
== Movie Making Tips ==&lt;br /&gt;
To load all frames in one go, select the file type in the &amp;quot;Determine file type&amp;quot; box; the &amp;quot;load all at once&amp;quot; button&lt;br /&gt;
will then no longer be greyed out, so you can select it. &lt;br /&gt;
&lt;br /&gt;
VMD movie making does not seem to work properly with step sizes different from one: the last frame is repeated many times. Instead, the&lt;br /&gt;
frames can be selected using sed:&lt;br /&gt;
&lt;br /&gt;
Try extracting frames first with sed:&lt;br /&gt;
sed -e &#039;1~66087,+62939d&#039; path.xyz &amp;gt; temp&lt;br /&gt;
1,+62939d deletes lines 1 to 62940, i.e. the first 20 frames.&lt;br /&gt;
The ~66087 repeats the deletion every 66087 lines, i.e. every 21 frames. The counter operates on the original line numbers. This example is for a&lt;br /&gt;
system with 3145 atoms, so each frame is 3147 lines including the two-line xyz header. &lt;br /&gt;
&lt;br /&gt;
sed -e &#039;1~Y,+Xd&#039; path.xyz &amp;gt; temp&lt;br /&gt;
1,+Xd deletes lines 1 to X+1, so to keep one frame in every n+1 for frames of length m you need X=n*m-1&lt;br /&gt;
and Y=(n+1)*m,&lt;br /&gt;
where m is the number of atoms plus two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Tutorials ==&lt;br /&gt;
* [[using VMD to display and manipulate &#039;.pdb&#039; files]]&lt;br /&gt;
* [[loading coordinate files into VMD with the help of an AMBER topology file]] e.g. to visualise the results of a GMIN run using AMBER9&lt;br /&gt;
* making movies from a &#039;.pdb&#039; file containing multiple structures. &#039;&#039;This is dealt with in the OPTIM section as part of the tutorial on making a movie of a path&#039;&#039;&lt;br /&gt;
* [[visualising normal modes using VMD and OPTIM]]&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Beginner%27s_guide_to_working_in_Wales_group&amp;diff=1591</id>
		<title>Beginner&#039;s guide to working in Wales group</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Beginner%27s_guide_to_working_in_Wales_group&amp;diff=1591"/>
		<updated>2020-04-13T18:16:23Z</updated>

		<summary type="html">&lt;p&gt;Dw34: /* How to access and add papers to the group bib? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===How to access and add papers to the group bib===&lt;br /&gt;
*You will need a working svn installation, which should be transparently available if you are logged in to a computer in the department. Whenever asked for a password, enter your admitto password. &lt;br /&gt;
*In the directory where you wish to handle paper reading/writing, create a working copy of the bib directory.&lt;br /&gt;
For example, &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
svn checkout https://svn.ch.cam.ac.uk/svn/wales/groups/djwpapers/bib&lt;br /&gt;
cd bib&lt;br /&gt;
svn update&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* To add a bib entry, make the desired changes and then save them. Make sure that the entry does not&lt;br /&gt;
exist already. Duplicate entries with different bibtex mnemonics cause erroneous entries in the bibliography,&lt;br /&gt;
which are tedious to correct if they are only detected at the proof stage. The group files are an important resource, which can save a great deal of time in paper and thesis writing.&lt;br /&gt;
&lt;br /&gt;
All the other group papers in progress can be found in the djwpapers directory, but be warned that it is rather large because there is a backlog of papers that have appeared, but have yet to be moved to David&#039;s archive. It is probably best to check out individual directories that you are interested in.&lt;br /&gt;
&lt;br /&gt;
The bib entry for a paper can usually be obtained by googling the paper, finding the &#039;cite this&#039; link, choosing the &#039;BibTeX&#039; option when asked to select a citation manager/file format, and then downloading (or directly opening) the file to see the bib entry. For naming the keys, please follow the group practice given at https://wikis.ch.cam.ac.uk/wales/wiki/index.php/Wales_Group_Conventions_when_using_LaTex&lt;br /&gt;
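For reference, a downloaded entry typically has the following shape; the key shown here (walesExampleKey2020) and all field values are purely illustrative, and real keys should follow the group conventions linked above:

```bibtex
@article{walesExampleKey2020,
  author  = {A. N. Author and B. Coauthor},
  title   = {An illustrative title},
  journal = {J. Chem. Phys.},
  volume  = {152},
  pages   = {123456},
  year    = {2020}
}
```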
*Commit your changes with a descriptive message so that others know what the commit corresponds to.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
svn commit -m &amp;quot;Your message here&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For further details visit http://subversion.apache.org/quick-start&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Beginner%27s_guide_to_working_in_Wales_group&amp;diff=1590</id>
		<title>Beginner&#039;s guide to working in Wales group</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Beginner%27s_guide_to_working_in_Wales_group&amp;diff=1590"/>
		<updated>2020-04-13T18:13:29Z</updated>

		<summary type="html">&lt;p&gt;Dw34: /* How to access and add papers to group bib repository */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
===How to access and add papers to the group bib?===&lt;br /&gt;
*You will need a working svn installation, which should be transparently available if you are logged in to a computer in the department. Whenever asked for a password, enter your admitto password. &lt;br /&gt;
*In the directory where you wish to handle paper reading/writing, create a working copy of the bib directory.&lt;br /&gt;
For example, &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
svn checkout https://svn.ch.cam.ac.uk/svn/wales/groups/djwpapers/bib&lt;br /&gt;
cd bib&lt;br /&gt;
svn update&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* To add a bib entry, make the desired changes and then save them. Make sure that the entry does not&lt;br /&gt;
exist already. Duplicate entries with different bibtex mnemonics cause erroneous entries in the bibliography,&lt;br /&gt;
which are tedious to correct if they are only detected at the proof stage. The group files are an important resource, which can save a great deal of time in paper and thesis writing.&lt;br /&gt;
&lt;br /&gt;
The bib entry for a paper can usually be obtained by googling the paper, finding the &#039;cite this&#039; link, choosing the &#039;BibTeX&#039; option when asked to select a citation manager/file format, and then downloading (or directly opening) the file to see the bib entry. For naming the keys, please follow the group practice given at https://wikis.ch.cam.ac.uk/wales/wiki/index.php/Wales_Group_Conventions_when_using_LaTex&lt;br /&gt;
*Commit your changes with a descriptive message so that others know what the commit corresponds to.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
svn commit -m &amp;quot;Your message here&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For further details visit http://subversion.apache.org/quick-start&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Comprehensive_Contents_Page&amp;diff=1566</id>
		<title>Comprehensive Contents Page</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Comprehensive_Contents_Page&amp;diff=1566"/>
		<updated>2020-01-29T14:48:48Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is designed to organise all of the pages on this wiki, as well as provide other useful links. Note that some pages may appear under more than one heading.&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[Wales Group]] provides good step-by-step instructions. Relevant pages are:&lt;br /&gt;
&lt;br /&gt;
=== Acquiring and compiling the group software ===&lt;br /&gt;
* [[SVN setup]]&lt;br /&gt;
* [[Wales Group Version control]] - to keep the code standardised.&lt;br /&gt;
* Theory Sector [http://wwmm.ch.cam.ac.uk/wikis/cuc3/index.php/SVN_Page SVN Page] - some useful general information on SVN commands.&lt;br /&gt;
* [[Compiling Wales Group codes using cmake]] - CMake (Cross-platform Make) allows us to compile and test the group codebase regardless of platform. This page provides crucial information on how to compile using cmake.&lt;br /&gt;
* [[ElaborateDiff]]&lt;br /&gt;
&lt;br /&gt;
=== Maintaining code health ===&lt;br /&gt;
* [[Jenkins CI]] - explains Jenkins, which we use to download our code and compile each of our targets with each of the compilers every night.&lt;br /&gt;
* https://wales-jenkins.ch.cam.ac.uk/ - log for our Jenkins tests.&lt;br /&gt;
* [[Branching and Merging]]&lt;br /&gt;
* [[Cmake interface building]]&lt;br /&gt;
* [[Installing python modules]]&lt;br /&gt;
* [[Revamping the modules system]]&lt;br /&gt;
&lt;br /&gt;
=== Collaborators without access to the SVN repository ===&lt;br /&gt;
For licensing reasons, some code cannot be included in the Wales Group public tarball.&lt;br /&gt;
* http://www-wales.ch.cam.ac.uk/svn.tar.bz2 - Wales group public tarball. Includes [[GMIN]], [[OPTIM]] and [[PATHSAMPLE]].&lt;br /&gt;
If a collaborator has a [[CHARMM]] or [[AMBER]] licence, we do maintain separate tarballs which include the [[CHARMM]], [[AMBER]] and [[CHARMM]]+[[AMBER]] source and interfaces. These are not linked anywhere on the website and require a username (&#039;&#039;&#039;wales&#039;&#039;&#039;) and password (&#039;&#039;&#039;group&#039;&#039;&#039;) to download:&lt;br /&gt;
&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/CHARMM/svn.CHARMM.tar.bz2 CHARMM]&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/AMBER/svn.AMBER.tar.bz2 AMBER]&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/both/svn.both.tar.bz2 AMBER+CHARMM]&lt;br /&gt;
&lt;br /&gt;
=== Running on Windows ===&lt;br /&gt;
Not particularly recommended.&lt;br /&gt;
* [[Running Wales Group software on Windows 7]]&lt;br /&gt;
&lt;br /&gt;
== Wales Group Programs ==&lt;br /&gt;
&lt;br /&gt;
=== Programs ===&lt;br /&gt;
* [[GMIN]]: A program for finding global minima and calculating thermodynamic properties from basin-sampling.&lt;br /&gt;
* [[OPTIM]]: A program for optimizing geometries and calculating reaction pathways.&lt;br /&gt;
* [[PATHSAMPLE]]: A driver for OPTIM to create stationary point databases using discrete path sampling and perform kinetic analysis.&lt;br /&gt;
* [[Pele]]: Python energy landscape explorer. A pythonic rewrite of some core functionality of GMIN, OPTIM, and PATHSAMPLE. Can be very useful for visualizing your system and for rapidly implementing and testing new ideas.&lt;br /&gt;
&lt;br /&gt;
=== Curated Examples ===&lt;br /&gt;
* https://github.com/wales-group/examples - set of tutorials detailing how to use GMIN, OPTIM and PATHSAMPLE. Essential for beginners.&lt;br /&gt;
* http://www-wales.ch.cam.ac.uk/VM/Wales_Group_VM.ova - Pre-prepared teaching virtual machine. This contains the code and examples.&lt;br /&gt;
* https://www.virtualbox.org/wiki/Downloads - This is required if using the VM above.&lt;br /&gt;
* https://github.com/wales-group/examples.git - Alternatively, you can run the examples on your own machine. To get hold of the relevant files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/wales-group/examples.git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Useful Notes on Wales Group Programs and Subroutines ==&lt;br /&gt;
=== [[GMIN]] ===&lt;br /&gt;
* [[Adding a model to GMIN]] - rough outline of the subroutines that need to be changed to add a new model to GMIN&lt;br /&gt;
* [[Compiling Wales Group codes using cmake | Compiling GMIN using cmake ]]&lt;br /&gt;
* [[Selecting search parameters for GMIN]]&lt;br /&gt;
* [[Global optimization of biomolecules using CHARMM]]&lt;br /&gt;
* [[Global optimization of biomolecules using AMBER9]]&lt;br /&gt;
* [[Global optimization of biomolecules using AMBER9 with Structural Restraints]]&lt;br /&gt;
* [[Calculating binding free energy using the FSA method]]&lt;br /&gt;
* [[Restarting a GMIN run from a dump file]]&lt;br /&gt;
* [[Using the implicit membrane model IMM1]]&lt;br /&gt;
* [[Running a Go model with the AMHGMIN]]&lt;br /&gt;
* [[Running a G\=o model with the AMHGMIN]]&lt;br /&gt;
* [[Ligand binding-mode searches with HBONDMATRIX]]&lt;br /&gt;
* [[Compiling and using GMIN with QUIP]]&lt;br /&gt;
* [[Using GMIN and OPTIM with GPUs]]&lt;br /&gt;
* [[Using GMIN to generate endpoints]]&lt;br /&gt;
* [[Using GMIN to generate endpoints (CHARMM)]]&lt;br /&gt;
* [[Generating a GMIN Eclipse project]]&lt;br /&gt;
* [[Mutational BH steps]]&lt;br /&gt;
* [[Biomolecules in the energy landscape framework]]&lt;br /&gt;
* [[DMAGMIN setup]]&lt;br /&gt;
* [[Keywords]]&lt;br /&gt;
* [[PYGMIN &amp;amp; DMACRYS]]&lt;br /&gt;
* [[Rotamer moves in AMBER]]&lt;br /&gt;
* [[Python interface for GMIN/OPTIM]]&lt;br /&gt;
&lt;br /&gt;
==== Scripts ====&lt;br /&gt;
* [[makerestart]]: A bash script to automatically set up a GMIN restart run&lt;br /&gt;
* [[progress]]: A bash script to tell you the % completion of a GMIN job and give an estimated time remaining&lt;br /&gt;
&lt;br /&gt;
==== Useful info for coding GMIN ====&lt;br /&gt;
* [[Program flow]] - contains information about what the various files in GMIN do and the order in which they are called. &lt;br /&gt;
* [[amberinterface]]&lt;br /&gt;
&lt;br /&gt;
==== Projects ====&lt;br /&gt;
* [[GMIN MOVES module]]&lt;br /&gt;
* [[GMIN SANITY module]]&lt;br /&gt;
* [[GMIN TESTS module]]&lt;br /&gt;
* [[CAMSHIFT]]&lt;br /&gt;
&lt;br /&gt;
=== [[OPTIM]] ===&lt;br /&gt;
* [[Adding a model to OPTIM]] - rough outline of the subroutines that need to be changed to add a new model to OPTIM&lt;br /&gt;
* [[Adding partially finished OPTIM stationary points to a PATHSAMPLE database]]&lt;br /&gt;
* [[perm-pdb.py]]: A python program that creates a &#039;&#039;perm.allow&#039;&#039; file for use with [[OPTIM]] and [[PATHSAMPLE]].&lt;br /&gt;
* [[visualising normal modes using VMD and OPTIM]]&lt;br /&gt;
* [[Compiling Wales Group codes using cmake | Compiling OPTIM using cmake ]]&lt;br /&gt;
* [[OPTIM/Q-Chem Tutorial]]&lt;br /&gt;
* [[OPTIM and PY ellipsoids tutorial]]&lt;br /&gt;
* [[OPTIM output files]]&lt;br /&gt;
* [[Minimizing a structure using OPTIM and AMBER9]]&lt;br /&gt;
* [[Minimizing a structure using OPTIM and CHARMM]]&lt;br /&gt;
* [[Creating movies (.mpg) of paths using OPTIM]]&lt;br /&gt;
* [[Performing a normal mode analysis of a biomolecule using OPTIM (AMBER and CHARMM)]]&lt;br /&gt;
* [[Debugging odd transition states in OPTIM]]&lt;br /&gt;
* [[Connecting two minima with a pathway]] - step by step&lt;br /&gt;
* [[Compiling and using OPTIM with QUIP]]&lt;br /&gt;
* [[Running an Gaussian03 interfaced OPTIM job]]&lt;br /&gt;
* [[The effect of calculating less than the maximum number of eigenvalues using ENDHESS n]]&lt;br /&gt;
* [[Biomolecules in the energy landscape framework]]&lt;br /&gt;
* [[BLJ60 example setup]]&lt;br /&gt;
* [[Finding an initial path with OPTIM and starting up PATHSAMPLE]]&lt;br /&gt;
* [[Finding an initial path with OPTIM and starting up PATHSAMPLE (CHARMM)]]&lt;br /&gt;
* [[Python interface for GMIN/OPTIM]]&lt;br /&gt;
* [[Thomson problem in OPTIM]]&lt;br /&gt;
* [[Instanton tunneling and classical rate calculations with OPTIM]]&lt;br /&gt;
* [[Loading OPTIM&#039;s min.data.info files into PATHSAMPLE]]&lt;br /&gt;
* [[common setup problem : No Frequency Warning]]&lt;br /&gt;
&lt;br /&gt;
=== [[PATHSAMPLE]] ===&lt;br /&gt;
* [[Adding a model to PATHSAMPLE]] - rough outline of the subroutines that need to be changed to add a new model to PATHSAMPLE&lt;br /&gt;
* [[Alternatively, making the initial path with PATHSAMPLE itself]]&lt;br /&gt;
* [[Alternatively, making the initial path with PATHSAMPLE itself (CHARMM)]]&lt;br /&gt;
* [[perm-pdb.py]]: A python program that creates a &#039;&#039;perm.allow&#039;&#039; file for use with [[OPTIM]] and [[PATHSAMPLE]].&lt;br /&gt;
* [[dijkstra_test.py]]: A python script to test whether the information in pairlist and ts.data connects the A and B set. (If not, PATHSAMPLE will not work, although it does not actually exit.)&lt;br /&gt;
* [[Compiling Wales Group codes using cmake | Compiling PATHSAMPLE using cmake ]]&lt;br /&gt;
* [[IMPORTANT: Using PATHSAMPLE safely on sinister]]&lt;br /&gt;
* [[Adding a model for PATHSAMPLE]]&lt;br /&gt;
* [[List of output files for PATHSAMPLE]]&lt;br /&gt;
* [[Using BHINTERP to find minima between two end points]]&lt;br /&gt;
* [[Finding an initial path between two end points (minima)]]&lt;br /&gt;
* [[Adding partially finished OPTIM stationary points to a PATHSAMPLE database]]&lt;br /&gt;
* [[Optimising a path]]&lt;br /&gt;
* [[Fine tuning UNTRAP]] - ensuring that it picks sensible minima&lt;br /&gt;
* [[Calculating rate constants (GT and fastest path)]]&lt;br /&gt;
* [[Calculating rate constants (SGT, DGT, and SDGT)]]&lt;br /&gt;
* [[Identifying the k fastest paths between endpoints using KSHORTESTPATHS]]&lt;br /&gt;
* [[Removing minima and transition states from the database]]&lt;br /&gt;
* [[Relaxing existing minima with new potential and creating new database]]&lt;br /&gt;
* [[Relaxing existing transition states with new potential and creating new database]]&lt;br /&gt;
* [[If things go wrong...]]&lt;br /&gt;
* [[If you lost file min.data, but still you have points.min]]&lt;br /&gt;
* [[path.info file is not read, causes PATHSAMPLE to die]]&lt;br /&gt;
* [[BLJ60 example setup]]&lt;br /&gt;
* [[When PATHSAMPLE finds a connected path, but using DIJKSTRA 0 fails to find the connected path]]&lt;br /&gt;
* [[Biomolecules in PATHSAMPLE]]&lt;br /&gt;
* [[Biomolecules in the energy landscape framework]]&lt;br /&gt;
* [[Expanding the kinetic transition network with PATHSAMPLE]]&lt;br /&gt;
* [[Expanding the kinetic transition network with PATHSAMPLE (CHARMM)]]&lt;br /&gt;
* [[Finding an initial path with OPTIM and starting up PATHSAMPLE]]&lt;br /&gt;
* [[Finding an initial path with OPTIM and starting up PATHSAMPLE (CHARMM)]]&lt;br /&gt;
* [[Pathsampling short paths]]&lt;br /&gt;
* [[Pathsampling short paths (CHARMM)]]&lt;br /&gt;
* [[Loading OPTIM&#039;s min.data.info files into PATHSAMPLE]]&lt;br /&gt;
* [[Connecting Sub-databases]]&lt;br /&gt;
&lt;br /&gt;
=== [[Notes on MINPERMDIST | MINPERMDIST]] ===&lt;br /&gt;
&lt;br /&gt;
=== [[Quasi-continuous interpolation for biomolecules | QCI]] ===&lt;br /&gt;
&lt;br /&gt;
== Non-Group Software ==&lt;br /&gt;
&lt;br /&gt;
=== [[AMBER]] ===&lt;br /&gt;
Molecular dynamics simulation program and associated force fields.&lt;br /&gt;
* [http://ambermd.org/ AMBER]&lt;br /&gt;
* [http://ambermd.org/tutorials/ AMBER tutorials] - recommended reading for &#039;&#039;&#039;ANYONE&#039;&#039;&#039; using AMBER!&lt;br /&gt;
* [[Notes on AMBER 12 interface]]&lt;br /&gt;
* [[Using AMBER 14 on the GPU and compute clusters]]&lt;br /&gt;
* [[Generating parameters using AMBER&#039;s built in General Forcefield (gaff)]]&lt;br /&gt;
* [[Generating parameters using RESP charges from GAMESS-US]]&lt;br /&gt;
* [[Simple scripts for LEaP to create topology and coordinate files]] &lt;br /&gt;
* [[Preparing an AMBER topology file for a protein system]] - step by step guide&lt;br /&gt;
* [[Setting up]] - step by step guide to prepare and then symmetrise a simple (protein-only) system&lt;br /&gt;
* [[Using Molfacture to edit molecules and add hydrogens]]&lt;br /&gt;
* [[Preparing an AMBER topology file for a protein plus ligand system]] - step by step guide&lt;br /&gt;
* [[Symmetrising AMBER topology files]] - step by step guide for symmetrising a complex protein+ligand system&lt;br /&gt;
* [[Producing a PDB from a coordinates and topology file]] - using &#039;&#039;ambpdb&#039;&#039;&lt;br /&gt;
* [[Running GMIN with MD move steps AMBER]]&lt;br /&gt;
* [[Performing a normal mode analysis of a biomolecule using OPTIM (AMBER and CHARMM)]]&lt;br /&gt;
* [[Evaluating different components of AMBER energy function with SANDER]]&lt;br /&gt;
* [[Mutational BH steps]]&lt;br /&gt;
* [[REMD with AMBER]]&lt;br /&gt;
* [[Performing a hydrogen-bond analysis]]&lt;br /&gt;
* [[Alternatively, making the initial path with PATHSAMPLE itself]]&lt;br /&gt;
* [[Biomolecules in the energy landscape framework]]&lt;br /&gt;
* [[perm-prmtop.py]] - A python program that converts an AMBER9 topology file into one with a symmetrised potential with respect to exchange (updated for AMBER12 and ff14SB).&lt;br /&gt;
* [[Rotamer moves in AMBER]]&lt;br /&gt;
&lt;br /&gt;
=== [[aux2bib]] === &lt;br /&gt;
To generate a bib file containing only the entries cited in a given .tex file from a larger bib or multiple bib files.&lt;br /&gt;
* [https://ctan.org/pkg/bibtools Get script here]&lt;br /&gt;
&lt;br /&gt;
=== [[CamCasp]] ===&lt;br /&gt;
Cambridge package for Calculation of Anisotropic Site Properties&lt;br /&gt;
From Anthony Stone&#039;s website: &#039;CamCASP is a collection of scripts and programs written by Dr Alston Misquitta and myself for the calculation ab initio of distributed multipoles, polarizabilities, dispersion coefficients and repulsion parameters for individual molecules, and interaction energies between pairs of molecules using SAPT(DFT).&#039;&lt;br /&gt;
* [http://www-stone.ch.cam.ac.uk/programs.html CamCASP home]&lt;br /&gt;
* [[CamCASP/Programming]]&lt;br /&gt;
* [[CamCASP/Programming/5/example1]]&lt;br /&gt;
* [[CamCASP/Notes]]&lt;br /&gt;
* [[CamCASP/Bugs]]&lt;br /&gt;
* [[CamCASP/ToDo/diskIO]]&lt;br /&gt;
* [[CamCASP/ToDo/Memory]]&lt;br /&gt;
* [[CamCASP/CodeExamples/DirectAccess]]&lt;br /&gt;
&lt;br /&gt;
=== [[CPMD]] ===&lt;br /&gt;
Implementation of DFT for &#039;&#039;ab-initio&#039;&#039; molecular dynamics.&lt;br /&gt;
* [http://www.cpmd.org/ Home Page]&lt;br /&gt;
* [[CPMDInput]]&lt;br /&gt;
&lt;br /&gt;
=== [[CHARMM]] ===&lt;br /&gt;
Molecular dynamics simulation program and associated force fields.&lt;br /&gt;
* [https://www.charmm.org/charmm/?CFID=65f7b3aa-8037-452a-bcd1-7583dd83a087&amp;amp;CFTOKEN=0 CHARMM]&lt;br /&gt;
* [[Generating pdb, crd and psf for a peptide sequence]]&lt;br /&gt;
* [[Converting between &#039;.crd&#039; and &#039;.pdb&#039;]]&lt;br /&gt;
* [[Calculating energy of a conformation]]&lt;br /&gt;
* [[Calculating molecular properties]]&lt;br /&gt;
* [[Calculating order parameters]]&lt;br /&gt;
* [[CAMSHIFT]]&lt;br /&gt;
* [[Setting up (CHARMM)]] - step by step guide to prepare and then symmetrise a simple (protein-only) system&lt;br /&gt;
* [[If you need to change the number of atoms (e.g. making a united-atom charmm19 .crd file, or if atoms are missing)]]&lt;br /&gt;
* [[Performing a normal mode analysis of a biomolecule using OPTIM (AMBER and CHARMM)]]&lt;br /&gt;
* [[Minimizing a structure using OPTIM and CHARMM]]&lt;br /&gt;
* [[Alternatively, making the initial path with PATHSAMPLE itself (CHARMM)]]&lt;br /&gt;
* [[Expanding the kinetic transition network with PATHSAMPLE (CHARMM)]]&lt;br /&gt;
* [[Finding an initial path with OPTIM and starting up PATHSAMPLE (CHARMM)]]&lt;br /&gt;
* [[Pathsampling short paths (CHARMM)]]&lt;br /&gt;
&lt;br /&gt;
=== [[disconnectionDPS]] ===&lt;br /&gt;
Produces disconnectivity graphs from min.data and ts.data files. This is included in the Wales group public tarball.&lt;br /&gt;
* [[Constructing Free Energy Disconnectivity Graphs]]&lt;br /&gt;
&lt;br /&gt;
=== [[DMACRYS]] ===&lt;br /&gt;
Package which models crystals of rigid molecules.&lt;br /&gt;
* [http://www.chem.ucl.ac.uk/cposs/dmacrys/index.html Home Page]&lt;br /&gt;
* [[DMACRYS interface]]&lt;br /&gt;
* [[DMAGMIN setup]]&lt;br /&gt;
* [[PYGMIN &amp;amp; DMACRYS]]&lt;br /&gt;
&lt;br /&gt;
=== [[GAMESS]] ===&lt;br /&gt;
General &#039;&#039;ab initio&#039;&#039; quantum chemistry package.&lt;br /&gt;
* [https://www.msg.chem.iastate.edu/gamess/ GAMESS]&lt;br /&gt;
&lt;br /&gt;
=== [[Gaussian]] ===&lt;br /&gt;
General purpose package for computational chemistry calculations.&lt;br /&gt;
* [[Running an Gaussian03 interfaced OPTIM job]]&lt;br /&gt;
&lt;br /&gt;
=== [[gnuplot]] ===&lt;br /&gt;
Open source graphing program.&lt;br /&gt;
* [http://www.gnuplot.info/ gnuplot]&lt;br /&gt;
* [[Plotting a quick histogram in gnuplot using the raw data]]&lt;br /&gt;
* [[Plotting data in real time]]&lt;br /&gt;
* [[Linear and non-linear regression in gnuplot]]&lt;br /&gt;
&lt;br /&gt;
=== [[GROMACS]] ===&lt;br /&gt;
Molecular dynamics package.&lt;br /&gt;
* [[Installing GROMACS on Clust]]&lt;br /&gt;
* [http://www.mdtutorials.com/gmx/ External tutorials]&lt;br /&gt;
* [http://www.gromacs.org/Documentation/Tutorials More external tutorials]&lt;br /&gt;
&lt;br /&gt;
=== [[HiRE-RNA]] ===&lt;br /&gt;
High-resolution coarse-grained energy model for RNA.&lt;br /&gt;
* [https://pubs.acs.org/doi/10.1021/jp102497y Explanatory Paper]&lt;br /&gt;
&lt;br /&gt;
=== [[latex2html]] ===&lt;br /&gt;
Script which converts latex documents into HTML pages.&lt;br /&gt;
* [https://www.latex2html.org/ Get script here]&lt;br /&gt;
&lt;br /&gt;
=== [[MMTSB-toolset]] ===&lt;br /&gt;
Group of perl scripts which can be used to set up and run energy minimization, structural analysis and MD with CHARMM or AMBER.&lt;br /&gt;
* [http://feig.bch.msu.edu/mmtsb/Main_Page Documentation]&lt;br /&gt;
* [http://www.mmtsb.org/workshops/mmtsb-ctbp_2006/Tutorials/WorkshopTutorials_2006.html External tutorials]&lt;br /&gt;
* [[Installing and setting up the MMTSB toolset]]&lt;br /&gt;
* [[REX (Replica EXchange MD) with the MMTSB-toolset]]&lt;br /&gt;
&lt;br /&gt;
=== [[Simulations using OPEP | OPEP]] ===&lt;br /&gt;
OPEP is a coarse-grained force field providing a potential for proteins and RNA.&lt;br /&gt;
* [http://opep.galaxy.ibpc.fr/ OPEP file generator here]&lt;br /&gt;
* [[Biomolecules in the energy landscape framework]]&lt;br /&gt;
&lt;br /&gt;
=== [[pgprof]] === &lt;br /&gt;
Profiler for portland-compiled codes&lt;br /&gt;
* [[Portland compiler fails trying to allocate an unexpectedly large amount of memory: issue with large arrays]]&lt;br /&gt;
&lt;br /&gt;
=== [[Pymol]] ===&lt;br /&gt;
Molecular visualisation program.&lt;br /&gt;
* [https://pymol.org/2/ PyMOL]&lt;br /&gt;
* [https://pymolwiki.org/index.php/Main_Page PyMOL Community Wiki]&lt;br /&gt;
* [[loading AMBER prmtop and inpcrd files into Pymol]]&lt;br /&gt;
* [[producing sexy ray-traced images]]&lt;br /&gt;
* [[advanced colouring]]&lt;br /&gt;
* [[Installing python modules]]&lt;br /&gt;
* [[PYGMIN &amp;amp; DMACRYS]]&lt;br /&gt;
* [[path2pdb.py]] - A python program to convert &#039;&#039;path.info&#039;&#039; to &#039;&#039;path_all.pdb&#039;&#039; so you can easily visualize your path in VMD :)&lt;br /&gt;
* [[extractedmin2pdb.py]]: A python program to convert &#039;&#039;extractedmin&#039;&#039; to PDB format&lt;br /&gt;
=== [[VASP]] ===&lt;br /&gt;
OPTIM has an interface to VASP, which is installed on CSD3. In collaboration with Bora Karasulu the interface has been updated to use VASP format POSCAR input files for both single- and double-ended optimisations and path searches. The OPTIM odata file requires a line like&lt;br /&gt;
&lt;br /&gt;
VASP &#039;mpirun -ppn 16 -np 16 /home/bk393/APPS/vasp.5.4.4/with-VTST/bin/vasp_std &amp;gt; vasp.out&#039;&lt;br /&gt;
&lt;br /&gt;
POSCAR files can be visualised using ase, the Atomic Simulation Environment, which can be accessed on volkhan via&lt;br /&gt;
&lt;br /&gt;
module load anaconda/python3/5.3.0 &lt;br /&gt;
&lt;br /&gt;
pip install ase --user&lt;br /&gt;
&lt;br /&gt;
ase-gui POSCAR1.vasp &amp;amp;&lt;br /&gt;
&lt;br /&gt;
which assumes that ~/.local/bin is in your $PATH environment variable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [[VMD]] ===&lt;br /&gt;
Molecular visualisation program.&lt;br /&gt;
* [http://www.ks.uiuc.edu/Research/vmd/current/ug/ug.html Documentation]&lt;br /&gt;
* [http://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html/index.html External tutorials]&lt;br /&gt;
* [[using VMD to display and manipulate &#039;.pdb&#039; files]]&lt;br /&gt;
* [[loading coordinate files into VMD with the help of an AMBER topology file]] e.g. to visualise the results of a GMIN run using AMBER9&lt;br /&gt;
* [[visualising normal modes using VMD and OPTIM]]&lt;br /&gt;
* [[path2pdb.py]]: A python program to convert &#039;&#039;path.info&#039;&#039; to &#039;&#039;path_all.pdb&#039;&#039; so you can easily visualize your path in VMD :)&lt;br /&gt;
* [[path2xyz.py]]: A python program to convert &#039;&#039;path.info&#039;&#039; to &#039;&#039;path_all.xyz&#039;&#039;&lt;br /&gt;
* [[extractedmin2pdb.py]]: A python program to convert &#039;&#039;extractedmin&#039;&#039; to PDB format&lt;br /&gt;
* [[Useful .vmdrc file]]&lt;br /&gt;
* [[plotGMINms.tcl]]: a tcl script for plotting ellipsoids in VMD.&lt;br /&gt;
* [[VMD script to annotate each frame of a trajectory]]&lt;br /&gt;
&lt;br /&gt;
=== [[xfig]] ===&lt;br /&gt;
Open source vector graphics editor&lt;br /&gt;
* [https://ctan.org/tex-archive/support/epstopdf/ Convert eps to pdf]&lt;br /&gt;
&lt;br /&gt;
=== [[Xmakemol]] ===&lt;br /&gt;
Program for visualising atomic and molecular systems.&lt;br /&gt;
* [https://www.nongnu.org/xmakemol/ XMakemol]&lt;br /&gt;
&lt;br /&gt;
=== [[xmgrace]] ===&lt;br /&gt;
2D plotting tool.&lt;br /&gt;
* [http://exciting-code.org/xmgrace-quickstart Xmgrace]&lt;br /&gt;
&lt;br /&gt;
== Theoretical/Mathematical Notes ==&lt;br /&gt;
&lt;br /&gt;
* [[Density of states and thermodynamics from energy distributions at different temperatures]]&lt;br /&gt;
* [[Ellipsoid.model]]&lt;br /&gt;
* [[Ellipsoid.model.xyz]]&lt;br /&gt;
* [[Ellipsoid.xyz]]&lt;br /&gt;
* [[Gencoords]]&lt;br /&gt;
* [[GenCoords]]&lt;br /&gt;
* [[GenCoords Models]]&lt;br /&gt;
* [[Rotamer moves in AMBER]]&lt;br /&gt;
* [[Thomson problem in OPTIM]]&lt;br /&gt;
&lt;br /&gt;
=== Angle-axis notes ===&lt;br /&gt;
&lt;br /&gt;
* [[Angle-axis framework]]&lt;br /&gt;
* [[Computing normal modes in angle-axis]]&lt;br /&gt;
&lt;br /&gt;
=== Rigid Bodies ===&lt;br /&gt;
&lt;br /&gt;
* [[Automatic Rigid Body Grouping]]&lt;br /&gt;
* [[Rigid body input files for proteins using genrigid-input.py]]&lt;br /&gt;
* [[Local Rigid Body Framework]]&lt;br /&gt;
* [[Local rigid body in OPTIM]]&lt;br /&gt;
&lt;br /&gt;
== Useful Scripts ==&lt;br /&gt;
* [[perm-prmtop.py]]: A python program that converts an AMBER9 topology file into one with a symmetrised potential with respect to exchange (updated for AMBER12 and ff14SB).&lt;br /&gt;
* [[perm-pdb.py]]: A python program that creates a &#039;&#039;perm.allow&#039;&#039; file for use with [[OPTIM]] and [[PATHSAMPLE]].&lt;br /&gt;
* [[path2pdb.py]]: A python program to convert &#039;&#039;path.info&#039;&#039; to &#039;&#039;path_all.pdb&#039;&#039; - so you can easily visualise your path in VMD :)&lt;br /&gt;
* [[path2xyz.py]]: A python program to convert &#039;&#039;path.info&#039;&#039; to &#039;&#039;path_all.xyz&#039;&#039;&lt;br /&gt;
* [[dijkstra_test.py]]: A python script to test whether the information in pairlist and ts.data connects the A and B sets. (If not, PATHSAMPLE will not make progress, but will not actually exit.)&lt;br /&gt;
* [[extractedmin2pdb.py]]: A python program to convert &#039;&#039;extractedmin&#039;&#039; to PDB format&lt;br /&gt;
* [[colourdiscon.py]]: A python program for sorting input for disconnectivity graphs&lt;br /&gt;
* [[pdb_to_movie.py]]: A python program to create an AMH movieseg file from a PDB file&lt;br /&gt;
* [[makerestart]]: A bash script to automatically set up a GMIN restart run&lt;br /&gt;
* [[progress]]: A bash script to tell you the % completion of a GMIN job and give an estimated time remaining&lt;br /&gt;
* [[recommended bash aliases]]&lt;br /&gt;
* [[David&#039;s .inputrc file]]&lt;br /&gt;
* [[Useful .vmdrc file]]&lt;br /&gt;
* [[Density of states and thermodynamics from energy distributions at different temperatures]]&lt;br /&gt;
* [[GenCoords]]: A fortran program to generate coarse grain building blocks and initial coords using a set of geometric models.&lt;br /&gt;
* [[plotGMINms.tcl]]: A tcl script for plotting ellipsoids in VMD.&lt;br /&gt;
See also the SCRIPTS/ directory in the SVN repository!&lt;br /&gt;
* [[Computing CHARMM FF energy using GMIN, MMTSB and CHARMM]] - Computes the Charmm FF energy of the same structure. Useful for cross-validating force field settings in GMIN data file, CHARMM input file and MMTSB options.&lt;br /&gt;
* [[Automatic Rigid Body Grouping]]&lt;br /&gt;
* [[ElaborateDiff]]&lt;br /&gt;
* [[Parameter-scanning script]]&lt;br /&gt;
* [[Pdb to movie.py]]&lt;br /&gt;
* [[VMD script to annotate each frame of a trajectory]]&lt;br /&gt;
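The connectivity check performed by dijkstra_test.py can be sketched as a breadth-first search over the minima graph. The snippet below is only an illustration of the idea, not the actual script; the edge list and endpoint sets are toy data:

```python
from collections import deque

def connected(edges, a_set, b_set):
    """Return True if any minimum in a_set can reach one in b_set.

    edges is a list of (min1, min2) pairs, one per transition state.
    """
    # Build an undirected adjacency map from the transition-state pairs.
    graph = {}
    for m1, m2 in edges:
        graph.setdefault(m1, set()).add(m2)
        graph.setdefault(m2, set()).add(m1)
    # Breadth-first search outwards from the A set.
    queue = deque(a_set)
    seen = set(a_set)
    while queue:
        node = queue.popleft()
        if node in b_set:
            return True
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

# Toy example: transition states connect minima 1-2-3; minimum 4 is isolated.
edges = [(1, 2), (2, 3)]
print(connected(edges, {1}, {3}))  # True
print(connected(edges, {1}, {4}))  # False
```

In a real database the edge pairs would presumably be read from the two minimum columns of ts.data (plus pairlist), and the endpoint sets from the min.A and min.B files.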
&lt;br /&gt;
== Useful links ==&lt;br /&gt;
* [http://www.ch.cam.ac.uk/computing/theory-compute-clusters The Theory Compute Clusters support page]. Contains useful cluster specific information, including example job submission scripts.&lt;br /&gt;
&lt;br /&gt;
* A useful website which contains AMBER (GAFF) and OPLS parameters for small molecules: http://virtualchemistry.org/gmld.php . This could save us a lot of time when trying to derive parameters on our own; if you are lucky, the molecule of interest may already be in the database. The topology files are in GROMACS format but can possibly be converted into AMBER parameter files (script, anyone?).&lt;br /&gt;
&lt;br /&gt;
* The moving-domain QM/MM method developed by Victor Batista&#039;s group: http://gascon.chem.uconn.edu/software . This approach can be used to derive charges for large proteins and nucleic acids, where a full-fledged ONIOM-based calculation is computationally prohibitive. It has been applied to systems like the Gramicidin ion channel and Photosystem II.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Animated GIF on the group website]]&lt;br /&gt;
* [[Backup strategy]]&lt;br /&gt;
* [[Chain crossing]]&lt;br /&gt;
* [[Computer Office services]]&lt;br /&gt;
* [[Computing values only once]]&lt;br /&gt;
* [[Decoding heat capacity curves]]&lt;br /&gt;
* [[Differences from Clust]]&lt;br /&gt;
* [[Fixing thunderbird links]]&lt;br /&gt;
* [[If you need to change the number of atoms (e.g. making a united-atom charmm19 .crd file, or if atoms are missing)]]&lt;br /&gt;
* [[Intel Trace Analyzer and Collector]]&lt;br /&gt;
* [[LDAP plans]]&lt;br /&gt;
* [[Lapack compilation]]&lt;br /&gt;
* [[Mek-quake Queueing system]]&lt;br /&gt;
* [[Mek-quake initial setup notes]]&lt;br /&gt;
* [[New mek-quake]]&lt;br /&gt;
* [[Maui compilation]]&lt;br /&gt;
* [[Torque and Maui]]&lt;br /&gt;
* [[Mercurial]]&lt;br /&gt;
* [[Migrating to the new SVN server]]&lt;br /&gt;
* [[NECI Parallelization]]&lt;br /&gt;
* [[Optimization tricks]]&lt;br /&gt;
* [[Other IT stuff]]&lt;br /&gt;
* [[Porfuncs Documentation]]&lt;br /&gt;
* [[Progress]]&lt;br /&gt;
* [[Proposed changes to backup and archiving]]&lt;br /&gt;
* [[Rama upgrade]]&lt;br /&gt;
* [[Remastering Knoppix]]&lt;br /&gt;
* [[See unpacked nodes]]&lt;br /&gt;
* [[Tardis scheduling policy]]&lt;br /&gt;
* [[Zippo Sicortex machine]]&lt;br /&gt;
&lt;br /&gt;
== Useful linux stuff ==&lt;br /&gt;
&lt;br /&gt;
===Basics===&lt;br /&gt;
* [[basic linux commands everyone should know!]]&lt;br /&gt;
* [[piping and redirecting output from one command or file to another]] - how to save yourself hours!&lt;br /&gt;
* [[bash loop tricks]]&lt;br /&gt;
* [[bash history searching]]&lt;br /&gt;
&lt;br /&gt;
===Remote access===&lt;br /&gt;
* [[setting up aliases to quickly log you in to a different machine]]&lt;br /&gt;
* [[transfering files to and from your workstation]] - using &#039;&#039;scp&#039;&#039; or &#039;&#039;rsync&#039;&#039;&lt;br /&gt;
* [[using &#039;ssh-keygen&#039; to automatically log you into clusters from your workstation]] (no more typing in your password!)&lt;br /&gt;
* [[mounting sharedscratch locally]]&lt;br /&gt;
&lt;br /&gt;
===Find and replace===&lt;br /&gt;
* [[short &#039;sed&#039; examples]]&lt;br /&gt;
* [[quick guide to awk]]&lt;br /&gt;
* [[short &#039;awk&#039; examples]]&lt;br /&gt;
&lt;br /&gt;
===File manipulation===&lt;br /&gt;
* [[sorting a file by multiple columns]]&lt;br /&gt;
* [[using tar and gzip to compress/uncompress files | using tar and bzip2 to compress/uncompress files]]&lt;br /&gt;
* [[conversion between different data file formats]] - &#039;almost one-line&#039; scripts&lt;br /&gt;
* [[conversion between different image file formats]] - the &#039;&#039;convert&#039;&#039; command&lt;br /&gt;
* [[removing an excessive number of files from a directory - when &#039;rm&#039; just isn&#039;t enough]]&lt;br /&gt;
&lt;br /&gt;
===Cluster queues===&lt;br /&gt;
* [[submitting jobs, interactively or to a cluster queue system]]&lt;br /&gt;
* [[identifying job on a node]] - if you need to kill only one of a few running jobs&lt;br /&gt;
* [[a guide to using SLURM to run PATHSAMPLE]]&lt;br /&gt;
* [[a guide to using SLURM to run GPU jobs on pat]]&lt;br /&gt;
&lt;br /&gt;
===Miscellaneous/uncategorised===&lt;br /&gt;
* [[installing packages on your managed CUC3 workstation]]&lt;br /&gt;
* [[running programs in the background]] - so you can use your shell for other things at the same time&lt;br /&gt;
* [[finding bugs in latex documents that will not compile]]&lt;br /&gt;
* [[printing files from the command line using &#039;lpr&#039;]]&lt;br /&gt;
* [[uploading non image files to the wiki]]&lt;br /&gt;
&lt;br /&gt;
== Compiler Flags ==&lt;br /&gt;
&lt;br /&gt;
* [[Compiler Flags]]&lt;br /&gt;
* [[Blacklisting Compilers]]&lt;br /&gt;
* [[Lapack compilation]]&lt;br /&gt;
* [[Pdb to movie.py]]&lt;br /&gt;
* [[Portland compiler fails trying to allocate an unexpectedly large amount of memory: issue with large arrays]]&lt;br /&gt;
&lt;br /&gt;
== SuSE ==&lt;br /&gt;
&lt;br /&gt;
* [[Upgrading destiny]]&lt;br /&gt;
* [[Upgrading sword]]&lt;br /&gt;
* [[SuSE 10.1 workstation image]]&lt;br /&gt;
* [[SuSE 10.2 workstation image]]&lt;br /&gt;
* [[SuSE 10.3 workstation image]]&lt;br /&gt;
* [[SuSE 11.1]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--[[User:adk44|adk44]] 17.00, 9 May 2019 (BST)&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Wales_Group_Version_control&amp;diff=1562</id>
		<title>Wales Group Version control</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Wales_Group_Version_control&amp;diff=1562"/>
		<updated>2020-01-06T10:42:49Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As of 16th July 2008, all the group code is under version control. [[GMIN]], [[OPTIM]] and [[PATHSAMPLE]], along with the [[CHARMM]] and [[AMBER]] code used with them have been added to the repository, along with the official documentation. Changes to the documentation in the repository are automatically applied to the version on the website every night (at midnight) and the RSS feeds below are updated every ten minutes. The next step is to set up the nightly compilation and test suite for each code.&lt;br /&gt;
&lt;br /&gt;
There are instructions on how to set up your SVN details and obtain the group code on the [[SVN setup]] page.&lt;br /&gt;
&lt;br /&gt;
==RSS feeds==&lt;br /&gt;
&lt;br /&gt;
RSS feeds (including diffs of files that have been changed) are generated every 10 minutes from the SVN log files and uploaded to the Wales group web server [http://www-wales.ch.cam.ac.uk/rss/ here].  There is currently one feed per top-level directory in the repository:&lt;br /&gt;
&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/trunk_log.xml trunk_log.xml] - contains the logs/diffs for every change to the code&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/gmin_log.xml gmin_log.xml] - only contains the logs/diffs when code in the GMIN directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/pathsample_log.xml pathsample_log.xml] - only contains the logs/diffs when code in the PATHSAMPLE directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/optim_log.xml optim_log.xml] - only contains the logs/diffs when code in the OPTIM directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/charmm_log.xml charmm_log.xml] - only contains the logs/diffs when code in the CHARMM31 directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/amber_log.xml amber_log.xml] - only contains the logs/diffs when code in the AMBER directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/nab_log.xml nab_log.xml] - only contains the logs/diffs when code in the NAB directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/disconnect_log.xml disconnect_log.xml] - only contains the logs/diffs when code in the DISCONNECT directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/doc_log.xml doc_log.xml] - only contains the logs/diffs when code in the DOC directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/scripts_log.xml scripts_log.xml] - only contains the logs/diffs when code in the SCRIPTS directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/ambertools_log.xml ambertools_log.xml] - only contains the logs/diffs when code in the AMBERTOOLS directory is changed&lt;br /&gt;
&lt;br /&gt;
Using Firefox to look at the feeds is ok, but I suggest you set up a dedicated RSS reader, such as [http://www.google.com/reader Google Reader], to keep them all in one place. Note that Google Reader delays updating the feeds, so you won&#039;t see new logs for up to an hour after they are created, even though they are already visible in Firefox.&lt;br /&gt;
&lt;br /&gt;
==Access restrictions==&lt;br /&gt;
Raven authentication has been re-enabled for the diffs only. You can view the feeds without logging in (as this allows email clients to be used), but if you are outside the department, you will need to log in via Raven to access the diffs linked from the feed. Also, the diffs directory is now excluded from Google and other bot searches. This should hopefully prevent chunks of code ending up on Google.&lt;br /&gt;
&lt;br /&gt;
==Usage tips==&lt;br /&gt;
What follows are a few suggestions to make using SVN a joyful experience. Please add any others you think are important! In general you should:&lt;br /&gt;
&lt;br /&gt;
* be careful with svn commands. svn is a very powerful tool, and it is possible to revert all the changes made over a long period, which is probably not what you want. Please consult before reverting changes made by other people.&lt;br /&gt;
* make sure you followed the setup instructions on the [[SVN setup]] page. The procedure includes setting up a template file to ensure that all logs are in the same basic format&lt;br /&gt;
* run &#039;svn update&#039; on a regular basis to ensure you have the latest bug fixes. It is also important to update regularly to detect any bugs that may have appeared due to changes elsewhere in the code. Please report any changes in behaviour that might indicate bugs.&lt;br /&gt;
* &#039;&#039;&#039;always&#039;&#039;&#039; do an &#039;svn update&#039; before you do &#039;svn commit&#039;&lt;br /&gt;
* don&#039;t forget &#039;svn add &amp;lt;filename&amp;gt;&#039; to schedule a new file for addition at the next commit!&lt;br /&gt;
* get into the habit of running &#039;svn diff&#039; before committing.  And looking closely at the output... Optionally, run &#039;svn diff | grep Index&#039; to see a summary list of the files that you have changed.  Try not to commit extra things you didn&#039;t mean to.&lt;br /&gt;
* commit your changes regularly. If you wait two months before committing your changes, you&#039;re likely to have to resolve more conflicts in parts of the code that have been changed by others in the meantime.&lt;br /&gt;
* if you get a conflict (C flag) when running &#039;svn update&#039;, find out who introduced the change and talk to them about it to make sure you can introduce your code without damaging theirs.&lt;br /&gt;
* consider making a development branch (more on this on the [[SVN setup]] page) if your hacking is going to involve a lot of potentially disruptive changes over a significant period of time.  Talk to DJW and current developers about this first though...&lt;br /&gt;
&lt;br /&gt;
* to obtain a previous version of a single file, use &#039;svn cat&#039; with the revision number in the directory where that file lives, and redirect the output to a target file. For example, to obtain revision 36240 of commons.f90, use&lt;br /&gt;
svn cat -r 36240 commons.f90 &amp;gt; commons.f90.36240&lt;br /&gt;
* to see a log of svn commits, use &#039;svn log&#039; in the appropriate directory. The log contains the messages entered by the users who made the commits. Please fill in the relevant fields in the log when you make changes.&lt;br /&gt;
&lt;br /&gt;
==Papers==&lt;br /&gt;
We have a repository for preparing papers too: see [[Papers_in_preparation]] for information.&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Wales_Group_Version_control&amp;diff=1561</id>
		<title>Wales Group Version control</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Wales_Group_Version_control&amp;diff=1561"/>
		<updated>2020-01-03T10:35:07Z</updated>

		<summary type="html">&lt;p&gt;Dw34: /* Usage tips */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As of 16th July 2008, all the group code is under version control. [[GMIN]], [[OPTIM]] and [[PATHSAMPLE]], along with the [[CHARMM]] and [[AMBER]] code used with them have been added to the repository, along with the official documentation. Changes to the documentation in the repository are automatically applied to the version on the website every night (at midnight) and the RSS feeds below are updated every ten minutes. The next step is to set up the nightly compilation and test suite for each code.&lt;br /&gt;
&lt;br /&gt;
There are instructions on how to set up your SVN details and obtain the group code on the [[SVN setup]] page.&lt;br /&gt;
&lt;br /&gt;
==RSS feeds==&lt;br /&gt;
&lt;br /&gt;
RSS feeds (including diffs of files that have been changed) are generated every 10 minutes from the SVN log files and uploaded to the Wales group web server [http://www-wales.ch.cam.ac.uk/rss/ here].  There is currently one feed per top-level directory in the repository:&lt;br /&gt;
&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/trunk_log.xml trunk_log.xml] - contains the logs/diffs for every change to the code&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/gmin_log.xml gmin_log.xml] - only contains the logs/diffs when code in the GMIN directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/pathsample_log.xml pathsample_log.xml] - only contains the logs/diffs when code in the PATHSAMPLE directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/optim_log.xml optim_log.xml] - only contains the logs/diffs when code in the OPTIM directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/charmm_log.xml charmm_log.xml] - only contains the logs/diffs when code in the CHARMM31 directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/amber_log.xml amber_log.xml] - only contains the logs/diffs when code in the AMBER directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/nab_log.xml nab_log.xml] - only contains the logs/diffs when code in the NAB directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/disconnect_log.xml disconnect_log.xml] - only contains the logs/diffs when code in the DISCONNECT directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/doc_log.xml doc_log.xml] - only contains the logs/diffs when code in the DOC directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/scripts_log.xml scripts_log.xml] - only contains the logs/diffs when code in the SCRIPTS directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/ambertools_log.xml ambertools_log.xml] - only contains the logs/diffs when code in the AMBERTOOLS directory is changed&lt;br /&gt;
&lt;br /&gt;
Using Firefox to look at the feeds is ok, but I suggest you set up a dedicated RSS reader, such as [http://www.google.com/reader Google Reader], to keep them all in one place. Note that Google Reader delays updating the feeds, so you won&#039;t see new logs for up to an hour after they are created, even though they are already visible in Firefox.&lt;br /&gt;
&lt;br /&gt;
==Access restrictions==&lt;br /&gt;
Raven authentication has been re-enabled for the diffs only. You can view the feeds without logging in (as this allows email clients to be used), but if you are outside the department, you will need to log in via Raven to access the diffs linked from the feed. Also, the diffs directory is now excluded from Google and other bot searches. This should hopefully prevent chunks of code ending up on Google.&lt;br /&gt;
&lt;br /&gt;
==Usage tips==&lt;br /&gt;
What follows are a few suggestions to make using SVN a joyful experience. Please add any others you think are important! In general you should:&lt;br /&gt;
&lt;br /&gt;
* be careful with svn commands. svn is a very powerful tool, and it is possible to revert all the changes made over a long period, which is probably not what you want&lt;br /&gt;
* make sure you followed the setup instructions on the [[SVN setup]] page. The procedure includes setting up a template file to ensure that all logs are in the same basic format&lt;br /&gt;
* run &#039;svn update&#039; on a regular basis to ensure you have the latest bug fixes. It is also important to update regularly to detect any bugs that may have appeared due to changes elsewhere in the code. Please report any changes in behaviour that might indicate bugs.&lt;br /&gt;
* &#039;&#039;&#039;always&#039;&#039;&#039; do an &#039;svn update&#039; before you do &#039;svn commit&#039;&lt;br /&gt;
* don&#039;t forget &#039;svn add &amp;lt;filename&amp;gt;&#039; to schedule a new file for addition at the next commit!&lt;br /&gt;
* get into the habit of running &#039;svn diff&#039; before committing.  And looking closely at the output... Optionally, run &#039;svn diff | grep Index&#039; to see a summary list of the files that you have changed.  Try not to commit extra things you didn&#039;t mean to.&lt;br /&gt;
* commit your changes regularly. If you wait two months before committing your changes, you&#039;re likely to have to resolve more conflicts in parts of the code that have been changed by others in the meantime.&lt;br /&gt;
* if you get a conflict (C flag) when running &#039;svn update&#039;, find out who introduced the change and talk to them about it to make sure you can introduce your code without damaging theirs.&lt;br /&gt;
* consider making a development branch (more on this on the [[SVN setup]] page) if your hacking is going to involve a lot of potentially disruptive changes over a significant period of time.  Talk to DJW and current developers about this first though...&lt;br /&gt;
&lt;br /&gt;
* to obtain a previous version of a single file, use &#039;svn cat&#039; with the revision number in the directory where that file lives, and redirect the output to a target file. For example, to obtain revision 36240 of commons.f90, use&lt;br /&gt;
svn cat -r 36240 commons.f90 &amp;gt; commons.f90.36240&lt;br /&gt;
* to see a log of svn commits, use &#039;svn log&#039; in the appropriate directory. The log contains the messages entered by the users who made the commits. Please fill in the relevant fields in the log when you make changes.&lt;br /&gt;
&lt;br /&gt;
==Papers==&lt;br /&gt;
We have a repository for preparing papers too: see [[Papers_in_preparation]] for information.&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Wales_Group_Version_control&amp;diff=1560</id>
		<title>Wales Group Version control</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Wales_Group_Version_control&amp;diff=1560"/>
		<updated>2020-01-03T10:32:54Z</updated>

		<summary type="html">&lt;p&gt;Dw34: /* Usage tips */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As of 16th July 2008, all the group code is under version control. [[GMIN]], [[OPTIM]] and [[PATHSAMPLE]], along with the [[CHARMM]] and [[AMBER]] code used with them have been added to the repository, along with the official documentation. Changes to the documentation in the repository are automatically applied to the version on the website every night (at midnight) and the RSS feeds below are updated every ten minutes. The next step is to set up the nightly compilation and test suite for each code.&lt;br /&gt;
&lt;br /&gt;
There are instructions on how to set up your SVN details and obtain the group code on the [[SVN setup]] page.&lt;br /&gt;
&lt;br /&gt;
==RSS feeds==&lt;br /&gt;
&lt;br /&gt;
RSS feeds (including diffs of files that have been changed) are generated every 10 minutes from the SVN log files and uploaded to the Wales group web server [http://www-wales.ch.cam.ac.uk/rss/ here].  There is currently one feed per top-level directory in the repository:&lt;br /&gt;
&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/trunk_log.xml trunk_log.xml] - contains the logs/diffs for every change to the code&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/gmin_log.xml gmin_log.xml] - only contains the logs/diffs when code in the GMIN directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/pathsample_log.xml pathsample_log.xml] - only contains the logs/diffs when code in the PATHSAMPLE directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/optim_log.xml optim_log.xml] - only contains the logs/diffs when code in the OPTIM directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/charmm_log.xml charmm_log.xml] - only contains the logs/diffs when code in the CHARMM31 directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/amber_log.xml amber_log.xml] - only contains the logs/diffs when code in the AMBER directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/nab_log.xml nab_log.xml] - only contains the logs/diffs when code in the NAB directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/disconnect_log.xml disconnect_log.xml] - only contains the logs/diffs when code in the DISCONNECT directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/doc_log.xml doc_log.xml] - only contains the logs/diffs when code in the DOC directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/scripts_log.xml scripts_log.xml] - only contains the logs/diffs when code in the SCRIPTS directory is changed&lt;br /&gt;
* [http://www-wales.ch.cam.ac.uk/rss/ambertools_log.xml ambertools_log.xml] - only contains the logs/diffs when code in the AMBERTOOLS directory is changed&lt;br /&gt;
&lt;br /&gt;
Using Firefox to look at the feeds is ok, but I suggest you set up a dedicated RSS reader, such as [http://www.google.com/reader Google Reader], to keep them all in one place. Note that Google Reader delays updating the feeds, so you won&#039;t see new logs for up to an hour after they are created, even though they are already visible in Firefox.&lt;br /&gt;
&lt;br /&gt;
==Access restrictions==&lt;br /&gt;
Raven authentication has been re-enabled for the diffs only. You can view the feeds without logging in (as this allows email clients to be used), but if you are outside the department, you will need to log in via Raven to access the diffs linked from the feed. Also, the diffs directory is now excluded from Google and other bot searches. This should hopefully prevent chunks of code ending up on Google.&lt;br /&gt;
&lt;br /&gt;
==Usage tips==&lt;br /&gt;
What follows are a few suggestions to make using SVN a joyful experience. Please add any others you think are important! In general you should:&lt;br /&gt;
&lt;br /&gt;
* be careful with svn commands. svn is a very powerful tool, and it is possible to revert all the changes made over a long period, which is probably not what you want&lt;br /&gt;
* make sure you followed the setup instructions on the [[SVN setup]] page. The procedure includes setting up a template file to ensure that all logs are in the same basic format&lt;br /&gt;
* run &#039;svn update&#039; on a regular basis to ensure you have the latest bug fixes&lt;br /&gt;
* &#039;&#039;&#039;always&#039;&#039;&#039; do an &#039;svn update&#039; before you do &#039;svn commit&#039;&lt;br /&gt;
* don&#039;t forget &#039;svn add &amp;lt;filename&amp;gt;&#039; to schedule a new file for addition at the next commit!&lt;br /&gt;
* get into the habit of running &#039;svn diff&#039; before committing.  And looking closely at the output... Optionally, run &#039;svn diff | grep Index&#039; to see a summary list of the files that you have changed.  Try not to commit extra things you didn&#039;t mean to.&lt;br /&gt;
* commit your changes regularly. If you wait two months before committing your changes, you&#039;re likely to have to resolve more conflicts in parts of the code that have been changed by others in the meantime.&lt;br /&gt;
* if you get a conflict (C flag) when running &#039;svn update&#039;, find out who introduced the change and talk to them about it to make sure you can introduce your code without damaging theirs.&lt;br /&gt;
* consider making a development branch (more on this on the [[SVN setup]] page) if your hacking is going to involve a lot of potentially disruptive changes over a significant period of time.  Talk to DJW and current developers about this first though...&lt;br /&gt;
&lt;br /&gt;
* to obtain a previous version of a single file, use &#039;svn cat&#039; with the revision number in the directory where that file lives, and redirect the output to a target file. For example, to obtain revision 36240 of commons.f90, use&lt;br /&gt;
svn cat -r 36240 commons.f90 &amp;gt; commons.f90.36240&lt;br /&gt;
* to see a log of svn commits, use &#039;svn log&#039; in the appropriate directory. The log contains the messages entered by the users who made the commits. Please fill in the relevant fields in the log when you make changes.&lt;br /&gt;
&lt;br /&gt;
==Papers==&lt;br /&gt;
We have a repository for preparing papers too: see [[Papers_in_preparation]] for information.&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Common_setup_problem_:_No_Frequency_Warning&amp;diff=1558</id>
		<title>Common setup problem : No Frequency Warning</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Common_setup_problem_:_No_Frequency_Warning&amp;diff=1558"/>
		<updated>2019-07-18T15:17:41Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you use NOFRQS in the PATHDATA file, then make sure to also use NOFRQS in the ODATA.CONNECT file.&lt;br /&gt;
&lt;br /&gt;
For example, if the PATHDATA file looks like the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SLURM&lt;br /&gt;
CYCLES 100&lt;br /&gt;
ADDPATH path.info.first&lt;br /&gt;
TEMPERATURE 0.592&lt;br /&gt;
PLANCK  9.536D-14&lt;br /&gt;
EXEC /home/ab2480/svn/OPTIM/builds/AMBER/A12OPTIM&lt;br /&gt;
NATOMS 68&lt;br /&gt;
COPYFILES perm.allow min.in coords.prmtop coords.inpcrd&lt;br /&gt;
COPYOPTIM&lt;br /&gt;
DIRECTION BA&lt;br /&gt;
CONNECTREGION 1 2 200&lt;br /&gt;
CONNECTIONS 1&lt;br /&gt;
ITOL 10.0D0&lt;br /&gt;
NOFRQS&lt;br /&gt;
GEOMDIFFTOL 0.3D0&lt;br /&gt;
EDIFFTOL 1.0D-4&lt;br /&gt;
PERMDIST&lt;br /&gt;
AMBER12&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The corresponding ODATA.CONNECT file should be:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
REOPTIMISEENDPOINTS&lt;br /&gt;
UPDATES 20 4 20 10 4&lt;br /&gt;
NEWCONNECT 20 3 10.0 40.0 50 2.0 0.001&lt;br /&gt;
NEWNEB 40 400 0.1&lt;br /&gt;
NEBK 10.0&lt;br /&gt;
NOCISTRANS&lt;br /&gt;
DIJKSTRA EXP&lt;br /&gt;
DUMPALLPATHS&lt;br /&gt;
CHECKCHIRALITY&lt;br /&gt;
EDIFFTOL  1.0D-4&lt;br /&gt;
MAXERISE 1.0D-4 1.0D-2&lt;br /&gt;
GEOMDIFFTOL  0.1D0&lt;br /&gt;
BFGSTS 1000 20 200 0.01 100&lt;br /&gt;
BFGSMIN 1.0D-6&lt;br /&gt;
NOHESS&lt;br /&gt;
NOFRQS&lt;br /&gt;
PERMDIST&lt;br /&gt;
MAXSTEP  0.2&lt;br /&gt;
TRAD     0.5&lt;br /&gt;
MAXMAX   1.0&lt;br /&gt;
BFGSCONV 1.0D-6&lt;br /&gt;
PUSHOPT 0.2 0.001 100&lt;br /&gt;
STEPS 1000&lt;br /&gt;
BFGSSTEPS 60000&lt;br /&gt;
MAXBFGS 0.2&lt;br /&gt;
AMBER12 start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that it is &#039;&#039;&#039;essential&#039;&#039;&#039; to have the line &#039;AMBER12 start&#039; at the end. PATHSAMPLE creates start.&amp;lt;pid&amp;gt; and finish.&amp;lt;pid&amp;gt; files, which will be copied to start and finish when OPTIM runs locally.&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Connecting_Sub-databases&amp;diff=1526</id>
		<title>Connecting Sub-databases</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Connecting_Sub-databases&amp;diff=1526"/>
		<updated>2019-05-17T10:42:19Z</updated>

		<summary type="html">&lt;p&gt;Dw34: /* Executive Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Definitions ==&lt;br /&gt;
&lt;br /&gt;
For the purposes of this tutorial, I am defining sub-databases to be sets of connected minima and transition states within a larger database. &lt;br /&gt;
&lt;br /&gt;
== Context and Motivation ==&lt;br /&gt;
&lt;br /&gt;
In databases containing many thousands of minima and TSs, it is unlikely that these will all be connected to one another. This is particularly the case when the database has been grown using such methods as &#039;&#039;&#039;ADDPATH&#039;&#039;&#039; and &#039;&#039;&#039;MERGEDB&#039;&#039;&#039;. Instead, the database is more likely to consist of many sub-databases of varying size. Therefore, when constructing a disconnectivity graph, which cannot plot more than one set of connected minima (i.e. more than one sub-database) at a time, a lot of data present in the min.data, points.min, points.ts and ts.data files is ignored. The sub-database that the disconnectivity graph plots depends on the numerical argument to the keyword &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; in the dinfo file. These numerical arguments correspond to minima, as listed in the min.data file. For example, an argument of 12 corresponds to line 12 of the min.data file. Therefore, only this minimum, plus any others it is connected to, is plotted in the disconnectivity graph.&lt;br /&gt;
&lt;br /&gt;
The question, therefore, is how to efficiently connect minima already present in the min.data file. It would be particularly important to connect sub-databases with a lot of minima in them (it would probably be a waste of time to connect all those sub-databases with only 2 minima in them, for example, as by doing so you’re not collecting much more information).&lt;br /&gt;
&lt;br /&gt;
Another consideration is that we want the connection attempts between sub-databases to be efficient. We want to try to connect sub-databases that are closer to one another (or, more specifically, sub-databases which have at least one minimum which is close in chemical space to a minimum in another sub-database). This consideration is especially important for large systems (such as large proteins with cofactors) as trying to connect minima far apart in space can be very slow or even break down due to memory issues.&lt;br /&gt;
&lt;br /&gt;
=== Systems for which this approach might be particularly useful ===&lt;br /&gt;
&lt;br /&gt;
This methodology might be particularly useful for cases where you have a protein with a cofactor and various sites within a pocket that you think the cofactor can attach to. It provides an efficient method to connect these sites within the pocket, having already sampled each.&lt;br /&gt;
&lt;br /&gt;
== Step 1: Using disconnectionDPS to determine the breakdown of sub-databases within your database ==&lt;br /&gt;
&lt;br /&gt;
=== Requirements ===&lt;br /&gt;
&lt;br /&gt;
A folder containing the files min.data, points.min, points.ts, ts.data, dinfo, the script &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; (to be found in the svn at &#039;&#039;&#039;~svn/SCRIPTS/DISCONNECT&#039;&#039;&#039;) and the binary [[disconnectionDPS]], plus any other auxiliary files you may need.&lt;br /&gt;
&lt;br /&gt;
=== Method ===&lt;br /&gt;
&lt;br /&gt;
In dinfo, you need to use the keyword &#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039;. An example dinfo file that I&#039;ve used is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
! REQUIRED KEYWORDS&lt;br /&gt;
&lt;br /&gt;
DELTA 0.25&lt;br /&gt;
FIRST -15120.0&lt;br /&gt;
LEVELS 800&lt;br /&gt;
MINIMA min.data&lt;br /&gt;
TS ts.data&lt;br /&gt;
&lt;br /&gt;
! OPTIONAL KEYWORDS&lt;br /&gt;
&lt;br /&gt;
NCONNMIN 0&lt;br /&gt;
CONNECTMIN 1&lt;br /&gt;
LABELFORMAT F8.1&lt;br /&gt;
PRINTCONNECTED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039; ensures that a file called &#039;&#039;&#039;connected&#039;&#039;&#039; is written, which lists all of the minima plotted in the disconnectivity graph (i.e. all of the minima present in the sub-database considered). In the example above, because the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; is 1, this means that minimum 1, and all those minima to which 1 is connected, get plotted.&lt;br /&gt;
&lt;br /&gt;
This gives us information on only one sub-database present in the file. To find out about all of them, the script &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; is used.&lt;br /&gt;
&lt;br /&gt;
This script cycles through the min.data file, executing a [[disconnectionDPS]] command for every iteration of the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039;. The &#039;&#039;&#039;connected&#039;&#039;&#039; file produced is then renamed &#039;&#039;&#039;connected_*&#039;&#039;&#039;, where * is the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; when that [[disconnectionDPS]] command was executed. For a min.data file with 17603 lines (and therefore 17603 minima), for example, the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; ranges from CONNECTMIN 1 to CONNECTMIN 17603. If a minimum is already present in a previous &#039;&#039;&#039;connected_*&#039;&#039;&#039; file then that argument is skipped. For example, if a [[disconnectionDPS]] execution with &#039;&#039;&#039;CONNECTMIN 1&#039;&#039;&#039; gave a sub-database containing minima 1 and 2 (i.e. the minima on lines 1 and 2 in min.data), then a [[disconnectionDPS]] run using &#039;&#039;&#039;CONNECTMIN 2&#039;&#039;&#039; will not be attempted, as minimum 2 is already assigned to the sub-database described in &#039;&#039;&#039;connected_1&#039;&#039;&#039;. The next iteration will use &#039;&#039;&#039;CONNECTMIN 3&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
This script cycles until all the minima in min.data have been considered.&lt;br /&gt;
&lt;br /&gt;
Another feature of &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; is that, when &#039;&#039;&#039;connected_*&#039;&#039;&#039; files exceed a set number of minima (I think 10 is sensible), they get copied to a corresponding &#039;&#039;&#039;relevant_connected_*&#039;&#039;&#039; file, e.g. if &#039;&#039;&#039;connected_3&#039;&#039;&#039; has 50 minima then it exceeds 10, so the information in this file is copied to another one called &#039;&#039;&#039;relevant_connected_3&#039;&#039;&#039;. This is a piece of book-keeping which allows the user to identify larger sub-databases (and so the ones they are most likely to want to connect to one another) more easily.&lt;br /&gt;
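&lt;br /&gt;
The loop described above can be sketched roughly as follows. This is a minimal illustration, not the actual find_connections.sh: the run_disconnection function is a stub standing in for editing &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; in dinfo and running the [[disconnectionDPS]] binary, and the five-minimum partition it writes (with a threshold of 2 rather than 10) is invented for the example.&lt;br /&gt;

```shell
# Rough sketch of the skip-loop in find_connections.sh (illustrative only).
# run_disconnection is a stub standing in for a real disconnectionDPS run;
# it writes a hard-coded, invented partition of 5 minima into "connected".
run_disconnection() {
  case "$1" in
    1|2) printf '1\n2\n' ;;      # minima 1 and 2 form one sub-database
    *)   printf '3\n4\n5\n' ;;   # minima 3, 4 and 5 form another
  esac > connected
}

nmin=5         # number of lines (minima) in min.data
threshold=2    # sub-databases larger than this get a relevant_connected_* copy
for i in $(seq 1 "$nmin"); do
  # skip this minimum if it already appears in an earlier connected_* file
  if grep -qxs "$i" connected_*; then continue; fi
  run_disconnection "$i"         # real script: set CONNECTMIN $i, run binary
  mv connected "connected_$i"
  n=$(wc -l "connected_$i" | awk '{print $1}')
  if [ "$n" -gt "$threshold" ]; then
    cp "connected_$i" "relevant_connected_$i"
  fi
done
```

With this invented partition, only connected_1 and connected_3 are produced (minima 2, 4 and 5 are skipped), and only connected_3 is large enough to be copied to relevant_connected_3.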
&lt;br /&gt;
A few notes on use: to use this script, it is sensible to copy the min.data, points.min, points.ts and ts.data files of the database you are interested in to another folder. The only other files you need are the script itself, the relevant binary and dinfo (plus perhaps some case-specific auxiliary files). It should be ensured that before executing the binary, the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; in dinfo is 1. Also, &#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039; must be included as a keyword.&lt;br /&gt;
&lt;br /&gt;
== Step 2: &#039;&#039;&#039;RETAINSP&#039;&#039;&#039; and &#039;&#039;&#039;CONNECTUNC LOWESTTEST&#039;&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
So, we now have a list of files &#039;&#039;&#039;relevant_connected_*&#039;&#039;&#039; corresponding to sub-databases which we would like to connect.&lt;br /&gt;
&lt;br /&gt;
Remember, though, we want to connect them efficiently!&lt;br /&gt;
&lt;br /&gt;
Before attempting any connections, then, it is advisable to get a feel for the distances separating these sub-databases from one another (or, at least, the shortest possible distance between any two minima belonging to different sub-databases).&lt;br /&gt;
&lt;br /&gt;
To do this, we need to limit the min.data file (and ts.data) in a sub-folder so that only those minima corresponding to the two sub-databases we are interested in are considered. We can use the keyword found in [[PATHSAMPLE]], &#039;&#039;&#039;RETAINSP&#039;&#039;&#039;, for this purpose. By using an adapted version of &#039;&#039;&#039;CONNECTUNC&#039;&#039;&#039; with a new argument called &#039;&#039;&#039;LOWESTTEST&#039;&#039;&#039;, we can identify sensible connections to make, without actually attempting the connection.&lt;br /&gt;
&lt;br /&gt;
This approach works as long as min.A and min.B both correspond to minima in the AB set (this is accounted for in the script I’ve written, &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;). What this does is find the lowest-energy unconnected minimum (i.e. the lowest-energy minimum in the set which is not the AB set). It then loops through all the minima in the AB set, printing the distance between each pair of minima without actually attempting the connection. A further loop operates so that all unconnected minima are considered too.&lt;br /&gt;
&lt;br /&gt;
Once all minima are considered, the loop is abruptly exited by a STOP statement.&lt;br /&gt;
&lt;br /&gt;
Using grep (don&#039;t worry about executing these commands yourself as they are all contained in the script &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
grep &amp;quot;connectlowest&amp;gt; Distance: &amp;quot; pathsample_connectunc_test.out &amp;gt; distances&lt;br /&gt;
sed -e &amp;quot;s/^/$dirname  /g&amp;quot; distances &amp;gt; distances_tmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we are able to build up a list of all the proposed connections made by &#039;&#039;&#039;CONNECTLOWESTTEST&#039;&#039;&#039; between the two chosen sub-databases. This information then gets concatenated into an overall file called distances_tot in the folder where the script was originally launched. Eventually, once all pairs of sub-databases are considered, we should have a massive file listing all of the potential connections between all of the minima in all of the sub-databases, along with the distances separating them. An example of a few lines from such a file is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
00003_00303       359    4550 connectlowest&amp;gt; Distance:    42.42963734&lt;br /&gt;
00003_00303       341     147 connectlowest&amp;gt; Distance:    39.39663225&lt;br /&gt;
00003_02150      2280    1932 connectlowest&amp;gt; Distance:    75.54181654&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column lists the two sub-databases which were considered. Another nice feature of the &#039;&#039;&#039;CONNECTUNC LOWESTTEST&#039;&#039;&#039; keyword and argument is that, alongside the distance, the specific minima (DMIN1 and DMIN2) from the two sub-databases being considered are listed (highlighted in red below):&lt;br /&gt;
&lt;br /&gt;
 00003_00303      &amp;lt;font color=&amp;quot;#ff0000&amp;quot;&amp;gt; 359 &amp;lt;/font&amp;gt;  &amp;lt;font color=&amp;quot;#ff0000&amp;quot;&amp;gt; 4550 &amp;lt;/font&amp;gt; connectlowest&amp;gt; Distance:    42.42963734&lt;br /&gt;
&lt;br /&gt;
359, therefore, is a minimum which belongs to sub-database 00003 (i.e. the sub-database described by the file &#039;&#039;&#039;relevant_connected_00003&#039;&#039;&#039;) and 4550 a minimum which belongs to sub-database 00303.&lt;br /&gt;
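&lt;br /&gt;
For illustration, pulling these fields out of such a line is simple word-splitting (a trivial sketch, not part of any script in the svn):&lt;br /&gt;

```shell
# Split one distances_tot line into its fields (pair of sub-databases,
# then DMIN1 and DMIN2).
line='00003_00303       359    4550 connectlowest> Distance:    42.42963734'
set -- $line             # deliberately unquoted: word-split into $1, $2, ...
pair=$1; dmin1=$2; dmin2=$3
echo "$pair $dmin1 $dmin2"   # prints: 00003_00303 359 4550
```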
&lt;br /&gt;
== Step 3: Organising Calculations to Attempt ==&lt;br /&gt;
&lt;br /&gt;
Clearly, a connection between a pair of minima separated by 39.397 is more feasible than one between minima separated by 42.430 or 75.542. The script we have (&#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;) therefore reorganises distances_tot to list the proposed connections between pairs from shortest distance to longest. This reorganised file is given the rather unimaginative name of lowest_to_highest_distances_tot.&lt;br /&gt;
&lt;br /&gt;
The rest of the script is concerned with connecting all of the sub-databases as efficiently as possible, using as few steps as possible. This is probably best illustrated by an example:&lt;br /&gt;
&lt;br /&gt;
I have 15 sub-databases I wish to connect. The minima comprising each can be found in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
relevant_connected_00003&lt;br /&gt;
relevant_connected_00164&lt;br /&gt;
relevant_connected_00303&lt;br /&gt;
relevant_connected_02150&lt;br /&gt;
relevant_connected_06061&lt;br /&gt;
relevant_connected_06274&lt;br /&gt;
relevant_connected_06610&lt;br /&gt;
relevant_connected_06913&lt;br /&gt;
relevant_connected_07339&lt;br /&gt;
relevant_connected_09000&lt;br /&gt;
relevant_connected_09969&lt;br /&gt;
relevant_connected_10040&lt;br /&gt;
relevant_connected_12405&lt;br /&gt;
relevant_connected_14191&lt;br /&gt;
relevant_connected_14775&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here are the first ten lines of lowest_to_highest_distances_tot. Those coloured green are the connections attempted, whilst those coloured red were skipped because they turn out to be superfluous (why attempt line 5, for example, when line 4 already attempts to connect the same two sub-databases?):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;00003_02150      2657    2663 connectlowest&amp;gt; Distance:     0.39352080 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_12405      3003    3033 connectlowest&amp;gt; Distance:     0.84958725 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_10040      1228    1251 connectlowest&amp;gt; Distance:     1.01130262 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_14191      3176    3209 connectlowest&amp;gt; Distance:     1.07183817 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3194    3209 connectlowest&amp;gt; Distance:     1.81036433 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3193    3187 connectlowest&amp;gt; Distance:     1.88481550 &amp;lt;/font&amp;gt; &lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_14775      3450    3457 connectlowest&amp;gt; Distance:     2.41249957 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3203    3187 connectlowest&amp;gt; Distance:     2.42913148 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3177    3209 connectlowest&amp;gt; Distance:     2.45715932 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;00003_02150      2572    2663 connectlowest&amp;gt; Distance:     2.82747537 &amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
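The green/red selection above amounts to a greedy spanning-tree pass over the sorted list: keep a line only if its two sub-databases are not already connected through earlier kept lines. A rough sketch of this rule, using an invented six-line sample in the same format (the real connect_sub_databases.sh may implement the selection differently):&lt;br /&gt;

```shell
# Invented sample input in the lowest_to_highest_distances_tot format
printf '%s\n' \
  '00003_02150      2657    2663 connectlowest> Distance:     0.39352080' \
  '09000_12405      3003    3033 connectlowest> Distance:     0.84958725' \
  '09000_10040      1228    1251 connectlowest> Distance:     1.01130262' \
  '09000_14191      3176    3209 connectlowest> Distance:     1.07183817' \
  '09000_14191      3194    3209 connectlowest> Distance:     1.81036433' \
  '00003_02150      2572    2663 connectlowest> Distance:     2.82747537' \
  > lowest_to_highest_distances_tot

# Keep a line only if it joins two not-yet-connected sub-databases
# (union-find over the pair named in column 1, e.g. 00003_02150).
awk '
function find(x) { while (parent[x] != x) x = parent[x]; return x }
{
  split($1, p, "_")
  if (!(p[1] in parent)) parent[p[1]] = p[1]
  if (!(p[2] in parent)) parent[p[2]] = p[2]
  ra = find(p[1]); rb = find(p[2])
  if (ra != rb) { parent[ra] = rb; print }   # "green" lines only
}' lowest_to_highest_distances_tot > selected_connections
```

On this sample, selected_connections keeps the first four ("green") lines and drops the two superfluous ("red") repeats.&lt;br /&gt;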
Using these principles, the sub-databases were therefore connected as follows. &lt;br /&gt;
&lt;br /&gt;
The first line of lowest_to_highest_distances_tot:&lt;br /&gt;
&lt;br /&gt;
 00003_02150      2657    2663 connectlowest&amp;gt; Distance:     0.39352080&lt;br /&gt;
&lt;br /&gt;
[[Image:first step.png|230px|center]]&lt;br /&gt;
&lt;br /&gt;
After next line:&lt;br /&gt;
&lt;br /&gt;
 09000_12405      3003    3033 connectlowest&amp;gt; Distance:     0.84958725&lt;br /&gt;
&lt;br /&gt;
[[Image:second step.png|250px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_10040      1228    1251 connectlowest&amp;gt; Distance:     1.01130262&lt;br /&gt;
&lt;br /&gt;
[[Image:third step.png|400px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_14191      3176    3209 connectlowest&amp;gt; Distance:     1.07183817&lt;br /&gt;
&lt;br /&gt;
[[Image:fourth step.png|400px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_14775      3450    3457 connectlowest&amp;gt; Distance:     2.41249957&lt;br /&gt;
&lt;br /&gt;
[[Image:fifth step.png|420px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_00164       134     159 connectlowest&amp;gt; Distance:     5.00815137&lt;br /&gt;
&lt;br /&gt;
[[Image:sixth step.png|420px|center]]&lt;br /&gt;
&lt;br /&gt;
 06061_06913       402     296 connectlowest&amp;gt; Distance:     5.01723232&lt;br /&gt;
&lt;br /&gt;
[[Image:seventh step.png|420px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_07339      3893    3899 connectlowest&amp;gt; Distance:     5.68186344&lt;br /&gt;
&lt;br /&gt;
[[Image:eighth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 06610_07339       135     137 connectlowest&amp;gt; Distance:     7.04874883&lt;br /&gt;
&lt;br /&gt;
[[Image:ninth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_00303       670     811 connectlowest&amp;gt; Distance:    24.67395896&lt;br /&gt;
&lt;br /&gt;
[[Image:tenth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 06061_06274       257     459 connectlowest&amp;gt; Distance:    31.59473639&lt;br /&gt;
&lt;br /&gt;
[[Image:eleventh step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_09969      1317    1946 connectlowest&amp;gt; Distance:    40.39286979&lt;br /&gt;
&lt;br /&gt;
[[Image:twelfth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_06061      4149    4481 connectlowest&amp;gt; Distance:    44.36489142&lt;br /&gt;
&lt;br /&gt;
[[Image:thirteenth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_09000      7369    2632 connectlowest&amp;gt; Distance:    71.98718590&lt;br /&gt;
&lt;br /&gt;
[[Image:fourteenth step.png|700px|center]]&lt;br /&gt;
&lt;br /&gt;
Connection attempts are therefore chosen to use the smallest distances possible. It should also be possible to connect these 15 sub-databases using just 14 connections. Some [[OPTIM]] runs may, for whatever reason, not work (although this is unlikely), in which case the user can manually look through the lowest_to_highest_distances_tot file to find the next most suitable pair of minima from the two sub-databases in question to connect.&lt;br /&gt;
&lt;br /&gt;
The diagrams above list the 14 connections which are recommended to be made. The script therefore creates 14 folders in which these connections are attempted, named 00003_02150, 09000_12405, etc.&lt;br /&gt;
&lt;br /&gt;
Within each sub-folder, a [[PATHSAMPLE]] calculation using &#039;&#039;&#039;RETAINSP&#039;&#039;&#039; is first performed. This ensures that only the minima pertaining to the two sub-databases we are trying to connect are included (e.g. 00003 and 02150 in the case of the sub-folder 00003_02150) and that the numbering scheme in min.data is consistent with when &#039;&#039;&#039;CONNECTUNC LOWESTTEST&#039;&#039;&#039; was used. Once this is done, the pathdata file is altered so that, rather than &#039;&#039;&#039;RETAINSP&#039;&#039;&#039;, we now use the keywords &#039;&#039;&#039;CONNECTPAIRS connectfile&#039;&#039;&#039; and &#039;&#039;&#039;CYCLES 1&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
What &#039;&#039;&#039;CONNECTPAIRS&#039;&#039;&#039; does is to choose specific minima from min.data for connection attempts. It requires an argument which specifies a file listing the minima we want to connect. Typically, this argument is &#039;&#039;&#039;connectfile&#039;&#039;&#039;, and therefore we require a file called &#039;&#039;&#039;connectfile&#039;&#039;&#039;. This file lists all of the minima we wish to connect according to their line numbers in min.data. As shown above (and copied immediately below), for the connection we wish to make between the sub-databases 00003 and 02150, the minima we are interested in are 2657 and 2663.&lt;br /&gt;
&lt;br /&gt;
 00003_02150      2657    2663 connectlowest&amp;gt; Distance:     0.39352080&lt;br /&gt;
&lt;br /&gt;
Therefore, &#039;&#039;&#039;connectfile&#039;&#039;&#039;, upon opening, should appear simply as:&lt;br /&gt;
&lt;br /&gt;
 2657 2663&lt;br /&gt;
&lt;br /&gt;
Thus, an OPTIM job gets launched, which tries to connect minima 2657 (in the 00003 sub-database) and 2663 (in the 02150 sub-database). Because these minima are separated by a really short distance of 0.394, it will hopefully (this is not foolproof!) be a simple connection.&lt;br /&gt;
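&lt;br /&gt;
The pathdata edit and connectfile creation described above might be sketched as follows; the three-line pathdata is made up for the example, and the real connect_sub_databases.sh may perform these edits differently.&lt;br /&gt;

```shell
# Made-up minimal pathdata containing the two required lines
printf '%s\n' ' RETAINSP' ' ! CYCLES 1' ' AMBER12' > pathdata

# Pair of minima taken from the distances listing above
printf '%s %s\n' 2657 2663 > connectfile

# After the RETAINSP run, switch pathdata over to a CONNECTPAIRS run
sed -i -e 's/^ *RETAINSP/CONNECTPAIRS connectfile/' \
       -e 's/^ *! *CYCLES 1/CYCLES 1/' pathdata
```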
&lt;br /&gt;
In this example, following the use of the script, we should end up with 14 folders, each with a connected path between two sub-databases.&lt;br /&gt;
&lt;br /&gt;
== How to execute steps 2 and 3? ==&lt;br /&gt;
&lt;br /&gt;
Steps 2 and 3 above clearly involve a number of operations. A script has been written which does all of this (i.e. chooses which connections to make and then attempts them) for you. This script is called &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039; and can be found in the svn at &#039;&#039;&#039;~/svn/SCRIPTS/PATHSAMPLE/connecting_sub_databases&#039;&#039;&#039;. An annotated version of this script is also available.&lt;br /&gt;
&lt;br /&gt;
The following files are required in the folder from which you launch this script (assuming you are using the [[AMBER]] interface):&lt;br /&gt;
&lt;br /&gt;
coords.inpcrd, coord.mdcrd, coords.prmtop, min.in, min.A, min.B, min.data, odata.connect, pathdata, perm.allow, points.min, points.ts, ts.data, untrap_sub_script, relevant_connected_*&lt;br /&gt;
&lt;br /&gt;
Where relevant_connected_* are all of the sub-databases found in step 1. This could be any number of files. Note that untrap_sub_script just happens to be the name of the sub-script I used for my calculations. You can either rename your sub-script to untrap_sub_script or alter &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039; before launching your calculations. It is essential that your pathdata file is correctly formatted before launching the script too. Two lines&lt;br /&gt;
&lt;br /&gt;
 RETAINSP&lt;br /&gt;
 ! CYCLES 1&lt;br /&gt;
&lt;br /&gt;
must be included. An example of a pathdata file I used is as follows:&lt;br /&gt;
&lt;br /&gt;
 EXEC           /home/adk44/bin/CUDAOPTIM_ppt_final_210918&lt;br /&gt;
 CPUS           1&lt;br /&gt;
 NATOMS         5430&lt;br /&gt;
 SEED           1&lt;br /&gt;
 DIRECTION      AB&lt;br /&gt;
 CONNECTIONS    1&lt;br /&gt;
 TEMPERATURE    0.592&lt;br /&gt;
 PLANCK         9.536D-14&lt;br /&gt;
 &lt;br /&gt;
 PERMDIST&lt;br /&gt;
 ETOL           8D-4&lt;br /&gt;
 GEOMDIFFTOL    0.2D0&lt;br /&gt;
 ITOL           0.1D0&lt;br /&gt;
 NOINVERSION&lt;br /&gt;
 NOFRQS&lt;br /&gt;
 &lt;br /&gt;
 RETAINSP&lt;br /&gt;
 ! CYCLES 1&lt;br /&gt;
 &lt;br /&gt;
 AMBER12&lt;br /&gt;
&lt;br /&gt;
Make sure that this pathdata points towards a valid binary and that the number of atoms is consistent with the system you are examining.&lt;br /&gt;
&lt;br /&gt;
== Step 4: Merging the sub-databases ==&lt;br /&gt;
&lt;br /&gt;
Following the use of the script, we should end up with one folder per connection attempted, each containing a connected path between two sub-databases. The total number of folders therefore equals the number of connections which needed to be made in order to connect all of the sub-databases we were interested in.&lt;br /&gt;
&lt;br /&gt;
Assuming all of the connections were successfully made, all we need to do now is to merge these new connected databases together. This can be achieved using the &#039;&#039;&#039;MERGEDB&#039;&#039;&#039; keyword in [[PATHSAMPLE]], as is outlined [http://www-wales.ch.cam.ac.uk/PATHSAMPLE.2.1.doc/node5.html here].&lt;br /&gt;
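&lt;br /&gt;
For example, assuming &#039;&#039;&#039;MERGEDB&#039;&#039;&#039; takes the path of the database to merge (as described in the linked documentation), the pathdata line for merging in the database held in the 00003_02150 folder would look something like:&lt;br /&gt;

```
MERGEDB 00003_02150
```

Check the PATHSAMPLE documentation for the exact syntax before running.&lt;br /&gt;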
&lt;br /&gt;
== Summary for those who don&#039;t like screeds of writing ==&lt;br /&gt;
&lt;br /&gt;
It is possible that your [[PATHSAMPLE]] database contains many sub-databases not necessarily connected to one another. Therefore, a lot of information is lost when you try to construct disconnectivity graphs.&lt;br /&gt;
&lt;br /&gt;
To retrieve this information, we should therefore connect these sub-databases. This connection can be done efficiently by first identifying which minima contained in respective sub-databases are closest to each other, and then trying to connect these first.&lt;br /&gt;
&lt;br /&gt;
To first identify the sub-databases present in your database, launch the script &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; using &#039;&#039;&#039;disconnectionDPS&#039;&#039;&#039; with the &#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039; keyword included and the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; set to &#039;&#039;&#039;1&#039;&#039;&#039;. This produces a list of &#039;&#039;&#039;relevant_connected_*&#039;&#039;&#039; files, each one representing a sub-database which lists the minima present in it.&lt;br /&gt;
&lt;br /&gt;
Following this step, the script &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039; is used. It determines the distances between all of the possible minima in all of the possible sub-databases. It then prioritises calculations in order to connect the maximum number of sub-databases in the minimum possible number of steps, with those of shortest distance being attempted first.&lt;br /&gt;
&lt;br /&gt;
A number of folders are created, each providing a connected path between two sub-databases. The data from these folders can be merged into the original using the [[PATHSAMPLE]] keyword &#039;&#039;&#039;MERGEDB&#039;&#039;&#039;, thus connecting all of the previously unconnected sub-databases.&lt;br /&gt;
&lt;br /&gt;
This methodology is particularly useful for cases where you have a protein with a cofactor and various sites within a pocket that you think the cofactor can attach to. It provides an efficient method to connect these sites within the pocket, having already sampled each.&lt;br /&gt;
&lt;br /&gt;
--adk44 14.30, 16 May 2019 (BST)&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Connecting_Sub-databases&amp;diff=1525</id>
		<title>Connecting Sub-databases</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Connecting_Sub-databases&amp;diff=1525"/>
		<updated>2019-05-17T10:40:57Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Definitions ==&lt;br /&gt;
&lt;br /&gt;
For the purposes of this tutorial, I am defining sub-databases to be sets of connected minima and transition states within a larger database. &lt;br /&gt;
&lt;br /&gt;
== Context and Motivation ==&lt;br /&gt;
&lt;br /&gt;
In databases containing many thousands of minima and TSs, it is unlikely that these will all be connected to one another. This is particularly the case when the database has been grown using such methods as &#039;&#039;&#039;ADDPATH&#039;&#039;&#039; and &#039;&#039;&#039;MERGEDB&#039;&#039;&#039;. Instead, the database is more likely to consist of many sub-databases of varying size. Therefore, when constructing a disconnectivity graph, which cannot plot more than one set of connected minima (i.e. more than one sub-database) at a time, a lot of data present in the min.data, points.min, points.ts and ts.data files is ignored. The sub-database that the disconnectivity graph plots depends on the numerical argument to the keyword &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; in the dinfo file. These numerical arguments correspond to minima, as listed in the min.data file. For example, an argument of 12 corresponds to line 12 of the min.data file. Therefore, only this minimum, plus any others it is connected to, is plotted in the disconnectivity graph.&lt;br /&gt;
&lt;br /&gt;
The question, therefore, is how to efficiently connect minima already present in the min.data file. It would be particularly important to connect sub-databases with a lot of minima in them (it would probably be a waste of time to connect all those sub-databases with only 2 minima in them, for example, as by doing so you’re not collecting much more information).&lt;br /&gt;
&lt;br /&gt;
Another consideration is that we want the connection attempts between sub-databases to be efficient. We want to try to connect sub-databases that are closer to one another (or, more specifically, sub-databases which have at least one minimum which is close in chemical space to a minimum in another sub-database). This consideration is especially important for large systems (such as large proteins with cofactors) as trying to connect minima far apart in space can be very slow or even break down due to memory issues.&lt;br /&gt;
&lt;br /&gt;
=== Systems for which this approach might be particularly useful ===&lt;br /&gt;
&lt;br /&gt;
This methodology might be particularly useful for cases where you have a protein with a cofactor and various sites within a pocket that you think the cofactor can attach to. It provides an efficient method to connect these sites within the pocket, having already sampled each.&lt;br /&gt;
&lt;br /&gt;
== Step 1: Using disconnectionDPS to determine the breakdown of sub-databases within your database ==&lt;br /&gt;
&lt;br /&gt;
=== Requirements ===&lt;br /&gt;
&lt;br /&gt;
A folder containing the files min.data, points.min, points.ts, ts.data, dinfo, the script &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; (to be found in the svn at &#039;&#039;&#039;~svn/SCRIPTS/DISCONNECT&#039;&#039;&#039;) and the binary [[disconnectionDPS]], plus any other auxiliary files you may need.&lt;br /&gt;
&lt;br /&gt;
=== Method ===&lt;br /&gt;
&lt;br /&gt;
In dinfo, you need to use the keyword &#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039;. An example dinfo file that I&#039;ve used is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
! REQUIRED KEYWORDS&lt;br /&gt;
&lt;br /&gt;
DELTA 0.25&lt;br /&gt;
FIRST -15120.0&lt;br /&gt;
LEVELS 800&lt;br /&gt;
MINIMA min.data&lt;br /&gt;
TS ts.data&lt;br /&gt;
&lt;br /&gt;
! OPTIONAL KEYWORDS&lt;br /&gt;
&lt;br /&gt;
NCONNMIN 0&lt;br /&gt;
CONNECTMIN 1&lt;br /&gt;
LABELFORMAT F8.1&lt;br /&gt;
PRINTCONNECTED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039; ensures that a file called &#039;&#039;&#039;connected&#039;&#039;&#039; is written, which lists all of the minima plotted in the disconnectivity graph (i.e. all of the minima present in the sub-database considered). In the example above, because the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; is 1, this means that minimum 1, and all those minima to which 1 is connected, get plotted.&lt;br /&gt;
&lt;br /&gt;
This gives us information on only one sub-database present in the file. To find out about all of them, the script &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; is used.&lt;br /&gt;
&lt;br /&gt;
This script cycles through the min.data file, executing a [[disconnectionDPS]] command for every iteration of the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039;. The &#039;&#039;&#039;connected&#039;&#039;&#039; file produced is then renamed &#039;&#039;&#039;connected_*&#039;&#039;&#039;, where * is the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; when that [[disconnectionDPS]] command was executed. For a min.data file with 17603 lines (and therefore 17603 minima), for example, the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; ranges from CONNECTMIN 1 to CONNECTMIN 17603. If a minimum is already present in a previous &#039;&#039;&#039;connected_*&#039;&#039;&#039; file then that argument is skipped. For example, if a [[disconnectionDPS]] execution with &#039;&#039;&#039;CONNECTMIN 1&#039;&#039;&#039; gave a sub-database containing minima 1 and 2 (i.e. the minima on lines 1 and 2 in min.data), then a [[disconnectionDPS]] run using &#039;&#039;&#039;CONNECTMIN 2&#039;&#039;&#039; will not be attempted, as minimum 2 is already assigned to the sub-database described in &#039;&#039;&#039;connected_1&#039;&#039;&#039;. The next iteration will use &#039;&#039;&#039;CONNECTMIN 3&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The script continues in this way until all of the minima in min.data have been considered.&lt;br /&gt;
&lt;br /&gt;
Another feature of &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; is that, when a &#039;&#039;&#039;connected_*&#039;&#039;&#039; file exceeds a set number of minima (10 is a sensible choice), it is copied to a corresponding &#039;&#039;&#039;relevant_connected_*&#039;&#039;&#039; file. For example, if &#039;&#039;&#039;connected_3&#039;&#039;&#039; has 50 minima then it exceeds 10, and so the information in this file is copied to another file called &#039;&#039;&#039;relevant_connected_3&#039;&#039;&#039;. This piece of book-keeping allows the user to identify the larger sub-databases (the ones they are most likely to want to connect to one another) more easily.&lt;br /&gt;
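For orientation, the loop performed by &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; can be sketched as follows. This is a hypothetical reconstruction rather than the script from the svn; it assumes a disconnectionDPS binary and a dinfo file in the working directory, and that each &#039;&#039;&#039;connected&#039;&#039;&#039; file lists one minimum index per line:&lt;br /&gt;

```shell
assign_subdatabases () {
    # Loop CONNECTMIN over every line of min.data, skipping any minimum
    # already assigned to an earlier connected_* file (one index per line).
    local threshold=${1:-10}
    local nmin i
    nmin=$(wc -l min.data | awk '{ print $1 }')
    for i in $(seq 1 "$nmin"); do
        # skip if minimum i already belongs to a previous sub-database
        if grep -qsx "$i" connected_*; then
            continue
        fi
        sed -i "s/^CONNECTMIN .*/CONNECTMIN $i/" dinfo
        ./disconnectionDPS              # writes the file "connected"
        mv connected "connected_$i"
        # book-keeping: flag sub-databases larger than the threshold
        if [ "$(wc -l "connected_$i" | awk '{ print $1 }')" -gt "$threshold" ]; then
            cp "connected_$i" "relevant_connected_$i"
        fi
    done
}
```

Here the optional threshold argument plays the role of the set number of minima (defaulting to 10).&lt;br /&gt;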
&lt;br /&gt;
A few notes on use: it is sensible to copy the min.data, points.min, points.ts and ts.data files of the database you are interested in to another folder. The only other files you need are the script itself, the relevant binary and dinfo (plus perhaps some case-specific auxiliary files). Before executing the binary, ensure that the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; in dinfo is 1 and that &#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039; is included as a keyword.&lt;br /&gt;
&lt;br /&gt;
== Step 2: &#039;&#039;&#039;RETAINSP&#039;&#039;&#039; and &#039;&#039;&#039;CONNECTUNC LOWESTTEST&#039;&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
So, we now have a list of files &#039;&#039;&#039;relevant_connected_*&#039;&#039;&#039; corresponding to sub-databases which we would like to connect.&lt;br /&gt;
&lt;br /&gt;
Remember, though, we want to connect them efficiently!&lt;br /&gt;
&lt;br /&gt;
Before attempting any connections, it is advisable to get a feel for the distances separating these sub-databases from one another (or, at least, the shortest possible distance between any two minima drawn from different sub-databases).&lt;br /&gt;
&lt;br /&gt;
To do this, we limit the min.data file (and ts.data) in a sub-folder so that only the minima belonging to the two sub-databases we are interested in are considered. The [[PATHSAMPLE]] keyword &#039;&#039;&#039;RETAINSP&#039;&#039;&#039; serves this purpose. By using an adapted version of &#039;&#039;&#039;CONNECTUNC&#039;&#039;&#039; with a new argument called &#039;&#039;&#039;LOWESTTEST&#039;&#039;&#039;, we can then identify sensible connections to make without actually attempting them.&lt;br /&gt;
&lt;br /&gt;
This approach works as long as min.A and min.B both correspond to minima in the AB set (this is accounted for in the script I’ve written, &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;). This keyword finds the unconnected minimum of lowest energy (i.e. the lowest-energy minimum in the set which is not the AB set). It then loops through all of the minima in the AB set, printing the distance between each pair of minima without actually attempting the connection. A further loop ensures that all unconnected minima are considered in turn.&lt;br /&gt;
&lt;br /&gt;
Once all minima have been considered, the loop is exited by a STOP statement.&lt;br /&gt;
&lt;br /&gt;
Using grep (don&#039;t worry about executing these commands yourself as they are all contained in the script &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
grep &amp;quot;connectlowest&amp;gt; Distance: &amp;quot; pathsample_connectunc_test.out &amp;gt; distances&lt;br /&gt;
sed -e &amp;quot;s/^/$dirname  /g&amp;quot; distances &amp;gt; distances_tmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we are able to build up a list of all of the proposed connections made by &#039;&#039;&#039;CONNECTUNC LOWESTTEST&#039;&#039;&#039; between the two chosen sub-databases. This information is then concatenated into an overall file called distances_tot in the folder from which the script was originally launched. Eventually, once all pairs of sub-databases have been considered, we have a single large file listing all of the potential connections between minima in different sub-databases, along with the distances separating them. A few example lines from such a file are as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
00003_00303       359    4550 connectlowest&amp;gt; Distance:    42.42963734&lt;br /&gt;
00003_00303       341     147 connectlowest&amp;gt; Distance:    39.39663225&lt;br /&gt;
00003_02150      2280    1932 connectlowest&amp;gt; Distance:    75.54181654&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column lists the two sub-databases which were considered. Another nice feature of the &#039;&#039;&#039;CONNECTUNC LOWESTTEST&#039;&#039;&#039; keyword and argument is that, alongside the distance, the specific minima (DMIN1 and DMIN2) from the two sub-databases being considered are listed (highlighted in red below):&lt;br /&gt;
&lt;br /&gt;
 00003_00303      &amp;lt;font color=&amp;quot;#ff0000&amp;quot;&amp;gt; 359 &amp;lt;/font&amp;gt;  &amp;lt;font color=&amp;quot;#ff0000&amp;quot;&amp;gt; 4550 &amp;lt;/font&amp;gt; connectlowest&amp;gt; Distance:    42.42963734&lt;br /&gt;
&lt;br /&gt;
359, therefore, is a minimum which belongs to sub-database 00003 (i.e. the sub-database described by the file &#039;&#039;&#039;relevant_connected_00003&#039;&#039;&#039;) and 4550 a minimum which belongs to sub-database 00303.&lt;br /&gt;
&lt;br /&gt;
== Step 3: Organising Calculations to Attempt ==&lt;br /&gt;
&lt;br /&gt;
Clearly, connecting a pair of minima separated by a distance of 39.397 is a more feasible calculation than one separated by 42.430 or 75.542. The script (&#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;) therefore reorders distances_tot so that the proposed connections between pairs are listed from shortest distance to longest. This reordered file is given the rather unimaginative name lowest_to_highest_distances_tot.&lt;br /&gt;
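This reordering can be sketched with a single sort over the last column (a hypothetical stand-alone snippet using three of the example lines above as stand-in data; the real step lives inside &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;):&lt;br /&gt;

```shell
# Stand-in data: three of the example lines shown above.
printf '%s\n' \
  '00003_00303       359    4550 connectlowest> Distance:    42.42963734' \
  '00003_00303       341     147 connectlowest> Distance:    39.39663225' \
  '00003_02150      2280    1932 connectlowest> Distance:    75.54181654' \
  > distances_tot
# Sort numerically on the sixth field (the distance), shortest first.
sort -g -k6,6 distances_tot > lowest_to_highest_distances_tot
```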
&lt;br /&gt;
The rest of the script is concerned with connecting all of the sub-databases as efficiently as possible, using as few steps as possible. This is best illustrated by an example:&lt;br /&gt;
&lt;br /&gt;
I have 15 sub-databases I wish to connect. The minima comprising each can be found in:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
relevant_connected_00003&lt;br /&gt;
relevant_connected_00164&lt;br /&gt;
relevant_connected_00303&lt;br /&gt;
relevant_connected_02150&lt;br /&gt;
relevant_connected_06061&lt;br /&gt;
relevant_connected_06274&lt;br /&gt;
relevant_connected_06610&lt;br /&gt;
relevant_connected_06913&lt;br /&gt;
relevant_connected_07339&lt;br /&gt;
relevant_connected_09000&lt;br /&gt;
relevant_connected_09969&lt;br /&gt;
relevant_connected_10040&lt;br /&gt;
relevant_connected_12405&lt;br /&gt;
relevant_connected_14191&lt;br /&gt;
relevant_connected_14775&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here are the first ten lines of lowest_to_highest_distances_tot. Those coloured green are the connections attempted, whilst those coloured red are skipped because they turn out to be superfluous (why attempt line 5, for example, when line 4 already connects the same two sub-databases?):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;00003_02150      2657    2663 connectlowest&amp;gt; Distance:     0.39352080 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_12405      3003    3033 connectlowest&amp;gt; Distance:     0.84958725 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_10040      1228    1251 connectlowest&amp;gt; Distance:     1.01130262 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_14191      3176    3209 connectlowest&amp;gt; Distance:     1.07183817 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3194    3209 connectlowest&amp;gt; Distance:     1.81036433 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3193    3187 connectlowest&amp;gt; Distance:     1.88481550 &amp;lt;/font&amp;gt; &lt;br /&gt;
 &amp;lt;font color=&amp;quot;#33FF00 &amp;quot;&amp;gt;09000_14775      3450    3457 connectlowest&amp;gt; Distance:     2.41249957 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3203    3187 connectlowest&amp;gt; Distance:     2.42913148 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;09000_14191      3177    3209 connectlowest&amp;gt; Distance:     2.45715932 &amp;lt;/font&amp;gt;&lt;br /&gt;
 &amp;lt;font color=&amp;quot;#ff0000 &amp;quot;&amp;gt;00003_02150      2572    2663 connectlowest&amp;gt; Distance:     2.82747537 &amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
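The green/red selection above amounts to a greedy pass over the sorted list: a line is kept only if its two sub-databases are not already joined, directly or indirectly, by an earlier kept line. A minimal sketch of this logic (the function name is hypothetical; the six-column format shown above is assumed):&lt;br /&gt;

```shell
select_connections () {
    # Greedy pass: keep a proposed connection only if its two sub-databases
    # (encoded in column 1 as A_B) are not yet in the same connected group.
    awk '
    function find(x) { while (parent[x] != x) x = parent[x]; return x }
    {
        split($1, pair, "_")
        a = pair[1]; b = pair[2]
        if (!(a in parent)) parent[a] = a
        if (!(b in parent)) parent[b] = b
        ra = find(a); rb = find(b)
        if (ra != rb) { parent[ra] = rb; print }   # keep; otherwise superfluous
    }' "$1"
}
```

Applied to the ten lines above, this keeps exactly the green lines.&lt;br /&gt;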
Using these principles, the sub-databases were therefore connected as follows. &lt;br /&gt;
&lt;br /&gt;
The first line of lowest_to_highest_distances_tot:&lt;br /&gt;
&lt;br /&gt;
 00003_02150      2657    2663 connectlowest&amp;gt; Distance:     0.39352080&lt;br /&gt;
&lt;br /&gt;
[[Image:first step.png|230px|center]]&lt;br /&gt;
&lt;br /&gt;
After next line:&lt;br /&gt;
&lt;br /&gt;
 09000_12405      3003    3033 connectlowest&amp;gt; Distance:     0.84958725&lt;br /&gt;
&lt;br /&gt;
[[Image:second step.png|250px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_10040      1228    1251 connectlowest&amp;gt; Distance:     1.01130262&lt;br /&gt;
&lt;br /&gt;
[[Image:third step.png|400px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_14191      3176    3209 connectlowest&amp;gt; Distance:     1.07183817&lt;br /&gt;
&lt;br /&gt;
[[Image:fourth step.png|400px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_14775      3450    3457 connectlowest&amp;gt; Distance:     2.41249957&lt;br /&gt;
&lt;br /&gt;
[[Image:fifth step.png|420px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_00164       134     159 connectlowest&amp;gt; Distance:     5.00815137&lt;br /&gt;
&lt;br /&gt;
[[Image:sixth step.png|420px|center]]&lt;br /&gt;
&lt;br /&gt;
 06061_06913       402     296 connectlowest&amp;gt; Distance:     5.01723232&lt;br /&gt;
&lt;br /&gt;
[[Image:seventh step.png|420px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_07339      3893    3899 connectlowest&amp;gt; Distance:     5.68186344&lt;br /&gt;
&lt;br /&gt;
[[Image:eighth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 06610_07339       135     137 connectlowest&amp;gt; Distance:     7.04874883&lt;br /&gt;
&lt;br /&gt;
[[Image:ninth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_00303       670     811 connectlowest&amp;gt; Distance:    24.67395896&lt;br /&gt;
&lt;br /&gt;
[[Image:tenth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 06061_06274       257     459 connectlowest&amp;gt; Distance:    31.59473639&lt;br /&gt;
&lt;br /&gt;
[[Image:eleventh step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 09000_09969      1317    1946 connectlowest&amp;gt; Distance:    40.39286979&lt;br /&gt;
&lt;br /&gt;
[[Image:twelfth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_06061      4149    4481 connectlowest&amp;gt; Distance:    44.36489142&lt;br /&gt;
&lt;br /&gt;
[[Image:thirteenth step.png|450px|center]]&lt;br /&gt;
&lt;br /&gt;
 00003_09000      7369    2632 connectlowest&amp;gt; Distance:    71.98718590&lt;br /&gt;
&lt;br /&gt;
[[Image:fourteenth step.png|700px|center]]&lt;br /&gt;
&lt;br /&gt;
Connection attempts are therefore chosen so that the distances involved are as small as possible. It should be possible to connect these 15 sub-databases using just 14 connections. Some [[OPTIM]] runs may fail for whatever reason (although this is unlikely), in which case the user can manually look through the lowest_to_highest_distances_tot file to find the next most suitable pair of minima from the two sub-databases in question.&lt;br /&gt;
&lt;br /&gt;
The diagrams above list the 14 recommended connections. The script therefore creates 14 folders in which these connections are attempted, named 00003_02150, 09000_12405 and so on.&lt;br /&gt;
&lt;br /&gt;
Within each sub-folder, a [[PATHSAMPLE]] calculation using &#039;&#039;&#039;RETAINSP&#039;&#039;&#039; is performed first. This ensures that only the minima belonging to the two sub-databases we are trying to connect are included (e.g. 00003 and 02150 in the case of the sub-folder 00003_02150), and that the numbering scheme in min.data is consistent with that used when &#039;&#039;&#039;CONNECTUNC LOWESTTEST&#039;&#039;&#039; was run. Once this is done, the pathdata file is altered to replace &#039;&#039;&#039;RETAINSP&#039;&#039;&#039; with the keywords &#039;&#039;&#039;CONNECTPAIRS connectfile&#039;&#039;&#039; and &#039;&#039;&#039;CYCLES 1&#039;&#039;&#039;.&lt;br /&gt;
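The pathdata edit between the two runs can be sketched as follows (a hypothetical helper, not the exact code in &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039;; it assumes GNU sed and keyword lines without leading whitespace, as in the pathdata example later in this section):&lt;br /&gt;

```shell
switch_to_connectpairs () {
    # Comment out RETAINSP and replace the commented-out CYCLES line with
    # the CONNECTPAIRS / CYCLES pair, editing the given pathdata in place.
    sed -i -e 's/^RETAINSP/! RETAINSP/' \
           -e 's/^! CYCLES 1/CONNECTPAIRS connectfile\nCYCLES 1/' "$1"
}
```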
&lt;br /&gt;
&#039;&#039;&#039;CONNECTPAIRS&#039;&#039;&#039; chooses specific minima from min.data for connection attempts. It requires an argument specifying a file which lists the minima we want to connect. Typically, this argument is &#039;&#039;&#039;connectfile&#039;&#039;&#039;, and therefore we require a file called &#039;&#039;&#039;connectfile&#039;&#039;&#039;, listing the minima we wish to connect according to their line numbers in min.data. As shown above (and copied immediately below), for the connection we wish to make between the sub-databases 00003 and 02150, the minima of interest are 2657 and 2663.&lt;br /&gt;
&lt;br /&gt;
 00003_02150      2657    2663 connectlowest&amp;gt; Distance:     0.39352080&lt;br /&gt;
&lt;br /&gt;
Therefore, &#039;&#039;&#039;connectfile&#039;&#039;&#039;, upon opening, should appear simply as:&lt;br /&gt;
&lt;br /&gt;
 2657 2663&lt;br /&gt;
&lt;br /&gt;
Thus, an [[OPTIM]] job is launched which tries to connect minima 2657 (in the 00003 sub-database) and 2663 (in the 02150 sub-database). Because these minima are separated by a very short distance of 0.394, this will hopefully (though it is not foolproof!) be a simple connection to make.&lt;br /&gt;
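For illustration, &#039;&#039;&#039;connectfile&#039;&#039;&#039; can be produced from a selected line by printing its second and third columns (a sketch only; the real script handles this internally):&lt;br /&gt;

```shell
# One selected line from lowest_to_highest_distances_tot, as an example.
line='00003_02150      2657    2663 connectlowest> Distance:     0.39352080'
# Columns 2 and 3 are the two minima to connect; write them to connectfile.
echo "$line" | awk '{ print $2, $3 }' > connectfile
```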
&lt;br /&gt;
In this example, following the use of the script, we should end up with 14 folders, each with a connected path between two sub-databases.&lt;br /&gt;
&lt;br /&gt;
== How to execute steps 2 and 3? ==&lt;br /&gt;
&lt;br /&gt;
Steps 2 and 3 above clearly involve a number of operations. A script has been written which does all of this (i.e. chooses which connections to make and then attempts them) for you. This script is called &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039; and can be found in the svn at &#039;&#039;&#039;~/svn/SCRIPTS/PATHSAMPLE/connecting_sub_databases&#039;&#039;&#039;. An annotated version of this script is also available.&lt;br /&gt;
&lt;br /&gt;
The following files are required in the folder from which you launch this script (assuming you are using the [[AMBER]] interface):&lt;br /&gt;
&lt;br /&gt;
coords.inpcrd, coord.mdcrd, coords.prmtop, min.in, min.A, min.B, min.data, odata.connect, pathdata, perm.allow, points.min, points.ts, ts.data, untrap_sub_script, relevant_connected_*&lt;br /&gt;
&lt;br /&gt;
where relevant_connected_* are all of the sub-database files found in step 1 (this could be any number of files). Note that untrap_sub_script just happens to be the name of the sub-script I used for my calculations; you can either rename your sub-script to untrap_sub_script or alter &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039; before launching your calculations. It is also essential that your pathdata file is correctly formatted before launching the script. The two lines&lt;br /&gt;
&lt;br /&gt;
 RETAINSP&lt;br /&gt;
 ! CYCLES 1&lt;br /&gt;
&lt;br /&gt;
must be included. An example of a pathdata file I used is as follows:&lt;br /&gt;
&lt;br /&gt;
 EXEC           /home/adk44/bin/CUDAOPTIM_ppt_final_210918&lt;br /&gt;
 CPUS           1&lt;br /&gt;
 NATOMS         5430&lt;br /&gt;
 SEED           1&lt;br /&gt;
 DIRECTION      AB&lt;br /&gt;
 CONNECTIONS    1&lt;br /&gt;
 TEMPERATURE    0.592&lt;br /&gt;
 PLANCK         9.536D-14&lt;br /&gt;
 &lt;br /&gt;
 PERMDIST&lt;br /&gt;
 ETOL           8D-4&lt;br /&gt;
 GEOMDIFFTOL    0.2D0&lt;br /&gt;
 ITOL           0.1D0&lt;br /&gt;
 NOINVERSION&lt;br /&gt;
 NOFRQS&lt;br /&gt;
 &lt;br /&gt;
 RETAINSP&lt;br /&gt;
 ! CYCLES 1&lt;br /&gt;
 &lt;br /&gt;
 AMBER12&lt;br /&gt;
&lt;br /&gt;
Make sure that this pathdata file points to a valid binary and that the number of atoms is consistent with the system you are examining.&lt;br /&gt;
&lt;br /&gt;
== Step 4: Merging the sub-databases ==&lt;br /&gt;
&lt;br /&gt;
Following the use of the script, we should end up with a number of folders, each containing a connected path between two sub-databases. The total number of folders equals the number of connections which needed to be made in order to connect all of the sub-databases we were interested in.&lt;br /&gt;
&lt;br /&gt;
Assuming all of the connections were successfully made, all we need to do now is to merge these new connected databases together. This can be achieved using the &#039;&#039;&#039;MERGEDB&#039;&#039;&#039; keyword in [[PATHSAMPLE]], as is outlined [http://www-wales.ch.cam.ac.uk/PATHSAMPLE.2.1.doc/node5.html here].&lt;br /&gt;
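The merge is driven by a pathdata line of the following form, where the argument is the path to the folder holding the database being absorbed (the folder name here is just the example from above); see the linked documentation for the details of how &#039;&#039;&#039;MERGEDB&#039;&#039;&#039; matches up stationary points:&lt;br /&gt;

```text
MERGEDB 00003_02150
```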
&lt;br /&gt;
== Summary for those who don&#039;t like screeds of writing ==&lt;br /&gt;
&lt;br /&gt;
It is possible that your [[PATHSAMPLE]] database contains many sub-databases which are not connected to one another. If so, a lot of information is lost when you construct a disconnectivity graph.&lt;br /&gt;
&lt;br /&gt;
To retrieve this information, we should therefore connect these sub-databases. This can be done efficiently by first identifying which minima contained in respective sub-databases are closest to each other, and then trying to connect these first.&lt;br /&gt;
&lt;br /&gt;
To identify the sub-databases present in your database, launch the script &#039;&#039;&#039;find_connections.sh&#039;&#039;&#039; using &#039;&#039;&#039;disconnectionDPS&#039;&#039;&#039; with the &#039;&#039;&#039;PRINTCONNECTED&#039;&#039;&#039; keyword included and the argument to &#039;&#039;&#039;CONNECTMIN&#039;&#039;&#039; initially set to &#039;&#039;&#039;1&#039;&#039;&#039;. This produces a set of &#039;&#039;&#039;relevant_connected_*&#039;&#039;&#039; files, each listing the minima present in one sub-database.&lt;br /&gt;
&lt;br /&gt;
Following this step, the script &#039;&#039;&#039;connect_sub_databases.sh&#039;&#039;&#039; is launched. It determines the distances between candidate minima in all of the sub-databases, then prioritises the calculations so as to connect the maximum number of sub-databases in the minimum possible number of steps, with the shortest-distance connections being attempted first.&lt;br /&gt;
&lt;br /&gt;
The outcome of this latter script is a set of folders, each providing a connected path between two sub-databases. The data from these folders can be merged into the original database using the [[PATHSAMPLE]] keyword &#039;&#039;&#039;MERGEDB&#039;&#039;&#039;, thus connecting all of the previously unconnected sub-databases.&lt;br /&gt;
&lt;br /&gt;
This methodology is particularly useful for cases where you have a protein with a cofactor and various sites within a pocket that you think the cofactor can attach to. It provides an efficient way to connect these sites, each having already been sampled.&lt;br /&gt;
&lt;br /&gt;
--adk44 14.30, 16 May 2019 (BST)&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Main_Page&amp;diff=1122</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Main_Page&amp;diff=1122"/>
		<updated>2019-05-10T15:54:19Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;Welcome to the Wales group software wiki&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For info on compiling our code from the source tarball, see the [[Compiling Wales Group code using CMake | cmake]] page.&lt;br /&gt;
&lt;br /&gt;
== Comprehensive Contents Page ==&lt;br /&gt;
Please click [[Comprehensive Contents Page | here]] for a comprehensive, organised list of all of the pages that comprise this wiki, plus some other useful links.&lt;br /&gt;
&lt;br /&gt;
= Group Software =&lt;br /&gt;
All of our software is freely available under the [http://www.gnu.org/licenses/gpl.html GPL]. However, in some cases we interface to commercial codes such as [http://ambermd.org/ AMBER] and [http://www.charmm.org/ CHARMM], and the corresponding interface files are absent from the source tarball. If you do have a license, please contact Professor Wales for access to a full version. We work on three separate programs:&lt;br /&gt;
&lt;br /&gt;
*[[GMIN]]: A program for finding global minima and calculating thermodynamic properties from basin-sampling.&lt;br /&gt;
GMIN employs the basin-hopping algorithm described by Wales and Doye (&#039;&#039;J. Phys. Chem. A, 101, 5111, 1997&#039;&#039;[http://pubs.acs.org/doi/abs/10.1021/jp970984n]) to locate global minima on a potential energy surface. Many potentials are included. The latest version also includes an implementation of basin-sampling as described in T.V. Bogdan, D.J. Wales and F. Calvo (&#039;&#039;J. Chem. Phys., 124, 044102, 2006&#039;&#039;[http://www-wales.ch.cam.ac.uk/pdf/JCP.124.044102.2006.pdf]).&lt;br /&gt;
&lt;br /&gt;
*[[OPTIM]]: A program for optimizing geometries and calculating reaction pathways&lt;br /&gt;
The geometry optimization scheme in OPTIM is based on eigenvector-following and was originally built from the optimizer in the ACES package written by Prof. John F. Stanton. OPTIM has analytic first and second derivatives coded for dozens of empirical potentials, and can also treat systems involving periodic boundary conditions and solve general optimization problems such as least squares fits. &lt;br /&gt;
&lt;br /&gt;
*[[PATHSAMPLE]]: A driver for OPTIM to create stationary point databases using discrete path sampling and perform kinetic analysis.&lt;br /&gt;
&lt;br /&gt;
= Helpful Software = &lt;br /&gt;
We have developed many scripts within the group to use in conjunction with our software - all of which are provided here. We also use other programs which are linked below:&lt;br /&gt;
&lt;br /&gt;
*[[DisconnectionDPS]]&lt;br /&gt;
&lt;br /&gt;
= Contact details =&lt;br /&gt;
&lt;br /&gt;
If you have something to add to this wiki, or would like to contribute code, please get in touch with Professor Wales.&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
	<entry>
		<id>https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Main_Page&amp;diff=1117</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wikis.ch.cam.ac.uk/ro-walesdocs/wiki/index.php?title=Main_Page&amp;diff=1117"/>
		<updated>2019-05-09T13:08:35Z</updated>

		<summary type="html">&lt;p&gt;Dw34: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;&#039;&#039;&#039;Welcome to the Wales group software wiki!&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For info on compiling our code from the source tarball, see the [[Compiling Wales Group code using CMake | cmake]] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Group Software =&lt;br /&gt;
All of our software is freely available under the [http://www.gnu.org/licenses/gpl.html GPL]. However, in some cases we interface to commercial codes such as [http://ambermd.org/ AMBER] and [http://www.charmm.org/ CHARMM], and the corresponding interface files are absent from the source tarball. If you do have a license, please contact Professor Wales for access to a full version. We work on three separate programs:&lt;br /&gt;
&lt;br /&gt;
*[[GMIN]]: A program for finding global minima and calculating thermodynamic properties from basin-sampling.&lt;br /&gt;
GMIN employs the basin-hopping algorithm described by Wales and Doye (&#039;&#039;J. Phys. Chem. A, 101, 5111, 1997&#039;&#039;[http://pubs.acs.org/doi/abs/10.1021/jp970984n]) to locate global minima on a potential energy surface. Many potentials are included. The latest version also includes an implementation of basin-sampling as described in T.V. Bogdan, D.J. Wales and F. Calvo (&#039;&#039;J. Chem. Phys., 124, 044102, 2006&#039;&#039;[http://www-wales.ch.cam.ac.uk/pdf/JCP.124.044102.2006.pdf]).&lt;br /&gt;
&lt;br /&gt;
*[[OPTIM]]: A program for optimizing geometries and calculating reaction pathways&lt;br /&gt;
The geometry optimization scheme in OPTIM is based on eigenvector-following and was originally built from the optimizer in the ACES package written by Prof. John F. Stanton. OPTIM has analytic first and second derivatives coded for dozens of empirical potentials, and can also treat systems involving periodic boundary conditions and solve general optimization problems such as least squares fits. &lt;br /&gt;
&lt;br /&gt;
*[[PATHSAMPLE]]: A driver for OPTIM to create stationary point databases using discrete path sampling and perform kinetic analysis.&lt;br /&gt;
&lt;br /&gt;
= Helpful Software = &lt;br /&gt;
We have developed many scripts within the group to use in conjunction with our software - all of which are provided here. We also use other programs which are linked below:&lt;br /&gt;
&lt;br /&gt;
*[[DisconnectionDPS]]&lt;br /&gt;
&lt;br /&gt;
= Contact details =&lt;br /&gt;
&lt;br /&gt;
If you have something to add to this wiki, or would like to contribute code, please get in touch with Professor Wales.&lt;/div&gt;</summary>
		<author><name>Dw34</name></author>
	</entry>
</feed>