Pele

pele is a package written mostly in Python that reproduces some of the core functionality of GMIN, OPTIM, and PATHSAMPLE. It is not meant as a replacement for them, but rather as a playground for quickly testing new ideas, finding optimal parameters for the various routines in GMIN and OPTIM, and easily visualizing your system. For most systems you will still want to do production runs in the Fortran code: it will be faster and, thanks to years of development, it has more features and is more robust.

Code

The source code repository is hosted publicly on github.com.

https://github.com/pele-python/pele

We use the version control system git, a more modern and flexible alternative to subversion. Here is a gentle introduction to git for svn users.

Documentation

The documentation is also hosted on GitHub. We have tried to be good about documenting everything, so we hope you will find the documentation fairly thorough.

http://pele-python.github.io/pele/

Core Routines

The core routines are the same as in GMIN and OPTIM: basinhopping to find the global minimum and other relevant minima, and double-ended connect runs to find the transition state pathways between minima.
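
As a rough illustration of how these routines are driven from Python, the sketch below uses the Lennard-Jones cluster system that ships with pele. The method names follow the examples in the pele documentation, but treat this as a starting point and check the documentation for the exact signatures.

from pele.systems import LJCluster

# a 13-atom Lennard-Jones cluster; minima and transition states are collected
# in a database (in memory here; pass a filename to make it persistent)
system = LJCluster(13)
db = system.create_database()

# basinhopping: find the global minimum and other low-lying minima
bh = system.get_basinhopping(database=db)
bh.run(100)

# double-ended connect: find a transition state pathway between two minima
min1, min2 = db.minima()[:2]
connect = system.get_double_ended_connect(min1, min2, db)
connect.connect()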

Visualization

pele has a built-in GUI which is very useful for visualizing your system. You can launch basinhopping jobs, select two minima, and launch double-ended connect jobs. You can visualize what is happening during an NEB run, a transition state search, or a minimization. You can watch the energy of an NEB relax, which can be really helpful for determining optimal parameters. You can also generate interactive disconnectivity graphs, which can be used to determine visually whether parts of your system have an unphysically high barrier, and to launch new double-ended connect jobs to try to find better paths.

Unfortunately, since this is the part of the code that is changing most rapidly, it is also the least documented. See http://pele-python.github.io/pele/gui.html or the files in examples/gui/ for a good place to get started. On the plus side, once you get it running it is fairly self-explanatory.
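
As a rough sketch of what the scripts in examples/gui/ look like, the GUI can also be launched from a short Python script. The run_gui entry point and its db keyword are taken from the pele documentation, but the exact arguments may differ between versions, so check examples/gui/ for the canonical usage.

from pele.systems import LJCluster
from pele.gui import run_gui

# open the GUI for a 13-atom Lennard-Jones cluster; minima and transition
# states found through the GUI are stored in the sqlite database
system = LJCluster(13)
run_gui(system, db="lj13.sqlite")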

A very basic form of the GUI can be run directly from the command line and will read all relevant information from the PATHSAMPLE min.data and ts.data files. Use the script in the group svn `svn/SCRIPTS/PATHSAMPLE/pele_gui.py` to launch it.

This basic GUI has no 3D visualization of the system; however, you can tell the GUI how to draw your system simply by implementing the `draw()` method in the system class. For examples of what this looks like, see `pele/systems/ljcluster.py`, `pele/systems/bljcluster.py`, and `pele/amber/amberSystem.py`.
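
As a very rough sketch of what such a method can look like for a simple atomic cluster, the snippet below places an OpenGL sphere at each atom position. The (coords, index) signature and the use of two colours to distinguish the two structures shown during a connect run follow the pattern in `pele/systems/ljcluster.py`, but the class name and OpenGL details here are purely illustrative; the files listed above are the authority.

import numpy as np
from OpenGL import GL, GLUT

class MyClusterSystem(object):   # hypothetical; in practice a pele system class
    def draw(self, coords, index):
        """Called by the GUI; index is 1 or 2 so that two structures
        can be drawn in different colours."""
        coords = np.reshape(coords, (-1, 3))
        coords = coords - coords.mean(axis=0)            # centre the cluster
        color = [1.0, 0.0, 0.0, 1.0] if index == 1 else [0.5, 0.5, 1.0, 1.0]
        GL.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT_AND_DIFFUSE, color)
        for x, y, z in coords:
            GL.glPushMatrix()
            GL.glTranslatef(x, y, z)
            GLUT.glutSolidSphere(0.5, 20, 20)            # radius, slices, stacks
            GL.glPopMatrix()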

For AMBER there is a ready-made script for launching the GUI with visualization in the group svn: `svn/SCRIPTS/PATHSAMPLE/pele_gui_amber.py`.

Disconnectivity Graph

Disconnectivity graphs have been implemented in pele by rewriting DisconnectDPS in Python. Disconnectivity graphs play an important role in the GUI, but you can also create them from the command line, either from pele database files or from PATHSAMPLE min.data and ts.data files. See the Python script `pele/scripts/make_disconnectivity_graph.py` for details (pass the flag -h for options).
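
If you would rather build the graph from your own Python script than use the command-line tool, something along the following lines should work. The database2graph and DisconnectivityGraph names come from pele.utils.disconnectivity_graph, but check the script above for the canonical usage and the available options.

import matplotlib.pyplot as plt
from pele.storage import Database
from pele.utils.disconnectivity_graph import DisconnectivityGraph, database2graph

# load an existing pele database and turn it into a graph of minima and
# transition states
db = Database("lj13.sqlite", createdb=False)
graph = database2graph(db)

# build and plot the disconnectivity graph
dg = DisconnectivityGraph(graph)
dg.calculate()
dg.plot()
plt.savefig("disconnectivity_graph.pdf")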

Running pele on the group clusters

To compile pele on sinister or dexter, you will need to check out pele into your home directory as normal, then ensure that you have the following lines in your .bashrc file:

module load anaconda/python2/2.2.0
module load cmake/3.0.0
export PYTHONPATH=$HOME/workspace/pele   # or whatever location you have used to install pele
export PYTHONUSERBASE=/myappenv

To compile pele in place for development, which is probably what most people want to do, go to ~/workspace/pele and run

python setup_with_cmake.py build_ext -i 

Using the Intel compilers and/or multiple cores will give a faster compilation. Make sure you have the gcc/4.8.3 and icc/64/2015/3/187 modules loaded, then run:

python setup_with_cmake.py build_ext -i -j 7 --compiler=intel --fcompiler=intelem

To make sure everything is rebuilt when you recompile, first run the following in the pele directory:

rm -r build/ cythonize.dat

Parallel pele on the clusters

If you are using pele for production runs and need to use the full potential of the clusters by running jobs in parallel, you will need to use the "concurrent" scheme; at the time of writing there is an example of how to use this in `pele/examples/parallel_pele`. You set up a server somewhere (probably on your department workstation) and use your Torque script to create worker jobs that communicate their results back to the server.
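
Very roughly, the scheme in `pele/examples/parallel_pele` splits into a server script and a worker script along the lines sketched below. The ConnectServer and ConnectWorker names come from pele.concurrent, but the host name, port, and server name here are placeholders; defer to the example in the repository for the exact arguments.

# server.py -- run this once, e.g. on your workstation
from pele.systems import LJCluster
from pele.concurrent import ConnectServer

system = LJCluster(13)
db = system.create_database("lj13.sqlite")
server = ConnectServer(system, db, server_name="connect_example",
                       host="my.workstation.address", port=11567)
server.run()

# worker.py -- submit many of these through Torque; each worker asks the
# server for a pair of minima to connect and sends its results back
from pele.systems import LJCluster
from pele.concurrent import ConnectWorker

uri = "PYRO:connect_example@my.workstation.address:11567"
worker = ConnectWorker(uri, system=LJCluster(13))
worker.run()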

This requires the Pyro4 package, which does not come pre-installed with the anaconda distribution, so you will need to install it locally:

(with the anaconda module loaded):
mkdir ~/Soft   # or whatever you want to call your local install folder
pip install -t Soft -b /sharedscratch/$USER/tmp pyro4

Then add the following to your .bashrc file:

export PYTHONPATH=$HOME/Soft/pyro4:$PYTHONPATH
export PYRO_SERIALIZERS_ACCEPTED=serpent,json,marshal,pickle
export PYRO_SERVERTYPE=multiplex
export PYRO_LOGLEVEL=DEBUG
export PYRO_LOGFILE=pyro.log

This should be everything you need to compile pele and run parallel jobs on sinister or dexter.