Tinker with Molecular Dynamics for Fun and Profit
Molecular dynamics computations make up a very large proportion of the computer cycles used in science today. If you remember your chemistry or thermodynamics, you'll recall that all of the calculations you made treated the material in question as a homogeneous mass, where each part of the mass simply has the average value of the relevant properties. Under average conditions, this tends to be adequate. But more and more scientists are running into conditions on the fringes of where those kinds of generalizations can be applied.
Enter molecular dynamics, or MD. With MD, you have to move down almost to the lowest level of matter that we know of, the level of atoms and molecules. At this level, most of the forces you are dealing with are electrical in nature. Atoms and molecules interact with each other through their electron clouds. Several packages are available for doing this type of work, such as GROMACS and GAMESS. In this article, though, I take a look at TINKER.
Unlike most of the software I've covered in this space, TINKER isn't available in the package systems of most distributions. This means you will have to go out and download it from the main Web site. There are binary files for Linux (32-bit and 64-bit), Mac OS X and Windows (32-bit and 64-bit). Although these should work in many cases, you probably will want to download the source code and build it with the exact options you want. You can download either a tarball or a zip file containing the source code for TINKER.
Once it is unpacked, change directory to the tinker subdirectory. There you will find a number of subdirectories named after the various operating system options available. Because you're using Linux, move into the linux subdirectory, where you will find a series of subdirectories for each of a number of possible compilers. For this article, I chose to use the gfortran compiler. Inside the gfortran subdirectory, you will find a number of scripts to handle each of the build steps. The first step is to run compile.make to build all of the required object files. These scripts need to be run from the location where the source code resides, so once you know which set of scripts you are going to use, move over to the subdirectory tinker/source. From here, I ran ../linux/gfortran/compile.make to compile all of the source code I needed into object files. The next step is to combine these into a single library file by running ../linux/gfortran/library.make. The last step is to link against the system libraries to create the final executables, which is done by running ../linux/gfortran/link.make.
You now will have a full set of executable files, recognizable by filenames that end with .x. These executable files then can be moved to any other location to make them easier to use.
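Assuming you downloaded the tarball and are using the gfortran script set described above, the whole build boils down to a few commands. This is a sketch: the archive name will vary with the version you grabbed, and the destination directory is just an example.

```shell
# Unpack the source (archive name is a placeholder for your download)
tar xzf tinker.tar.gz
cd tinker/source

# The build scripts must be run from the source directory
../linux/gfortran/compile.make    # compile everything into object files
../linux/gfortran/library.make    # bundle the objects into a single library
../linux/gfortran/link.make       # link against system libraries to make the .x files

# Optionally collect the executables somewhere convenient
mkdir -p ~/tinker/bin
mv *.x ~/tinker/bin
```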
You should find that 61 different executable files have been created. Each of these executables handles some separate task in the analyses that TINKER is designed to do. I look at only a few different executables here to give you a flavor of the types of tasks that you can do.
The first is analyze.x. This executable will ask for a structure file (in the TINKER .xyz file format) and the type of analysis to run. The output you get back includes the following items: the total potential energy of the system; the breakdown of the energy by potential function type or over individual atoms; the computation of the total dipole moment and its components, moments of inertia and radius of gyration; the listing of the parameters used to compute selected interaction energies; and the energies associated with specified individual interactions.
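To give a sense of the input, here is a minimal TINKER .xyz file describing a single water molecule. The first line gives the atom count and a title; each atom line lists an index, an element symbol, Cartesian coordinates, a force-field atom type number and the indices of bonded atoms. The atom type numbers shown here are placeholders—the correct values depend on which parameter file you use:

```
     3  water
     1  O     0.000000   0.000000   0.000000    1    2    3
     2  H     0.957200   0.000000   0.000000    2    1
     3  H    -0.240000   0.927000   0.000000    2    1
```

Feeding a file like this to analyze.x and selecting the energy analysis at the prompt produces the total potential energy and its breakdown by potential function type.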
The next executable, dynamic.x, performs a molecular dynamics or stochastic dynamics computation. On an initial computation, it will take a .xyz structure file as input. If a previous computation was check-pointed, you can use the resultant dynamics trajectory file (or restart file) as input instead. Both analyze.x and dynamic.x are deterministic in their methods.
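TINKER programs answer their prompts either interactively or from arguments on the command line. As a sketch—the file name is hypothetical, and the argument order here is my assumption based on the order in which dynamic.x prompts for its inputs—a short constant-temperature run might look like this:

```shell
# water.xyz: input structure (hypothetical file name)
# 1000 steps, 1.0 fs time step, coordinate dump every 0.1 ps,
# ensemble choice 2 (constant temperature), 298 K target temperature
./dynamic.x water.xyz 1000 1.0 0.1 2 298.0
```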
In contrast, monte.x provides a way to apply Monte Carlo minimization methods to molecular dynamics. It takes a random step for either a single atom or a single torsional angle, then applies the Metropolis sampling method to decide whether to accept that step.
The scan.x executable takes a .xyz structure file as input and finds an initial local minimum. From this first local minimum, the program searches out along normal modes to try to find other minima. Once it has searched along each of these modes, it terminates.
A number of these 61 executables are support utility programs that do non-computational work. For example, intxyz.x and xyzint.x convert back and forth between the .xyz structure file format and the .int internal coordinates file format.
For all of these programs, the specific details of how they work are determined by a keyword file (with a filename ending in .key). TINKER uses a huge number of keywords to decide the specifics of any particular run. For example, you could set a single bond stretching parameter with the keyword BOND. The keyword CHARGE will set a single atomic partial charge electrostatic parameter. A full listing of the keywords is available in the TINKER documentation.
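A keyword file is just plain text, with one keyword (and any associated values) per line. A minimal sketch follows—the parameter file path is a placeholder for wherever your force-field .prm files actually live:

```
parameters     ../params/amber99.prm
integrator     verlet
thermostat     berendsen
randomseed     123456789
verbose
```

The parameters line points TINKER at the force-field parameter file to use for the run; the remaining keywords select the integration algorithm, the thermostat method, a fixed random seed for reproducibility and more detailed output.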
Joey Bernard has a background in both physics and computer science. This serves him well in his day job as a computational research consultant at the University of New Brunswick. He also teaches computational physics and parallel programming.