Numerical Relativity with the Einstein Toolkit
This post finds us at the cutting edge of physics: numerical general relativity. Because we haven't perfected mind-to-mind transmission of information, we won't actually be able to cover in any real detail how this all works. If you are interested, you can check out Wikipedia or Living Reviews. Once you've done that, and maybe taken a few graduate courses too, you can go ahead and read this article.
General relativity, along with quantum mechanics, describes the world as we know it at its most fundamental level. The problem is that only a very small set of exact solutions to Einstein's equations is known, and they are all solutions for idealized situations. Here are the most common ones:
- Schwarzschild: static, spherically symmetric.
- Reissner-Nordström: static, spherically symmetric, charged.
- Kerr: rotating, axially symmetric.
- Kerr-Newman: rotating, axially symmetric, charged.
In order to study more realistic situations, like a pair of black holes orbiting each other, you need to solve Einstein's equations numerically. Traditionally, each researcher has either written this code from scratch or inherited some previous work from another researcher. But, now there is a project everyone can use, the Einstein Toolkit. The project started out as Cactus Code. Cactus Code is a framework consisting of a central core (called the flesh) and a number of plugins (called thorns). Cactus Code provides a generic framework for scientific computing in any number of fields. The Einstein Toolkit is a fork of Cactus Code with only the thorns you need for numerical relativity.
General relativity is a theory of gravitation, proposed by Einstein, where time is to be considered simply another dimension, like the three spatial ones. So the three space and one time dimensions together give you space-time. Numerical relativity (at least in one of the more common techniques) re-introduces the break between space and time. The basic idea is that you describe space at one instant in time, and then describe with equations how that space changes moving from one time to another. This technique was introduced by Arnowitt, Deser and Misner, and is called the ADM formalism. The code in the Einstein Toolkit uses a variation on this technique.
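Concretely, the ADM split decomposes the four-dimensional metric into a spatial metric γ_ij on each time slice, a lapse function α (how much proper time elapses between slices) and a shift vector β^i (how the spatial coordinates are carried from one slice to the next). The line element then takes the standard form:

```latex
ds^2 = -\alpha^2\,dt^2 + \gamma_{ij}\left(dx^i + \beta^i\,dt\right)\left(dx^j + \beta^j\,dt\right)
```

Evolving the geometry then amounts to specifying γ_ij on an initial slice and integrating its evolution equations forward in t.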
The toolkit code is available through Subversion and Git. To make checkouts and updates easier on end users, the development team has provided a script called GetComponents. This script expects to use git, so you need git installed on your system. To get it, you can wget it from:
wget http://svn.cactuscode.org/Utilities/branches/ET_2010_11/Scripts/GetComponents
chmod 777 GetComponents
Although there are several options to this script, most people simply will want to use it to grab the latest code for the Einstein Toolkit:
./GetComponents -a http://svn.einsteintoolkit.org/manifest/branches/ET_2010_11/einsteintoolkit.th
This downloads all of the parts you need to get a running system in the subdirectory Cactus. To update the code, you simply need to run:
./GetComponents -a -u ./einsteintoolkit.th
You can do it this way because the file einsteintoolkit.th actually is downloaded to the current directory by the GetComponents script.
This is pretty heavy-duty number crunching, so you likely will need to make sure you have several other packages installed on your system. You will need a C compiler, a C++ compiler and a FORTRAN compiler. You'll probably want to install MPI as well. File input and output is available in ASCII, but you may want to consider HDF5 for more structured data. Some thorns also may need some specialized libraries, such as LAPACK. This depends on which thorns you actually are using.
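As a quick sanity check before configuring anything, you can verify that the compilers are on your PATH. This is only a sketch: package names vary across distributions, and gcc, g++ and gfortran are the GNU names — your system may provide different compilers.

```shell
# Check that a C, C++ and FORTRAN compiler can be found on the PATH.
# This does not check versions or libraries (MPI, HDF5, LAPACK) --
# it only confirms the commands exist.
checked=0
for tool in gcc g++ gfortran; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
  checked=$((checked + 1))
done
```

If any compiler is reported missing, install it through your distribution's package manager before proceeding.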
The way the Einstein Toolkit is set up, you create and use a configuration for a particular executable. This way, you can have multiple configurations, which use different thorn combinations, all built from the same core source code. Creating a new configuration is as simple as typing make configname, where configname is the name you give to the configuration. For the rest of this article, let's play with a configuration called config1. So you would type make config1 and get a new subdirectory called config1 containing all the required files. Don't forget that this needs to be done from within the Cactus directory that was created by the GetComponents script. Once this initialization is done, you can execute several different commands against this configuration. An example would be make config1-configinfo, which prints out the configuration options for this particular configuration.
Figure 1. Example Configuration Options
The first step is making sure everything is configured properly. When you created your new configuration above, the config command was run for you. If you decide that you actually wanted to include some other options, you can rerun the config command with make config1-config <options>, where <options> are the options you want to set. These options are in the form <name>=<value>. An example is MPI=MPICH, if you want to compile in support for MPICH parallelism. For now, you can just enter the following to do a basic configuration:

make config1-config MPI=MPICH
If you ever want to start over, you can try make config1-clean or make config1-realclean. If you are done with this particular configuration, you can get rid of it completely with make config1-delete.
Now that everything is configured exactly the way you want it, you should go ahead and build it. This is done simply with the command make config1. Now, go off and have a cup of your favourite beverage while your machine is brought to its knees by the compile. This is a fairly complex piece of software, so don't be too disappointed if it doesn't compile cleanly on the first attempt. Just go over the error messages carefully, and make whatever changes are necessary. The most likely causes are either that you don't have a needed library installed, or that the make system can't find it. Keep iterating through the build step until you get a fully compiled executable. It should be located in the subdirectory exe. In this case, you will end up with an executable called cactus_config1.
You can run some basic tests on this executable with the command make config1-testsuite. It will ask you some questions as to what you want to test, but you should be okay if you accept the defaults most of the time. When you get to the end, you can ask the system to run all of the tests, run them interactively or choose a particular test to run. Remember, if you are using MPICH, you need to have mpd running on the relevant hosts so the test suite will run correctly. This by no means guarantees the correctness of the code. It's just the first step in the process. As in any scientific programming, you should make sure the results you're getting are at least plausible.
Now that you have your executable, you need some data to feed it. This is the other side of the problem—the "initial data" problem. The Einstein Toolkit uses a parameter file to hand in the required parameters for all of the thorns being used. The development team has provided some introductory parameter files (located at https://svn.einsteintoolkit.org/cactus/EinsteinExamples/branches/ET_2010_06/par) that beginners can download to learn what is possible. To run your executable, run it as:

cactus_config1 parfile.par

If you are running an MPI version, it would look like this:

mpirun -np X cactus_config1 parfile.par

where X is the number of CPUs to use, and parfile.par is the parameter file to use.
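To give a feel for the format, a parameter file is just a list of parameter = value assignments, each prefixed by the thorn that owns it. The fragment below is illustrative only — the thorn and parameter names are examples, not a complete working setup; the EinsteinExamples files are the authoritative starting point:

```
# Illustrative fragment of a Cactus parameter file -- not a
# runnable configuration on its own
ActiveThorns = "CoordBase CartGrid3D IOUtil"   # which thorns to activate
Cactus::cctk_itlast = 100                      # how many iterations to run
IO::out_dir = "output"                         # directory for output files
```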
As it stands, the Einstein Toolkit provides a very powerful set of tools for doing numerical relativity. But, this is only the beginning. The true power is in its extensibility. It is distributed under the GPL, so you are free to download it and alter it as you see fit. You just have to be willing to share those changes. But, the entire design of the toolkit is based around the idea that you should be able to alter the system easily. It's as simple as writing and including a new thorn. Because you have all the source code for the included thorns, you have some very good examples to look at and learn from. And, because thorns are ideally independent from each other, you should be able to drop in your new thorn easily. The list of thorns to be compiled and linked into the flesh is controlled through the file configs/config1/ThornList.
In case you decide to write your own thorn, I'll cover a bit of the concepts here. A thorn should, ideally, be completely unlinked from any other thorn. Any communication should happen through the flesh. This means that data should be translated into one of the standard formats and handed off to the flesh. The thorns are responsible for everything from IO to data management to the actual number crunching. If you are working on some new algorithm or solution technique, this is where you want to be.
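If you do write a thorn, it follows a standard layout: a handful of Cactus Configuration Language (.ccl) files describing the thorn's interface to the flesh, plus the source itself. The thorn name below is just an example:

```
MyThorn/                  # example thorn name
  interface.ccl           # grid variables and inheritance from other thorns
  param.ccl               # the runtime parameters this thorn accepts
  schedule.ccl            # when the thorn's routines run during evolution
  src/                    # the actual source code (C, C++ or FORTRAN)
```

The flesh reads the .ccl files to know what the thorn provides, which is what lets independent thorns be dropped in without touching each other's code.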
The last step is getting pretty graphics. You likely will want to share your results with others, and that seems to be easiest through pictures. You will want to use other tools, like gnuplot, to generate plots or even movies of the results from your calculations. Several tutorials exist for what you can do with tools like gnuplot.
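As a sketch of what that might look like, assuming one of the ASCII output thorns has produced a file of values along the x-axis (the file name and column numbers below are assumptions and depend entirely on your output settings), a minimal gnuplot script could be:

```
# plot.gp -- illustrative gnuplot script; adjust the file name and
# the column numbers to match your own ASCII output
set terminal png
set output "phi.png"
plot "phi.x.asc" using 9:13 with lines title "phi along x"
```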
I hope this has given you enough to get started with a very powerful tool for numerical relativity. And, as always, if there is a subject you'd like to see, please let me know. Until then, keep exploring.
Joey Bernard has a background in both physics and computer science. This serves him well in his day job as a computational research consultant at the University of New Brunswick. He also teaches computational physics and parallel programming.