Extreme Graphics with Extrema
To plot two-dimensional data, you can use:
GRAPH x y
where x and y are two vectors of equal length. The default is to draw the data joined by a solid line. If you want your data as a series of disconnected points, you can set the point type to a negative number, for example:
SET PLOTSYMBOL -1
Then you can go ahead and graph your data.
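For instance, a minimal sketch (assuming x and y are vectors you already have defined or have read in from a file) would be:
SET PLOTSYMBOL -1
GRAPH x y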
Parametric plots also are possible. Let's say you have an independent variable called t that runs from 0 to 2*Pi. You then can plot t*sin(t) and t*cos(t) with:
t = [0:2*pi:0.1]
x = t*sin(t)
y = t*cos(t)
graph x y
This will give you the plot shown in Figure 4.
Figure 4. Graphing a Parametric Plot
In scientific experiments, you usually have some value for error in your measurements. You can include this in your graphs as an extra parameter to the graph command, assuming these error values are stored in an extra variable. So, you could use:
graph x y yerr
to get a nice plot. Many options are available for the graph command (Figure 5).
Figure 5. The graph command has many available options.
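As a quick sketch, assuming a roughly linear set of measurements, you could build some test data and plot it with error bars like this (the vector names and the flat 10% error estimate are just placeholders):
x = [1:10:1]
y = 2*x + 1
yerr = 0.1*y
SET PLOTSYMBOL -1
GRAPH x y yerr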
More complicated data can be graphed in three dimensions. There are several types of 3-D graphs, including contour plots and surface plots. The simplest data structure would be a matrix, where the indices represent the x and y values, and the actual numbers in the matrix are the z values. If this doesn't work, you can represent the separate x, y and z values with three different vectors, all of the same length. The most basic contour graph can be made with the command:
CONTOUR m
where m is the matrix of values to be graphed. In this case, Extrema will select a set of nice contour levels that produce a reasonable graph.
You can draw a density plot of the same data with the density command, where each value in your matrix is assigned a color from a color map, and that colored image is what gets drawn. Unless you specify otherwise, Extrema tries to select the color map that best fits your data. A surface plot, in turn, renders the surface defined by the z values of your data in proper perspective.
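As a sketch, assuming you already have a suitable matrix m loaded, the density plot is drawn the same way as the contour plot; the SURFACE command name here is an assumption following the same naming pattern, so check the documentation for your version:
DENSITY m
SURFACE m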
Let's finish by looking at one of the more important analysis steps, fitting an equation to your data. The point of much of science is to develop equations that describe the data being observed, in the hope that you then will be able to predict what you would see under different conditions. Also, you may learn some important underlying physics by looking at the structure of the equation that fits your data. Let's look at a simple fitting of a straight line. Let's assume that the data is stored in two vectors called x and y. You'll also need two other variables to store the slope and intercept. Let's call them b and a. Then you can fit your data with the command:
SCALAR\FIT a b
FIT y=a+b*x
Then, if you want to graph your straight line fit and your data, you can do something like:
SET PLOTSYMBOL -1
SET PLOTSYMBOLCOLOR RED
GRAPH x y
SET PLOTSYMBOL 0
SET CURVECOLOR BLUE
GRAPH x a+b*x
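Putting it all together, a minimal end-to-end sketch with made-up, exactly linear data (so the fitted a and b should come out as 1 and 2) would be:
x = [0:10:0.5]
y = 1 + 2*x
SCALAR\FIT a b
FIT y=a+b*x
SET PLOTSYMBOL -1
SET PLOTSYMBOLCOLOR RED
GRAPH x y
SET PLOTSYMBOL 0
SET CURVECOLOR BLUE
GRAPH x a+b*x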
Now that you have seen the basics of what Extrema can do, hopefully you will be inspired to explore it further. It should be able to meet most of your data-analysis needs, and you can have fun using the same tool that is being used by leading particle physicists.
Joey Bernard has a background in both physics and computer science. This serves him well in his day job as a computational research consultant at the University of New Brunswick. He also teaches computational physics and parallel programming.