Gri: A Language for Scientific Illustration
Speaking of colors, a common application of Gri is generating illustrations of color images. Within oceanography, such images often fall into two broad categories: fields generated by numerical models and fields generated by satellite observation. In each case, the advantage of Gri over tools that are more image-based is that Gri invites the user to draw other graphical elements as well as the images.
Figure 3 provides a good example, showing some of the work in the ECOLAP program, spearheaded by oceanographers at the Rio Grande University in Brazil (see http://www.peld.furg.br/). This research group has a ten-year program to measure and understand the physical and biological variability of the Rio Grande estuary in Brazil and the adjacent sea. Figure 3 shows a satellite image of ocean temperature, along with the locations of ship-based observations made on 22 February 2000. Land is colored a ruddy brown in the figure, and the palette indicates sea surface temperature as measured by the satellite. The satellite image is processed exactly as described in the more hypothetical example given near the beginning of this article. The two panels are drawn simply by changing axes and redrawing. The palette was drawn with a command called draw palette, the guiding lines were drawn with the commands draw box and draw line, and the labels were drawn with draw label. By now, you may be getting the impression that it's pretty easy to guess the names of Gri commands. This guessing isn't necessary; just type “draw” in the Emacs mode and press the ESC key followed by the TAB key, and the mode will display all commands starting with the word “draw”, i.e., all the drawing options.
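For concreteness, a skeleton of such a script might look like the following. This is only a sketch: the file name, image dimensions, coordinates, and temperature limits are all invented, and the exact command forms should be checked against the Gri manual.

```
// Sketch only: file name, dimensions and limits are invented.
open sst.dat
read image 512 512               // assumed image dimensions
close
set image range 10 30            // degrees Celsius
draw image
draw image palette left 10 right 30
draw box -53 -33 -51 -31         // frame around the panel
draw line from -52.3 -32.3 to -51.8 -31.9
draw label "22 February 2000" at -52.2 -31.7
```

The second panel is produced by resetting the axes (with the set x axis and set y axis commands) and repeating the drawing commands.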
A noteworthy feature of Figure 3 is the use of symbols to indicate the locations at which ship measurements of ocean properties were made. These ship-based observations usually run from the ocean bottom to the surface, and typically involve measurements of physical properties of the water as well as biological properties, such as the occurrence of different species through the depths. In past decades, oceanographers were greatly challenged to explain patterns in ship-based observations, and Figure 3 illustrates why. Consider the ship sample near the middle of the larger image. It lies in a thin filament of cold water (green color), whereas the other samples lie in warmer water. To some extent, the biology is just “along for the ride” as currents move water from one place to another, so it might not be surprising if this middle sample had different biological characteristics (e.g., species typical of cold water) than the nearby stations. The superposition of satellite and ship data on one graph, which is so easy to accomplish in Gri, provides a powerful insight into the systems under study.
It almost goes without saying that the script-based nature of Gri is important in constructing such diagrams. Nothing about this diagram was prepared with a mouse, and nothing in the Gri script requires modification for another cruise of the ship (since query commands are used to set up all file names). As soon as the ship makes an observation and its latitude-longitude pair is written to a data file, the Gri script can be rerun and a new diagram prepared.
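To sketch what we mean by query commands (the synonym name, prompt, and default file name below are invented), the script can ask for, or default to, the current cruise's files instead of hard-coding them:

```
// Sketch only: synonym, prompt and default file name are invented.
query \station_file "Name of station-position file?" ("stations_22feb.dat")
open \station_file
read columns x y                 // longitude and latitude of each station
close
draw symbol bullet               // mark each ship station
```

Rerunning the script against a new cruise's file is then just a matter of answering the prompt differently.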
Understanding turbulent flow is one of the grand challenges in physics, and ocean mixing provides a good example. The ultimate goal is to be able to predict mixing, a small-scale phenomenon that is difficult to measure, from large-scale properties that are easy to measure. One proposed technique is to examine the vertical variation of water density on intermediate scales. If this can be done, it will greatly expand our database of ocean mixing knowledge, since density measurements are common. But can it be done? This question was addressed in the Ph.D. dissertation of the second author. Figure 4 shows a diagram patterned after a paper about this work. The illustration shows two things. The red image shows a theoretical prediction of where mixing should be occurring, through time and through depth. The blue-filled curves show where mixing actually occurred.
To be more specific, the red image shows the so-called Richardson number. Theory indicates that the type of mixing known as Kelvin-Helmholtz overturning can occur only when the Richardson number, a measure of the competition between stabilizing and destabilizing effects, falls below a critical value of 1/4. We measured the variation of the Richardson number over depth and time on a grid. Our first inclination was to contour this, but since we wanted to superimpose other things, we decided to use an image instead for clarity. The gist of the Gri code can be guessed from what you've read so far, the only new feature being the use of the command convert grid to image to transform our grid data into an image that can be colored. We drew the image in shades of red by running the color scale across intensities of the red hue, instead of across hues as in the previous example. We did this because we wanted to superimpose another curve of a certain color, and that curve would have been difficult to discern against a full spectrum of hues. The image indicates that mixing should occur in a band roughly 17 meters deep and that this mixing band should bob up and down over the time of observation.
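In outline, the red image is built like this (the grid dimensions, file name, and colorscale endpoints are invented for illustration):

```
// Sketch only: dimensions, file name and limits are invented.
open Ri.dat
read grid data 50 200            // Richardson number vs depth and time
close
set image range 0 1
convert grid to image
set image colorscale white 1 red 0   // deepest red where Ri is lowest
draw image
```

Mapping white to high Richardson numbers and red to low ones makes the regions of predicted mixing stand out while leaving the rest of the palette free for the superimposed curves.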
With the theory painted in red, we'll turn to the observations. Blue seemed to look pretty, so we started by drawing blue vertical lines corresponding to the times when a density-measuring probe was lowered from the side of the ship. We call the density variation with depth a density “profile”. Seawater density normally increases with depth, because heavy water sinks and buoyant water rises. However, eddying mixing motions can overturn this density profile, momentarily lifting heavy water above light water. With sufficiently precise density probes, this sort of mixing can be revealed graphically by plotting the difference between the observed density profile and an artificial profile created by reordering the density data to make density increase monotonically with depth. We keep track of how far each point had to be moved in the reordering process and call this the “Thorpe displacement profile”. This profile gives an indication of the intensity and extent of mixing patches. We draw these Thorpe profiles with a filled curve, as in the command
draw curve filled to x 0.0
We do this once for each profile, after first redefining the axes so that x=0 corresponds to the time of the ship observation.
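Schematically, one pass of that loop might read as follows (the file name and axis range are invented, and the command that shifts the axis origin to the profile's observation time is only indicated):

```
// Sketch of one profile; file name and ranges are invented.
set x axis -1 1 0.5              // Thorpe displacement in metres, centered on 0
// ... shift the plot origin to this profile's observation time ...
open profile_0312.dat
read columns x y                 // displacement and depth
draw curve filled to x 0.0
close
```

Because x=0 is placed at the observation time, the filled curve bulges left or right of its blue vertical line wherever the Thorpe displacement is nonzero, i.e., wherever mixing was detected.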
You may agree that the observations are in rough agreement with the theory, since the observed depths and times of high mixing rates (blue curves) appear to match the theory (red image). The main implication of this is not that low Richardson numbers yield mixing; we already knew that, from experiments in the field and in the laboratory. Rather, the main implication is that our density probe is capable of picking up mixing signals of this particular strength. This is important, because the instrument we were using is much more common than the instruments normally used to measure mixing. For more on how and why the technique works, we encourage you to consult our paper, which, we might add, employs Gri for every figure.
Notice that the axes in this diagram lie outside the box in which the data are drawn. The second author prefers this style, while the first prefers the conventional style. In this, as in most things, Gri offers you a choice.
Peter Galbraith (email@example.com) is a research scientist with the Canadian Department of Fisheries and Oceans.
Dan wrote Gri and Peter wrote the Emacs mode. The fact that neither author is a professional programmer may explain the distinctly practical nature of these tools.