Linux in a Scientific Laboratory

The authors tell us how they use Linux daily to fulfill the requirements of their lab.
Programmable Logic Controllers (PLC)

PLCs are widely used in industrial process-control environments. (See “Using Linux with Programmable Logic Controllers” by J. P. G. Quintana in Linux Journal, January 1997.) They are descendants of relay-based control systems and are built out of modular I/O blocks governed by a dedicated microcontroller. The available modules include digital and analog I/O, temperature control, motion control, etc. They trade speed and sophistication for low cost and industrial-grade reliability; they are also very nicely integrated mechanically: individual modules pop into a mounting rail and can be installed and removed easily. There are several PLC systems on the market; we currently use KOYO products.

Typically, a simple control program is developed in a proprietary cross-compiling environment (usually in the form of a relay “ladder diagram”, a method that dates back to the days of electromechanical relays) and downloaded via a serial link. Such development environments usually run under Windows, but they are needed only during development. Once the control program is stored in flash memory, the microcontroller communicates with our Linux boxes, sending data and receiving commands over a built-in serial link.
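As a rough illustration, the sketch below shows how a Linux program can open a serial port and exchange a command and reply with such a controller. The device name, the 9600 baud rate and the “READ V2000” request are placeholders for this example; the actual framing depends on the PLC's protocol.

/* Minimal sketch: talking to a PLC over a serial port on Linux.
 * The command/response format depends on the PLC protocol;
 * the request string below is only a placeholder. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 /* 8-bit raw mode, no echo */
    cfsetispeed(&tio, B9600);        /* assumed baud rate */
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 0;             /* return from read() after... */
    tio.c_cc[VTIME] = 10;            /* ...at most one second */
    tcsetattr(fd, TCSANOW, &tio);

    const char request[] = "READ V2000\r";   /* placeholder command */
    write(fd, request, strlen(request));

    char reply[64];
    ssize_t n = read(fd, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("PLC replied: %s\n", reply);
    }
    close(fd);
    return 0;
}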

Figure 2. One of our experimental stations, with the instrument control computer on the left, and two VME crates plus a PLC unit. Linux runs on the PC and in the controller of the lower VME crate.

Other Interfaces

The parallel port provides another popular computer interface. As Alessandro Rubini explained in “Networking with the Printer Port” (Linux Journal, March 1997), the parallel port is basically a digital I/O port, with 12 output bits and 4 input bits (recent enhanced parallel port implementations actually allow 12 input bits as well). To a dedicated hobbyist, that is a precious resource which can drive all kinds of devices (D/A and A/D converters, frequency synthesizers, etc.); unfortunately, there is usually only one such port in a computer, and it tends to be inconveniently occupied by the printer. The serial port can also be used in a non-standard way; its status lines can be controlled independently and therefore provide several more bits of digital I/O.
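For example, a small program along the following lines can exercise the parallel port's data lines directly. It assumes the common base address 0x378 for the first parallel port and requires root privileges for ioperm().

/* Minimal sketch: using the parallel port as digital output the
 * classic way (direct port I/O on x86; must run as root). */
#include <stdio.h>
#include <unistd.h>
#include <sys/io.h>

#define LPT_BASE 0x378   /* data register: 8 output bits (assumed base) */

int main(void)
{
    /* ask the kernel for access to the three parallel-port registers */
    if (ioperm(LPT_BASE, 3, 1) < 0) { perror("ioperm"); return 1; }

    /* walk a single high bit across the eight data lines */
    for (int i = 0; i < 8; i++) {
        outb(1 << i, LPT_BASE);
        usleep(100000);
    }
    outb(0, LPT_BASE);                          /* leave all lines low */

    unsigned char status = inb(LPT_BASE + 1);   /* read the input (status) bits */
    printf("status register: 0x%02x\n", status);

    ioperm(LPT_BASE, 3, 0);                     /* release port access */
    return 0;
}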

Such digital I/O can be used to “bit-bang” information to serial bus devices such as I2C microcontrollers. (I2C is a two-wire serial protocol used by some embedded microcontrollers and found even in some consumer products.)
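A bit-banged I2C master can be sketched in a few lines. Here set_scl(), set_sda() and get_sda() are hypothetical helpers that would be mapped onto whatever port bits are actually available (for instance, parallel-port data and status lines); only the start condition and a single byte transfer are shown.

/* Sketch of I2C "bit-banging" over two digital I/O lines.
 * set_scl(), set_sda() and get_sda() are hypothetical helpers
 * to be wired to real port bits. */
void set_scl(int level);   /* drive the clock line */
void set_sda(int level);   /* drive (or release) the data line */
int  get_sda(void);        /* sample the data line */

static void i2c_start(void)
{
    /* start condition: SDA falls while SCL is high */
    set_sda(1); set_scl(1);
    set_sda(0);
    set_scl(0);
}

/* shift one byte out, MSB first, and return the slave's ACK bit */
static int i2c_write_byte(unsigned char byte)
{
    int i, ack;

    for (i = 7; i >= 0; i--) {
        set_sda((byte >> i) & 1);
        set_scl(1);               /* data is sampled while SCL is high */
        set_scl(0);
    }
    set_sda(1);                   /* release SDA for the acknowledge bit */
    set_scl(1);
    ack = (get_sda() == 0);       /* slave pulls SDA low to acknowledge */
    set_scl(0);
    return ack;
}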

USB is another interface that is just beginning to appear, in terms of both available hardware and Linux support. Although designed for peripherals such as keyboards, mice and speakers, it is fast enough to be useful for some data-acquisition purposes, and some manufacturers have already announced future products in this area. One nice feature of USB is that a limited amount of power is available from the USB connector, eliminating the need for extra power cables for external devices.

Network-Distributed Data Acquisition

With the decreasing cost of hardware and the flexibility afforded by Linux, we have been planning to use distributed hardware control. Instead of having one workstation linked to all the peripherals, we can deploy several stripped-down computers (hardware servers), each equipped with a network card but no keyboard or monitor. Each of these would talk to a subset of the hardware, executing commands sent over the network by the main control workstation (the controlling client). For the servers we can use either older, recycled 486-class machines or even industry-standard PC-104 modules (a miniature PC format built from stackable modules roughly 10 cm by 10 cm in size). In this case, the Ethernet becomes our real-world interface.
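A hardware server in this scheme can be very simple. The sketch below accepts one-line commands over a TCP socket and answers with a measurement; the port number (5000) and the “READ” command are arbitrary choices for illustration, and a real server would talk to its local hardware instead of returning a constant.

/* Minimal sketch of a "hardware server": a headless box that accepts
 * one-line commands over TCP and replies with a measurement. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);            /* arbitrary port for this example */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 1);

    for (;;) {                              /* serve one client at a time */
        int cli = accept(srv, NULL, NULL);
        if (cli < 0) continue;

        char cmd[128];
        ssize_t n = read(cli, cmd, sizeof(cmd) - 1);
        if (n > 0) {
            cmd[n] = '\0';
            if (strncmp(cmd, "READ", 4) == 0) {
                /* a real server would query its local hardware here */
                const char *reply = "42.0\n";
                write(cli, reply, strlen(reply));
            }
        }
        close(cli);
    }
}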

Scientific Visualization and Computations

Of course, we have also used Linux in the more traditional role of a general-purpose graphics workstation. Here, we are no longer limited to the x86 architecture. Since we don't have to contend with hardware interfacing issues, we have a choice of several platforms, including Digital's Alpha-based computers. Currently (February 1998), it is possible to buy a 533MHz Alpha workstation for just over $2000 US; prices keep going down, while clock speeds are reportedly heading toward 800MHz. Alpha Linux is ready for serious computations at a very low price. The Alpha is an excellent performer, especially for floating-point calculations: the SPEC benchmark shows an integer computation speed (SPECint95) of 15.7 and a floating-point computation speed (SPECfp95) of 19.5 for a slightly slower 500MHz Alpha. By comparison, a Pentium II at 233MHz exhibits a SPECint95 of 9.49 and a SPECfp95 of 6.43, about one third the floating-point performance of the Alpha chip.

We have been using Linux-based PC stations since 1995. Often they simply serve as capable remote clients for our departmental computer servers, providing better X terminal functionality than some commercial X terminals at a lower price, and handling light local office computing. More and more, however, we use them to perform local computations. An intriguing project we are currently considering is to install networked server processes for a kind of calculation often performed here (non-linear fitting) and to distribute the parallel calculations to servers that are not currently busy with other clients. Since an average personal computer spends most of its cycles waiting for the user's keystrokes, we plan to put those free cycles to good use.

This approach is, of course, inspired by the Beowulf cluster project, in which ensembles of Linux boxes are interconnected by dedicated fast networks to run massively parallel code. (See “I'm Not Going to Pay a Lot for This Supercomputer!” by Jim Hill, Michael Warren and Pat Goda, Linux Journal, January 1997.) There are several Beowulf installations in the Washington, DC area, including Donald Becker's original site at NASA and the LOBOS cluster at the National Institutes of Health. In contrast, we plan to use non-dedicated hardware connected by a general-purpose network. We can get away with this because our problem does not require much inter-process communication.

Unfortunately, we don't have much space to discuss scientific visualization. It is a fascinating field both scientifically and aesthetically; it involves modern 3-D graphics technology, and some of the images are very pretty. Computer visualization is a young field, and consequently the development tools are as important as end-user applications. In our judgment, the best environment for graphics is OpenGL, a programming API designed by Silicon Graphics for their high-performance graphics systems; it is beginning to appear on Windows, and on Linux it is supported by some commercial X servers. Mesa is a free implementation of OpenGL that runs on top of X and makes OpenGL available on Linux. This involves a tradeoff: 3-D graphics are significantly slower without the hardware assist provided by high-end graphics hardware and a matching OpenGL implementation, but the X Window System's advantage of not being tied to any particular terminal is very significant.
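To give a flavor of the API, here is a minimal OpenGL program that draws a shaded triangle. It assumes the GLUT toolkit is installed alongside Mesa and compiles unchanged against either Mesa or a hardware-accelerated OpenGL; the exact library names on the link line vary between installations.

/* Minimal OpenGL example: one flat window with a shaded triangle.
 * Assumes GLUT (the OpenGL Utility Toolkit) is available. */
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);           /* a single color-interpolated triangle */
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.8f, -0.8f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.8f, -0.8f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.8f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("Mesa/OpenGL test");
    glutDisplayFunc(display);
    glutMainLoop();                  /* hand control to the event loop */
    return 0;
}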

Building on Mesa/OpenGL, there are many excellent visualization programs and toolkits, some of them referenced on the Mesa web page. In particular, the VTK visualization toolkit and its accompanying book are worth mentioning. Another application is GeomView, a generic engine for displaying geometric objects, written at the University of Minnesota's Geometry Center.
