Linux Out of the Real World
Through the National Aeronautics and Space Administration (NASA), the United States government provides space flight capability to its people; you can rent volume on a Space Shuttle mission and fly a payload into low Earth orbit. Because of the considerable cost involved, in practice, many of the organizations that rent space do so with government grants. One such grant belongs to Bioserve Space Technologies, a NASA-sponsored Center for Space Commercialization operating out of the University of Colorado at Boulder. Here, a group of students (from undergrad through post-doc) and teachers from many engineering disciplines work together to produce payloads that perform various experiments on the Shuttle.
This article describes one such payload, called the Plant Generic Bioprocessing Apparatus (PGBA), and the NASA systems used to communicate with the experiment.
PGBA is a Space Shuttle payload experiment designed to study plant growth and development in microgravity. It flew in the Space Shuttle Columbia, on flight STS-83 on April 4, 1997. The experiment is centered on a small hydroponics plant-growth chamber adapted for use in microgravity. The chamber is fitted with a large number of sensors and actuators, all connected to a 486 PC/104 computer running Linux. This computer monitors and controls a number of environmental conditions within the plant-growth chamber. The data produced is stored locally in the orbiter and transmitted to ground side over an unreliable bidirectional low-bandwidth link provided by NASA. A dedicated ISDN line connects the Marshall Space Flight Center (MSFC) in Huntsville, Alabama with our ground side support equipment in Boulder, Colorado. Here the biologists analyze the data, and we relay it over the Net to the Kennedy Space Center (KSC) in Cape Canaveral, Florida, where a ground-control replica of the experiment mimics the environmental conditions, “on Earth as it is in Heaven.”
The plan was to subject the experiment to several relocations within the orbiter after launch. PGBA was to be launched and powered on in the mid-deck. After two days in orbit it was to be moved to the SpaceLab module, where it would be mounted in the Express Rack and connected to the Rack Interface Computer (RIC) that provides both the uplink and the downlink. Two days before landing, it would be disconnected (cutting its communications with ground side) and moved back to the mid-deck. Each of these moves would require astronaut effort (shutting down, moving and bringing the experiment back up) and a loss of power to the experiment. We could have launched and landed right in the Express Rack, but the moving maneuver would allow NASA to test the techniques and hardware that will eventually be used to move experiment payloads between the Space Shuttle and the International Space Station.
Unfortunately, a hardware failure on the orbiter itself forced an early return after less than four days in orbit, instead of the planned 16 days. A fuel cell providing electrical power to the orbiter started to fail, and the mission was aborted to minimize risk to the crew. The fuel cell problem was discovered within the first two days in orbit, before PGBA was scheduled to be moved to the Express Rack. Four days in orbit was not enough time for the effects of microgravity on plant growth to manifest themselves, and from a science standpoint the experiment was considered a complete loss. However, it was not without value, since we now have a flight-tested and known working experiment. NASA is eager to test the Station transfer procedure, and the scientists are eager to get their data. A repeat flight has been tentatively scheduled for early July, 1997—same crew, same vehicle, same payloads, just a new tank of fuel.
I will describe the payload we designed and the mission we originally planned (the same one we are expecting to complete in July) rather than the aborted mission that we actually flew.
How do you design a computer system to handle this situation? Clearly it is a mission-critical item. If the computer fails, the experiment is lost.
Astronaut time is an incredibly expensive commodity. This has two implications: normal operation of the payload should be automated as much as possible, and the payload should not require maintenance or repair in orbit.
The computer system must operate autonomously for the duration of the mission (on the order of two or three weeks). During this time it monitors and controls the conditions inside the growth chamber, using an array of specialized sensors and actuators. It must also communicate with ground side, both accepting input and providing output. Physically, the computer must occupy a small volume.
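The control side of such a loop can be sketched in C. The heater, the thresholds and the control policy below are invented for illustration; they are not the flight code.

```c
/* A sketch of the kind of environmental control PGBA's computer performs.
   The heater and its thresholds are assumptions for illustration. */

/* Bang-bang control with hysteresis: switch the heater on below the low
   threshold, off above the high one, and hold the previous state inside
   the deadband to avoid rapid cycling of the actuator. */
int heater_command(double temp_c, int heater_on)
{
    const double low  = 23.0;   /* switch heater on below this */
    const double high = 25.0;   /* switch heater off above this */

    if (temp_c < low)
        return 1;
    if (temp_c > high)
        return 0;
    return heater_on;           /* inside the deadband: no change */
}
```

In flight, one such loop would run for each controlled quantity (temperature, humidity and so on), reading a sensor, updating its actuator and logging the result.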
Data produced before the move to the Express Rack and after the move back to the mid-deck would need to be buffered on non-volatile storage. Both just before the move to the Express Rack and just before we got the payload back after landing, we would need to buffer a maximum of two days' worth of data.
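Assuming, purely for illustration, one 64-byte record per minute, two days of data works out to 2,880 records (about 180KB). A ring buffer that overwrites its oldest record once full might look like this; the sizes and rates are assumptions, not the flight values:

```c
#include <string.h>

/* A sketch of the two-day store-and-forward buffer. Record size and
   logging rate are assumptions for illustration, not flight values. */

#define RECORD_BYTES    64                    /* one timestamped sensor record */
#define RECORDS_PER_DAY (24 * 60)             /* assuming one record per minute */
#define CAPACITY        (2 * RECORDS_PER_DAY) /* two days' worth: 2,880 records */

struct ring {
    char data[CAPACITY][RECORD_BYTES];
    int head;   /* index of the oldest record */
    int count;  /* number of records currently held */
};

/* Append a record; once the buffer is full, the oldest record is
   overwritten, so the buffer always holds the most recent two days. */
void ring_push(struct ring *r, const char *rec)
{
    int tail = (r->head + r->count) % CAPACITY;
    memcpy(r->data[tail], rec, RECORD_BYTES);
    if (r->count < CAPACITY)
        r->count++;
    else
        r->head = (r->head + 1) % CAPACITY;
}
```

In the real payload the buffer would live on non-volatile storage rather than in RAM, so that the loss of power during each move does not destroy the queued data.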
The solution we decided on is a PC/104 computer running Linux. PC/104 is an “embeddable” (90 by 96 mm, low power consumption) implementation of the common PC/AT architecture. PC/104 hardware is software compatible with ISA hardware, but the connectors and layout are different. This has obvious advantages: all the software that runs on vanilla desktop PCs runs unmodified on PC/104 computers. (Incidentally, the PC/104 Consortium just announced the PC/104-Plus spec, which describes an extension to the regular PC/104 architecture that is software compatible with PCI. For more information on the PC/104 standard, see http://www.controlled.com/pc104/consp1.html.)
We chose Linux for “soft” reasons. The job could be done in MS Windows, on a microcontroller or on a Turing machine, but who would want to? The tools and computing environment available to programmers in the more advanced operating systems make life so much nicer.
Last year on STS-77 we flew two payloads with similar computer systems, running DOS rather than Linux. The DOS software worked and was almost functionally equivalent to the Linux version, though it lacked image capture, downlink of images and local storage of data logs. We switched because the DOS version was monolithic, more difficult to understand, debug and expand, and its code was hard to reuse.