Linux at the University
Linux was chosen as the operating system for the payloads onboard the Shuttle (and soon to be International Space Station) missions for several reasons. First and foremost is its high reliability. In several years of developing different payloads, we have never encountered a problem that was operating system related. Other reasons for use of the Linux OS include:
Ease of custom kernel configuration, which enables a small kernel footprint optimized for the task at hand;
The ability to write kernel-space custom device drivers for interfacing to required peripheral hardware such as analog and digital I/O boards or framegrabbers;
The ability to optimize the required application packages resident on payload non-volatile storage media;
The availability of a powerful software development environment;
Availability of standard UNIX-like interprocess communication (IPC) mechanisms, shell scripting and multi-threaded control processes.
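As a minimal illustration of the IPC conventions mentioned above (the message text and task names here are invented for illustration, not taken from the flight software), a supervisor process can receive a status report from a child task over an ordinary UNIX pipe:

```python
import os

def relay_status(message: str) -> str:
    """Send `message` from a child process to the parent over a pipe,
    roughly the way a payload sensor task might report to a supervisor.
    Illustrative sketch only."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                        # child: stand-in sensor task
        os.close(r)
        os.write(w, message.encode())
        os.close(w)
        os._exit(0)
    os.close(w)                         # parent: supervisor
    data = os.read(r, 4096)
    os.close(r)
    os.waitpid(pid, 0)
    return data.decode()

if __name__ == "__main__":
    print(relay_status("TEMP:21.5C"))   # prints TEMP:21.5C
```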
With recent advances in real-time variants of the Linux OS, critical real-time process scheduling is available (if needed) for incorporation into the software/hardware control architecture.
Because of the limited volume of these complex payloads, it is necessary to minimize the volume of every hardware item, including the communications and process control computer. In our payloads, we use a combination of small single board computers (SBC) and PC-104 form factor boards to supplement the capability of the SBC. In particular, we have had great success and outstanding reliability from Ampro, Diamond Systems, SanDisk and Ajeco products.
The computer hardware architecture we have designed and developed is generic in that the same architecture is used for all of the different payloads. The single board computer provides the CPU, FPU, video interfaces, IDE and SCSI device interfaces, along with the serial and Ethernet communications interfaces (see Figures 11 and 12).
The inside of the control computer assembly, with a single board computer and stack-on PC-104 module, is pictured in Figure 11. The unit is 6 inches wide and 2.5 inches deep. Figure 12 shows the outside front of the control computer assembly with the keypad and mini-VGA attached. Analog and digital I/O capabilities, necessary for interfacing to control hardware and sensors, are implemented via an additional PC-104 module. Imaging of the experiments within the payloads is accomplished via multiple cameras interfaced to a PC-104 frame-grabber. This generic hardware architecture allows for the integration of a complex array of sensors and control actuators. Additionally, distributed processing is implemented by attaching microcontrollers over a serial RS-485 multi-drop communications bus, which facilitates the addition of new control hardware in a modular manner.
One tremendous advantage provided by the true multi-processing Linux environment is the relative ease of software development as compared to monolithic DOS systems. A similar generic, multi-threaded software control architecture is used for process control and communications for all of the projects described in this article and shown in Figure 13.
The software architecture follows the multi-threaded peer model. At start-up, the thread responsible for initialization of the entire system first spawns the processes required for payload-to-groundside communications. Next, shared resources are requested and initialized. Communication and synchronization control variables are initialized, and then separate threads are created corresponding to the different services required from the payload. For our payloads, these services are categorized as either primary or secondary. Primary services are those tasks required for the fundamental control of the payloads; they tend to be identical in functionality across payloads, although the methods of hardware implementation can differ greatly. Secondary services are those specific to a particular payload.
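The start-up sequence described above can be sketched as follows. The service names, the registry and the `enabled` tuple are all hypothetical; the real payload software is far more involved, but the shape — comms first, then one thread per service, with individual services easy to switch off for stand-alone subsystem testing — is the same.

```python
import queue
import threading

def comms_service(telemetry: queue.Queue, results: list) -> None:
    """Stand-in for the payload-to-groundside communications thread,
    which is spawned first at start-up."""
    while True:
        item = telemetry.get()
        if item is None:                # shutdown sentinel
            break
        results.append(item)            # "downlink" the message

def temperature_service(telemetry: queue.Queue) -> None:
    """Stand-in for a primary service (e.g., chamber temperature
    control). A real service would loop, reading sensors and driving
    actuators; here it reports one reading and exits."""
    telemetry.put("TEMP:21.5C")

SERVICES = {"temperature": temperature_service}   # hypothetical registry

def start_payload(enabled=("temperature",)) -> list:
    """Peer-model start-up: spawn comms first, initialize the shared
    resource (a queue), then create one thread per enabled service.
    Disabling a service for stand-alone testing is just a matter of
    omitting it from `enabled`."""
    telemetry = queue.Queue()           # shared, synchronized resource
    results = []
    comms = threading.Thread(target=comms_service,
                             args=(telemetry, results))
    comms.start()
    workers = [threading.Thread(target=SERVICES[name], args=(telemetry,))
               for name in enabled]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    telemetry.put(None)                 # tell comms to shut down
    comms.join()
    return results

if __name__ == "__main__":
    print(start_payload())              # ['TEMP:21.5C']
```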
From the development standpoint, the use of threads in a proven multi-process architecture facilitates multi-programmer development, module reuse and a much faster test-and-debug cycle, since processing of individual threads is easily enabled or disabled. This allows for complete testing of a single subsystem, say the chamber temperature control, in a stand-alone manner. Once completely tested, the subsystem is incorporated into the overall payload control architecture, thus minimizing adverse effects of integration.
The communications system is implemented in a modular manner as well. Data transmission is abstracted at the programming interface level as different messages (e.g., analog data, health and status messages, images, etc.). These different-length messages are termed “channels”. A thread that wishes to transmit a message to ground stations queues (writes) a dynamic message of pre-defined length to a specified channel (see Figure 14). The channeling system prepends channel routing information and a packet checksum. Multiple channel messages are combined into a secondary packet to maximize usage of the available bandwidth. This secondary packet is termed a “RIC” packet; it serves as an interface abstraction to the Rack Interface Computer used onboard the International Space Station to communicate with the different payloads. RIC packet contents are also sanity-checked via an additional checksum. Finally, the RIC packet is encapsulated into a TCP/IP packet for delivery to the RIC.
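The two-level framing can be sketched like this. The header layout and the 16-bit additive checksum are invented for illustration; the actual RIC packet format is NASA-defined and not reproduced here.

```python
import struct

def checksum(data: bytes) -> int:
    """Simple 16-bit additive checksum (illustrative; the real scheme
    may differ)."""
    return sum(data) & 0xFFFF

def frame_channel(channel_id: int, payload: bytes) -> bytes:
    """Prepend channel routing info (id + length), append a checksum."""
    header = struct.pack(">HH", channel_id, len(payload))
    body = header + payload
    return body + struct.pack(">H", checksum(body))

def build_ric_packet(messages: list) -> bytes:
    """Combine several channel messages into one 'RIC' packet, with an
    additional outer checksum for sanity checking."""
    body = b"".join(messages)
    return (struct.pack(">I", len(body)) + body +
            struct.pack(">H", checksum(body)))

def verify_ric_packet(packet: bytes) -> bool:
    """Sanity-check a RIC packet against its outer checksum."""
    length, = struct.unpack(">I", packet[:4])
    body = packet[4:4 + length]
    stored, = struct.unpack(">H", packet[4 + length:4 + length + 2])
    return checksum(body) == stored
```

The resulting RIC packet would then be handed to the TCP/IP stack for transport; corruption anywhere in the combined body shows up in the outer check.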
Communication between the RIC and the payloads is TCP/IP based for reliable communications. Unfortunately, telemetry data transmitted from the payload to the RIC ends up being distributed on the ground from the Marshall Space Flight Center to the remote users via UDP, which is unreliable. In addition, the physical communications link, in the form of the TDRSS satellite network, is subject to outages as well. To minimize the potential impact of dropped packets and loss of signal caused by holes in TDRSS satellite coverage, a “playback” system has been implemented that retransmits critical information to improve the probability that a complete set of telemetry data files is received properly by the ground stations.
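The playback idea reduces to keeping a bounded history of transmitted critical packets, keyed by sequence number, so that a reported gap can be refilled. The buffer capacity and interface below are assumptions for the sketch, not the flight implementation:

```python
from collections import deque

class PlaybackBuffer:
    """Retain recently transmitted critical packets so they can be
    retransmitted after a TDRSS loss-of-signal or a UDP drop on the
    ground segment. Capacity and API are illustrative only."""

    def __init__(self, capacity: int = 1000):
        self.history = deque(maxlen=capacity)   # oldest entries age out
        self.seq = 0

    def transmit(self, packet: bytes) -> int:
        """Record a packet under a new sequence number as it is sent."""
        self.seq += 1
        self.history.append((self.seq, packet))
        return self.seq

    def playback(self, first_missing: int) -> list:
        """Return all retained packets at or after `first_missing`,
        e.g. after the ground station reports a gap in telemetry."""
        return [pkt for seq, pkt in self.history if seq >= first_missing]
```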
Finally, one of the channels has been configured to serve as a remote shell on the payload. Input to and output from the bash shell is first redirected to a channel and then packetized as described above. In this manner a true remote shell operates on the actual payload in a manner that is functionally equivalent to establishing a remote connection via TELNET or ssh. The bash shell input and output is “tunneled” through the NASA communications scheme and allows the possibility of remote system administration activities from the ground. Just try and implement that capability under Windows! For further information concerning the design of the communications subsystem, please see the excellent article “Linux Out of the Real World”, by Sebastian Kuzminsky, which appeared in Linux Journal in July of 1997.
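Stripped to its essence, the shell channel is just the shell's standard input and output redirected through the channel layer. A minimal sketch using a subprocess follows; the `CH<n>:` tag prefix stands in for real channel framing, and the channel number is an arbitrary placeholder:

```python
import subprocess

def run_remote_command(command: str, channel_id: int = 9) -> bytes:
    """Run `command` in bash and return its output tagged as a channel
    message, roughly how shell I/O can be tunneled through the
    telemetry stream. Illustrative sketch only; a real remote shell
    keeps a persistent bash process and streams both directions."""
    out = subprocess.run(["bash", "-c", command],
                         capture_output=True, check=True).stdout
    return b"CH%d:" % channel_id + out
```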