Linux on a Small Satellite

With less than a year to design and build a satellite, this team used existing sensor hardware, industry-standard parts, shell scripts and our favorite OS to make the project come together.
Distributed Development and Collaboration

The extensive use of TCP/IP-based systems and the common Linux operating system provided unique opportunities for a distributed development environment. Early in TacSat-1, our custom PowerPC 8260 development hardware was in limited supply. The design cycle for much of the payload software therefore began on Intel x86-based computer systems, migrated to generic PowerPC embedded processors and eventually made its way to the final target. The software design team was spatially distributed and tied together through a virtual private network (VPN) architecture. Remote power control devices allowed developers working off-site to cycle power on hardware components. A Web-based collaboration tool allowed the posting and dissemination of critical communications and interface control documents (ICDs). Some developers also used instant messaging to stay in contact with one another. Recent additions to the collaborative working environment include the use of E-Log to maintain an on-line database of lessons learned. We are also working to integrate Bugzilla into the system to replace our relatively crude Message Forum-based problem report (PR) tracking.
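The exact remote power control interface isn't described here; as a purely hypothetical sketch, a networked power switch with a simple Web interface could be cycled from an off-site developer's shell over the VPN roughly as follows (the switch address, credentials and CGI query format are invented for illustration):

    #!/bin/sh
    # Hypothetical power-cycle helper for off-site developers. The switch
    # address, outlet numbering and CGI query format are placeholders;
    # real remote power controllers differ.
    SWITCH=http://192.168.10.50
    OUTLET=${1:?usage: pwrcycle <outlet-number>}

    curl -s -u admin:changeme "$SWITCH/outlet.cgi?${OUTLET}=OFF" >/dev/null
    sleep 10   # give the hardware time to fully power down
    curl -s -u admin:changeme "$SWITCH/outlet.cgi?${OUTLET}=ON"  >/dev/null
    echo "outlet $OUTLET cycled"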

The TCP/IP nature of the payload data network allowed developers to test communications between payload elements at each step in the design process, from developing on a standard PC to final communications before inserting the custom hardware required to communicate with the bus. Even after complete integration of the payload into the bus, an Ethernet test port allowed network access to the satellite, which was invaluable for collaborative debugging of the system. Test ports also allow access to serial consoles for most of the payload components and, in some cases, JTAG or other hardware debugging ports.
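The exact form of those step-by-step checks isn't shown here, but a minimal sketch of the idea, using ordinary netcat-style tools and invented host names and ports rather than the real payload addresses, might look like this:

    #!/bin/sh
    # Hypothetical reachability check for payload elements on the payload
    # Ethernet. Host names and ports are placeholders, not TacSat-1 values.
    ELEMENTS="copperfield2:23 sensor-a:5000 sensor-b:5001"

    for entry in $ELEMENTS; do
        host=${entry%%:*}
        port=${entry##*:}
        if nc -z -w 5 "$host" "$port" 2>/dev/null; then
            echo "OK    $host:$port"
        else
            echo "FAIL  $host:$port"
        fi
    done

Because every element speaks plain TCP/IP, a check like this runs unchanged whether the elements are desktop PCs standing in for flight hardware or the final boards on the payload network.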

The payload software design team consisted of experienced satellite and ground-station software experts, team members versed in TCP/IP data transport and Web/CGI application development, and embedded systems experts. Although quite different from the typical satellite software design team, this combination provided an almost ideal balance of skills and innovative methods for maximizing the reuse of existing software designed for aircraft applications. The extensive remote collaboration, interface testing and networking capability made for a smooth bus-payload integration.

The core of the payload control software, including many of the command and control scripts, was developed in a span of less than four months, from start to finish. Additional scripts were inserted into the core payload control software infrastructure to bring additional sensor capabilities on-line as those sensors became available. New capabilities and patches may be uploaded to the satellite as requirements dictate.
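The structure of those scripts isn't given here, but the drop-in approach described above can be sketched roughly as follows, with all paths and names invented for illustration:

    #!/bin/sh
    # Hypothetical payload-control dispatcher. Each sensor capability is a
    # self-contained script dropped into a directory, so new sensors (or
    # uploaded patches) can be added without changing the core software.
    # The directory layout and command names are illustrative only.
    SENSOR_DIR=/payload/sensors.d
    CMD=${1:?usage: payloadctl <command> [args]}
    shift

    for script in "$SENSOR_DIR"/*.sh; do
        [ -x "$script" ] || continue
        echo "dispatching '$CMD' to $(basename "$script")"
        "$script" "$CMD" "$@" || echo "warning: $(basename "$script") failed"
    done

In a layout like this, copying a new sensor script into the directory, or uploading a patched one later, is enough to bring the capability on-line without touching the dispatcher itself.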

Conclusion

Few satellite programs have the sponsor-supplied latitude or the ability to take risks that the TacSat-1 initiative provides. In this context, the TacSat-1 program allows innovative leveraging of both GOTS and COTS hardware components, as well as novel approaches to creating payload software that provides maximum flexibility and standards-based operation. The modular nature of the Copperfield-2 allowed rapid hardware integration, proving the concept of a modular payload that scales from UAV applications to a spacecraft application, all using Linux and GNU software as a foundation. At the time of this writing, TacSat-1 was scheduled to launch in February 2005.

Acknowledgements

The author acknowledges the essential contributions to the TacSat-1 effort by Stuart Nicholson, consultant; Eric Karlin, Mike Steininger and Brian Davis of SGSS, Inc., for the core Payload Control Software; Brian Micek of Titan Corp.; Chris Gembaroski, Don Kremer, Tim Richmeyer and the Copperfield-2 team at Aeronix, Inc.; and Jeff Angielski of the PTR Group for Linux porting, device drivers and sensor support software. Thanks also to Wolfgang Denk and the Linux PowerPC community for their contributions toward making PowerPC Linux stable and robust.

Resources for this article: /article/8066.

Christopher Huffine is an electronic engineer at the US Naval Research Laboratory, working for the Naval Center for Space Technology. He has been using Linux since college on various platforms, from desktop workstations to embedded control computers.
