Linux in an Embedded Communications Gateway
In 1996, Pacific Gas & Electric Company (PG&E) began a project to develop a commercial-grade advanced sensor to monitor electric lines. The system was simple in concept: use many small, inexpensive units that hang on the electric lines, each consisting of a few sensor elements, an embedded CPU, a spread-spectrum radio and a battery. The sensor units would monitor for certain line conditions; if those conditions occurred, the sensor would conduct measurements, and then radio the information to a “master” radio unit. That master unit would, in turn, forward the information through a gateway to a server on the corporate WAN.
Server software would analyze the arriving data and, if warning conditions existed, would alert the electrical system operators. Other software on the WAN would be capable of monitoring and performing remote configuration of the wireless sensor network. This software would use a proprietary wireless networking protocol to accomplish the control and monitoring—in effect, tunneling this protocol within TCP/IP out through the gateway system and back.
The project was on an aggressive timetable: to put a working demonstration in the field before winter storms began in the fall of 1996. Luckily, the underlying sensor technology had already been developed by the PG&E Research and Development Lab. PG&E partnered with an outside vendor to develop, test and bring to market a wireless network of sensors based on this sensor technology. The total system was to consist of three major parts: the sensor network, the client software to use the data collected by the sensors and a gateway providing a reliable link. Linux was chosen as the operating system for the gateway.
The goal of the project was to develop and test sensor technology and its associated wireless network. R&D had pioneered the actual sensor design and had conducted field tests of prototype sensors. To move the technology from prototype to product, we had to design for manufacturing—not something with which a utility company has experience. In addition, developing and testing a wireless spread-spectrum radio network was beyond our abilities. Therefore, we partnered with a commercial vendor to help develop and produce the system.
The vendor was to integrate our sensor designs with their radio and to implement the radio network. We would provide full technical assistance and handle all data collection and analysis for the initial trials. A fairly substantial software development effort would also be necessary. The client-side display software was based on other, related software development efforts, so that part of the project had a head start. I was a member of the project development team in R&D and was responsible for the actual development of the gateway software.
The wireless sensor network used spread-spectrum radios. These unlicensed radios are relatively immune to interference, but require a clear line of sight between transmitter and receiver. The sensors were designed to hang from power lines. To provide a line of sight to those sensors and good geographical coverage from a single master radio, the radio site must be above the level of the power lines. Thus, the master radio must be on a hilltop or mounted on a tower to be effective. Either solution poses problems in getting power and a WAN connection to interface with the master radio. To add to the challenge, a given site might require more than one master radio, depending on geography. To “look” over a knoll or around a building, we might have to add a second master radio. Even in those circumstances, we wanted to use only one gateway system; therefore, each gateway must be able to serve several master radios.
The field trial site for the project was chosen for its interesting electrical distribution circuits, topography typical of our electrical service area and a relatively accessible site for a master radio. The site, at the top of a large, steep hill, was several hours away from our office. The road up the hill was treacherous even when conditions were dry; in wet winter months, reaching our equipment location at the top of the hill would be very difficult. Given the difficulties of physically reaching the gateway system, it was imperative that it be reliable and fault-tolerant. The site already had a backup electrical generator protecting other radio equipment located on the hilltop, which significantly simplified our power requirements for the gateway system and master radio. A small UPS added ride-through capability to protect the systems during voltage sags, and while waiting for the backup generator to come on-line in the event of an outage.
The master radio was custom-developed by the wireless vendor, but its link to the gateway was limited to a single serial communications port. No embedded Ethernet or other networking interface would be included in the radio. The gateway system had all networking responsibility. To further complicate the matter, radio antenna requirements forced the master radio to be located outside, high up on a mast. The computer system was to be located inside a nearby building. The serial communications cable between the master radio and the computer system would have to extend about 50 feet. This eliminated standard RS-232 communications, since at those cable lengths we would never attain the design goal of 56Kbps. Also, normal RS-232 communications would not allow us to have multiple master radios per gateway without adding more serial ports. To solve the problem, we chose to use RS-485 two-wire differential communications, which gave us distance, high baud rates, excellent noise immunity and the ability to expand the link to multiple drops (supporting multiple radios) if needed.
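From the gateway software's point of view, an RS-485 adapter looks like an ordinary serial port; the differential signaling is handled entirely by the transceiver hardware. A minimal sketch of opening and configuring such a port under Linux might look like the following (the device path and baud rate are illustrative, not the project's actual values):

```c
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open and configure the serial port feeding the RS-485 transceiver.
 * Returns a file descriptor, or -1 on any failure. */
int open_radio_port(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);                /* raw bytes: no echo, no line editing */
    cfsetispeed(&tio, B57600);      /* illustrative rate above the 56Kbps goal */
    cfsetospeed(&tio, B57600);
    tio.c_cflag |= CLOCAL | CREAD;  /* ignore modem lines, enable receiver */
    tio.c_cc[VMIN] = 1;             /* block until at least one byte arrives */
    tio.c_cc[VTIME] = 0;

    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

With multi-drop RS-485, the adapter or driver must also manage transmit-enable so only one station drives the pair at a time; that detail is hardware-specific and omitted here.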
The gateway would require additional functionality for the field trials. To properly evaluate the data gathered from the sensors, data had to be recorded somewhere. The system specifications called for data to be delivered in real time to a central server that would record it, perform data analysis and, if necessary, send an alarm to the electrical system operators. Any failure of the links from the gateway to the WAN or a WAN outage would break the entire process and risk data loss. In fact, the most interesting data is collected during storms, precisely when the links are most likely to fail. Obviously, the gateway must have redundant links and the ability to log sensor data locally, should all links fail. The hilltop had a direct line of sight to a local PG&E office and a few spare telephone wire pairs. For the primary link, we used a commercial spread-spectrum, point-to-point radio bridge. (Similar radios were already in wide use in the company as WAN links.)
Our backup link was ISDN. We chose to use a commercial bridge to provide automatic switching between the radio and the ISDN links. While we could have added that capability in our software, we chose to buy it “off-the-shelf”, believing the extra cost was worth the reduction in development time. To provide additional redundancy, we added a standard dial-up line. This would provide an alternate path for retrieving data should a WAN outage occur, severing the link on the network side. As a complete backup, the gateway would also log all network traffic from the master radio. Those logs would be kept on the system's hard drive, since we expected to repair any link failure before recording many megabytes of data. Of course, we still might have to physically go to the gateway and perform a manual download, but we wouldn't lose data.
As discussed above, the master radio communicated to the gateway across an RS-485 link. This link used a simple protocol (similar to HDLC, High-level Data Link Control) to packetize the proprietary radio network protocol and allow for multiple master radios when required. The gateway would take the data stream from the RS-485 link, encapsulate it into a TCP/IP socket connection and flow it onto the WAN. Data flowing in the other direction would go back out the RS-485 link to the master radio. On the WAN, a simple database server would be at the other end point of the protocol tunnel. The software used by the electrical system operators would interact only with the server. This modularization was key to dividing the work among the disparate groups participating in the development process.
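The essence of HDLC-style framing is delimiting each packet with a flag byte and escaping any payload bytes that would be mistaken for it, so the receiver can always find frame boundaries in the raw serial stream. A minimal sketch follows; the flag and escape values match RFC 1662's HDLC-like framing, while the actual project protocol (including addressing for multiple masters and any checksum) was proprietary and is not reproduced here:

```c
#include <stddef.h>

#define FLAG 0x7E   /* frame delimiter */
#define ESC  0x7D   /* escape byte */
#define XOR  0x20   /* bit flipped on escaped bytes */

/* Encode len payload bytes into out as one delimited frame.
 * out must hold at least 2*len + 2 bytes. Returns frame length. */
size_t frame_encode(const unsigned char *in, size_t len,
                    unsigned char *out)
{
    size_t n = 0;
    out[n++] = FLAG;                      /* opening flag */
    for (size_t i = 0; i < len; i++) {
        if (in[i] == FLAG || in[i] == ESC) {
            out[n++] = ESC;               /* escape, then flip bit 5 */
            out[n++] = in[i] ^ XOR;
        } else {
            out[n++] = in[i];
        }
    }
    out[n++] = FLAG;                      /* closing flag */
    return n;
}
```

The receiver reverses the process: it discards bytes until a FLAG, un-escapes ESC pairs, and treats the next FLAG as end-of-frame. Because framing is independent of the payload, the proprietary radio protocol could ride inside these frames, and the frames in turn inside a TCP stream, unchanged.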
While designing the system, we always had an eye on scalability. Each WAN-based server needed to be capable of supporting multiple wireless network gateways. Successful completion of the project would mean rolling out several hundred wireless networks, leading to large database requirements. In addition, the server software would need to run on hardware not specifically purchased for this project. All PG&E divisions had large Sun servers in place for support of other engineering functions. From both a financial and a space perspective, the idea of buying and installing new servers just for the wireless sensor network was rejected. Our new server software had to be portable across major UNIX platforms—at a minimum, SunOS and Solaris. Recall that the server was to be an end point for the protocol tunnel carrying the proprietary radio network data. The code we wrote for the server had to be portable to the gateway as well; writing two versions was not only silly, but also out of the question given our short development timetable.
All in all, the bridging system and its associated components had quite a list of requirements, including the following:
Speak RS-485 on one side and Ethernet on the other.
Support tunneling the radio network protocol across a serial communications channel and within TCP/IP.
Support a primary, a backup and a dial-up link.
Support automatic data logging locally if those links failed.
Support extracting logs over the network, once links were restored, without disrupting normal gateway operations.
Be as reliable as possible to operate in a remote, occasionally inaccessible location.
Provide server software portable across various flavors of UNIX, reusing as much of the gateway code as possible.

To make matters more exciting, the entire system had to be built, tested and installed in the field in about six months.
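Taken together, the requirements above reduce to a single event loop: watch the serial port and the WAN socket, copy bytes between them, and log locally so no data is lost if the socket drops. A minimal sketch, with all file descriptors assumed to be opened elsewhere (pass a negative sock_fd when no WAN link is up):

```c
#include <sys/select.h>
#include <unistd.h>

/* Copy bytes between the RS-485 serial port and the TCP connection
 * to the WAN server. Everything from the radio is also appended to
 * log_fd, the local on-disk backup. Returns when any link drops. */
void tunnel_loop(int serial_fd, int sock_fd, int log_fd)
{
    unsigned char buf[512];
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(serial_fd, &rfds);
        int maxfd = serial_fd;
        if (sock_fd >= 0) {
            FD_SET(sock_fd, &rfds);
            if (sock_fd > maxfd)
                maxfd = sock_fd;
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(serial_fd, &rfds)) {
            ssize_t n = read(serial_fd, buf, sizeof buf);
            if (n <= 0)
                break;                      /* radio link gone */
            write(log_fd, buf, n);          /* always keep a local copy */
            if (sock_fd >= 0)
                write(sock_fd, buf, n);     /* forward to the WAN server */
        }
        if (sock_fd >= 0 && FD_ISSET(sock_fd, &rfds)) {
            ssize_t n = read(sock_fd, buf, sizeof buf);
            if (n <= 0)
                break;                      /* WAN link dropped */
            write(serial_fd, buf, n);       /* commands back to master radio */
        }
    }
}
```

A production version would wrap this in reconnect logic for the socket, rotate the log files, and serve log-extraction requests from a separate process so retrieval never stalls the tunnel; those pieces are omitted for brevity.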
For much of the system, we chose to use off-the-shelf technology in order to minimize development time and maximize reliability. However, to our knowledge, this type of project had never been done before, and most of the special functionality would have to be written by us. Without a stable operating system with a reasonable set of features and tools, we'd never be able to meet the deadline.