Virginia Power Update
If you're a longtime Linux Journal reader, you've probably read many exciting tales of Linux's success in the real world. Perhaps, after a relaxing evening hacking a device driver, or building a new kernel, or trying to memorize the Emacs commands to change fonts in a LaTeX buffer, you've sat on the edge of the bed, hand on the light switch and mused to yourself, “I wonder what those Virginia Power guys are up to these days?”
“What's that, dear?” your Significant Other murmurs sleepily (or, depending on your lifestyle, woofs or meows from the foot of the bed).
“You know, those intrepid fellows who used Linux to build a distributed data collection and archiving system (Linux Journal #9, January 1995), and a dial-up SCADA (supervisory control and data acquisition) system to interface to their existing SCADA system (Linux Journal #10, February 1995).”
“Oh—those Virginia Power guys ...”
“Yes. I wonder if they're still using Linux, if it still meets their needs, and if they're doing anything new and exciting ... ”
Well, as one of those Virginia Power guys, I can say quite happily: are we ever! I guess we've been so busy coding these past couple of years that we haven't had time to document what we've been up to, but I'll try to make up for that lack (at least a little) in this article. I'll describe two new applications in which we're using Linux—one already complete, installed and working wonderfully, and the other which, when completed, will be one of the largest computer systems of any type at Virginia Power, and possibly one of the largest systems using Linux anywhere.
The first application I'll discuss is a natural outgrowth of our earlier Linux applications—especially our dial-up SCADA system. I won't recount all the details of that system (see LJ #10 if you're interested)—the important thing to remember is that the dial-up system retrieves status and analog data from and sends control commands to remote devices installed in substations and on pole tops.
In larger substations, a device called a remote terminal unit (RTU) is usually employed as the primary remote device. The RTU serves as a data concentrator, collecting status and analog data from various other devices in the station and providing a central location from which the data can be retrieved remotely and to which control requests can be delivered.
RTUs are somewhat limited in processing power and expandability, and usually only interface with a limited number of other monitoring and control devices. As a result, one of the current trends in substation design is employing a computer as the station data concentrator. This approach allows not only greater flexibility as far as the types and numbers of devices which can be connected, but also provides a general-purpose software environment wherein some monitoring and control algorithms can be executed locally within the substation.
As you might expect, substation controller systems are commercially available, many implemented using MS Windows. However, these systems are expensive, difficult to administer remotely and contain a plethora of software gingerbread and geegaws (also known as click-and-drag disease) having nothing to do with monitoring devices in a substation.
In the Operations Engineering group, we realized that our dial-up SCADA system (see Figure 1) provided the same functionality as those commercial substation controllers, if you eliminated the dial-up system and moved the PC and the translator device out into the substation (see Figure 2). The translator device is necessary to communicate with our existing SCADA Master control computers using an ancient bit-oriented protocol (over 1200bps leased lines, no less). However, these control computers are due to be replaced with a new computer system over the next couple of years—more on that later.
To cut to the chase, moving PCs to the substations is exactly what we did. At the time of writing this article, half a dozen Linux-based substation controllers are installed and working around the clock, with more in the planning stages.
The basic design of the substation controller is pretty straightforward. For each type of special equipment which performs the actual monitoring and control in the substation, a specific protocol task is written (for compulsive coders, this is the fun part) which handles all the details of data retrieval and control execution. The devices which perform the actual monitoring and control are usually referred to as IEDs (Intelligent End Devices), since they have a certain amount of intelligence built into them and sit at the end of the data retrieval and control path. Some IEDs communicate via serial lines; others use specialized local area network protocols such as ModBus+.
Another protocol task communicates with the translator device, which in turn communicates with the SCADA master computer. A database management daemon coordinates the activities of all these protocol tasks and also maintains shared memory partitions which contain the actual data. With all of these components taken into account, a typical substation controller setup looks like the one in Figure 3.
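The data partitions the daemon maintains can be pictured as simple point tables that protocol tasks write into and other tasks read from. Here's a minimal sketch of that idea; the struct layout, names and functions are my own illustration, not the actual Virginia Power code, and for brevity it uses a plain static array where the real system would use a shared-memory segment (e.g., SysV shmget/shmat) visible to every task:

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

/* One analog point in a data partition.  In a real substation
 * controller a table like this would live in shared memory so every
 * protocol task could see it; a static array keeps the sketch
 * self-contained. */
#define MAX_POINTS 64

struct analog_point {
    char   name[32];      /* point identifier, e.g. "BUS1.KV"   */
    double value;         /* latest scaled engineering value     */
    time_t updated;       /* when a protocol task last wrote it  */
    int    good;          /* 0 = stale/failed scan, 1 = good     */
};

static struct analog_point partition[MAX_POINTS];
static int npoints = 0;

/* Called by an IED protocol task after decoding a scan reply. */
int post_analog(const char *name, double value)
{
    int i;
    for (i = 0; i < npoints; i++)
        if (strcmp(partition[i].name, name) == 0)
            break;
    if (i == npoints) {
        if (npoints == MAX_POINTS)
            return -1;                /* partition full */
        snprintf(partition[i].name, sizeof partition[i].name, "%s", name);
        npoints++;
    }
    partition[i].value = value;
    partition[i].updated = time(NULL);
    partition[i].good = 1;
    return 0;
}

/* Called by the SCADA-master protocol task or the local display. */
int fetch_analog(const char *name, double *out)
{
    for (int i = 0; i < npoints; i++)
        if (strcmp(partition[i].name, name) == 0 && partition[i].good) {
            *out = partition[i].value;
            return 0;
        }
    return -1;   /* unknown point or bad data */
}
```

The point of the split is that an IED protocol task only ever posts data, a master-side task only ever fetches it, and neither needs to know anything about the other's protocol.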
In most cases, the substation controller computer is rack-mounted along with the rest of the equipment in the substation “control house” (a small building intended mainly for protection from the elements). It has no keyboard and no monitor (try that with your Windows box). All system administration is performed remotely via dial-up login; database and program updates are distributed using UUCP. In some cases, the installation includes a touch-screen monitor to provide a local operator interface, consisting of an annunciator panel (see Figures 4 and 5) and even an interface for performing device controls (see Figure 6).
By the way, an annunciator panel is just a hierarchical display of alarm points, in which an alarm at a higher level means one or more points at a lower level are in an alarm state. The annunciator panel initially displays the topmost level of alarms as a grid of labeled boxes (green if all lower alarms are normal, red if at least one is in alarm). A substation technician can touch one of the boxes to display the next lower level of alarms; each one of those alarms can consist of additional individual alarms, etc.
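The color rule for those boxes is just a recursive rollup over the alarm tree. A minimal sketch (the node layout and labels here are invented for illustration, not taken from the actual panel software):

```c
#include <stddef.h>

/* One box in the annunciator hierarchy.  A leaf carries an actual
 * alarm state; any higher-level box is "red" if at least one point
 * beneath it is in alarm. */
struct alarm_node {
    const char *label;
    int in_alarm;                  /* meaningful for leaves only */
    struct alarm_node *children;   /* array of child nodes       */
    int nchildren;                 /* 0 for a leaf               */
};

/* Returns 1 (red) if this node or any descendant is in alarm,
 * 0 (green) otherwise -- exactly the panel's coloring rule. */
int box_color(const struct alarm_node *n)
{
    if (n->nchildren == 0)
        return n->in_alarm;
    for (int i = 0; i < n->nchildren; i++)
        if (box_color(&n->children[i]))
            return 1;
    return 0;
}
```

Touching a box on the screen just redraws the display using that box's children as the new top level, so the same few lines of logic serve every level of the hierarchy.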
These substation controller systems are highly flexible. If more serial ports are needed to talk to additional devices, we need only augment the serial multiport card or add another multiport card. If new types of devices are to be connected, we need only write a new protocol task and possibly a new device driver for an interface card.
How reliable are these systems? All of our substation controller systems have functioned extremely well from the moment they were powered up in the field. We've had a few application software bugs and glitches, but the system software has never failed nor caused us a single problem through thousands of hours of continuous uptime.
I stopped worrying long ago about the robustness of Linux—to be honest, Linux and its associated tools and support software from the Free Software Foundation and elsewhere comprise the most reliable system I have ever used on any hardware platform. I've heard horror tales from users of other operating systems: blue death screens, exhausted resource limits, quirky compiler bugs and so on. I shake my head in rueful irony, then swivel my chair back to my Linux box to get some work done. I can't imagine putting any other system out in the middle of the wild and woolly real world and expecting it to run for weeks and months and years without fail.
As far as cost-effectiveness goes, new controller systems generally cost only as much as the hardware, with sometimes a little software overhead if a new protocol or device driver needs to be developed. No commercial system can hope to come even close to that low a cost—thanks to the freely distributable nature of Linux. Since our Operations Engineering group is responsible for finding cost-effective solutions for power system monitoring and distribution needs, Linux is just this side of a miracle. In a very real (albeit small) sense, Linux helps us keep down the cost of distributing electricity. With every new substation controller installed, customers who've never even heard of Linux can benefit from all the hard work and loving craftsmanship the Linux developers and maintainers have invested in their system.
These two important points—reliability and low cost—lead naturally into a discussion of our other big Linux project: a replacement system for the current network of SCADA Master computers. The SCADA Master computers are the systems which scan all of the RTUs and IEDs mentioned above (several hundred at last count). These computers monitor power system conditions, generate alarms when abnormal conditions are detected, and provide system operators with summaries of power grid information, one-line diagrams of substation layouts and control interfaces for remotely operating breakers, capacitors and other field devices. The systems also run closed-loop feedback control programs, which automatically respond (usually via device controls) to changing system conditions.
Currently, the SCADA Master computers are a network of six PDP-11/84 computers which have just about reached the end of their usefulness—they've reached their limits for CPU power, installable RAM and so on. The user interface is a creaky mixture of specialized keyboards with banks of function keys and a character-based graphics terminal with unchangeable little symbols and line segments for drawing substation one-line diagrams. All of these features were quite new and progressive in the early 1980s, but are far from flexible enough for the present or foreseeable future.
As with our substation controllers, we went the commercial route first when we started looking for a new SCADA system. We reviewed the offerings of about a dozen vendors. Alas, since a SCADA system is similar to a factory automation system or even an aircraft simulator, many systems we reviewed were derived from these types of systems. As a result, they contained many features and add-ons which made no sense for the way our operators used SCADA; plus, they were expensive; plus, we usually couldn't get access to source code. (We've been spoiled by Linux.) At least one vendor offered to put the source code in escrow—about as useful as being told how delicious a chocolate cheesecake is without being offered a slice.
Not least of all was the issue of retraining our operations personnel (not to mention ourselves) on a completely new system. Monitoring the electric distribution grid is a 24-hour-a-day job, so we couldn't just shut down shop for a couple of months while we came up to speed on a new boatload of software. After all, our goal was to reliably monitor and control the power system, not necessarily to learn an entirely new way of opening a circuit breaker or logging an alarm. (Similar to click-and-drag disease is the curious notion that a new or different way of performing a task is automatically a better way.)
We thought long and hard about what we really needed: a cost-effective, flexible, scalable, reliable SCADA system replacement that wouldn't require extensive retraining for ourselves or our system operators, and that wouldn't include extra software gadgets for which we would have no use.
Meanwhile, our Linux systems continued to run quietly day in and day out: performing dial-up data retrieval, monitoring scores of devices in substations. The substation controllers, in particular, were almost embryonic SCADA systems, with data retrieval, database storage and archiving, and a user interface with one-line schematics and controls.
Of course, several important pieces were missing from the substation controllers which would have to be supplied to turn them into a full-fledged SCADA system—but after you've spent enough time in Emacs, you tend to think you can accomplish anything. So we examined issues of scalability, figured out exactly what extra pieces we needed and whether we could develop them on a realistic schedule, and put together a presentation for our management.
Surprisingly (or perhaps not so surprisingly, given our track record with the other cost-effective Linux systems), our upper-level management gave us the green light. Suddenly we had plenty of work to do, with an implementation target date of December, 1998.
I can't adequately describe how exciting (and terrifying, in some respects) this project is. Some of the features of our design:
- A private high-speed (100 Mbps) wide area network connecting all our main machines, separated from our corporate network by a firewall
- 500 MHz DEC Alpha database servers
- High-speed Intel front-end processors to handle RTU scanning and database retrieval
- Multi-headed operator workstations running X
- A distributed shared-memory database to transparently share information among all servers and workstations
- Three regional operating centers, a duplicate center at our Grayland Avenue office, scores of workstations, and many district centers (some connecting via Ethernet, others on demand via PPP/SLIP)
- Everything running Linux
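One way a distributed shared-memory database like the one listed above can work: each node keeps a local replica of the point table and merges timestamped updates as they arrive over the network, ignoring any that arrive out of order. This is a hypothetical last-writer-wins sketch, not the actual swSCADA design:

```c
#include <string.h>

/* A point update as it might travel across the wide area network:
 * which point changed, its new value, and when the owning front-end
 * sampled it.  (Invented layout for illustration.) */
struct point_update {
    int    point_id;
    double value;
    long   stamp;     /* monotonically increasing sample time */
};

#define NPOINTS 128

struct replica {
    double value[NPOINTS];
    long   stamp[NPOINTS];
};

/* Merge an incoming update into a node's local replica.  Older
 * updates (say, delivered late over a dial-up PPP link) are dropped,
 * so every replica converges on the newest value per point. */
int apply_update(struct replica *r, const struct point_update *u)
{
    if (u->point_id < 0 || u->point_id >= NPOINTS)
        return -1;
    if (u->stamp <= r->stamp[u->point_id])
        return 0;                       /* stale: ignore it */
    r->value[u->point_id] = u->value;
    r->stamp[u->point_id] = u->stamp;
    return 1;                           /* applied */
}
```

Because stale updates are simply dropped, the merge is idempotent and order-independent, which is what makes "transparent" sharing across dozens of machines tractable.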
A general overview of our system, which we've named swSCADA for SkunkWerx SCADA (smile), is shown in Figure 7.
By the time our implementation is finished (as mentioned above, our installation begins in December, 1998 with a new system in the Eastern/Southern regional center, with the remainder of the centers being phased in by the end of 1999), we will have a network of around four dozen Alpha and Intel boxes, running 24 hours a day, 7 days a week. This probably isn't the largest network using Linux systems exclusively, but it certainly puts the lie to those armchair critics who claim large corporations are unwilling to use Linux in mission-critical situations. Monitoring the electric power distribution grid is a mission-critical situation for a power company, and not only are we willing to use Linux, we embrace it wholeheartedly (and admittedly, somewhat evangelically). Its robust quality and freely distributable nature will save our customers money, provide them with top-drawer service, and give our shareholders more of a return on their investment. In today's bottom-line business environment, those sorts of arguments matter.
I was thinking the other day about Linus Torvalds' ultimate goal for Linux: “world domination”. As we enter the 21st century, every light, CD player, television, toaster, PC and hair dryer in central Virginia and part of North Carolina will be under the benevolent, watchful eye of vigilant Linux swSCADA systems. That's not world domination yet—but it's a start!