Improving Network-Based Industrial Automation Data Protocols

TCP/IP breaks into industrial automation, but not without some problems.

The industrial automation sector is rapidly advancing into the use of TCP/IP over Ethernet as a replacement for traditional data connectivity. Many of these devices implement application protocols that mirror their older cousins. With the move to IP, the operation, flexibility and reliability of these devices may be jeopardized by oversights in how sockets are adopted as the new connection medium. In this article I discuss several issues I've stumbled over while working with these devices, and I present solutions that future data protocols could build on.

Background

The industrial automation sector started with direct-connected I/O devices. These local devices are connected directly to a peripheral bus of the CPU, such as an 8155 parallel I/O chip or even a memory-mapped latch. Many large-scale or expansive industrial systems, including petroleum facilities, water treatment plants and building environmental controls, require a large number of monitoring and control points. Because of the electrical limitations of the CPU bus, remote I/O interfaces were developed: devices that allowed a data protocol to address multiple I/O units, with serial interfaces (such as RS-485) serving as the bus.

Serial interfaces are private, limited-access interfaces. The simple serial bus provides no way for transmitters to detect or resolve who owns the bus during a transmission. Hence, a single master transmitter originates all command messages, and the slaves simply reply to its requests. Other limitations of the serial bus include a limited length (several thousand feet or so) and a very limited bandwidth shared among many slave devices, which effectively lowers the per-slave bandwidth. These serial devices no longer had a memory-mapped architecture; they relied on a data protocol that read and wrote information to them. The data protocol served as an abstraction of the memory map of the direct-connected devices.
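To make that abstraction concrete, here is a minimal sketch of what a register-oriented request might look like on such a bus. The field names, sizes and function codes are hypothetical (loosely in the spirit of protocols such as Modbus RTU), not any particular vendor's format:

/* Hypothetical request frame for a register-oriented serial protocol.
 * The master addresses one slave, names a function (read or write),
 * and gives a starting register and a count.  Layout is illustrative only.
 */
#include <stdint.h>

struct io_request {
    uint8_t  slave_addr;   /* which slave on the RS-485 bus               */
    uint8_t  function;     /* e.g., 0x03 = read registers, 0x10 = write   */
    uint16_t start_reg;    /* first register, standing in for a memory map */
    uint16_t count;        /* number of registers to read or write        */
    uint16_t crc;          /* checksum so the slave can reject corruption */
};

The registers simply take the place of the memory locations a direct-connected device would have exposed on the CPU bus.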

Due to the restrictions of bandwidth, some manufacturers created proprietary bus interfaces, and others used nonmainstream, simple network interfaces (such as ARCnet). These interfaces allowed increased bandwidth, multiple-master capabilities and practical expansiveness. But there was one simple problem: these interfaces and networks never entered the mainstream marketplace. For example, a single manufacturer might support only its own I/O interface, so the risk of the company terminating the product line, or of a device reaching its end of life, doomed the future of the system. Unique cabling and networking systems had to be created just to support this hardware, which was expensive and required additional maintenance. And because the support marketplace was so small, the resources available to improve this technology were too limited to carry it forward.

Then the automation sector started to create devices that used Ethernet connections. Ethernet provides transmitter resolution and has sufficient bandwidth and excellent expansion potential. Ethernet also is a mainstream interface, which is its biggest strength. Some early implementations used raw Ethernet packets. Unfortunately, raw Ethernet isn't routable, at least not in the way IP is. Therefore, all the devices in such a raw Ethernet network must be on a common segment. Nonetheless, this is an effective method. Using Ethernet means an automation integrator could use an existing network infrastructure, cutting costs by not implementing another network and making use of an existing resource.
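For the curious, here is a rough sketch of what transmitting one raw Ethernet frame looks like on Linux with a packet socket. The interface name, destination MAC, payload and EtherType (0x88B5, a value set aside for local experimental use) are placeholders, not any particular automation protocol, and the program needs root (CAP_NET_RAW) to run:

/* Minimal sketch: send one raw Ethernet frame from a Linux packet socket.
 * With SOCK_DGRAM the kernel builds the Ethernet header from the
 * sockaddr_ll fields.  All values below are illustrative only.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <arpa/inet.h>

int main(void)
{
    const unsigned char dst[ETH_ALEN] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01};
    const unsigned short ethertype = 0x88B5;    /* local experimental use */

    int fd = socket(AF_PACKET, SOCK_DGRAM, htons(ethertype));
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(ethertype);
    addr.sll_ifindex  = if_nametoindex("eth0"); /* must share the segment */
    addr.sll_halen    = ETH_ALEN;
    memcpy(addr.sll_addr, dst, ETH_ALEN);

    const char payload[] = "poll slave 7";      /* stand-in for a command */
    if (sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}

Because there is no network layer, nothing here can cross a router; every slave must be reachable on the same Ethernet segment as this interface.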

As you would expect, the industrial automation sector then started implementing TCP/IP over Ethernet. This has all the advantages of raw Ethernet but with proper routing and nearly infinite expandability. The Ethernet market has huge amounts of funding and large numbers of competitors, yielding very high-throughput yet affordable systems. As a result, network determinism is a manageable issue rather than merely the fear of a network designer. Today, nearly every computer has a TCP/IP stack. Clearly, neither TCP/IP nor Ethernet is one company's standard; they are perhaps among the few real international standards.
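By comparison with the raw Ethernet case, reaching a device over TCP/IP is ordinary sockets code, and the device can sit anywhere the network routes to. The address and port below are placeholders (502 happens to be the port registered for Modbus/TCP, but any device-specific port works the same way):

/* Minimal sketch: open a TCP connection to an automation device.
 * The IP address and port are illustrative placeholders.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int connect_device(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in dev;
    memset(&dev, 0, sizeof(dev));
    dev.sin_family = AF_INET;
    dev.sin_port   = htons(port);
    if (inet_pton(AF_INET, ip, &dev.sin_addr) != 1 ||
        connect(fd, (struct sockaddr *)&dev, sizeof(dev)) < 0) {
        close(fd);
        return -1;
    }
    return fd;                  /* caller sends commands, reads replies */
}

int main(void)
{
    int fd = connect_device("192.168.1.50", 502);
    if (fd < 0) { perror("connect_device"); return 1; }
    /* ... exchange request/response frames here ... */
    close(fd);
    return 0;
}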

However, there is a fundamental issue: most of the data protocols are borrowed or inherited from their older counterparts. Certainly, if an automation integrator uses a company's network, it's not private anymore. In addition, TCP/IP, UDP/IP and Ethernet are not quite like a simple serial data packet. This transition has created new problems and opened the door to new strengths.

The Traditional Communication Protocol

Here's a short refresher on the essence of the traditional communication protocol. This method is typically found in serial and several proprietary data interfaces.

The traditional serial data interface is connectionless; the polling master computer initiates all conversation. It sends a command that either writes data to or requests data from a slave, and the slave responds. The master assumes the device exists by virtue of a response arriving after a matching command.

No synchronization exists between the master and slave, either. If the master gives up too early and begins transmitting a new command, it may collide with a slave replying to the last command. The application manages retransmission if a slave doesn't respond or responds with a corrupt reply. The point is that the slave never responds unless it's requested to; it's passive. I've referred to this kind of protocol as the Marco Polo technique, after the childhood water game.
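The pattern is easy to sketch. The hypothetical master routine below writes a command, waits a bounded time for the reply, and retries if nothing (or garbage) comes back; the timeout, retry count and checksum hook are placeholders, and it works against any file descriptor, whether a serial port or a socket:

/* Minimal sketch of the Marco Polo poll: send a command, wait a bounded
 * time for the slave's reply, retry a few times on silence or corruption.
 * Returns the number of reply bytes, or -1 if all retries are exhausted.
 */
#include <string.h>
#include <unistd.h>
#include <sys/select.h>

ssize_t poll_slave(int fd, const void *cmd, size_t cmd_len,
                   void *reply, size_t reply_max)
{
    const int max_retries = 3;

    for (int attempt = 0; attempt < max_retries; attempt++) {
        if (write(fd, cmd, cmd_len) != (ssize_t)cmd_len)
            continue;                       /* resend on a short write  */

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv = { .tv_sec = 0, .tv_usec = 250000 };  /* 250ms */

        if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
            continue;                       /* slave stayed silent      */

        ssize_t n = read(fd, reply, reply_max);
        if (n > 0 /* && checksum_ok(reply, n) */)
            return n;                       /* got a plausible answer   */
    }
    return -1;                              /* give up; treat as absent */
}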

The entire data interface supports a single data protocol. This is required because the slaves are listening for protocol-specific information. Other protocols transmitting on the bus may confuse the slaves.

Typically, these systems are repetitively polled. The polling rate may range from hundreds of times a second to hundreds of seconds between polls, depending on the response requirements of the control system.
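In practice the master simply runs the poll in a fixed-period loop over every slave. The 100ms period and slave count below are placeholders chosen for illustration, and poll_slave() is the hypothetical helper sketched above:

/* Sketch of a repetitive poll loop; the period would be chosen to match
 * the control system's response requirements.
 */
#include <sys/types.h>
#include <time.h>

ssize_t poll_slave(int fd, const void *cmd, size_t cmd_len,
                   void *reply, size_t reply_max);   /* previous sketch */

void poll_forever(int fd)
{
    unsigned char cmd[8] = {0}, reply[64];
    const struct timespec period = { .tv_sec = 0, .tv_nsec = 100000000L };

    for (;;) {
        for (int slave = 1; slave <= 4; slave++) {
            cmd[0] = (unsigned char)slave;  /* address each slave in turn */
            poll_slave(fd, cmd, sizeof(cmd), reply, sizeof(reply));
            /* ... update the control system's image of the I/O here ...  */
        }
        nanosleep(&period, NULL);           /* crude fixed-rate schedule  */
    }
}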

______________________

Comments


SCTP,


I must say, I can't wait for SCTP [RFC 2960] to get into the mainline kernel.

-http://people.cs.uchicago.edu/~piggy/sctp_refcode/

-http://www.sctp.de
