Improving Network-Based Industrial Automation Data Protocols

TCP/IP breaks into industrial automation, but not without some problems.

The industrial automation sector is rapidly adopting TCP/IP over Ethernet as a replacement for traditional data connectivity. Many of these devices implement application protocols that mirror their older cousins. With the move to IP, the operation, flexibility and reliability of these devices may be jeopardized by oversights in how sockets are used as the new connection medium. In this article I discuss many of the issues I've stumbled over while working with these devices, and I present solutions that future data protocols may improve on.

Background

The industrial automation sector started with direct-connected I/O devices. These local devices are connected directly to a peripheral bus of the CPU, such as an 8155 parallel I/O chip or even a memory-mapped latch. Many large-scale or expansive industrial systems, including petroleum facilities, water treatment plants and building environmental controls, require a large number of monitoring and control points. Because of the electrical limitations of the CPU bus, remote I/O interfaces were developed that allowed a data protocol to address multiple I/O units. Serial interfaces (such as RS-485) were used as the bus.

Serial interfaces are private, limited-access interfaces. The simple serial bus does not give transmitters any way to detect or resolve who owns the bus during a transmission. Hence, a single master transmitter originates all command messages, and the slaves simply reply to its requests. Other limitations of the serial bus include a limited length (several thousand feet or so) and a very limited bandwidth shared among many slave devices (effectively lowering the per-slave bandwidth). These serial devices no longer had a memory-mapped architecture; instead, they relied on a data protocol that read and wrote information to them. The data protocol served as an abstraction of the memory map of the direct-connected devices.

Because of these bandwidth restrictions, some manufacturers created proprietary bus interfaces, and others used nonmainstream, simple network interfaces (such as ARCnet). These interfaces allowed increased bandwidth, multiple-master capabilities and practical expansiveness. But there was one simple problem: these interfaces and networks never entered the mainstream marketplace. For example, if a single manufacturer supported only its own I/O interface, the risk of the company terminating the product line or of a device reaching end-of-life doomed the future of the system. Unique cabling and networking systems had to be created just to support this hardware, which was expensive and required additional maintenance. Because the support marketplace was so small, the resources available to improve this technology were too limited to carry it forward.

Then the automation sector started to create devices that used Ethernet connections. Ethernet provides transmitter resolution and has sufficient bandwidth and excellent expansion potential. Ethernet is also a mainstream interface, which is its biggest strength. Some early implementations used raw Ethernet packets. Unfortunately, raw Ethernet isn't routable, at least not in the way IP is. Therefore, all the devices on such a raw Ethernet network must share a common segment. Nonetheless, this is an effective method. Using Ethernet means an automation integrator can use existing network infrastructure, cutting costs by not building and maintaining a separate network.
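
To make the segment limitation concrete, here is a minimal sketch, not taken from any particular vendor, of how a device protocol might ride directly on Ethernet using a Linux AF_PACKET socket. The interface name, destination MAC, EtherType and payload bytes are assumptions for illustration only; because the frame is addressed by MAC alone, it cannot cross an IP router, so master and slaves must sit on the same segment.

/* Sketch: send one raw Ethernet frame with a custom EtherType.
 * Requires CAP_NET_RAW (typically root).  With SOCK_DGRAM the kernel
 * builds the Ethernet header from the sockaddr_ll information. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>

#define VENDOR_ETHERTYPE 0x88B5   /* experimental EtherType, assumed */

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_DGRAM, htons(VENDOR_ETHERTYPE));
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_ll addr = {0};
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(VENDOR_ETHERTYPE);
    addr.sll_ifindex  = if_nametoindex("eth0");   /* assumed interface */
    addr.sll_halen    = ETH_ALEN;

    /* Hypothetical slave MAC address; no IP addressing is involved. */
    unsigned char dst[ETH_ALEN] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
    memcpy(addr.sll_addr, dst, ETH_ALEN);

    unsigned char payload[] = { 0x01, 0x03, 0x00, 0x10 };  /* made-up command */
    if (sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}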

As you would expect, the industrial automation sector then started implementing TCP/IP over Ethernet. This has all the advantages of raw Ethernet but with routing and nearly infinite expandability. The Ethernet market has huge amounts of funding and large numbers of competitors, yielding very high-throughput yet affordable systems. As a result, network determinism is a manageable issue rather than merely a network designer's fear. Today, nearly every computer has a TCP/IP stack. Clearly, neither TCP/IP nor Ethernet is one company's standard; they are perhaps among the few truly international standards.

However, there is a fundamental issue: most of the data protocols are borrowed or inherited from their older counterparts. Certainly, if an automation integrator uses a company's network, it's not private anymore. In addition, TCP/IP, UDP/IP and Ethernet are not quite like a simple serial data packet. This transition has created new problems and opened the door to new strengths.

The Traditional Communication Protocol

Here's a short refresher on the essence of the traditional communication protocol. This method is typically found in serial and several proprietary data interfaces.

The traditional serial data interface is connectionless; the polling master computer initiates all conversation. It sends a command that either writes data to or requests data from a slave, and the slave responds. The master assumes the device exists only by virtue of a response arriving after a matching command.

No synchronization exists between the master and slave either. If the master gives up too early and begins transmitting a new command, it may collide with a slave still replying to the previous one. The application manages retransmission when a slave doesn't respond or returns a corrupt reply. The point is that the slave never transmits unless it is asked to; it is passive. I've referred to this kind of protocol as the Marco-Polo technique, after the childhood water game.
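
Here is a minimal sketch of what the master side of such a Marco-Polo exchange might look like on Linux. The timeout, retry count and frame contents are assumptions for illustration; the serial port is presumed already opened and configured (for example, /dev/ttyS0 via termios).

/* Sketch: one master poll cycle with timeout and retransmission. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/time.h>

#define REPLY_TIMEOUT_MS 250   /* assumed turnaround budget */
#define MAX_RETRIES      3     /* assumed retry limit */

/* Send one command and wait for a reply; returns reply length or -1. */
ssize_t poll_slave(int fd, const unsigned char *cmd, size_t cmd_len,
                   unsigned char *reply, size_t reply_max)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        if (write(fd, cmd, cmd_len) != (ssize_t)cmd_len)
            return -1;

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv = { 0, REPLY_TIMEOUT_MS * 1000 };

        /* The slave never speaks first: if nothing arrives before the
         * timeout, the application itself must retransmit. */
        if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
            continue;                      /* timeout -> retry */

        ssize_t n = read(fd, reply, reply_max);
        if (n > 0)
            return n;                      /* caller validates CRC, etc. */
    }
    return -1;                             /* slave presumed off-line */
}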

The entire data interface supports a single data protocol. This is required because the slaves are listening for protocol-specific information. Other protocols transmitting on the bus may confuse the slaves.

Typically, these systems are polled repetitively. The polling rate may be hundreds of times a second, or there may be hundreds of seconds between polls, depending on the response requirements of the control system.
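
As a rough illustration of pacing, the fragment below wraps the poll_slave() routine from the previous sketch in a fixed-rate loop. The interval is an arbitrary placeholder that a real system would derive from its response requirement.

/* Sketch: fixed-rate polling loop built on the previous poll_slave(). */
#include <stdio.h>
#include <time.h>
#include <sys/types.h>

ssize_t poll_slave(int fd, const unsigned char *cmd, size_t cmd_len,
                   unsigned char *reply, size_t reply_max);  /* see above */

#define POLL_INTERVAL_MS 100   /* ten polls per second, assumed */

void poll_forever(int fd)
{
    unsigned char cmd[]  = { 0x01, 0x03, 0x00, 0x10 };   /* made-up request */
    unsigned char reply[64];

    for (;;) {
        if (poll_slave(fd, cmd, sizeof(cmd), reply, sizeof(reply)) < 0)
            fprintf(stderr, "slave did not answer\n");

        struct timespec ts = { 0, POLL_INTERVAL_MS * 1000000L };
        nanosleep(&ts, NULL);              /* crude pacing, ignores jitter */
    }
}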

