Improving Network-Based Industrial Automation Data Protocols
TCP and UDP are very dissimilar protocols. To review: TCP is connection-oriented and stream-oriented, with guaranteed delivery and guaranteed ordering. UDP is connectionless and packet-oriented; it carries unreliable datagrams of limited length, and arrival order is not guaranteed. Routers also may be configured to drop UDP packets sent to unknown ports, although that's fairly unlikely these days.
The traditional serial data bus is more like UDP in its lack of a delivery guarantee and its connectionless nature. Arrival order, however, is a point of difference: a serial bus delivers data in the order it was sent, while UDP does not. Also, the packet orientation of the received data means excess data may be dropped if the receiving buffer isn't large enough.
TCP is similar to the traditional bus in that data arrives in guaranteed order and appears in a stream-oriented fashion. TCP differs in that each socket is a private virtual connection to each automation device, and guaranteed delivery means the IP stack will retransmit the message if it isn't acknowledged by the receiver.
Other than a shared ability to transmit and receive data, UDP and TCP have little in common. This can cause problems when trying to create a data protocol common to both of these IP protocols.
Returning to UDP, let's assume that each UDP command and response stays below the network MTU, guaranteeing that each is encapsulated in a single packet. If two packets arrive at the receiver, the receiver API will not concatenate them; the first receive call returns the first packet, and the second packet remains queued for the next receive call. But should the application request less data than the packet contains, the remaining data is dropped. In the UDP model, then, the receiver always can tell where a data segment begins and ends.
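Both behaviors are easy to demonstrate with a pair of loopback sockets. This is a minimal sketch; the payloads and buffer sizes are chosen only for illustration:

```python
import socket

# A "device" socket that receives and a "master" socket that sends.
device = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
device.bind(("127.0.0.1", 0))  # let the OS pick a free port
addr = device.getsockname()

master = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
master.sendto(b"first-datagram", addr)
master.sendto(b"second-datagram", addr)

# Each recvfrom() returns exactly one datagram; they are never concatenated.
data1, _ = device.recvfrom(1024)   # b'first-datagram' (order holds on loopback)

# Requesting fewer bytes than the datagram holds silently drops the rest.
data2, _ = device.recvfrom(6)      # b'second' -- the tail of the datagram is lost
print(data1, data2)

master.close()
device.close()
```

The second receive call illustrates the buffer-size hazard mentioned above: the excess bytes are discarded, not saved for a later read.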
Furthermore, because sequence is not preserved, the first packet to arrive may have been the second packet sent. If the data protocol allows multiple simultaneous data packets to be in flight over UDP, an application-layer sequence ID is required so the receiver can re-order the packets.
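One way to sketch such a sequence ID is a small fixed header prepended to each payload; the 4-byte big-endian layout here is an assumption, not part of any existing protocol:

```python
import struct

SEQ_HDR = struct.Struct(">I")  # 4-byte big-endian sequence number (assumed layout)

def wrap(seq, payload):
    """Prefix a payload with an application-layer sequence ID."""
    return SEQ_HDR.pack(seq) + payload

def reorder(datagrams):
    """Sort received datagrams by sequence ID and strip the header."""
    keyed = [(SEQ_HDR.unpack_from(d)[0], d[SEQ_HDR.size:]) for d in datagrams]
    return [payload for _, payload in sorted(keyed)]

# Datagrams that arrived out of order are restored to send order.
received = [wrap(2, b"write-ack"), wrap(1, b"read-reply")]
restored = reorder(received)
print(restored)  # [b'read-reply', b'write-ack']
```

A real protocol also would need to handle sequence-number wraparound and gaps from lost datagrams, which this sketch ignores.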
The connectionless nature of UDP means that if the automation device is power cycled, the polling master may have no way to tell that it occurred. This is no different from traditional serial devices.
TCP has a stream model: if two packets arrive and there is nothing in the data stream to indicate where one message ends and the next begins, the application can't discern the first message from the second. Some kind of data size field is required to allow the receiver to determine the beginning and end of each message in the stream.
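The usual fix is a length prefix in front of every message. Here's a minimal sketch, assuming a 2-byte big-endian length header (an invented convention, not any standard's):

```python
import socket
import struct

LEN_HDR = struct.Struct(">H")  # 2-byte big-endian length prefix (assumed convention)

def frame(payload):
    """Prefix a message with its length so the stream can be split back up."""
    return LEN_HDR.pack(len(payload)) + payload

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket, or raise on EOF."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(sock):
    """Read one length-prefixed message from the stream."""
    (length,) = LEN_HDR.unpack(recv_exact(sock, LEN_HDR.size))
    return recv_exact(sock, length)

# Two framed messages sent back-to-back arrive as one byte stream,
# yet the receiver can still split them apart.
a, b = socket.socketpair()
a.sendall(frame(b"read-req") + frame(b"write-req"))
msg1 = recv_message(b)
msg2 = recv_message(b)
print(msg1, msg2)
a.close()
b.close()
```

Note that `recv_exact` loops: a single `recv()` on a stream socket may return fewer bytes than requested, which is exactly the boundary problem the length prefix solves.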
TCP is also connection-oriented. Should the automation device be power cycled, its TCP connections are reset. The polling master then receives a socket error if it attempts to transmit to a device that no longer has an established connection.
Another consequence of connection orientation is that the connection must be closed gracefully, and this graceful closure requires data packets to be exchanged between the polling host and the automation device. Should the polling master close the connection while some kind of infrastructure failure is occurring, the automation device may be left with a permanently half-open TCP session (a resource leak). If this happens too many times, the automation device may run out of resources.
TCP guarantees arrival, which sounds like a great thing, but the dreadnought nature of TCP has one giant Achilles heel: time. TCP makes repeated transmission attempts while waiting for an acknowledgement from the receiver. The first timeout is short, the following ones grow longer, and the last is an eternity; most automation scanners won't wait that long. Failed infrastructure, a power cycle or an unplugged cable can all trigger this behavior, tying up resources in the meantime. Because many automation devices run IP stacks with limited resources, carelessly creating (or leaking) TCP connections may exhaust the device's IP resources.
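A scanner can sidestep the kernel's retransmission schedule by putting its own deadline on the socket. A minimal sketch, using a one-second timeout as an illustrative poll deadline (the address 192.0.2.1 is reserved documentation space that won't answer; 502 is shown only as a typical automation port):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1.0)  # bound every blocking call on this socket to 1 second
try:
    sock.connect(("192.0.2.1", 502))
    result = "connected"  # not expected for a non-routable documentation address
except socket.timeout:
    result = "timed out after 1s -- mark device offline, keep scanning"
except OSError as exc:
    result = f"failed fast: {exc}"
finally:
    sock.close()
print(result)
```

The same timeout applies to `recv()` calls, so a device that accepts the connection but never replies can't stall the scan loop either.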
Because traditional serial data streams had high transmission latency, data packets were kept as small as possible (typically, well under 256 bytes). That small size allows only a single data write or data read request to be sent at a time. With Ethernet's significantly larger MTU, data packets as large as 1,400 bytes can be handled, enough to carry a read and a write request simultaneously. This cuts the number of sends and receives in half, reducing network traffic and making the polling scanner more efficient.
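Packing both requests into one frame can be sketched as follows; the opcodes and field layout are invented for illustration and assume the combined size stays well under the MTU:

```python
import struct

# Invented layout: opcode (1 byte), register address (2 bytes), operand (2 bytes).
REQ = struct.Struct(">BHH")
OP_READ, OP_WRITE = 0x01, 0x02

def build_combined(read_start, read_count, write_reg, write_value):
    """Pack one read request and one write request into a single frame."""
    return (REQ.pack(OP_READ, read_start, read_count) +
            REQ.pack(OP_WRITE, write_reg, write_value))

packet = build_combined(0x0010, 8, 0x0020, 0xBEEF)
print(len(packet))  # 10 bytes: two requests, one send instead of two
```

Ten bytes for two operations leaves enormous headroom under a 1,400-byte budget, which is the point: one round trip now does the work of two.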
Also, Ethernet allows virtually parallel data transfer via its collision resolution, and with switched infrastructure, this parallel model can be exploited further. A data read or write request could be broadcast or multicast to the automation devices, allowing simultaneous updates with a single send, as well as parallel read replies from a single read request. This feature isn't possible on devices that don't support broadcast or multicast reception.
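The sending side of such a multicast update can be sketched in a few lines. The group address, port and payload here are all illustrative (239.0.0.0/8 is the administratively scoped multicast range):

```python
import socket

GROUP, PORT = "239.1.1.1", 5007  # illustrative group and port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TTL 1 keeps the datagram on the local segment.
sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
# Pin egress to loopback so this demo is self-contained on one host.
sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                  socket.inet_aton("127.0.0.1"))

sent = sender.sendto(b"write-setpoint:42", (GROUP, PORT))
print(sent)  # one send reaches every subscribed device
sender.close()
```

Each listening device would join the group with `IP_ADD_MEMBERSHIP` and receive the same datagram; devices lacking multicast support simply never see it, which is the limitation noted above.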