Improving Network-Based Industrial Automation Data Protocols
On a traditional bus with a limited number of devices, an application could scan every possible address to see which ones respond. This isn't practical with IP. If an IP host were configured on a Class A network, there could be roughly 16 million possible slaves to poll (the host portion of a Class A address is 24 bits).
A simple method exists to poll for these automation devices. Say a UDP port is reserved for a service locator. When this service locator receives a specific message, it replies with its device type. An IP host then could perform a UDP broadcast, and every slave could reply to the broadcaster—one transmission, and all available automation devices report back to the transmitter.
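The broadcast-and-collect idea can be sketched in a few lines of Python. The locator port number, the `DISCOVER` message and the JSON reply format are all assumptions for illustration; the article doesn't prescribe a wire format:

```python
import json
import socket

# Hypothetical UDP port reserved for the service locator.
LOCATOR_PORT = 47808

def build_discover_request() -> bytes:
    """The specific message the service locator listens for (assumed format)."""
    return b"DISCOVER"

def parse_device_reply(data: bytes, addr: tuple) -> dict:
    """Each reply carries the device type; JSON is assumed here for clarity."""
    reply = json.loads(data.decode())
    reply["address"] = addr[0]  # remember which host answered
    return reply

def discover_devices(timeout: float = 2.0) -> list:
    """Broadcast once, then collect every reply until the timeout expires."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(build_discover_request(), ("255.255.255.255", LOCATOR_PORT))
    devices = []
    try:
        while True:                       # keep reading until silence
            data, addr = sock.recvfrom(1024)
            devices.append(parse_device_reply(data, addr))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return devices

# Usage: discover_devices() returns one dict per responding slave.
```

One transmission goes out; the timeout simply bounds how long the host waits for stragglers to report in.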
To sidetrack for a moment: if you heard someone say they Telneted to port 80, you'd immediately think HTTP. In fact, very few reserved ports exist for industrial automation, but that isn't what matters. What matters is that if I connect to TCP port 12345, it had better be what I think it is. But if I Telnet to port 80 and I didn't already know what port 80 was, how would I know what I'd reached? (The blinking cursor doesn't help much either.)
If we add the ability for the service locator to describe which ports carry which data protocols, my host computer could first broadcast to see which devices are available and then determine which services are configured where. My industrial automation data protocol then could live on any port, over either IP transport. This way, TCP port 12345 has meaning before I even try to connect to it.
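The device side of that scheme might look like the sketch below. The service table, the port numbers and the protocol names are invented for illustration; the point is only that the locator's reply enumerates which transport and port carry which protocol:

```python
import json
import socket

LOCATOR_PORT = 47808  # hypothetical locator port, matching the discovery example

# Hypothetical service table: the data protocols can sit on any ports we choose,
# because the locator reply is what gives those ports meaning.
SERVICE_TABLE = {
    "device_type": "flow-controller",
    "services": [
        {"protocol": "automation-cmd", "transport": "tcp", "port": 12345},
        {"protocol": "telemetry",      "transport": "udp", "port": 12346},
    ],
}

def build_locator_reply(table: dict) -> bytes:
    """Serialize the service table for the UDP reply."""
    return json.dumps(table).encode()

def serve_locator() -> None:
    """Answer each DISCOVER broadcast with this device's service table."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", LOCATOR_PORT))   # listen on all interfaces
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"DISCOVER":
            sock.sendto(build_locator_reply(SERVICE_TABLE), addr)

# Usage: run serve_locator() on the device; the host learns where everything is
# before opening a single TCP connection.
```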
In industrial automation systems, maintaining a unit may require powering it down. The host sees the device's unavailability by the lack of a reply to a command. But how would the host know when the device was turned back on? It always has been difficult to test whether a device is available at a specific address. An application would have to attempt communication with the device just to see whether it was awake—a laborious and inefficient way to rediscover it.
Much like RARP, what if the unit broadcast its awake state when it was turned on? The host system could receive this message and commence communication with the device. This would prevent a host from wasting its time polling a device that doesn't exist. I also believe most software engineers would say this method is easier to implement, as retransmission logic is fairly difficult and rather application-specific.
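A device-side sketch of this announcement, with the heartbeat and beat count discussed below folded in. The port, interval and message format are assumptions, not part of any existing protocol:

```python
import json
import socket
import time

HEARTBEAT_PORT = 47809     # hypothetical UDP port for awake/heartbeat broadcasts
HEARTBEAT_INTERVAL = 30.0  # a conservative interval, in seconds
DEVICE_ID = "valve-07"     # hypothetical device name

def build_heartbeat(device_id: str, beat: int) -> bytes:
    """One heartbeat message: who is alive, and how many beats so far."""
    return json.dumps({"device": device_id, "beat": beat}).encode()

def announce_forever() -> None:
    """Broadcast the awake state at power-up (beat 0), then keep beating slowly."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    beat = 0
    while True:
        sock.sendto(build_heartbeat(DEVICE_ID, beat),
                    ("255.255.255.255", HEARTBEAT_PORT))
        beat += 1                      # counter restarts at 0 after a power cycle
        time.sleep(HEARTBEAT_INTERVAL)

# Usage: announce_forever() runs for the life of the device; beat 0 doubles as
# the "I just woke up" message.
```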
Of course, the skeptics will point out: what if this wake-up packet gets dropped? It does happen, but the wake-up could be set to broadcast repeatedly at a conservative interval. Sure, one packet could be dropped, but would the next one? If it is, something might be wrong with the network. Certainly, a heartbeat couldn't hurt either; if the heartbeat rate is low enough, its extra traffic is comparable to the ordinary broadcast traffic already on the network.
This feature has another strength in systems that poll the hardware slowly, perhaps waiting minutes between polls. The traditional method is to transmit the command and wait for the response; if the device isn't available, no response arrives. Wouldn't it be better to know that the target automation device recently sent a heartbeat and that our next transmission probably will succeed? And if we haven't heard a heartbeat from a specific device in a long time, why try to transmit to it at all?
Also, if the device's heartbeat carried a beat count, a monitoring application could tell when the device was reset, when a packet was dropped, when power was cycled, when network infrastructure went down, when a gateway or telecom link failed, or when some other strange anomaly occurred. These statistics can assist maintenance and the diagnosis of hardware or network problems.
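The monitoring side of the beat count can be sketched as a small state machine. The classification rules here (counter restarting means a reset or power cycle, a jump means lost packets, prolonged silence means don't bother transmitting) are my reading of the paragraph above, not a defined protocol:

```python
HEARTBEAT_INTERVAL = 30.0  # must match the device's broadcast rate (assumed)

class HeartbeatMonitor:
    """Track per-device beat counts to spot resets, drops and silence."""

    def __init__(self):
        self.last_beat = {}  # device id -> last counter value seen
        self.last_seen = {}  # device id -> wall-clock time of last heartbeat

    def observe(self, device: str, beat: int, now: float) -> str:
        """Classify one received heartbeat against the previous one."""
        prev = self.last_beat.get(device)
        self.last_beat[device] = beat
        self.last_seen[device] = now
        if prev is None:
            return "new"      # first time we've heard this device
        if beat == prev + 1:
            return "ok"       # normal progression
        if beat <= prev:
            return "reset"    # counter restarted: reboot or power cycle
        return "dropped"      # counter jumped ahead: packets were lost

    def is_alive(self, device: str, now: float, misses: int = 3) -> bool:
        """Worth transmitting to? Only if heard from within a few intervals."""
        seen = self.last_seen.get(device)
        return seen is not None and now - seen < misses * HEARTBEAT_INTERVAL
```

A host's poll loop can consult `is_alive()` before each transmission instead of timing out against a device that went away minutes ago.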
Let's return to the Telnet-to-port-80 example. TCP allows a backlog of connections to accumulate. When you've successfully Telneted to port 80, how do you know whether your socket is still sitting in the backlog queue or is actively being serviced by the application?
This problem also has its roots in the traditional serial interface, where the assumption is that the slave is hard-wired to the host. There is no connection backlog; what the host sends is received by the slave. The traditional method is to assume that once the communication resource is opened, the host simply can send whenever it wants.
What if the automation device's command processor sent an acceptance message to the host? This message would tell the host that the socket isn't merely accepted but is actively being examined. The host then knows it can send commands, and the automation device will process them immediately. Without this, the host could send a message down the socket before the command processor has even accepted it.
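A minimal sketch of that handshake over loopback, assuming an invented `READY` banner as the acceptance message. The key detail is that the server sends the banner only after `accept()`, so receiving it proves the socket has left the backlog:

```python
import socket
import threading

ACCEPT_BANNER = b"READY\n"  # hypothetical acceptance message

def serve_one(listener: socket.socket) -> None:
    """Accept one connection and tell the peer the command processor has it."""
    conn, _ = listener.accept()
    conn.sendall(ACCEPT_BANNER)    # socket has left the backlog: commands welcome
    cmd = conn.recv(1024)
    conn.sendall(b"ACK " + cmd)    # acknowledge the command for this demo
    conn.close()

def send_command(host: str, port: int, cmd: bytes) -> bytes:
    """Wait for the banner before sending, so commands aren't queued blind."""
    conn = socket.create_connection((host, port))
    banner = conn.recv(len(ACCEPT_BANNER))
    if banner != ACCEPT_BANNER:
        conn.close()
        raise RuntimeError("no acceptance message; command processor not ready")
    conn.sendall(cmd)
    reply = conn.recv(1024)
    conn.close()
    return reply

# Demo over loopback: start a one-shot server, then issue a command.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_one, args=(listener,), daemon=True).start()
reply = send_command("127.0.0.1", port, b"STATUS")
```

The client's blocking `recv()` on the banner is what replaces the serial-era assumption that an open resource is always ready.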