Using Linux with Programmable Logic Controllers
In our applications, Linux computers serve as interfaces between the PLC and the outside world. Our main purpose was to use tools like Tcl/Tk and the World Wide Web (WWW), through the Common Gateway Interface (CGI), to control processes in the PLC. The software used to program our PLCs is Microsoft Windows-based; consequently, by keeping Linux and Windows partitions on the same hard disk, we can toggle back and forth between MS Windows, where we program the PLCs, and Linux, where we program and run the operator interface.
The real-time operating system inside PLCs is relatively simple compared to a complex system such as Linux. Consequently, those portions of the control process which require extremely high reliability can be programmed into the PLC, leaving Linux available for other tasks.
We are using the PLC Direct line of PLCs (see http://www.plcdirect.com/) for our applications. In order to prove the reliability of the Linux and PLC Direct combination, we collaborated with UNICAT (a University-National Lab-Industry Collaborative Access Team; see http://www.uni.aps.anl.gov/) to set up a test system using a Linux-based web server connected to a PLC Direct 405 PLC. Communication with the PLC uses a multidrop, packet-based, master/slave protocol that runs over a serial link. Using the PLC Direct documentation, we implemented this protocol with Don Libes' expect program over the Linux serial ports, making the Linux system the master. This gave us the capability to “peek” and “poke” into the PLC memory map. CGI scripts call the expect program to provide access from the Web.
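To make the master/slave exchange concrete, here is a minimal Python sketch of building a read ("peek") request frame for the serial link. The frame layout, the opcode, and the SOH byte are illustrative assumptions, not the actual PLC Direct frame format; our real implementation drives the serial port from expect scripts, as described above.

```python
# Sketch of the "peek" side of a serial master. The packet layout here
# is hypothetical; consult the PLC Direct protocol documentation for
# the real framing.

def lrc(payload: bytes) -> int:
    """Longitudinal redundancy check: XOR of all payload bytes."""
    check = 0
    for b in payload:
        check ^= b
    return check

def build_read_request(slave_addr: int, address: int, nbytes: int) -> bytes:
    """Build a master-to-slave read ("peek") frame.

    Illustrative layout: SOH (0x01), slave address, opcode 0x01 (read),
    16-bit memory address, byte count, then an XOR checksum.
    """
    body = bytes([slave_addr, 0x01,
                  (address >> 8) & 0xFF, address & 0xFF,
                  nbytes])
    return bytes([0x01]) + body + bytes([lrc(body)])
```

A "poke" frame would be built the same way, with a write opcode and the data bytes appended before the checksum; the master then waits for the slave's acknowledgment before sending the next packet.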
Photo of the DND-CAT/UNI-CAT Linux/PLC Test Stand
Net surfers were allowed to close output points and read input points on the PLC through the WWW interface between March 1995 and July 1996. A photo of that test stand is shown in Figure 2. The PLC monitored digital inputs from five Love controllers, which measured the temperature in the PLC CPU and closed a contact if the temperature rose above a preset value.
In addition to the demonstration, we have used the Linux/PLC combination in three project areas: a simple shutter for a synchrotron X-ray beamline, a personnel safety system for an analytical X-ray machine, and an equipment protection system using a high intensity X-ray beamline at the Advanced Photon Source.
Simple Ladder Logic Diagram
Our simplest application with the PLC and Linux involved interfacing a commercial X-ray beam shutter to our Linux data-collection computer. The shutter hardware is controlled by two relay-actuated solenoids. When we program the PLC, we allocate two ranges of control relays to act as an interface between the PLC and Linux; the program in Figure 1 demonstrates this. A program on the Linux side sets C0, while X0 is attached to a hardware switch and provides an external input to the system. The X0 and C0 combination simulates a three-way switch, and Y0 and Y1 actually operate the relays on the shutter. A program on the Linux side can read C10 to monitor the shutter status. With the interface between the PLC and Linux defined through control relays, the actual control process is divided between the two machines.
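One way to model the rung logic described above, assuming the three-way-switch behavior is the exclusive-or of X0 and C0, is the following Python sketch. The point names mirror the PLC relays; the real ladder program in Figure 1 may differ in detail.

```python
# Toy model of the shutter interface logic. Assumption: the three-way
# switch means flipping either X0 (hardware switch) or C0 (set from
# Linux) toggles the shutter, i.e. open = X0 XOR C0.

def scan(x0: bool, c0: bool) -> dict:
    """One PLC scan: compute the outputs from the two inputs."""
    open_cmd = x0 != c0          # three-way switch: either side toggles
    return {
        "Y0": open_cmd,          # energize the "open" solenoid relay
        "Y1": not open_cmd,      # energize the "close" solenoid relay
        "C10": open_cmd,         # status relay that Linux reads back
    }
```

With this split, the Linux side never touches Y0 or Y1 directly; it only pokes C0 and peeks C10, so the PLC stays the sole authority over the solenoids.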
Our second project used the PLC as a state machine to monitor a radiation enclosure for an X-ray generator and X-ray tube. Since this was a safety device, we enabled the PLC's password function to lock the program into the PLC CPU; if we ever forget the password, the CPU must be sent back to the manufacturer for a reset. The PLC monitors twelve door-contact switches, switches from an operator panel, X-ray shutter positions, and water-flow interlocks for the X-ray tube, and it drives a buzzer and a fail-safe lamp to notify the operator that the X-rays are on and the shutter is open. The PLC also provides enable signals for the X-ray generator and the X-ray shutter.
While the main purpose of the PLC is to protect the operator, the PLC has no good way of telling the operator what failed when the X-ray interlock trips. This is where Linux comes in. Using CGI scripts, we wrote web pages that allow the operator to query the PLC state from a browser. To prevent unauthorized access to the equipment (only trained people may use it), we provided a watchdog signal between Linux and the PLC. An authorized user logs into the Linux system and runs a protected daemon that starts the watchdog timer in the PLC. The Linux daemon must continuously restart the watchdog to keep the X-ray system enabled, and it disables the system when the user logs out. Linux keeps track of all accesses to the system and sends e-mail to the X-ray generator custodian whenever an access occurs. The Linux system also acts as the data-collection computer for the instruments attached to the X-ray generator.
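The core loop of such a watchdog daemon might look like the following Python sketch. Here poke() and session_active() are hypothetical stand-ins for the expect-based PLC write and the login check, and the watchdog relay address is made up for illustration.

```python
# Sketch of the Linux-side watchdog daemon. poke(addr, value) and
# session_active() are assumed helpers; the real daemon talks to the
# PLC through expect scripts over the serial line.
import time

WATCHDOG_RELAY = 0x0700   # illustrative PLC address for the watchdog bit
KICK_PERIOD = 5.0         # seconds between restarts of the PLC timer

def run_watchdog(poke, session_active, sleep=time.sleep):
    """Kick the PLC watchdog until the user's session ends, then disable.

    poke(addr, value) writes a PLC memory location; session_active()
    reports whether the authorized user is still logged in.
    Returns the number of kicks delivered.
    """
    kicks = 0
    while session_active():
        poke(WATCHDOG_RELAY, 1)   # restart the watchdog timer in the PLC
        kicks += 1
        sleep(KICK_PERIOD)
    poke(WATCHDOG_RELAY, 0)       # on logout, let the PLC disable X-rays
    return kicks
```

The fail-safe property comes from the PLC side: if Linux crashes or the daemon dies, the kicks stop, the PLC's timer expires, and the X-ray system is disabled without any action from Linux.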
Our last project is an equipment-protection system for an X-ray synchrotron beamline at the Advanced Photon Source (APS). In this case, the PLC monitors over 70 input points from water-flow meters, vacuum-system outputs, and switches on vacuum valves. Based on the status of these systems, the PLC sends an enable or disable signal to the APS, which permits the facility to deliver the high-intensity X-ray beam to our equipment. Serious equipment damage can occur if the APS delivers beam when the systems are not ready. In this case, we use Linux as a data logger as well as an operator interface: every few minutes, Linux polls the PLC to log system status.
The PLC itself keeps a log of significant events in nonvolatile memory, so the log survives power failures. To keep the PLC in sync with the Linux logs, we run the Network Time Protocol daemon on the Linux end and reset the real-time clock in the PLC once a day. In addition to the PLC, Linux processes monitor other devices, such as vacuum gauges, through a multiport serial card. If a system failure occurs, our scientists and engineers can either log into the Linux system and run expect scripts to diagnose the problem, or use a browser to interact with the Linux/PLC combination via the Web. At that point, the operator has complete control over enabling and disabling processes in the PLC.
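A minimal sketch of one logging-and-clock-sync cycle is shown below in Python. The peek() and set_plc_clock() helpers and the status-block address are hypothetical; the real system performs these steps through expect scripts over the serial line, as described above.

```python
# One cycle of the data logger. Assumptions: peek(addr, n) reads n words
# of PLC memory, log(t, status) appends to the Linux-side log, and
# set_plc_clock(t) pushes the Linux (NTP-disciplined) time into the PLC.

def poll_once(peek, log, set_plc_clock, state, now, clock_period=86400):
    """Record PLC status, and once per clock_period (a day, by default)
    reset the PLC real-time clock so the two logs stay in sync.

    state is a dict carrying "last_sync" between calls.
    """
    status = peek(0x0000, 16)        # illustrative status-block address
    log(now, status)
    if now - state["last_sync"] >= clock_period:
        set_plc_clock(now)           # daily reset of the PLC clock
        state["last_sync"] = now
    return status
```

A cron job, or a simple loop with time.sleep(300), would call poll_once every few minutes to match the polling interval described above.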
In this application, the interface with the World Wide Web is extremely important. Scientists travel to synchrotron sources from all over the world to conduct experiments, and when the facility is operational, it runs twenty-four hours a day. If our PLC shuts the equipment down, it is important to be able to diagnose the fault and, if possible, return the equipment to operational status as quickly as possible. By using the World Wide Web, we provide our scientists and engineers with diagnostic tools they can use from anywhere with commonly available software. I personally have monitored the status of our PLCs from my desk at work, my apartment in Chicago, and a cyber-cafe in London.
In general, we have found combining programmable logic controllers with Linux to be a cost-effective and robust method for providing specialized control systems at the DuPont-Northwestern-Dow Collaborative Access Team. As we build our instrumentation, we continue to find new applications for this combination. We have several more projects in the works, including using PLCs to construct intelligent controllers for specialized machines and using Linux to interface with them. We also plan to implement the PLC Direct slave protocol under Linux so the PLC can send data directly to Linux daemons, eliminating the need for polling.