Using Linux with Programmable Logic Controllers
When solving control-system problems in the “real world”, a toolkit approach to problem solving often leads to quicker and more robust solutions. This is one of the reasons we are using Linux on commercial Intel-based machines at the DuPont-Northwestern-Dow Collaborative Access Team (DND-CAT) at the Advanced Photon Source (APS).

The APS (see http://www.aps.anl.gov/) is one of three third-generation synchrotron X-ray sources that will provide the world's most brilliant X-rays for scientific research. The DND-CAT (see http://tomato.dnd.aps.anl.gov/DND/) is a collaboration formed by the DuPont Company, Northwestern University and the Dow Chemical Company to build and operate scientific equipment at the APS to study industrially and academically interesting problems in chemistry, biology, materials science and physics.

Linux (like all UNIX systems) is designed around the toolkit paradigm. The tools that run under Linux provide an excellent framework for building user interfaces (e.g., Netscape, Java, Tcl/Tk, expect, World Wide Web daemons), running calculations (e.g., C, C++, FORTRAN, Perl, PVM) and interacting with external devices (superb access to serial devices, cards in the backplane and, of course, TCP/IP).
However, while there are efforts to equip Linux with real-time capabilities, it is not a “real-time” operating system. In addition, using commercial personal computers for control applications is a mixed blessing at best. While the systems are powerful, readily available and inexpensive, they also come with a limited number of slots on the backplane and the machine usually must be physically close to the process being controlled or monitored. This can be problematic in situations where the process takes place in a harsh environment that might cause the hardware to fail (e.g., high radiation areas, high vibration, etc.). These are important factors in the design of an entire control system. However, they are only problems if we expect Linux to provide the entire solution to the control problem rather than one tool in a toolkit approach.
At the DND-CAT, we have been designing systems that use programmable logic controllers in conjunction with Linux PCs to provide low cost automation and control systems for scientific experimental equipment.
Programmable logic controllers (PLCs) are the unsung heroes of the modern industrial revolution. Long before IBM and Apple were churning out computers for the masses, factories were being automated with computerized controllers designed to interface with the “real world” (i.e., relays, motors, temperatures, DC and AC signals, etc.). These controllers are manufactured by many companies, such as Modicon, Allen-Bradley and Square D. In his booklet, History of the PLC, Dick Morley, the original inventor of the PLC, notes that the first PLC was developed at a consulting company, Bedford Associates, back in 1968. At that time, Bedford Associates was designing computer-controlled machine tools as well as peripherals for the computer industry. The PLC was originally designed to eliminate a problem in control. Before the digital computer, logic functions were implemented in relay racks, where a single relay would correspond to a bit. However, relays tend to be unreliable in the long term, and the “software” was hard-programmed via wiring.
System reliability could be improved by replacing the relays with solid state devices. This had the advantage that the system was maintainable by electricians, technicians and control engineers. However, the “software” was still in the hard wiring of the system and difficult to change. The alternative at this time was using one of the minicomputers being developed, like the PDP-8 from Digital Equipment. While more complex control functions could be implemented, this also increased the system complexity and made it difficult to maintain for people on the factory floor.
Morley designed the first PLC to replace relay racks with a specialized real-time controller that would survive industrial environments. This meant it had to survive tests such as being dropped, zapped with a Tesla coil and banged with a rubber mallet. Designed for continuous operation, it had no on/off switch. The real-time capabilities were—and for the most part still are—programmed into the unit using ladder logic.
Ladder logic is a rule-based language; an example is given in Figure 1. The line on the left side of the diagram shows a “power rail”, with the “ground” for this rail on the right-hand side (not shown). The rules for the language are coded by completing “circuits” in ladder rungs from left to right. In the diagrams, “||” corresponds to switch contacts, and “()” corresponds to relay coils. Slanted bars through the contacts and coils denote the complement. The “X” switch contacts are mapped to real binary input points, the “Y” relay coils are mapped to output points, and the “C” contacts/coils are software points used for intermediate operations. In the example, closing both X0 and C0, or opening both, energizes the C10 coil, thereby closing the C10 contact. The C10 contact in turn activates Y0 and turns off Y1.
While this graphical style of programming may seem strange to someone accustomed to programming in C or FORTRAN, ladder logic makes it easy for non-programmers to write useful applications. Most PLCs have a large set of functions, including timers, counters, math operations, bit shifters, etc. They support a wide variety of input and output devices, including binary and analog inputs and outputs, motor and temperature controllers, relay outputs, magnetic tachometer pickups, etc. The number of input and output points depends on the type and size of the PLC, but it can range from fewer than 10 for a micro-PLC to over a thousand for one of the higher-end PLCs. The PLC market has grown over the years and has been affected by the computer revolution. Today, there are a number of high-quality, inexpensive PLCs on the market, and PLCs from the same vendor can often be networked together. In some cases, a lower-end PLC system can be put together for less than $500.