Automating Manufacturing Processes with Linux
A manufacturing company makes money only when production is running, so timely information from the production floor is crucial to the business. As our company has grown, so has the complexity of our operations, and we have outgrown our manual, paper-based methods for monitoring manufacturing.
Midwest Tool & Die stamps electronic terminals and molds plastic parts for the automotive, electronics and consumer industries. Our manufacturing processes generate a lot of data. Our high-speed presses make up to 1,200 parts per minute, and each part must be correct. We inspect critical dimensions for every part that is produced. Part quality is charted and monitored, and the data is archived for traceability.
We needed to manage all of this data to improve the manufacturing processes. Our main goals were to improve uptime and understand the causes for downtime. In addition, we hoped to track and control costs, reduce paperwork and avoid human input error.
To meet these goals, we came up with requirements for the new system. The first requirement was to gather data from a variety of machine controls, sensors, automated inspection equipment, programmable logic controllers (PLCs) and human operators. The system had to be reliable and capable of gathering data at our fastest production rate.
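The fastest production rate quoted above, 1,200 parts per minute, sets a hard deadline on each acquisition cycle. A quick back-of-the-envelope check (the function name is ours, for illustration):

```python
def sampling_period_s(parts_per_minute: float) -> float:
    """Worst-case time budget per part, in seconds."""
    return 60.0 / parts_per_minute

# At 1,200 parts per minute the system has only 50 ms per part
# to trigger the DAQ board, read the sample and store the result.
budget = sampling_period_s(1200)
```

Any jitter or delay larger than that budget means a part goes uninspected, which is why reliability at full line speed was a requirement rather than a nicety.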
Next, the system had to correlate the data that was gathered. The system would need to interact with enterprise PostgreSQL databases. Production data and process status would be passed to PostgreSQL for display and reporting.
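The hand-off to PostgreSQL can be sketched as a parameterized INSERT; the table and column names below are hypothetical, and in practice the statement and parameter tuple would be passed to a client library such as psycopg:

```python
from datetime import datetime, timezone

def production_insert(press_id, dimension_mm, measured_at=None):
    """Build a parameterized INSERT for a hypothetical
    production_data table.  Values are bound as parameters
    rather than spliced into the SQL string, so free-form
    operator input cannot corrupt the statement."""
    if measured_at is None:
        measured_at = datetime.now(timezone.utc)
    sql = ("INSERT INTO production_data "
           "(press_id, dimension_mm, measured_at) "
           "VALUES (%s, %s, %s)")
    return sql, (press_id, dimension_mm, measured_at)
```

Batching these inserts on the shop-floor PC and flushing them periodically keeps the real-time acquisition path decoupled from database latency.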
The new automation system also had to provide a user interface, so the machine operator and maintenance personnel could log their activities. Process downtime and the reason for the downtime would be logged and passed to the enterprise databases. This requirement would replace a paper log and manual data entry effort.
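The paper log being replaced boils down to a simple record per stoppage. A minimal sketch, with illustrative field names rather than the actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DowntimeEvent:
    """One operator-logged stoppage, as captured by the GUI
    and forwarded to the enterprise database."""
    line: str
    reason: str          # e.g. "tooling change", "material jam"
    started: datetime
    ended: datetime

    @property
    def minutes_lost(self) -> float:
        return (self.ended - self.started).total_seconds() / 60.0
```

Because the reason code is selected at the machine instead of transcribed later, downtime reports can be aggregated by cause without a manual data-entry step.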
Finally, the system needed to be flexible and easily upgraded. The solution would be adaptable to new manufacturing lines and changing system inputs.
We evaluated several solutions to meet the requirements. Industrial PLCs could gather data reliably. However, their approach to networking has been stuck in proprietary nonstandards for decades. Ethernet connectivity has become available, but the systems are expensive. The user interface typically is implemented on vendor-specific display hardware. Each vendor produces its own proprietary development platform. So, vendor lock-in was an issue at every point of the evaluation.
Next, we looked at a PC with a data acquisition (DAQ) board. In the past, we have used a laptop with a DAQ board, Microsoft Windows and Agilent VEE. This combination has worked well for quickly acquiring data with little programming. With that setup, however, data transfer to our database systems was available only through Windows OLE. We could develop applications, but the proprietary platform would tie us to the vendor. National Instruments also offers a complete DAQ package for the PC, but at a premium price.
The solution that best met our requirements also used a PC and a DAQ board. The big difference was the use of RTLinux, a real-time OS stack based on Linux. We could limit vendor tie-in and communicate freely with PostgreSQL and TCP/IP networking. The real-time OS offered the reliability of a PLC, without sacrificing communications. Finally, the GUI could be written in the language of our choice. Using open-source tools, we could create flexible, upgradeable applications.
The computers we chose for data acquisition and data handling were slow in comparison with computers available today; we were able to recycle old office computers for the shop floor. These 400MHz Celeron machines are fast enough for the tasks asked of them, without compromising our hard real-time requirements for data acquisition. The system we worked with started with an installation of the Red Hat 7.3 distribution and the 2.4.18 Linux kernel.
The kernel separates user-level tasks from the system hardware. The standard Linux kernel allots a slice of time to each user-level task and can suspend a task when its time is up, which can leave high-priority tasks delayed behind noncritical ones. There are commands to control the Linux scheduler; however, tuning the scheduler's parameters in the 2.4 kernel does not produce a hard real-time system. The 2.6 kernel has enhanced real-time performance but does not meet the needs of a hard real-time system either.
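The scheduler controls mentioned above can be exercised from user space. A Linux-only sketch of requesting the SCHED_FIFO real-time policy for the current process (raising the policy normally requires root or CAP_SYS_NICE); even when it succeeds, this improves latency but, as noted, does not by itself yield hard real-time guarantees:

```python
import os

def try_fifo(priority=None):
    """Attempt to move the current process to SCHED_FIFO.
    Returns True on success, False when the process lacks
    the privilege and stays on the default SCHED_OTHER."""
    if priority is None:
        # Highest FIFO priority the kernel allows (99 on Linux).
        priority = os.sched_get_priority_max(os.SCHED_FIFO)
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False
```

This is roughly what the chrt command does for an existing process; RTLinux sidesteps the scheduler entirely by running its real-time tasks beneath the Linux kernel.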
There are many great publications about RTLinux, many of which are written by Michael Barabanov and Victor Yodaiken, who first implemented RTLinux back in 1996 and have been improving it ever since. Finite State Machine Labs, Inc. (FSMLabs) is a privately held software company that maintains the software. Through the years, they have produced advancements that are wrapped up into their professional versions of RTLinux. They still provide RTLinux/Free, which must be used under the terms of the GNU GPL and the Open RTLinux Patent License. For our project, we used the free software, which does not have support from FSMLabs.
The RTLinux HOWTO, by Dinil Divakaran, provided the majority of the information we needed to complete the RTLinux installation and get it up and running on our system.