Automating Manufacturing Processes with Linux
A manufacturing company makes money only when production is running, so timely information from the production floor is crucial to the business. As our company has grown, so has the complexity of our operations, and we have outgrown our manual, paper-based methods for monitoring manufacturing.
Midwest Tool & Die stamps electronic terminals and molds plastic parts for the automotive, electronics and consumer industries. Our manufacturing processes generate a lot of data. Our high-speed presses make up to 1,200 parts per minute, and each part must be correct. We inspect critical dimensions for every part that is produced. Part quality is charted and monitored, and the data is archived for traceability.
We needed to manage all of this data to improve the manufacturing processes. Our main goals were to improve uptime and understand the causes for downtime. In addition, we hoped to track and control costs, reduce paperwork and avoid human input error.
To meet these goals, we came up with requirements for the new system. The first requirement was to gather data from a variety of machine controls, sensors, automated inspection equipment, programmable logic controllers (PLCs) and human operators. The system had to be reliable and capable of gathering data at our fastest production rate.
Next, the system had to correlate the data that was gathered. The system would need to interact with enterprise PostgreSQL databases. Production data and process status would be passed to PostgreSQL for display and reporting.
The new automation system also had to provide a user interface, so the machine operator and maintenance personnel could log their activities. Process downtime and the reason for the downtime would be logged and passed to the enterprise databases. This requirement would replace a paper log and manual data entry effort.
Finally, the system needed to be flexible and easily upgraded. The solution would be adaptable to new manufacturing lines and changing system inputs.
We evaluated several solutions to meet the requirements. Industrial PLCs could gather data reliably. However, their approach to networking has been stuck in proprietary nonstandards for decades. Ethernet connectivity has become available, but the systems are expensive. The user interface typically is implemented on vendor-specific display hardware. Each vendor produces its own proprietary development platform. So, vendor lock-in was an issue at every point of the evaluation.
Next, we looked at a PC with a data acquisition (DAQ) board. In the past, we have used a laptop with a DAQ board, Microsoft Windows and Agilent VEE. This combination has worked well for quickly acquiring data with little programming. With that setup, data transfer to our database systems was available only through Windows OLE. We could develop applications, but the proprietary platform would tie us to the vendor. National Instruments also offers a complete DAQ package for the PC, but at a premium price.
The solution that best met our requirements also used a PC and a DAQ board. The big difference was the use of RTLinux, a hard real-time extension that runs the standard Linux kernel as its lowest-priority task. We could limit vendor tie-in and communicate freely with PostgreSQL and TCP/IP networking. The real-time OS offered the reliability of a PLC, without sacrificing communications. Finally, the GUI could be written in the language of our choice. Using open-source tools, we could create flexible, upgradeable applications.
The computers we chose to acquire and handle the data were slow in comparison to computers available today. We were able to recycle old office computers for use on the shop floor. These 400MHz Celeron machines are fast enough for the tasks asked of them, without compromising our hard real-time requirements for data acquisition. The system started with an installation of the Red Hat 7.3 distribution and the 2.4.18 Linux kernel.
The kernel separates user-level tasks from the system hardware. The standard Linux kernel allots slices of time to each user-level task and can suspend a task when its time is up. This can lead to high-priority tasks being delayed by noncritical ones. There are commands to control the operation of the Linux scheduler; however, hacking the scheduler's parameters in the 2.4 kernel does not result in a hard real-time system. The 2.6 kernel has enhanced real-time performance, but it still does not meet the needs of a hard real-time system.
There are many great publications about RTLinux, many of them written by Michael Barabanov and Victor Yodaiken, who first implemented RTLinux back in 1996 and have been improving it ever since. Finite State Machine Labs, Inc. (FSMLabs), a privately held software company, maintains the software. Over the years, they have produced advancements that are rolled into their professional versions of RTLinux. They still provide RTLinux/Free, which must be used under the terms of the GNU GPL and the Open RTLinux Patent License. For our project, we used the free software, which does not come with support from FSMLabs.
The RTLinux HOWTO, by Dinil Divakaran, provided the majority of the information we needed to complete the RTLinux installation and get it up and running on our system.