Linux and RTAI for Building Automation
Three tasks run in the real-time executive: the main control task, the network access-control task and the software driver for the physical layer of the RS-485 network. The RS-485 driver was developed for RTAI and is similar to any other serial driver, except for the 9th-bit protocol used in this application, as described above.
The other real-time task is the network access-control task, which is in charge of periodically sending packets to each network node. A packet can be a command to generate an IR signal, a poll to see whether the node is active or a command to the microcontroller to transmit the current room temperature. The node answers the first two types of packets with an acknowledgement and the last one with the current room temperature. The state of every node is available to the main control task, which in turn informs the user interface if a node fails.
The main control task, using information retrieved from the database, operates the air-conditioning equipment in the building as programmed. This task also can receive instructions from the user interface that override the programmed configuration, using two RT-FIFOs. RT-FIFOs are an interprocess communication mechanism for passing data between real-time tasks and normal Linux tasks. To communicate with the PostgreSQL database, a Linux dæmon was developed. This dæmon communicates with the main control task using two more RT-FIFOs. Another important function of this dæmon is to send the system date and time to the main control task, because RTAI has no support for reading them.
The developed system sends commands to the air conditioners, eliminating the need for local remote controllers. We do not interfere with the air-conditioner temperature control system, nor do we touch any internal circuitry. Each air conditioner has its own built-in temperature control system, and the temperature sensor attached to each microcontroller verifies that the equipment is working properly. Figure 4 shows the microcontroller board installed.
The Linux tasks are in charge of presenting the user interface through a Web server and running the PostgreSQL database engine, which is the main data repository. As described above, another Linux-side task is a dæmon that gives the RTAI main control task access to the system date/time and the database.
The user interface is simple. The first page presents information about the current state of each air conditioner. Every type of user can access this page. In order to change the program or send commands to a particular air conditioner, the system asks for a user name and password. PHP is used to generate the Web pages dynamically, presenting the information retrieved from the database.
In the PostgreSQL database, the system stores general information about the air conditioners, such as BTU rating, location, brand and microcontroller network node address; the programmed operations; and the IR commands needed to operate each air conditioner.
An important part of the system is the module that reads the air-conditioner remote controller signals and stores the information, associated with the corresponding equipment, in the database to reproduce it using the networked microcontrollers. This module is used only when adding a type of air conditioner that has a different brand and/or different remote controller commands.
Two tasks are part of this module: the first is a real-time task that reads the IR signal. The LIRC Project, as well as the Ripoll and Acosta paper in the on-line Resources, presents detailed information about IR remote controllers and sample implementations using normal Linux and RTLinux, another real-time executive for Linux. The other task for this module is the user interface that runs on Linux. The two tasks communicate using an RT-FIFO.
Due to the small amount of RAM available in the microcontroller and the long IR signal duration, an important function of this software is to help the user obtain repetitive patterns within the different IR remote controller signals associated with each button or combination of buttons. These patterns are coded in the firmware of the microcontroller and are used to reconstruct the command to control the equipment. For example, if there are ten different patterns, the information sent to the appropriate microcontroller in the network is something like: repeat pattern one ten times, then pattern two three times and so on, until the complete command is reconstructed. This technique has the advantage of using fewer resources for signal reconstruction. The disadvantage is that the microcontroller firmware must be updated with the patterns of any newly added equipment.