Remote Temperature Monitoring with Linux
Two choices were available for performing the resistance-to-temperature conversion in the script. I could use a lookup table, an array of resistance/temperature pairs, but the sheer number of elements would be a drawback to this approach: a span from -40 degrees C to +40 degrees C requires 81 pairs of values (don't forget 0 degrees C). The thermistor manufacturer offered no text file that could be manipulated easily, and entering the values by hand would take time and be prone to errors.
Instead, I used what's called the Steinhart-Hart equation (see sidebar). The equation was developed in the late 1960s to help process ocean temperature data collected with thermistors, and it converts resistance directly to temperature. A spreadsheet utility found on the Web calculated the coefficients unique to each family of thermistors, and those coefficients were plugged into the equation.
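In Perl, the conversion comes down to a few lines. The following is a minimal sketch of the idea; the coefficients shown are placeholders standing in for the values the spreadsheet utility produces, not the ones used here:

sub resistance_to_celsius {
    my ($r_ohms) = @_;
    # Placeholder Steinhart-Hart coefficients -- substitute the
    # values computed for your thermistor family:
    my ($A, $B, $C) = (1.129e-3, 2.341e-4, 8.775e-8);
    my $ln_r = log($r_ohms);    # log() is the natural logarithm
    my $t_kelvin = 1 / ($A + $B * $ln_r + $C * $ln_r ** 3);
    return $t_kelvin - 273.15;  # kelvins to degrees Celsius
}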
Once the script calculates temperature from a multimeter reading, it needs to be displayed or stored. With this in mind, I extended the test script to convert and display temperature, along with the time and the resistance reading. University Linux uses the 2.0 kernel, and root user login by Telnet is allowed. When ordinary users attempted to run the grabtemp.pl script, an error was displayed because of the file permissions on the serial port, /dev/ttyS1. I fixed this by granting read and write access to the device:
chmod a+rw /dev/ttyS1
Now ordinary users could log in and run the script to check the temperature; they didn't need root access.
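For context, here is a sketch of how a script like grabtemp.pl might open the port. It assumes the Device::SerialPort module and typical meter settings; neither detail is confirmed here:

use Device::SerialPort;

# Hypothetical port setup -- the baud rate and framing are
# assumptions about the multimeter, not measured facts:
my $port = Device::SerialPort->new("/dev/ttyS1")
    or die "can't open /dev/ttyS1: $!";
$port->baudrate(9600);
$port->databits(8);
$port->parity("none");
$port->stopbits(1);
$port->write_settings or die "can't apply port settings";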
Here is the output from the resulting showtemp.pl script:
/perlserial: perl -w showtemp.pl
01-05-2006 14:43 34 F 1.3 C 30.52 k Ohms
Here you can see the date, the time and the temperature in degrees F and degrees C, along with the actual resistance reading. I checked the temperature where the sensor was located and found that the reading was accurate, confirming that the conversion portion of the script worked.
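The Fahrenheit value is derived from the Celsius result with the familiar formula; in Perl, that is a one-liner (the variable names here are illustrative):

my $tempF = $tempC * 9 / 5 + 32;    # degrees C to degrees F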
Not many computer users are comfortable with a command-line interface; Web browsers, with their point-and-click interface, are far less intimidating. So, I extended the script once again to let users operate the system from a Web browser.
With the thttpd server configured and running, it was just a matter of directing the output from the script to build a Web page for display. This was fairly straightforward, as the following code shows:
print "content-type: text/html \n\n"; print "<HTML><BODY><P>"; print "<HEAD><title>Remote Temperature Measurement Page</title></HEAD>"; print "<H2>Mechanical Room</H2> "; print '<form action="webtemp.pl" method=post> <P> <P>'; print "Interior Air Temperature = $out_tempF<BR>"; print "<BR>"; print "<BR>"; print "Date: $out_date <BR>"; print "Time: $out_time <BR>"; print "<BR>"; print '<input type=submit value="Update Reading">'; print "</form>"; print "</BODY></HTML>";
Running the webtemp.pl script from /cgi-bin gives the user a display like the example shown in Figure 1.
This example shows the temperature in the room as well as the time and the date of the reading. You can press the Update Reading button to rerun the script and display another temperature value.
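For reference, thttpd enables CGI through its cgipat option; an invocation along these lines (the document directory is an assumption, not the configuration used here) allows scripts under /cgi-bin to run:

thttpd -d /usr/local/www -c '/cgi-bin/*'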
It is easy to extend the script to log temperature over time. I added a line to the rc (boot) script that launches a data-logging script, which then runs continuously in the background, as sketched below. Measurement intervals of 5-10 minutes proved sufficient, because indoor air temperature changes slowly in an air-conditioned space.
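A minimal sketch of such a logger follows; read_temperature() is an assumed helper standing in for the measurement-and-conversion code already described, and the log path is hypothetical:

use strict;
use IO::Handle;

open my $log, '>>', '/var/log/temperature.log'   # hypothetical path
    or die "can't open log: $!";
$log->autoflush(1);    # write each record to disk immediately

while (1) {
    # read_temperature() is an assumed helper wrapping the meter
    # reading and the Steinhart-Hart conversion:
    my ($tempF, $tempC, $r_kohms) = read_temperature();
    my ($min, $hour, $mday, $mon, $year) = (localtime)[1, 2, 3, 4, 5];
    printf $log "%02d-%02d-%04d %02d:%02d %s F %s C %s k Ohms\n",
        $mon + 1, $mday, $year + 1900, $hour, $min,
        $tempF, $tempC, $r_kohms;
    sleep 300;    # five-minute interval
}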
You can access the temperature log from the command line by using Telnet. Because the format is space-delimited, the data file can be opened in Microsoft Excel to plot graphs and view trends. You can see sample output in Figure 2.
The overall objective was to create a reliable, easy-to-use electronic means of displaying and recording temperature data. The location of a deployed system and its network connection can vary widely, so you have to evaluate the security concerns of each installation and may need workarounds to address them. For example, you can log temperature readings as text or HTML pages from the script running in the background rather than from a script in the cgi directory, which isolates the logging process from Web access. Alternatively, another secure server can gather the data from this one through FTP or HTTP, adding a layer that prevents direct access from the outside world while still making the information available.
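As a sketch of that second approach, a cron entry on the secure server (the host and paths are hypothetical) could mirror the log periodically:

# Fetch the temperature log at the top of every hour:
0 * * * * wget -q -O /srv/templogs/templog.txt http://tempserver.example.com/templog.txt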