Creating Web Plots on Demand
gnuplot will automatically label the x and y axes with values from our data set. If the maximum number of web hits for any time interval in the data set was 48, then gnuplot might create tic marks on the y axis at 0, 10, 20, 30, 40 and 50. This would be fine for the y axis, but remember that the x axis contains a calculated time value that might not make much sense to a person reading our graph.
Luckily, we can override the default tic marks and create our own. That's the job of the for loop at the beginning of section five. We create a text string that will be embedded in the gnuplot command file we are about to create.
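The tic-building loop might look something like the following sketch. This is an illustration, not the code from Listing 1 — the variable names (`$xtics`, the 0–23 hour range, the every-third-hour spacing) are assumptions chosen to show the idea of mapping calculated time values to readable labels:

```perl
# Hypothetical sketch: build a "set xtics" string that maps each
# calculated time value to a human-readable hour label.
my @pairs;
for my $hour (0 .. 23) {
    # label every third hour so the axis doesn't get crowded
    next if $hour % 3;
    push @pairs, sprintf '"%02d:00" %d', $hour, $hour;
}
my $xtics = 'set xtics (' . join(', ', @pairs) . ')';
print "$xtics\n";
```

The resulting string, e.g. `set xtics ("00:00" 0, "03:00" 3, ...)`, is later embedded in the gnuplot command file so the x axis shows clock times instead of raw numbers.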
gnuplot creates plots based on a set of commands you provide it. Actually, gnuplot has only two commands for plotting data, plot and splot, and we'll be using the simpler of the two in our program. The other command we'll use is set to enable and disable particular options and features in gnuplot. The Perl print statement in section five of Listing 1 handles writing a gnuplot command file to disk.
Those not overly familiar with Perl may find this variation of the print statement somewhat confusing. Let's look at that line:
print GPFILE <<EOM;
This print command tells Perl to write every line in the Perl script following it to the file opened with the GPFILE file handle. It stops printing lines to the file when Perl encounters the line in the script consisting only of the letters EOM. So all the lines in section five following the print statement and continuing to the EOM line are not Perl statements at all, but are gnuplot commands that will be written by Perl to the gnuplot command file.
The print command will also substitute the Perl variable references with their values, so the line:
set title "Web Server Accesses $mon $day, $year"
might be written to the file as:
set title "Web Server Accesses Aug 12, 1997"
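Putting those pieces together, a minimal version of the here-document technique looks like this. The file name `usage.gp` and the hard-coded date values are illustrative, not taken from Listing 1:

```perl
# Minimal sketch of writing a gnuplot command file via a here-document.
# Perl interpolates $mon, $day and $year before writing each line.
my ($mon, $day, $year) = ('Aug', 12, 1997);
open(GPFILE, '>', 'usage.gp') or die "can't write usage.gp: $!";
print GPFILE <<EOM;
set title "Web Server Accesses $mon $day, $year"
set term pbm color
plot "usage.dat" with lines
EOM
close GPFILE;
```

Everything between `<<EOM;` and the bare `EOM` line lands in the file verbatim, except that the variable references are replaced by their values.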
Now that we have a data file and a gnuplot command file, how do we get our plot? And how do we show it to the user? This turns out to be the easy part.
First, we need to talk a bit more about gnuplot. I mentioned earlier that gnuplot can display to a variety of devices, including X terminals. But gnuplot can also write a file in portable pixmap format (PPM). In section five of the Perl script, note that we write the gnuplot command:
set term pbm color
to the gnuplot command file. This tells gnuplot to render the plot as a portable-bitmap image rather than drawing it on a display; the image data itself goes to gnuplot's standard output, where we can capture it in the next step.
However, this only takes us part of the way. Web browsers don't know what to do with PPM files; they usually want either a GIF or JPEG file. That's where the NetPBM package comes in. This is a set of command line utilities that convert from one format to another. One of those tools is exactly what we're looking for: the aptly named ppmtogif. ppmtogif is a simple filter program. It takes a PPM image file from standard input and writes a GIF file to standard output. Since most web browsers support GIF, ppmtogif fits our needs perfectly.
Thus, section six turns out to be almost trivial. We use a Perl system call to run our UNIX commands, then run gnuplot using the command file we created in section five (which in turn will read the data file we built in section four). gnuplot sends the graphic in PPM format to standard output, and the ppmtogif filter turns it into a GIF file that is redirected to a disk file. We now have our graph in GIF format on disk.
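A sketch of that one-line pipeline follows. The file names `usage.gp` and `usage.gif` are illustrative, and this assumes gnuplot and NetPBM's ppmtogif are both on the PATH:

```perl
# Sketch of the section-six step: run gnuplot on the command file and
# pipe its PPM output through ppmtogif, redirecting the GIF to disk.
my $cmd = 'gnuplot usage.gp | ppmtogif > usage.gif';
system($cmd) == 0
    or warn "plot pipeline failed: $?";
```

Because the whole pipeline is handed to the shell as a single string, the redirection and the pipe behave exactly as they would typed at a UNIX prompt.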
The final step (section 7 of Listing 1) is to display the graph in the user's web browser. Remember this is a CGI program, so the browser that invoked the CGI Perl script is waiting to receive something from our web server. What we'll deliver is a brief HTML page containing an HTML image tag that points to the GIF file we just created.
We use the same technique, a multi-line print statement, to create the HTML the browser will receive. Note that we must prepend a “content-type” preamble to the data we send back to the browser. This lets it know to expect an HTML page, rather than some other type of data.
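In outline, the response could be built like this. The page markup and the image path `/images/usage.gif` are assumptions for illustration; the essential part is that the `Content-type` line and a blank line come before any HTML:

```perl
# Sketch of the section-seven response: a Content-type preamble,
# a blank line, then a short HTML page pointing at the GIF on disk.
my $page = <<EOM;
Content-type: text/html

<html>
<head><title>Web Server Accesses</title></head>
<body>
<h1>Web Server Accesses</h1>
<img src="/images/usage.gif">
</body>
</html>
EOM
print $page;
```

The blank line after the header is not optional — it is what separates the CGI header from the document body, and omitting it is a classic cause of server errors.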
The user generates the plot by typing the URL of the CGI script into their browser. If the server is called myserver and our script is saved as usage_graph_cgi in the web server's cgi-bin directory, then they would type:

http://myserver/cgi-bin/usage_graph_cgi
To specify a date other than today, the user appends the date to the end of the URL, separated by a question mark:
http://myserver/cgi-bin/usage_graph_cgi?11/Sep/1997

If all goes well, and there's no reason why it shouldn't, the user should see a page that looks something like Figure 1.