Dynamic Graphics and Personalization
Our second mod_perl handler is StockReport.pm (see Listing 2 in the archive). This module uses the portfolio information the user has entered, and creates one or more graphs based on it.
If the user has a portfolio defined, then we iterate through each of the symbols in it. We then SELECT all of the rows in StockValues with that symbol, retrieving them in date order:
my $sql  = "SELECT value, date FROM StockValues ";
$sql    .= "WHERE symbol = '$symbol' ";
$sql    .= "ORDER BY date";
Now we iterate through each of the returned rows, adding the value to the @values array and the date to the @dates array. We also calculate the value of the user's holdings on that day (by multiplying the number of shares by the share price) and put that in the @holdings array.
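In code, that loop might look something like the following sketch. The $dbh database handle and the %shares hash (mapping each symbol to the number of shares the user owns) are assumed to be set up elsewhere in the handler; a DBI placeholder (?) is used so the symbol never has to be quoted into the SQL text by hand:

```perl
# Sketch: fetch the history for one symbol and build the
# three parallel arrays used for plotting.  $dbh, $symbol
# and %shares are assumed to exist in the handler already.
my $sth = $dbh->prepare(
    "SELECT value, date FROM StockValues " .
    "WHERE symbol = ? ORDER BY date");
$sth->execute($symbol);

my (@values, @dates, @holdings);
while (my ($value, $date) = $sth->fetchrow_array)
{
    push @values,   $value;
    push @dates,    $date;
    # Value of the user's holdings on that day:
    push @holdings, $value * $shares{$symbol};
}
$sth->finish;
```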
We then plot our data set by creating a @data array whose elements are references to @dates, @values and @holdings. @dates provides the X axis of our graph, while @values and @holdings are plotted against it. Because each element of @holdings is the corresponding element of @values multiplied by the number of shares held, the two series sit on very different scales, so we tell GIFgraph to use two Y axes, one for values and one for holdings.
We create the graphs themselves using GIFgraph's plot_to_gif method, which takes a filename and a set of data points (the @data array in StockReport.pm), creates a graph in GIF format, then saves it to disk. We keep the filename in a variable, so that we can both save the file and refer to it in an IMG tag. Remember, the file must be within the web document tree in order to be available to the user's web browser!
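Put together, the plotting step might look roughly like this. The output path, image size and axis labels here are invented for illustration, and $$ (the server child's process ID) is just one simple way to keep filenames unique per request:

```perl
use GIFgraph::lines;

# Hypothetical location inside the web document tree, so the
# browser can retrieve the image via an IMG tag afterward.
my $filename = "/usr/local/apache/htdocs/graphs/$symbol-$$.gif";

my @data  = (\@dates, \@values, \@holdings);
my $graph = GIFgraph::lines->new(400, 300);

$graph->set(
    x_label  => 'Date',
    y1_label => 'Share price',
    y2_label => 'Value of holdings',
    two_axes => 1,   # @values on the left axis, @holdings on the right
);

# Render the graph and save it to disk as a GIF:
$graph->plot_to_gif($filename, \@data);

# Refer to the same file in the generated HTML:
print qq{<IMG SRC="/graphs/$symbol-$$.gif">};
```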
It might be tempting to put such files in /tmp, the standard temporary directory on Linux systems, but graphics stored there will be unavailable to outside browsers. The directory where the graphs are written must therefore sit inside the document tree and be writable by the web server, which often means making it open to more people than the rest of your web hierarchy. If that is the case on your system, make sure only this one directory is writable by others, so that you don't run the risk of an intruder viewing or damaging your site's sensitive files.
Creating files in this way works well, but with one major flaw: it has the potential to fill your file system with numerous old graphics. A number of methods can be used to overcome this problem, but perhaps the simplest is to use cron to identify and delete any file older than a certain age. Depending on how busy your site is, you might want to run such a cron job every ten minutes, every hour or once a month. It all depends on how many visitors you receive and how large your disk is. It is probably better to run such a deleting program more often rather than less, to reduce the chance that a denial-of-service attack fills your disks.
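One sketch of such a cleanup uses find with an age test. The directory path and the one-hour cutoff below are arbitrary examples; on a real site you would point find at wherever your handler writes its GIF files. The demonstration works against a scratch directory so you can see the effect safely:

```shell
# Demonstration of the cleanup against a scratch directory.
dir=$(mktemp -d)

touch "$dir/old.gif"
touch "$dir/new.gif"
# Back-date one file so it looks more than an hour old:
touch -d '2 hours ago' "$dir/old.gif" 2>/dev/null || touch -t 01010000 "$dir/old.gif"

# Delete graphs not modified within the last 60 minutes:
find "$dir" -name '*.gif' -mmin +60 -delete

ls "$dir"   # only new.gif remains
```

A matching crontab entry, run every ten minutes against the real graph directory, might read: `*/10 * * * * find /usr/local/apache/htdocs/graphs -name '*.gif' -mmin +60 -delete`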
While I did not implement it in this version of StockReport, you can probably see how easy it would be to allow users to choose the range of dates in the graph. Using an HTML form, you could allow users to choose the starting and ending dates; the values of those form elements could then be inserted into the SQL query so as to SELECT just those rows between the named dates.
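With start_date and end_date fields in such a form (the field names here are invented), the query could use placeholders so the user-supplied dates never touch the SQL text directly. This sketch assumes an Apache::Request object $r and the $dbh handle from the surrounding code:

```perl
# Hypothetical extension: restrict the graph to a date range
# taken from the HTML form.
my $start = $r->param('start_date');
my $end   = $r->param('end_date');

my $sth = $dbh->prepare(
    "SELECT value, date FROM StockValues " .
    "WHERE symbol = ? AND date BETWEEN ? AND ? " .
    "ORDER BY date");
$sth->execute($symbol, $start, $end);
```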
Once we have written and installed StockProfile.pm and StockReport.pm into our Perl module hierarchy, we must somehow tell Apache when to use them. We can do this in a number of ways, but my preference is to create special URLs that invoke these modules. That is, every time someone requests the URL “/stock-profile” from our server, they should get the profile editor. By the same token, when someone requests “/stock-report” from our server, they should see a report of their current stocks.
In order to accomplish this, we must first load each of the modules by adding the following two lines to the Apache configuration file, httpd.conf:
PerlModule Apache::StockProfile
PerlModule Apache::StockReport
Once we have done that, we can create new URLs, which do not necessarily correspond with files on the server's file system. For this, we use <Location> sections in httpd.conf. We indicate that the URL in question should be handled by a Perl module (“SetHandler perl-script”), and then tell Apache which specific module to use for that particular URL:
<Location /stock-profile>
SetHandler perl-script
PerlHandler Apache::StockProfile
</Location>

<Location /stock-report>
SetHandler perl-script
PerlHandler Apache::StockReport
</Location>

You will need to restart Apache in order for these new URLs to work. If there is an error in one of the Perl modules, or if mod_perl cannot find one of the modules in the module path @INC, the restart of Apache will fail. This ensures you will not have any compile-time errors when your modules run under mod_perl. At the same time, it means you must test your modules extensively before installing them on a live site, since bringing down the server on a large web site can be embarrassing or financially costly.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
- Google's SwiftShader Released
- Interview with Patrick Volkerding
- SUSE LLC's SUSE Manager
- Tech Tip: Really Simple HTTP Server with Python
- My +1 Sword of Productivity
- Returning Values from Bash Functions
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- SuperTuxKart 0.9.2 Released
- Managing Linux Using Puppet
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.

Get the Guide