Dynamic Graphics and Personalization
Last month, we discussed the different ways a CGI program can create dynamically generated graphics output. That is, we wrote several programs in which the program described its output not as “text/html”, but as “image/gif”.
This month, we will examine some more tools that allow us to create graphics dynamically. However, the graphics will have an additional twist this time, in that they will reflect an individual user's stock portfolio rather than a global set of data values.
As with any non-trivial software project, our first step must be to create a brief specification. This particular project will consist of two major programs. In the first, the user will be able to create and edit a personal profile describing the securities he or she owns. The second program will take the information in the user's profile and use it to create a personalized graph of that user's stock portfolio.
This project brings together a number of tools we have discussed in previous installments of ATF. Nevertheless, it seems like a good idea for us to review them, since we are going to call on so many.
MySQL: MySQL is a small, inexpensive relational database available for Linux and many other operating systems. (See “Resources” for information on where to get it.) In addition to its low price, MySQL is quite fast and efficient, which makes it popular on many web sites. As a relational database, MySQL forces us to store information in one or more tables, in which each row refers to a separate record. As with most relational databases, we communicate with MySQL using SQL, a database query language. We cannot write programs in SQL; rather, we must embed our queries inside a program written in a full programming language. In our case, that language is Perl.
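To make this concrete, here is a sketch of the sort of table our portfolio application might use. The table and column names are invented for this illustration, not taken from the actual application code:

```sql
-- Hypothetical table: one row per security that a user owns
CREATE TABLE portfolio (
    id        INT AUTO_INCREMENT PRIMARY KEY,
    username  VARCHAR(32)  NOT NULL,   -- which user owns this row
    symbol    VARCHAR(10)  NOT NULL,   -- ticker symbol, e.g. 'RHAT'
    shares    INT          NOT NULL,   -- number of shares held
    UNIQUE (username, symbol)          -- one row per user/security pair
);
```

Because each row refers to a separate record, retrieving one user's entire portfolio is a single SELECT restricted by the username column.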
DBI: While SQL might be a standard query language for communicating with databases, the software and libraries used to speak with those databases vary considerably. To talk to an Oracle server, you need Oracle libraries; to talk to a MySQL server, you need MySQL libraries, and so forth. As a result, the Perl database world was fractured for a long time, with special versions for individual databases.
Now, however, there is a better way: the DBI module standardized the API to relational databases, meaning that programmers moving from one database server to another have to learn only the different nuances of the SQL implementations. Previously, they had to learn a separate Perl API as well, which was frustrating. This was accomplished by separating the database code into two parts, one generic (DBI) and the other specific to each server (DBD). In order to use DBI, you will need to install the generic DBI libraries, and then one or more DBDs appropriate for the products you use.
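A short sketch shows how this division of labor looks in practice. The database name, user name and password below are placeholders, and the query assumes a hypothetical portfolio table:

```perl
#!/usr/bin/perl
use strict;
use DBI;

# Connect via the MySQL DBD; everything after "DBI:mysql:" is
# specific to this sketch's assumed database and credentials.
my $dbh = DBI->connect("DBI:mysql:database=portfolio",
                       "dbuser", "dbpass",
                       { RaiseError => 1 });

# Prepare and execute a query using the generic DBI API
my $sth = $dbh->prepare(
    "SELECT symbol, shares FROM portfolio WHERE username = ?");
$sth->execute("reuven");

while (my ($symbol, $shares) = $sth->fetchrow_array)
{
    print "$symbol: $shares shares\n";
}

$sth->finish;
$dbh->disconnect;
```

Notice that only the string passed to DBI->connect names MySQL at all; moving to another database server would mean changing that string (and installing the appropriate DBD), while the prepare/execute/fetch code stays the same.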
There is a problem with DBI and the Web, however, which has to do with the way in which database servers were designed. In general, they expect a client program to open a connection, perform many queries and then disconnect; opening a connection is therefore allowed to be relatively slow and expensive. When a CGI program is a database client, though, it must open a new database connection for each HTTP transaction, paying that start-up cost every time. See the mod_perl section immediately below for one solution to this problem, Apache::DBI.
mod_perl for Apache: Web servers traditionally provided custom and dynamic output by invoking external programs, using the CGI standard. An HTTP server would pass information to the CGI program, which would then be expected to send its output to the user's browser. This output generally came in the form of HTML-formatted text, but as we saw last month, it is possible to produce graphics as well.
However, CGI is quite slow; every invocation of a CGI program requires a new process to be created. If you are using Perl, each invocation requires the program to be compiled into Perl's internal format, then executed.
An alternative method is to use mod_perl, a module for the free Apache HTTP server that embeds a fully working version of Perl inside the server. This has several ramifications, one of which is that we can now create custom output without having to rely on external programs.
When using mod_perl, you can take advantage of a module known as Apache::DBI. This module pretends to work the same as DBI, but actually caches database handles ($dbh) across invocations. So even when your program thinks it is opening a new database connection, it is actually reusing a database handle from a previous invocation.
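Turning this caching on typically requires nothing more than loading Apache::DBI before DBI in the server's mod_perl startup file (the file name here is an assumption; yours may differ):

```perl
# In Apache's mod_perl startup file (e.g., startup.pl):
use Apache::DBI;   # must be loaded before DBI, so that it can
use DBI;           # transparently override DBI->connect

# Application code needs no changes at all. A call such as:
#
#   my $dbh = DBI->connect("DBI:mysql:database=portfolio",
#                          $user, $password);
#
# now returns a cached handle when one already exists for the
# same data source, user name and password.
```

Because the caching is keyed on the connection parameters, programs that always connect with the same arguments get the full benefit automatically.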
GIFgraph: The GIFgraph set of Perl modules allows us to create charts and graphs on the fly, from within CGI programs or mod_perl modules. We explored a basic use of GIFgraph last month. As its name implies, GIFgraph produces output in GIF format. Last month, we saw how to return GIFs directly to the user's browser. This month, we will instead save the resulting graphs to individual files, to which we will create hyperlinks.
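As a reminder, a minimal GIFgraph program looks something like the following sketch. The data values and the output file name are invented for this example:

```perl
#!/usr/bin/perl
use strict;
use GIFgraph::bars;

# Hypothetical portfolio data: ticker symbols and current values
my @data = ( ["RHAT", "SUNW", "IBM"],     # x-axis labels
             [ 4500,   3200,   2750 ] );  # y-axis values

my $graph = GIFgraph::bars->new(400, 300);
$graph->set( x_label => 'Symbol',
             y_label => 'Value ($)',
             title   => 'My portfolio' );

# This month, we write the GIF to a file on disk, rather than
# sending it directly to the user's browser as we did last month.
open GIF, ">/tmp/portfolio.gif" or die "Cannot write graph: $!";
binmode GIF;
print GIF $graph->plot(\@data);
close GIF;
```

A page of HTML output can then refer to the saved file with an ordinary <img> tag or hyperlink.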
Apache::Session: HTTP, the protocol on which the Web is based, was designed to be lightweight and simple. In keeping with that goal, it was also designed to be “stateless”, meaning each transaction is independent of every other. This creates a problem, however, in that you will often want to keep track of which user is which. For instance, in this application, we want to ensure we are tracking the correct user's portfolio. Without state, we cannot track portfolios at all, let alone for multiple users. Apache::Session, as we will see below, allows us to get around this by using a database and HTTP cookies to store one or more pieces of information under a unique identifier.
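Retrieving or creating a session takes only a few lines of code. Here is a sketch using the MySQL back end; the connection details are assumptions, and $session_id would come from the user's HTTP cookie (passing undef creates a brand-new session):

```perl
use strict;
use Apache::Session::MySQL;

# Tie a hash to the session store. $session_id is the identifier
# taken from the user's cookie, or undef for a new session.
my %session;
tie %session, 'Apache::Session::MySQL', $session_id,
    { DataSource     => 'DBI:mysql:database=sessions',  # assumed
      UserName       => 'dbuser',
      Password       => 'dbpass',
      LockDataSource => 'DBI:mysql:database=sessions',
      LockUserName   => 'dbuser',
      LockPassword   => 'dbpass' };

# The special _session_id key holds the unique identifier, which
# we send back to the browser in a cookie; any other keys we set
# persist across HTTP transactions.
my $id = $session{_session_id};
$session{username} = 'reuven';   # example value

untie %session;   # writes the session back to the database
```

From the program's point of view, %session is an ordinary hash; Apache::Session handles reading it from and writing it back to the database behind the scenes.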
With the above five technologies, we can create a fairly impressive stock portfolio tracker that allows users to define which securities they own and view their current holdings in graphical format. As presented this month, the application is admittedly a bit crude, but it should show you how easy it is to write such an application and how flexible the above tools can be in creating one.