At the Forge - Incremental Form Submission
Computers are amazingly fast. Think about it—we measure raw processor speed by how many instructions it can execute each second, and that number has grown so large that we round off to the nearest hundred million.
Of course, it's often hard to feel that computers are all that speedy, especially when you're sitting around waiting for them to complete a task. Sometimes that wait has to do with complex algorithms that take a while to execute. But in many cases, the problem is a delay further down in the system, which causes your end-user application to wait.
This, I believe, is the major Achilles' heel of the world of Web services—Web-based APIs that are making it increasingly easy to combine (and manipulate) data from multiple sources. Web services may be revolutionizing distributed application development and deployment, but they make it tempting (and too easy, sometimes) to create software whose performance depends on someone else's system.
For example, let's assume you are offering a Web service and that your program depends, in turn, on a second Web service. Users of your system might encounter delays at two different points: your Web service (due to computational complexity, a lack of system resources or too many simultaneous requests), or the second Web service on which yours depends.
Several commercial companies, such as Google, eBay and Amazon, offer Web services to the general public. But these services lack any sort of uptime or response guarantee and often restrict the number of requests you can make. If you write a Web service that depends on one of them, a rise in requests to your service might well mean that you temporarily exceed your quota with the lower-level provider.
This is particularly true if you allow users to enter one or more inputs at a time. For example, if you're running a Web-based store, you want to let people put multiple items in their shopping baskets. It's easy to imagine a scenario in which each item in the shopping basket requires a call to one or more Web services. If each call takes one second, and if you are allowed to access the Web service only every six seconds, a user trying to buy ten items might end up waiting one minute just to see the final checkout screen. If a brick-and-mortar store were to keep you waiting for one minute, you would be frustrated. If an on-line store were to do the same thing, you probably would just pick up and leave.
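The arithmetic behind that one-minute figure can be made concrete. The sketch below is purely illustrative; the constants mirror the scenario above (one-second calls, one call permitted every six seconds), and the `checkout_delay` method is a name I've invented for this example:

```ruby
# Illustration of the checkout delay described above: each Web-service
# call takes CALL_TIME seconds, and the service permits one call every
# RATE_LIMIT seconds, so calls must be spaced out.
CALL_TIME  = 1   # seconds each call takes to complete
RATE_LIMIT = 6   # minimum seconds between successive calls

def checkout_delay(num_items)
  return 0 if num_items.zero?
  # The first call starts immediately; every later call must wait for
  # the rate limit, and the final call still takes CALL_TIME to run.
  (num_items - 1) * RATE_LIMIT + CALL_TIME
end

puts checkout_delay(10)  # 55 seconds -- nearly a minute at checkout
```

With ten items in the basket, the user waits 55 seconds, which is why even a modest per-request limit can wreck the checkout experience.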
So, what should you do? Well, you could simply throw up your hands and blame the lower-level service. Or, you could contact the lower-level service and try to negotiate a faster, better deal for yourself. Another option is to try to predict what inputs your users will be handing to you and try to preprocess them, perhaps at night, when fewer users are on your system.
I've recently come across this problem myself on some of the sites I've been developing in my consulting work. And, I believe I've found a technique that solves this problem without too much trouble and that demonstrates how Ajax programming techniques not only can add pizazz to a Web site, but make it more functional as well. This month, we take a look at the technique I've developed, which I call (for lack of a better term) incremental form submission.
Before we continue, let's define the problem we are trying to solve. Users visiting our site are presented with an HTML form. The form contains a textarea widget, into which users can enter one or more words. When a user clicks on the submit button, a server-side program takes the contents of the textarea and sends it to a Web service that turns each word into its Pig Latin equivalent. The server-side program retrieves the results from the Web service and displays the Pig Latin on the user's screen as HTML.
It goes without saying that this example is somewhat contrived; although it might be nice to have a Web service that handles translations into Pig Latin, it takes so little time to do that translation (really, a simple text transformation), that storing or caching this information would be foolish. That said, this example is meant to provide food for thought, rather than a production-ready piece of software.
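To see just how simple that text transformation is, here is one common set of Pig Latin rules expressed in a few lines of Ruby (the rules themselves vary by dialect; this is merely one popular convention, not anything prescribed by the article):

```ruby
# A minimal Pig Latin transformation: words beginning with a vowel get
# "way" appended; otherwise, the leading consonant cluster moves to
# the end of the word, followed by "ay".
def pig_latin(word)
  if word =~ /\A[aeiou]/i
    word + "way"
  else
    match = word.match(/\A([^aeiou]+)(.*)\z/i)
    match[2] + match[1] + "ay"
  end
end

puts pig_latin("hello")   # => ellohay
puts pig_latin("apple")   # => appleway
```

A transformation this cheap would normally never justify a network round-trip, which is exactly why the example is contrived—and why it makes the latency problem easy to see in isolation.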
Let's start with our HTML file, shown in Listing 1. It contains a short HTML form with a textarea widget (named words) and a submit button.
Listing 1. pl-words.html
<html>
  <head>
    <title>Pig Latin translator</title>
  </head>
  <body>
    <p>Enter the words you wish to translate into Pig Latin:</p>

    <form method="POST" action="pl-words.cgi">
      <textarea name="words">Enter words here</textarea>
      <p><input type="submit" value="Translate" /></p>
    </form>
  </body>
</html>
Clicking on that submit button submits the form's contents to a CGI program written in Ruby, named pl-words.cgi (Listing 2). There are two sections to Listing 2. In the first part of the program, we define a method, pl_sentence, that takes a sentence (that is, a string), turns it into an array of strings (one word per string), and then passes that array to our Web service (via XML-RPC). The second half of the program takes the input from our POST request, passes it to the pl_sentence routine, and then uses the output from pl_sentence to create a bit of nicely formatted (if Spartan) output for the user.
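Listing 2 itself does not appear in this excerpt, but based on the description above, a program along these lines would fit. Note that the XML-RPC method name (`pig_latin.translate`) and service URL are my own placeholders, not details from the article, and the translation client is passed in as a parameter so the network call can be stubbed out:

```ruby
#!/usr/bin/env ruby
# Hypothetical sketch of pl-words.cgi; the real Listing 2 may differ.
require 'cgi'

# Split a sentence (a string) into an array of words and hand that
# array to a translation client. The client is injected so the
# XML-RPC call can be replaced with a stub when testing offline.
def pl_sentence(sentence, client)
  words = sentence.split
  client.call("pig_latin.translate", words)  # assumed method name
end

# Run the CGI half only when actually invoked as a CGI program
# (the Web server sets GATEWAY_INTERFACE in the environment).
if ENV["GATEWAY_INTERFACE"]
  require 'xmlrpc/client'  # a separate gem on Ruby >= 3.0
  client = XMLRPC::Client.new2("http://localhost:9000/")  # assumed URL
  cgi = CGI.new
  translated = pl_sentence(cgi["words"], client).join(" ")
  cgi.out("text/html") do
    "<html><body><p>#{CGI.escapeHTML(translated)}</p></body></html>"
  end
end
```

Separating pl_sentence from the CGI plumbing keeps the Web-service dependency in one place, which matters later when we start submitting words incrementally instead of all at once.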