Web Servers and Dynamic Content
When web servers first appeared, their primary purpose was to serve up selected information from the machine on which they ran. The idea was simply to take the contents of a file and transmit them over a TCP connection as an HTTP response. The inherent limitation discovered early on was that dynamic content could not be delivered, so the CGI interface was defined and added to web servers to address this.
The Common Gateway Interface (CGI) provides a way for the web server to execute an external program, take that program's output and pass it back to the client browser as if it were the contents of a static file. Since that time, many variations combining scripting engines and CGI (Perl, Python, PHP and so on) have evolved to simplify the programmer's job and make the execution of external programs more efficient. However, for the most part, these scripting languages have the drawback of being interpreted. They also overlook the fact that enormous bodies of code already exist in languages such as C and Fortran (remember it?) for solving complex, computationally intensive problems. While building dynamic content based on query results is very useful, it is not realistic to apply image data transformations or calculate Fourier transforms in these scripting languages, as the time required to complete such calculations is far longer than the response time the user expects.
Therefore, because of that existing code base, and because some dynamic content must be generated by algorithms that cannot be implemented efficiently in scripting languages, there is a definite need for CGI programs written in a traditional language like C or Fortran and compiled down to native code.
There are two ways in which data in the HTTP protocol is passed from the browser to the web server: GET data (specified as part of the URL, commonly the part that appears after the "?") and POST data (the collected name/value pairs of all of the fields in the form being submitted by the web browser). To find the GET data, just look at the URL: http://www.mydomain.com/pages/external.cgi?additional-data
GET data portion: additional-data
To figure out the POST data, look at the source of the document generating the request, and find the form that is being submitted:

<FORM>
  <INPUT TYPE="text" NAME="fld01" VALUE="val01">
  <INPUT TYPE="hidden" NAME="fld02" VALUE="val02">
  <INPUT TYPE="checkbox" NAME="fld03" CHECKED>
  <SELECT NAME="fld04">
    <OPTION VALUE="val03"> Val-03
    <OPTION SELECTED VALUE="val04"> Val-04
    <OPTION VALUE="val05"> Val-05
  </SELECT>
</FORM>

POST data: fld01=val01&fld02=val02&fld03=on&fld04=val04

As you can see, POST data is submitted as one continuous string. Within each field/value pair, the name is separated from its value by an equal sign ("="), and the pairs themselves are separated by ampersands ("&").
There are other complexities involved in the format for passing POST data. Many characters need to be "escaped" so as not to confuse the web server with control characters or separator breaks. This is solved by substituting plus signs ("+") for spaces, and escape sequences of the form "%XX" (where each X is a hexadecimal digit) for non-printable characters, ampersands, plus signs and equal signs. (For the purist, yes, it is true that spaces can be represented either as the "+" symbol or as the escape sequence "%20". All versions of Apache and IIS with which I have worked accept both.)
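Undoing this escaping in C is straightforward. Here is a minimal sketch of an in-place decoder; the function name url_decode is my own choice, not part of any standard library:

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Decode a URL-encoded string in place: '+' becomes a space and
   "%XX" hex escapes become the corresponding byte.  Decoding never
   grows the string, so it can safely be done in the same buffer. */
void url_decode(char *s)
{
    char *src = s, *dst = s;

    while (*src) {
        if (*src == '+') {
            *dst++ = ' ';
            src++;
        } else if (*src == '%' && isxdigit((unsigned char) src[1])
                               && isxdigit((unsigned char) src[2])) {
            char hex[3] = { src[1], src[2], '\0' };
            *dst++ = (char) strtol(hex, NULL, 16);
            src += 3;
        } else {
            *dst++ = *src++;
        }
    }
    *dst = '\0';
}
```

For example, decoding "Hello%2C+World%21" yields "Hello, World!".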
The web server passes GET and POST data to the external process by different mechanisms. GET data is placed in an environment variable visible to that process. This environment variable is "QUERY_STRING". Therefore, gaining access to GET data in C/C++ is a simple matter of this call:
char *pszGetData = getenv("QUERY_STRING");
(This should work in all UNIX and in all Microsoft development environments.)
On the other hand, POST data is passed to the external process on the standard input stream. For those of you not familiar with streams, this is the same data producer as the keyboard, so whatever you have been doing to read input from the keyboard is the mechanism you will use to access POST data. An inherent liability with streams, however, is that you have no way of knowing how much data is waiting for you. The obvious solution is to read byte by byte from the stream until there is no more data, resizing a dynamically allocated buffer as you go. Fortunately, the web server provides another piece of information that saves the developer the trouble of growing buffers, the added overhead, and the exception handling required when one of multiple dynamic memory allocations fails.
When the web server passes POST data to the standard input of the external process, it places all of the POST data there in one shot; therefore, there is never any chance that additional data will be added to the stream after you have first encountered data on it.
The web server also tells the process how much data it has placed on the standard input by writing the number of bytes waiting to be read (as ASCII text) into an environment variable visible to that process. This environment variable is "CONTENT_LENGTH". Therefore, gaining access to POST data in C/C++ is a three-step process (get the length, allocate a buffer, read the data) that will work in all UNIX and Microsoft development environments:
long iContentLength = atol(getenv("CONTENT_LENGTH"));
char *szFormData = (char *) malloc(iContentLength + 1);
memset(szFormData, 0, iContentLength + 1);
fread(szFormData, sizeof(char), iContentLength, stdin);

(The buffer is allocated one byte larger than CONTENT_LENGTH so that the memset leaves a terminating NUL after the data; memset is used rather than bzero because bzero is not available in Microsoft environments.)
A given request can involve both GET and POST data, so both mechanisms may be used at the same time: the target specified in the <FORM> tag may contain GET data in its URL, while the data contained in the form's fields is passed to that target as POST data.