Working with LWP
Most of the time, this column discusses ways in which we can improve or customize the work done by web servers. Whether we are working with CGI programs or mod_perl modules, we are usually looking at things from the server's perspective.
This month, we will look at LWP, the “library for web programming” available for Perl, along with several associated modules. The programs we will write will be web clients, rather than web servers. Server-side programs receive HTTP requests and generate HTTP responses; our programs this month will generate the requests and wait for the responses generated by the server.
As we examine these modules, we will gain a better understanding of how HTTP works, as well as how to use the various modules of LWP to construct all sorts of programs that retrieve and sort information stored on the Web.
HTTP, the “hypertext transfer protocol”, makes the Web possible. HTTP is one of many protocols used on the Internet and is considered a high-level protocol, alongside SMTP (the simple mail transfer protocol) and FTP (file transfer protocol). These are considered high-level protocols because they sit on a foundation of lower-level protocols that handle the more mundane aspects of networking. HTTP messages don't have to worry about handling dropped packets and routing, because TCP and IP take care of such things for it. If there is a problem, it will be taken care of at a lower level.
Dividing problems up in this way allows you to concentrate on the important issues, without being distracted by the minute details. If you had to think about your car's internals every time you wanted to drive somewhere, you would quickly find yourself concentrating on too many things at once and unable to perform the task at hand. By the same token, HTTP and other high-level protocols can ignore the low-level details of how the network operates, and simply assume the connection between two computers will work as advertised.
HTTP operates on a client-server model, in which the computer making the request is known as the client, and the computer receiving the request and issuing a response is the server. In the world of HTTP, servers never speak before they are spoken to—and they always get the last word. This means a client's request can never depend on the server's response; a client interested in using a previous response to form a new request must open a new connection.
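In LWP, this request-response cycle maps directly onto objects: the client builds a request, sends it, and receives the server's response. Here is a minimal sketch, assuming LWP is installed and using the same demonstration server that appears later in this column:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

# The client speaks first, by constructing a request...
my $ua  = LWP::UserAgent->new;
my $req = HTTP::Request->new(GET => 'http://www.lerner.co.il/');

# ...and the server gets the last word, returned to us
# as an HTTP::Response object.
my $res = $ua->request($req);

if ($res->is_success) {
    print $res->content;
}
else {
    print "Error: ", $res->status_line, "\n";
}
```

Each call to request uses its own connection; a client that wants to follow up on a response must issue a brand-new request, just as described above.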
Given all of that theory, how does HTTP work in practice? You can experiment for yourself, using the simple telnet command. telnet is normally used to access another computer remotely, by typing:

telnet remotehost
That demonstrates the default behavior, in which telnet opens a connection to port 23, the standard port for such access. You can use telnet to connect to other ports as well, and if there is a server running there, you can even communicate with it.
Since HTTP servers typically run on port 80, I can connect to one with the command:
telnet www.lerner.co.il 80
I get the following response on my Linux box:
Trying 22.214.171.124...
Connected to www.lerner.co.il.
Escape character is '^]'.

Once we have established this connection, it is my turn to talk. I am the client in this context, which means I must issue a request before the server will issue any response. HTTP requests consist, at minimum, of a method, an object on which to apply that method, and an HTTP version number. For instance, we can retrieve the contents of the file at / by typing
GET / HTTP/1.0

This indicates we want the file at / to be returned to us, and that the highest-numbered version of HTTP we can handle is HTTP/1.0. If we were to indicate that we support HTTP/1.1, an advanced server would respond in kind, allowing us to perform all sorts of nifty tricks.
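LWP builds exactly this sort of request line for us. A short sketch using HTTP::Request (the URL is our demonstration server from above):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Request;

# Method, object and version -- the three parts of a request
my $req = HTTP::Request->new(GET => 'http://www.lerner.co.il/');
$req->protocol('HTTP/1.0');    # announce the highest version we speak

# as_string shows the request much as we would type it into telnet
print $req->as_string;
```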
If you pressed return after issuing the above command, you are probably still waiting to receive a response. That's because HTTP/1.0 introduced the idea of “request headers”, additional pieces of information that a client can pass to a server as part of a request. These client headers can include cookies, language preferences, the previous URL this client visited (the “referer”) and many other pieces of information.
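With LWP, these request headers are set on the HTTP::Request object before it is sent. A sketch; the header values here are invented purely for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Request;

my $req = HTTP::Request->new(GET => 'http://www.lerner.co.il/');

# Language preferences and an identifying User-Agent string
$req->header('Accept-Language' => 'en, he');
$req->header('User-Agent'      => 'lwp-column-demo/0.1');

# The previous URL this client visited -- note HTTP's
# famously misspelled "referer"
$req->referer('http://www.example.com/previous.html');

print $req->headers_as_string;
```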
Because we will stick with a simple GET request, we press return twice after our one-line command: once to end the first line of our request, and again to indicate we have nothing more to send. As with e-mail messages, a blank line separates the headers—information about the message—from the message itself.
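The same rule applies in the other direction: everything up to the first blank line in the server's reply is headers, and everything after it is the message body. A sketch using HTTP::Response's parse method on a canned response, so nothing here depends on the network:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Response;

# A tiny, hand-written server response
my $raw = join "\n",
    'HTTP/1.0 200 OK',
    'Content-Type: text/html',
    '',                        # the blank line ends the headers
    '<p>Hello, world</p>';

# parse splits the headers from the body at that blank line
my $res = HTTP::Response->parse($raw);

print $res->header('Content-Type'), "\n";   # text/html
print $res->content, "\n";                  # <p>Hello, world</p>
```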