A Web Crawler in Perl
Web-crawling robots, or spiders, have a certain mystique among Internet users. We all use search engines like Lycos and Infoseek to find resources on the Internet, and these engines use spiders to gather the information they present to us. Very few of us, however, actually use a spider program directly.
Spiders are network applications which traverse the Web, accumulating statistics about the content found. So how does a web spider work? The algorithm is straightforward:
1. Create a queue of URLs to be searched, beginning with one or more known URLs.
2. Pull a URL out of the queue and fetch the Hypertext Markup Language (HTML) page found at that location.
3. Scan the HTML page for hyperlinks and add any newly found URLs to the queue.
4. If there are URLs left in the queue, go to step 2.
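In Perl, the heart of this loop might look something like the following sketch; get_http() is the fetch routine discussed below (shown here taking a full URL for brevity), and the hypothetical extract_links() stands in for the HTML-scanning code:

@queue = ($ARGV[0]);          # step 1: seed the queue with the starting URL
%seen = ();

while (@queue) {              # step 4: continue while URLs remain
    $url = shift(@queue);     # step 2: pull a URL out of the queue...
    next if $seen{$url}++;    # ...skipping pages already visited
    $page = &get_http($url);  # ...and fetch the page at that location
    foreach $link (&extract_links($page)) {   # step 3: scan for hyperlinks
        push(@queue, $link);
    }
}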
Listing 1 is a program, spider.pl, which implements the above algorithm in Perl. This program should run on any Linux system with Perl version 4 or higher installed. Note that all code in this article assumes Perl is installed as /usr/bin/perl. These scripts are available for download on my web page at http://www.javanet.com/~thomas/.
To run the spider, use a command of the form:

spider.pl <starting-URL> <search-phrase>
The spider will commence the search. The starting URL must be fully specified, or it may not parse correctly. The spider searches the initial page and all its descendant pages for the given search phrase. The URL of any page with a match is printed. To print a list of URLs from the SSC site containing the phrase “Linux Journal”, type:
spider.pl http://www.ssc.com/ "Linux Journal"

The Perl variable $DEBUG, defined in the first few lines of spider.pl, controls how much output the spider produces. $DEBUG ranges from 0 (only matching URLs are printed) to 2 (program status and dumps of internal data structures are also output).
The most interesting thing about the spider program is that it is a network program. The subroutine get_http() encapsulates all the network programming required to implement a spider; it performs the "fetch" alluded to in step 2 of the above algorithm. This subroutine opens a socket to a server and uses the HTTP protocol to retrieve a page. If the URL has a port number appended to the server name, that port is used to establish the connection; otherwise, the well-known port 80 is used.
Once a connection to the remote machine has been established, get_http() sends a string such as:
GET /index.html HTTP/1.0
This string is followed by two newline characters. It is a snippet of the Hypertext Transfer Protocol (HTTP), the protocol on which the Web is based. This request asks the web server to which we are connected to send us the contents of the file /index.html. get_http() then reads the socket until an end of file is encountered. Since HTTP 1.0 is a stateless protocol that uses a separate connection for each request, this is the extent of the conversation: we submit a request, the web server sends a response and the connection is terminated.
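Stripped of error reporting and the header handling described below, a fetch routine of this general shape might look like the following sketch. This is not the listing's exact code; the numeric constants 2, 1 and 6 are the Linux values of AF_INET/PF_INET, SOCK_STREAM and IPPROTO_TCP.

sub get_http {
    local($host, $port, $document) = @_;
    local($sockaddr, $name, $aliases, $type, $len, $thataddr);
    local($that, $response);

    # Pack the server's address; this template is discussed below.
    $sockaddr = "S n a4 x8";
    ($name, $aliases, $type, $len, $thataddr) = gethostbyname($host);
    $that = pack($sockaddr, 2, $port, $thataddr);

    # Open a TCP socket and connect to the web server.
    socket(FS, 2, 1, 6) || return;
    connect(FS, $that) || return;

    # Unbuffer the socket, send the request followed by two
    # newlines, then read the reply until end of file.
    select(FS); $| = 1; select(STDOUT);
    print FS "GET $document HTTP/1.0\n\n";
    $response = join('', <FS>);
    close(FS);
    $response;
}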
The response from the web server consists of a header, as specified by the HTTP standard, and the HTML-tagged text making up the page. These two parts of the response are separated by a blank line. Running the spider at debug level 2 will display the HTTP headers for you as a page is fetched. The following is a typical response from a web server.
HTTP/1.0 200 OK
Date: Tue, 11 Feb 1997 21:54:05 GMT
Server: Apache/1.0.5
Content-type: text/html
Content-length: 79
Last-modified: Fri, 22 Nov 1996 10:11:48 GMT

<HTML><TITLE>My Web Page</TITLE>
<BODY>
This is my web page.
</BODY>
</HTML>
The spider program checks the Content-type field in the HTTP header as it arrives. If the content is of any MIME type other than text/html or text/plain, the download is aborted. This avoids the time-consuming download of things like .Z and .tar.gz files, which we don't wish to search. While most sites use the FTP protocol to transfer this type of file, more and more sites are using HTTP.
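A sketch of such a check, examining the header lines one at a time as they arrive on the socket:

# Read header lines until the blank line that ends the header,
# aborting the transfer for anything that is not HTML or text.
while (<FS>) {
    last if /^\r?$/;        # a blank line separates header and body
    if (/^Content-type:\s*([^\s;]+)/i) {
        if ($1 ne 'text/html' && $1 ne 'text/plain') {
            close(FS);      # skip .Z, .tar.gz and other binary content
            return;
        }
    }
}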
There is a hardware dependency in get_http() that you should be aware of if you are running Linux on a SPARC or Alpha. When building the network addresses for the socket, the Perl pack() routine is used to encode integer data. The line:
$sockaddr="S n a4 x8";
is suitable only for 32-bit CPUs. To get around this, see Mike Mull's article “Perl and Sockets” in LJ Issue 35.
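Under Perl 5, you can also sidestep the word-size issue entirely by letting the standard Socket module pack the address for you:

use Socket;

# sockaddr_in() builds a correct struct sockaddr_in for the
# local architecture, so no hand-written template is needed.
$that = sockaddr_in($port, inet_aton($host));
socket(FS, PF_INET, SOCK_STREAM, getprotobyname('tcp')) || die "socket: $!";
connect(FS, $that) || die "connect: $!";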