A Web Crawler in Perl

Here's how spiders search the Web, collecting information for you.

Web-crawling robots, or spiders, have a certain mystique among Internet users. We all use search engines like Lycos and Infoseek to find resources on the Internet, and these engines use spiders to gather the information they present to us. Very few of us, however, actually use a spider program directly.

Spiders are network applications that traverse the Web, accumulating statistics about the content they find. So how does a web spider work? The algorithm is straightforward (a minimal Perl sketch of the loop follows the list):

  1. Create a queue of URLs to be searched, beginning with one or more known URLs.

  2. Pull a URL out of the queue and fetch the Hypertext Markup Language (HTML) page which can be found at that location.

  3. Scan the HTML page looking for new-found hyperlinks. Add the URLs for any hyperlinks found to the URL queue.

  4. If there are URLs left in the queue, go to step 2.
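
Before looking at spider.pl itself, here is a minimal sketch of that loop. It is not the article's code: LWP::Simple's get() stands in for the get_http() routine described below, the variable names are mine, and the link-extraction regular expression is deliberately naive.

#!/usr/bin/perl
# Minimal sketch of the crawl loop; NOT the article's spider.pl.
# LWP::Simple's get() stands in for the get_http() routine described below.
use strict;
use warnings;
use LWP::Simple qw(get);

my ($start_url, $phrase) = @ARGV;
die "usage: $0 <starting-URL> <search-phrase>\n" unless defined $phrase;

my @queue   = ($start_url);              # step 1: seed the queue
my %visited;                             # remember pages already searched

while (@queue) {                         # step 4: repeat while URLs remain
    my $url = shift @queue;              # step 2: pull a URL and fetch it
    next if $visited{$url}++;
    my $page = get($url) or next;

    print "$url\n" if index($page, $phrase) >= 0;

    # step 3: scan for hyperlinks and add them to the queue (naive regex)
    while ($page =~ /<a\s[^>]*href\s*=\s*["']?([^"'\s>]+)/gi) {
        push @queue, $1 if $1 =~ m{^http://}i;
    }
}

The sketch is invoked the same way as spider.pl, with a starting URL and a search phrase.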

Listing 1 is a program, spider.pl, which implements the above algorithm in Perl. This program should run on any Linux system with Perl version 4 or higher installed. Note that all code mentioned in this article assumes Perl is installed as /usr/bin/perl. These scripts are available for download on my web page at http://www.javanet.com/~thomas/.

To run the spider at the shell prompt use the command:

spider.pl <starting-URL> <search-phrase>

The spider will commence the search. The starting URL must be fully specified, or it may not parse correctly. The spider searches the initial page and all its descendant pages for the given search phrase. The URL of any page with a match is printed. To print a list of URLs from the SSC site containing the phrase “Linux Journal”, type:

spider.pl http://www.ssc.com/ "Linux Journal"

The Perl variable $DEBUG, defined in the first few lines of spider.pl, is used to control the amount of output the spider produces. $DEBUG can range from 0 (matching URLs are printed) to 2 (status of the program and dumps of internal data structures are output).

Interaction with the Internet

The most interesting thing about the spider program is that it is a network program. The subroutine get_http() encapsulates all the network programming required to implement a spider; it does the “fetch” alluded to in step 2 of the above algorithm. This subroutine opens a socket to a server and uses the HTTP protocol to retrieve a page. If the server name has a port number appended to it, that port is used to establish the connection; otherwise, the well-known port 80 is used.
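
As a rough illustration of that host-and-port handling (the regular expression and variable names here are mine, not necessarily those in spider.pl), splitting a URL apart might look like this:

# Hypothetical URL parsing: split a URL into host, port and document path.
# Falls back to the well-known HTTP port 80 when no port is given.
my $url = "http://www.ssc.com:8080/linux/index.html";
my ($host, $port, $document) = ("", 80, "/");

if ($url =~ m{^http://([^/:]+)(?::(\d+))?(/.*)?$}i) {
    $host     = $1;
    $port     = $2 if defined $2;
    $document = $3 if defined $3;
}
print "host=$host port=$port document=$document\n";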

Once a connection to the remote machine has been established, get_http() sends a string such as:

GET /index.html HTTP/1.0

This string is followed by two newline characters. This is a snippet of the Hypertext Transfer Protocol (HTTP), the protocol on which the Web is based. This request asks the web server to which we are connected to send us the contents of the file /index.html. get_http() then reads the socket until an end of file is encountered. Since HTTP/1.0 uses a separate connection for each request, this is the extent of the conversation: we submit a request, the web server sends a response and the connection is terminated.
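
A compact version of that exchange, written with the modern IO::Socket::INET module rather than the raw socket() and connect() calls used in spider.pl, might look like this sketch:

# Minimal HTTP/1.0 fetch; a sketch, not the get_http() from spider.pl.
use strict;
use warnings;
use IO::Socket::INET;

sub fetch {
    my ($host, $port, $document) = @_;

    my $sock = IO::Socket::INET->new(
        PeerAddr => $host,
        PeerPort => $port,
        Proto    => 'tcp',
    ) or return undef;

    # Submit the request; the blank line ends it.
    print $sock "GET $document HTTP/1.0\r\n\r\n";

    # Read until the server closes the connection (end of file), then
    # return the raw response: headers, a blank line, the HTML body.
    my $response = join '', <$sock>;
    close $sock;
    return $response;
}

my $response = fetch('www.ssc.com', 80, '/');
print $response if defined $response;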

The response from the web server consists of a header, as specified by the HTTP standard, and the HTML-tagged text making up the page. These two parts of the response are separated by a blank line. Running the spider at debug level 2 will display the HTTP headers for you as a page is fetched. The following is a typical response from a web server:

HTTP/1.0 200 OK
Date: Tue, 11 Feb 1997 21:54:05 GMT
Server: Apache/1.0.5
Content-type: text/html
Content-length: 79
Last-modified: Fri, 22 Nov 1996 10:11:48 GMT

<HTML><TITLE>My Web Page</TITLE>
<BODY>
This is my web page.
</BODY>
</HTML>

The spider program checks the Content-type field in the HTTP header as it arrives. If the content is of any MIME type other than text/html or text/plain, the download is aborted. This avoids the time-consuming download of things like .Z and .tar.gz files, which we don't wish to search. While most sites use the FTP protocol to transfer this type of file, more and more sites are using HTTP.
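
spider.pl makes this test while the response is still arriving, so it can abort the transfer early; the fragment below is only an illustration of the same idea, applied to a complete response such as the one returned by the fetch() sketch above.

# Illustrative Content-type test; not the exact code from spider.pl.
# Returns true when a raw HTTP response looks worth searching.
sub is_searchable {
    my ($response) = @_;
    my ($header) = split /\r?\n\r?\n/, $response, 2;
    my ($type)   = $header =~ m{^Content-type:\s*([^\s;]+)}im;

    # Only text/html and text/plain are worth scanning; tarballs,
    # images and other binary content are skipped.
    return defined $type && $type =~ m{^text/(?:html|plain)$}i;
}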

There is a hardware dependency in get_http() that you should be aware of if you are running Linux on a SPARC or Alpha. When building the network addresses for the socket, the Perl pack() routine is used to encode integer data. The line:

$sockaddr="S n a4 x8";

is suitable only for 32-bit CPUs. To get around this, see Mike Mull's article “Perl and Sockets” in LJ Issue 35.
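
For context, the classic Perl 4 idiom uses that template with pack() roughly as follows. This is an assumed reconstruction, not code copied from spider.pl; portable modern code would use the Socket module's sockaddr_in() instead.

# Assumed reconstruction of the Perl 4 idiom, not code from spider.pl.
# "S n a4 x8" packs: unsigned short address family, big-endian port,
# 4-byte IP address, then 8 bytes of zero padding.
$sockaddr = "S n a4 x8";
($name, $aliases, $type, $len, $thataddr) = gethostbyname("www.ssc.com");
$that = pack($sockaddr, 2, 80, $thataddr);          # 2 == AF_INET
socket(SOCK, 2, 1, getprotobyname("tcp")) || die "socket: $!\n";  # PF_INET, SOCK_STREAM on Linux
connect(SOCK, $that) || die "connect: $!\n";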

______________________

Comments

code download

Bejjan

Does anyone have the source? The URL in the article doesn't work anymore.

/Jimmy

Search engine components

dhanesh mane

I am working on search engine architecture. When I search for a basic search engine architecture, there are lots of different images and details available.

I am still searching for a generic search engine architecture and details of each and every component of it. Please let me know about this.

where to download spider.pl

Anonymous

Is there anyone who has a working version of this script? Even though it doesn't throw any error, I don't get any result. Thanks ahead.
S.

error while running script

Anonymous

When I run this script on my Ubuntu, I got the following error:

ERROR: Unknown host www.ssc.com.

Please, can anyone help me?

How to set proxy

Kinshuk Chandra

Mike, I tried to run the program but I got the error "unknown host".
So please tell me how to set the proxy in your spider.pl.
Thanks in advance.

The spider application

Norton Security

Thanks, Mike, for this great tutorial. However, when I try to access the spider.pl program from the location you provided, it fails saying there's no such directory on the server. Could you please correct this?

Thanks,
SecurityBay

urgent

Lassaad

Good morning,

I would like to download your script spider.pl from http://www.javanet.com/~thomas/, but this website is not active. Please, if you can, send me this script at my mail ing.lassaad@hotmail.com
or Skype ytlassaad.

Thank you very much...

I'm very sorry for my English, I speak French...
Good day.

Good job. foritmail3@hotmail

Forti

Good job.
foritmail3@hotmail.com
Thanks.

Good job, thanks.

Anonymous

Good job, thanks.

Re: A Web Crawler in Perl

Anonymous

Hi Mike,

It is the year 2002 and I just saw your spider.pl. I love it because it contains rich technical information I was looking for. I know Perl, but not enough! I tried to run the program from Windows Me through Explorer; the program sends the "GET /$document HTTP/1.0" but it does not get the response back! Do I need to configure Explorer or do something? By the way, you said the program runs on Linux! It is running all right with Me so far! Could you please tell me why I cannot get any response back?

Thanks a lot to people like you.

Personal:

I see you are telecommuting. I just started doing this. If you need help, please feel free to see my site:

www.softek-inc.com

Regards

Beheen Trimble

SW Engineer

Kinda old

Anonymous

But still good.
Thanks.

./spider.pl

Anonymous

./spider.pl http://www.ssc.com/ "Linux Journal"
syntax error at ./spider.pl line 236, near ">>>>"
syntax error at ./spider.pl line 250, near "line)"
syntax error at ./spider.pl line 265, near "elsif"
syntax error at ./spider.pl line 269, near "else"
Execution of ./spider.pl aborted due to compilation errors.

Why do I get these errors?

I get the same errors

Anonymous

The formatting is so bad that it lost a "}" at the end of the file.
Change the ">>>>>>>=" to just ">=".

Unfortunately, the running version of this script still sucks.

Sucking Script

Anonymous

Yeah, well, this script may work well, but you should really check your syntax before posting.

Like the guy above, I get the following errors:

syntax error at D:\Source\perl\TestArea\image_finder\igrab2.pl line 236, near ">>>>"
syntax error at D:\Source\perl\TestArea\image_finder\igrab2.pl line 250, near "line)"
syntax error at D:\Source\perl\TestArea\image_finder\igrab2.pl line 265, near "elsif"
syntax error at D:\Source\perl\TestArea\image_finder\igrab2.pl line 269, near "else"

Once they're resolved, I still get:

Missing right curly or square bracket at D:\Source\perl\TestArea\image_finder\igrab2.pl line 273, at end of line
syntax error at D:\Source\perl\TestArea\image_finder\igrab2.pl line 273, at EOF
Execution of D:\Source\perl\TestArea\image_finder\igrab2.pl aborted due to compilation errors.

Better luck next time.

EOF

Anonymous

Just add a "}" at the end because it's missing!
