A Web Crawler in Perl

Here's how spiders search the Web collecting information for you.
The URL Queue

Once the spider has downloaded the HTML source for a web page, we can scan it for text matching the search phrase and notify the user if we find a match.
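
For instance, assuming the page's HTML has been read into $content, the search phrase into $phrase, and the page's address into $url (illustrative names, not necessarily the ones spider.pl uses), the match-and-notify step can be a single line:

# Report a hit if the page contains the search phrase (case-insensitive,
# with the phrase treated as literal text rather than as a pattern).
print "Found \"$phrase\" at $url\n" if $content =~ /\Q$phrase\E/i;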

We can also find any hypertext links embedded in the page and use them as a starting point for a further search. This is exactly what the spider program does; it scans the HTML content for anchor tags of the form <A HREF="url"> and adds any links it finds to its queue of URLs.
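
A minimal sketch of that scan, again assuming $content holds the page's HTML and $url its address (the details in spider.pl may differ):

# Pull the target out of each anchor tag and queue it if it's new.
while ($content =~ m{<A\s+HREF\s*=\s*"([^"]+)"}gi) {
    my $link = fqURL($url, $1);                          # build a complete URL
    $URLqueue{$link} = 0 unless exists $URLqueue{$link}; # 0 = not yet visited
}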

A hyperlink in an HTML page can be in one of several forms. Some of these must be combined with the URL of the page in which they're embedded to get a complete URL. This is done by the fqURL() function, which combines the URL of the current page with the URL of a hyperlink found in that page to produce a complete URL for the hyperlink.

For example, here are some links that might be found on a fictitious web page at http://www.ddd.com/clients/index.html, together with the resulting URLs produced by fqURL().

URL in Anchor Tag               Resulting URL
http://www.eee.org/index.html   http://www.eee.org/index.html
att.html                        http://www.ddd.com/clients/att.html
/att.html                       http://www.ddd.com/att.html

As these examples show, the spider can handle both a fully specified URL and a URL giving only a document name. When only a document name is given, it can be either an absolute path or a path relative to the current page. In addition, the spider can handle URLs with embedded port numbers, e.g., http://www.ddd.com:1234/index.html.
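
A simplified sketch of the combination fqURL() performs is shown below; the variable names and the exact edge cases handled are illustrative rather than a copy of spider.pl.

sub fqURL {
    my ($current, $link) = @_;

    # A fully specified URL can be used as-is.
    return $link if $link =~ m{^http://}i;

    # Split the current page's URL into host (with optional port) and path.
    my ($host, $path) = $current =~ m{^http://([^/]+)(/.*)?$}i;
    $path = '/' unless defined $path;

    if ($link =~ m{^/}) {
        return "http://$host$link";      # absolute path on the same host
    } else {
        $path =~ s{[^/]*$}{};            # strip the document name
        return "http://$host$path$link"; # path relative to the current page
    }
}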

One function not implemented in fqURL() is the stripping of back-references (../) from a URL. Ideally, the URL /test/../index.html would be translated to /index.html, so we would know that both point to the same document.
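
If we wanted to add it, a single substitution applied repeatedly would do the job; here $path is assumed to hold the path portion of the URL:

# Collapse "directory/../" pairs until none remain,
# so /test/../index.html becomes /index.html.
1 while $path =~ s{[^/]+/\.\./}{};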

Once we have a fully-specified URL for a hyperlink, we can add it to our queue of URLs to be scanned. One concern that crops up is how to limit our search to a given subset of the Internet. An unrestricted search would end up downloading a good portion of the world-wide Internet content—not something we want to do to our compadres with whom we share network bandwidth. The approach spider.pl takes is to discard any URL that does not have the same host name as the beginning URL; thus, the spider is limited to a single host. We could also extend the program to specify a set of legal hosts, allowing a small group of servers to be searched for content.
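
In code, that restriction is just a host-name comparison; a sketch, assuming $startHost holds the host name taken from the starting URL and $link is a fully qualified hyperlink:

# Keep only links that stay on the starting host.
my ($linkHost) = $link =~ m{^http://([^/:]+)}i;   # host name, minus any port
next unless defined $linkHost and lc($linkHost) eq lc($startHost);

Replacing the equality test with a lookup in a hash of allowed hosts would give the multi-server variant described above.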

Another issue that arises when handling the links we've found is how to prevent the spider from going in circles. Circular hyperlinks are very common on the Web. For example, page A has a link to page B, and page B has a link back to page A. If we point our spider at page A, it finds the link to B and checks it out. On B it finds a link to A and checks it out. This loop continues indefinitely. The easiest way to avoid getting trapped in a loop is to keep track of where the spider has been and ensure that it doesn't return. Step 2 in the algorithm shown at the beginning of this article suggests that we “pull a URL out of our queue” and visit it. The spider program doesn't remove the URL from the queue. Instead, it marks that URL as having been scanned. If the spider later finds a hyperlink to this URL, it can ignore it, knowing it has already visited the page. Our URL queue holds both visited and unvisited URLs.
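
In terms of the %URLqueue hash, that bookkeeping might look like the following sketch, where a value of 0 marks a URL as not yet scanned and 1 as scanned (the exact values spider.pl uses may differ):

# Main loop: keep going while any URL is still marked unvisited.
while (my ($url) = grep { $URLqueue{$_} == 0 } keys %URLqueue) {
    $URLqueue{$url} = 1;   # mark it visited so we never return to it
    # ... download $url, search it, and queue any new links it contains ...
}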

The set of pages the spider has visited will grow steadily, and the set of pages it has yet to visit can grow and shrink quickly, depending on the number of hyperlinks found in each page. If a large site is to be traversed, you may need to store the URL queue in a database rather than in memory as we've done here. The associative array that holds the URL queue, %URLqueue, could easily be linked to a GDBM database with the Perl 4 functions dbmopen() and dbmclose() or the Perl 5 functions tie() and untie().
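
With Perl 5, for example, the switch to on-disk storage is only a few lines (the file name here is arbitrary):

use GDBM_File;

# Keep the URL queue in a GDBM file instead of in memory.
tie %URLqueue, 'GDBM_File', 'urlqueue.gdbm', &GDBM_WRCREAT, 0640
    or die "Cannot open URL queue database: $!";

# ... crawl as before; reads and writes of %URLqueue now go to disk ...

untie %URLqueue;   # close the database when the crawl is finished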

Responsible Use

Note that you should not unleash this beast on the Internet at large, not only because of the bandwidth it consumes, but also because of Internet conventions. The document request the spider sends is a one-line GET request. To follow the HTTP protocol strictly, it should also include User-Agent and From fields, giving the remote server the opportunity to deny our request and/or collect statistics.
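
A more polite request adds those two header lines before the blank line that ends the request. In a sketch (the socket handle, document variable, and addresses are assumptions, not necessarily the names spider.pl uses):

# Identify the robot and its operator to the remote server.
print $socket "GET /$document HTTP/1.0\r\n";
print $socket "User-Agent: spider.pl\r\n";
print $socket "From: webmaster\@ddd.com\r\n";   # a real contact address
print $socket "\r\n";                           # blank line ends the headers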

This program also ignores the “robots.txt” convention that administrators use to deny access to robots. The file /robots.txt should be checked before any further scanning of a host; it indicates whether scanning by a robot is welcome and declares any subdirectories that are off-limits. A robots.txt file that excludes scanning of only two directories looks like this:

User-agent: *
Disallow: /tmp/
Disallow: /cgi-bin/

A file that prohibits all scanning on a particular web server looks like this:

User-agent: *
Disallow: /

Robots like our spider can place a heavy load on a web server, and we don't wish to use it on servers that have been declared off-limits to robots by their administrators.
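
A respectful spider would fetch /robots.txt before anything else and skip any path matching a Disallow line. The following simplified check ignores User-agent sections, treating every Disallow line as applying to us, and assumes a hypothetical getPage() routine that fetches a document and returns its body:

sub robotAllowed {
    my ($host, $path) = @_;
    my $robots = getPage($host, '/robots.txt');   # hypothetical fetch routine
    return 1 unless defined $robots;              # no robots.txt: scanning is allowed
    foreach my $line (split /\n/, $robots) {
        if ($line =~ /^Disallow:\s*(\S+)/i) {
            return 0 if index($path, $1) == 0;    # path is off-limits
        }
    }
    return 1;
}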
