A Web Crawler in Perl
Once the spider has downloaded the HTML source for a web page, we can scan it for text matching the search phrase and notify the user if we find a match.
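A minimal sketch of that step might look like the following; the variable names $content, $phrase and $url are placeholders for illustration, not the spider's actual names:

    # Report the page if the search phrase appears anywhere in its HTML source.
    if ($content =~ /\Q$phrase\E/i) {
        print "Found \"$phrase\" at $url\n";
    }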
We can also find any hypertext links embedded in the page and use them as a starting point for a further search. This is exactly what the spider program does; it scans the HTML content for anchor tags of the form <A HREF="url"> and adds any links it finds to its queue of URLs.
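The scan for anchor tags can be sketched with a single pattern match; the regular expression and the list @found_links below are illustrative rather than the program's exact code:

    # Collect the target of every <A HREF="..."> tag in the page.
    while ($content =~ /<A\s+HREF\s*=\s*"([^"]+)"/gi) {
        push @found_links, $1;      # each link still needs to be fully qualified
    }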
A hyperlink in an HTML page can be in one of several forms. Some of these must be combined with the URL of the page in which they're embedded to get a complete URL. This is done by the fqURL() function. It combines the URL of the current page and the URL of a hyperlink found in that page to produce a complete URL for the hyperlink.
For example, here are some links which might be found in a fictitious web page at http://www.ddd.com/clients/index.html, together with the resulting URL produced by fqURL().
URL in Anchor Tag                        Resulting URL
http://www.xyz.com/index.html            http://www.xyz.com/index.html
/index.html                              http://www.ddd.com/index.html
info.html                                http://www.ddd.com/clients/info.html
As these examples show, the spider can handle both a fully-specified URL and a URL with only a document name. When only a document name is given, it can be either a fully qualified path or a relative path. In addition, the spider can handle URLs with port numbers embedded, e.g., http://www.ddd.com:1234/index.html.
One feature not implemented in fqURL() is the stripping of back-references (../) from a URL. Ideally, the URL /test/../index.html would be translated to /index.html, so the spider would know that both point to the same document.
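The listing below is only a simplified sketch of how such a function can combine the two URLs; it follows the cases shown in the table above, but it is not the spider's exact code and, as noted, it makes no attempt to strip ../ back-references:

    # Combine the URL of the current page with a hyperlink found in it.
    sub fqURL {
        my ($thisURL, $href) = @_;

        return $href if $href =~ m{^http://}i;        # already fully specified

        # Split the current page's URL into host[:port] and path.
        my ($site, $path) = $thisURL =~ m{^http://([^/]+)(/.*)?$}i;
        $path = '/' unless defined $path;

        if ($href =~ m{^/}) {                         # fully qualified path
            return "http://$site$href";
        } else {                                      # relative path
            $path =~ s{[^/]*$}{};                     # drop the document name
            return "http://$site$path$href";
        }
    }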
Once we have a fully-specified URL for a hyperlink, we can add it to our queue of URLs to be scanned. One concern that crops up is how to limit our search to a given subset of the Internet. An unrestricted search would end up downloading a good portion of the world-wide Internet content—not something we want to do to our compadres with whom we share network bandwidth. The approach spider.pl takes is to discard any URL that does not have the same host name as the beginning URL; thus, the spider is limited to a single host. We could also extend the program to specify a set of legal hosts, allowing a small group of servers to be searched for content.
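That host test can be sketched in a couple of lines; $newURL and $startHost are placeholder names for the candidate URL and the host (with optional port) saved when the crawl begins:

    # Extract the host[:port] portion of the candidate URL and compare it
    # with the host we started from; skip the link if they differ.
    my ($host) = $newURL =~ m{^http://([^/]+)}i;
    next unless defined($host) && lc($host) eq lc($startHost);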
Another issue that arises when handling the links we've found is how to prevent the spider from going in circles. Circular hyperlinks are very common on the Web. For example, page A has a link to page B, and page B has a link back to page A. If we point our spider at page A, it finds the link to B and checks it out. On B it finds a link to A and checks it out. This loop continues indefinitely. The easiest way to avoid getting trapped in a loop is to keep track of where the spider has been and ensure that it doesn't return. Step 2 in the algorithm shown at the beginning of this article suggests that we “pull a URL out of our queue” and visit it. The spider program doesn't remove the URL from the queue. Instead, it marks that URL as having been scanned. If the spider later finds a hyperlink to this URL, it can ignore it, knowing it has already visited the page. Our URL queue holds both visited and unvisited URLs.
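One way to express this bookkeeping with the %URLqueue array is shown below; the 0/1 status values are an assumption for illustration, not necessarily the values the program uses:

    # Mark a newly discovered link as unvisited unless we already know about it.
    $URLqueue{$newURL} = 0 unless exists $URLqueue{$newURL};

    # Main loop: keep going while any URL in the queue is still unvisited.
    while (my @pending = grep { !$URLqueue{$_} } keys %URLqueue) {
        foreach my $url (@pending) {
            $URLqueue{$url} = 1;    # mark as scanned so we never return to it
            # ... download $url, search it, and queue any new links ...
        }
    }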
The set of pages the spider has visited will grow steadily, and the set of pages it has yet to visit can grow and shrink quickly, depending on the number of hyperlinks found in each page. If a large site is to be traversed, you may need to store the URL queue in a database rather than in memory as we've done here. The associative array that holds the URL queue, %URLqueue, could easily be linked to a GDBM database with the Perl 4 functions dbmopen() and dbmclose() or the Perl 5 functions tie() and untie().
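Under Perl 5, the tie might be set up roughly as follows; the database file name urlqueue.gdbm is arbitrary:

    use GDBM_File;

    # Keep the URL queue on disk instead of in memory.
    tie %URLqueue, 'GDBM_File', 'urlqueue.gdbm', &GDBM_WRCREAT, 0640
        or die "Cannot tie URL queue: $!";
    # ... crawl as before; %URLqueue now reads and writes the GDBM file ...
    untie %URLqueue;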
Note that you should not unleash this beast on the Internet at large, not only because of the bandwidth it consumes, but also because of Internet conventions. The document request the spider sends is a one-line GET request. To strictly follow the HTTP protocol, it should also include User-Agent and From fields, giving the remote server the opportunity to deny our request and/or collect statistics.
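A more polite request, still written by hand over the socket, might look roughly like this; the SOCKET filehandle, $path, and the User-Agent and From values are placeholders:

    # Send a GET request that identifies the robot and its operator.
    print SOCKET "GET $path HTTP/1.0\r\n";
    print SOCKET "User-Agent: spider.pl/0.1\r\n";
    print SOCKET "From: webmaster\@ourdomain.example\r\n";
    print SOCKET "\r\n";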
This program also ignores the "robots.txt" convention that administrators use to deny access to robots. The file /robots.txt should be checked before any further scanning of a host. This file indicates whether scanning by a robot is welcome and declares any subdirectories that are off-limits. A robots.txt file that excludes scanning of only two directories looks like this:
    User-agent: *
    Disallow: /tmp/
    Disallow: /cgi-bin/
A file that prohibits all scanning on a particular web server looks like this:
    User-agent: *
    Disallow: /

Robots like our spider can place a heavy load on a web server, and we don't wish to use it on servers that have been declared off-limits to robots by their administrators.
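A rough sketch of such a check is shown below. It assumes a hypothetical getPage() helper that fetches a document and returns its body, and it only handles the User-agent: * group:

    # Fetch /robots.txt once per host and collect its Disallow prefixes.
    sub disallowed_paths {
        my ($host) = @_;
        my $robots = getPage("http://$host/robots.txt");   # hypothetical fetch helper
        return () unless defined $robots;

        my (@paths, $applies);
        foreach my $line (split /\n/, $robots) {
            $line =~ s/#.*//;                               # strip comments
            if ($line =~ /^User-agent:\s*(\S+)/i) {
                $applies = ($1 eq '*');
            } elsif ($applies && $line =~ /^Disallow:\s*(\S+)/i) {
                push @paths, $1;
            }
        }
        return @paths;
    }

    # Before queuing a URL whose path portion is $path:
    my $blocked = grep { index($path, $_) == 0 } disallowed_paths($host);
    # skip the URL if $blocked is true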