Debugging Web Sites
Finally Some Scripting
To identify these rewrite failures, I had to create a script, and fast. After all, while the internal links might still work, the thousands of external links from sites like Popular Science, the Wall Street Journal, Wired and elsewhere were now broken. Yikes, not good at all.
I started out on the command line with one that I knew failed. Here's what happened when I used curl to grab a bad URL on the new site:
$ curl http://www.askdavetaylor.com/schedule-facebook-photo-upload-to-my-fan-page.html | head -5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8" />
<h3>Nothing found for Schedule-A-Facebook-Photo-Upload-To-My-Fan-Page</h3>
100 31806    0 31806    0     0   110k      0 --:--:-- --:--:-- --:--:--  110k
curl: (23) Failed writing body (0 != 754)
Ugh, what a mess this is, and it's not surprising, because I forgot to add the --silent flag to curl when I invoked it.
Still, there's enough displayed here to provide a big clue. It's a 404 error page, as expected, and the <h3> indicates just that:
<h3>Nothing found for ...
So that's an easy pattern to search for:
curl --silent URL | grep '<h3>Nothing found for'
That does the trick. If the output is non-empty, the link failed and generated a 404 error; if the link worked, the page contains the proper title of the article, and the words "Nothing found for" never appear.
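To see that check in isolation without hitting the network, here's a minimal sketch that runs the grep against captured page content instead of a live fetch (the sample line is the <h3> from the curl output above):

```shell
# Simulate the 404 check against saved page content instead of a live fetch.
page='<h3>Nothing found for Schedule-A-Facebook-Photo-Upload-To-My-Fan-Page</h3>'
result=$(printf '%s\n' "$page" | grep '<h3>Nothing found for')
if [ -n "$result" ] ; then
  echo "404 detected"
fi
```

In the real script, the printf is replaced by the curl invocation, but the non-empty test is exactly the same.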
That's most of the needed logic for the script. The only other step is to simulate the rewrite rule so that all the links that do work aren't flagged as a problem. Easy:
newname="$(echo $name | sed 's/\.html/\//')"
This is a super-common sequence that I use in scripts: a $( ) subshell echoing a variable's current value just to push it through a sed substitution, in this case replacing .html with a trailing slash (the dot needs to be escaped with a leading backslash, hence the complexity of the pattern).
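For instance, here's that substitution applied in isolation to one of the filenames the script will later report (the filename is just a sample):

```shell
# Rewrite a .html filename into its trailing-slash equivalent.
name="whats_my_yahoo_account_password_1.html"
newname="$(echo $name | sed 's/\.html/\//')"
echo "$newname"
# → whats_my_yahoo_account_password_1/
```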
Wrap this in a for loop that steps through all possible *.html files, and here's what it looks like:
for name in *.html ; do
  newname="$(echo $name | sed 's/\.html/\//')"
  test=$($curl $base/$newname | grep "$pattern")
  if [ -n "$test" ] ; then
    echo "* URL $base/$name fails to resolve."
  fi
done
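Note that the loop relies on $curl, $base and $pattern being defined near the top of the script. That setup block isn't shown in the article, but a plausible version (the specific values here are my assumptions) looks like this:

```shell
# Hypothetical initialization block: the names match the loop above,
# but the values are assumed, not taken from the original script.
curl="curl --silent"                  # fetch quietly, no progress meter
base="http://www.askdavetaylor.com"   # site being checked
pattern='<h3>Nothing found for'       # marker found only on the 404 page
echo "$curl $base/sample-page/"
```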
That's boring though, because while I'm at it, I'd like to know how many URLs were tested and how many errors were encountered. I mean, why not, right? Quantification = good.
It's easily added, as it turns out, with the addition of two new variables (both of which need to be set to zero at the top of the script):
for name in *.html ; do
  newname="$(echo $name | sed 's/\.html/\//')"
  test=$($curl $base/$newname | grep "$pattern")
  if [ -n "$test" ] ; then
    echo "* URL $base/$name fails to resolve."
    error=$(( $error + 1 ))
  fi
  count=$(( $count + 1 ))
done
Then at the very end of the script, after all the specific errors are reported, a status update:
echo ""; echo "Checked $count links, found $error problems."
Great. Let's run it:
$ bad-links.sh | tail -5
* URL http://www.askdavetaylor.com/whats_a_fast_way_to_add_a_store_and_shopping_cart_to_my_site.html fails to resolve.
* URL http://www.askdavetaylor.com/whats_amazons_simple_storage_solution_s3.html fails to resolve.
* URL http://www.askdavetaylor.com/whats_my_yahoo_account_password_1.html fails to resolve.
* URL http://www.askdavetaylor.com/youtube_video_missing_hd_resolution.html fails to resolve.
Checked 3658 links, found 98 problems.
Phew. Now I know the special cases and can apply custom 301 redirects to fix them. By the time you read this article, all will be well on the site (or better be).
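Those one-off redirects can live in the site's .htaccess file. Here's a hypothetical example, assuming an Apache server with mod_alias enabled; the target path is made up for illustration:

```apache
# Map an old .html URL that the blanket rewrite rule missed
# onto its actual new location (example paths only).
Redirect 301 /whats_my_yahoo_account_password_1.html /recover-yahoo-password/
```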
Dave Taylor has been hacking shell scripts for over thirty years. Really. He's the author of the popular "Wicked Cool Shell Scripts" and can be found on Twitter as @DaveTaylor and more generally at www.DaveTaylorOnline.com.