I know, I'm in the middle of a series of columns about how to work with ImageMagick on the command line, but sometimes other things arise. I imagine a lot of you are somehow involved in the management of servers or systems, so you all understand firefighting.

Of course, this means you all also understand the negative feedback loop that is an intrinsic part of system administration and IT management. I mean, people don't call you and the CEO doesn't send a memo saying, "system worked all day, printer even printed. Thanks!"

Nope, it's when things go wrong that you hear about them, and that propensity to ignore the good and have to deal with the bad when it crops up is not only a characteristic of being in corporate IT, it's just as true if you're running your own system—which is how it jumped out of the pond and bit me this month.

It all started ten years ago with my Ask Dave Taylor site. You've probably bumped into it, as it's been around for more than a decade and served helpful tutorial information for tens of millions of visitors in that period.

Ten years ago, the choice of Movable Type as my blogging platform made total sense and was a smart alternative to the raw, unfinished WordPress platform with its never-ending parade of hacks and problems. As every corporate IT person knows, however, sometimes you get locked in to the wrong platform and are then stuck, with the work required to migrate becoming greater and greater each month that nothing happens.

For the site's tenth anniversary, therefore, it was time. I had to bite the bullet and migrate all 3,800 articles and 56,000 comments from Movable Type to WordPress, because yes, WordPress won and is clearly the industry standard for content management systems today.

The task was daunting, not just because of the size of the import (it required the consulting team to rewrite the standard import tool to cope with that many articles and comments), but also because the naming scheme changed. On Movable Type, I'd always had it set to convert the article's name into a URL like this:

Name: Getting Started with Pinterest

URL: /getting_started_with_pinterest.html

That was easy and straightforward, but on WordPress, URLs have dashes, not underscores, and, more important, they don't end with .html because they're generated dynamically as needed. This means the default URL for the new WordPress site would look like this:

New URL: /getting-started-with-pinterest/

URLs can be mapped upon import so that the default dashes become underscores, but it was the suffix that posed a problem, and post-import there were 3,800 URLs that were broken because every single link to xx_xx.html failed.

Ah! A 301 redirect! Yes, but thousands of redirects slow down the server for everyone, so a rewrite rule is better. Within Apache, you can specify "if you see a URL of the form xx_xx.html, rewrite it to 'xx_xx' and try again", a darn handy capability.
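In practice, that's a single rule. Something like this mod_rewrite sketch in the site's .htaccess would cover every old-style URL (an illustration of the idea, not the actual rule I ended up using):

RewriteEngine On
# any request for xx_xx.html gets retried as xx_xx/
RewriteRule ^(.+)\.html$ /$1/ [R=301,L]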

But life is never that easy, because although this rewrite will work for 95% of the URLs on the old site, there were some that just ended up with a different URL because I'd monkeyed with things somewhere along the way. Yeah, there's always something.

For example, the old site URL /schedule_facebook_photo_upload_fan_page.html is now on the server with the URL /schedule-a-facebook-photo-upload-to-my-fan-page/.

That's helpful, right? (Sigh.)

These all can be handled with a 301 redirect, but the question is, out of almost 4,000 article URLs on the old site, which ones don't actually successfully map with the rewrite rule (.html to a trailing slash) to a page on the new server?

Finally Some Scripting

To identify these rewrite fails, I had to create a script—and fast. After all, while the internal linkages still might work, the thousands of external links from sites like Popular Science, the Wall Street Journal, Wired and elsewhere were now broken. Yikes—not good at all.

I started out on the command line with a URL that I knew failed. Here's what happened when I used curl to grab that bad URL on the new site:


$ curl http://www.askdavetaylor.com/schedule-facebook-photo-upload-to-my-fan-page.html | head -5

% Total  % Received % Xferd  Average Speed  Time  Time  Time Current
                             Dload  Upload  Total Spent Left Speed
0     0  0    0     0     0      0     0 --:--:-- --:--:-- --:--:--
0<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8" />
<h3>Nothing found for
Schedule-A-Facebook-Photo-Upload-To-My-Fan-Page</h3>
100 31806   0 31806  0   0  110k  0 --:--:-- --:--:-- --:--:-- 110k
curl: (23) Failed writing body (0 != 754)

Ugh, what a mess this is, and it's not surprising because I forgot to add the --silent flag to curl when I invoked it.

Still, there's enough displayed here to provide a big clue. It's a 404 error page, as expected, and the <h3> indicates just that:


<h3>Nothing found for ...

So that's an easy pattern to search for:


curl --silent URL | grep '<h3>Nothing found for'

That does the trick. If the output is non-empty, the link failed and generated a 404 error; if the link worked, the <h3> holds the proper title of the article, and the words "Nothing found for" won't appear.
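You can try the test by hand before scripting it, using grep's exit status instead of capturing the output (a quick sketch, with the URL from the earlier curl run rewritten to the trailing-slash form):

url="http://www.askdavetaylor.com/schedule-facebook-photo-upload-to-my-fan-page/"
if curl --silent "$url" | grep -q '<h3>Nothing found for' ; then
  echo "broken: $url"
else
  echo "works:  $url"
fi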

That's most of the needed logic for the script. The only other step is to simulate the rewrite rule so that all the links that do work aren't flagged as a problem. Easy:


newname="$(echo $name | sed 's/\.html/\//')"

This is a super-common sequence that I use in scripts: a subshell invocation $( ) echoes a variable's current value just to push it through a sed substitution, in this case replacing .html with a trailing slash. (Both the dot and the replacement slash need a leading backslash, the dot because it's a regex wildcard and the slash because it's sed's delimiter, hence the complexity of the pattern.)
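Here's what that substitution produces with the Pinterest example from earlier:

$ name="getting_started_with_pinterest.html"
$ echo $name | sed 's/\.html/\//'
getting_started_with_pinterest/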

Wrap this in a for loop that steps through all possible *.html files, and here's what it looks like:


# curl, base and pattern are set earlier in the script, along these lines:
#   curl="curl --silent"
#   base="http://www.askdavetaylor.com"
#   pattern="<h3>Nothing found for"

for name in *.html ; do
  newname="$(echo $name | sed 's/\.html/\//')"
  test=$($curl $base/$newname | grep "$pattern")
  if [ -n "$test" ]
  then
    echo "* URL $base/$name fails to resolve."
  fi
done

That's a bit boring though. While I'm at it, I'd like to know how many URLs were tested and how many errors were encountered. I mean, why not, right? Quantification = good.

It's easily added, as it turns out, with the addition of two new variables (both of which need to be set to zero at the top of the script):


# count and error both start at zero, set at the top of the script
for name in *.html ; do
  newname="$(echo $name | sed 's/\.html/\//')"
  test=$($curl $base/$newname | grep "$pattern")
  if [ -n "$test" ] ; then
    echo "* URL $base/$name fails to resolve."
    error=$(( $error + 1 ))
  fi
  count=$(( $count + 1 ))
done

Then at the very end of the script, after all the specific errors are reported, a status update:


echo ""; echo "Checked $count links, found $error problems."

Great. Let's run it:


$ bad-links.sh | tail -5

* URL http://www.askdavetaylor.com/whats_a_fast_way_to_add_a_store_and_shopping_cart_to_my_site.html fails to resolve.
* URL http://www.askdavetaylor.com/whats_amazons_simple_storage_solution_s3.html fails to resolve.
* URL http://www.askdavetaylor.com/whats_my_yahoo_account_password_1.html fails to resolve.
* URL http://www.askdavetaylor.com/youtube_video_missing_hd_resolution.html fails to resolve.

Checked 3658 links, found 98 problems.

Phew. Now I know the special cases and can apply custom 301 redirects to fix them. By the time you read this article, all will be well on the site (or better be).
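For the record, each of those custom redirects is just one line. With Apache's mod_alias, the Facebook example from earlier would look something like this (a sketch, not necessarily the exact directive on the site):

Redirect 301 /schedule_facebook_photo_upload_fan_page.html http://www.askdavetaylor.com/schedule-a-facebook-photo-upload-to-my-fan-page/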

Dave Taylor has been hacking shell scripts on UNIX and Linux systems for a really long time. He's the author of Learning Unix for Mac OS X and Wicked Cool Shell Scripts. You can find him on Twitter as @DaveTaylor, and you can reach him through his tech Q&A site: Ask Dave Taylor.
