Relinking a Multi-Page Web Document
There is something magical about writing a web-based document that just doesn't exist with a regular linear document. Something about getting all those links just right and in the right sequence makes a web document come alive. Of course, getting the links just right can be a big job, especially in a document with many pages. I found that out when I tackled my first multi-page document.
I had been writing HTML for several months when an opportunity came to make a presentation at our local Internet Special Interest Group (part of a larger PC users group). At that time, only a few of us were “on the Net”, but many people were interested in what the Internet—and more specifically—what the Web could do for them. I volunteered to give a talk on the basics of HTML and putting together your own web page.
The group met in the library of a local university, and we had a live Internet connection tied into an overhead projector in the room. I decided it would be neat to write a presentation about HTML in HTML. Each web page would be a single slide in the presentation. Links between pages would allow me to move forward (and backward) as the talk progressed.
So I put together about 15 pages of slides and linked them so each page had a next link to the next page and a prev link to the previous page. I put these links at the top and bottom of each page, so there were four links on every page (actually, I had links to the table of contents too, but let's ignore those for the moment). Figure 1 shows how consecutive pages are linked.
The talk went well, but I saw several places where I could improve it. When I started adding pages to the document, I made a very important discovery: inserting pages was a big pain. If I wanted to insert a new page between existing pages A and B, I had to update the NEXT links in page A, update the PREV links in page B, and update both the NEXT and PREV links in the new page. And because I had links at the top and bottom of each page, there were twice as many links to update. Figure 2 shows the revised links.
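The bookkeeping here is exactly that of inserting a node into a doubly linked list: three pages change, four links total. A minimal sketch in Python (the class and function names are mine, purely for illustration):

```python
class Page:
    """One slide in the presentation, linked to its neighbors."""
    def __init__(self, name):
        self.name = name
        self.next = None
        self.prev = None

def insert_between(a, b, new):
    """Insert `new` between pages `a` and `b`."""
    a.next = new    # update the NEXT link in page A
    new.prev = a    # update the PREV link in the new page
    new.next = b    # update the NEXT link in the new page
    b.prev = new    # update the PREV link in page B
```

Four assignments per insertion, and with links at both the top and bottom of each HTML page, that meant eight edits by hand every time.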
After struggling with manual updates to the pages, I decided there had to be a better way. The relink Perl script was a result of that frustration.
Using relink is simple. First you need a file (called links) containing a list of the pages in the order they are to be visited. Omit the “.html” portion of the page names in the links file; relink assumes the files end with that extension.
For example, consider the following (very abbreviated) version of my original HTML presentation. I start with an introduction (intro.html), have a page about anchors (anchor.html) and finish with a conclusion (conclude.html). The links file would contain:
# Pages for a simple presentation
intro
anchor
conclude
Each HTML page contains a set of links to its next and previous page. For example, the anchor.html file contains the following links at the top and bottom of the page.
<a href="conclude.html"> <img src="icons/next.gif" alt="NEXT"> </a>
<a href="intro.html"> <img src="icons/prev.gif" alt="PREV"> </a>

After reviewing my short document, I feel that I really should mention URLs and how they work before delving into anchors. So I write a new page called url.html and wish to add it to my document. I simply edit the links file to contain:
# Pages for an updated, but still
# simple presentation
intro
url
anchor
conclude

After running relink with the new page order, the links in the anchor page will now look like:
<a href="conclude.html"> <img src="icons/next.gif" alt="NEXT"> </a>
<a href="url.html"> <img src="icons/prev.gif" alt="PREV"> </a>

Notice that the previous link now points to the page about URLs rather than the introduction. The links in the other pages are updated in a consistent manner to support the new page order. Pages can be added, deleted, or simply rearranged just by editing the links file and specifying the new order.
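The relink script itself is written in Perl and isn't reproduced here, but the idea is small enough to sketch. The following Python sketch reads the links file (skipping comment lines), then rewrites the href preceding each NEXT and PREV icon in every page to match the new order. The file names and icon paths come from the examples above; the exact matching strategy and in-place rewriting are my assumptions, not the original script's implementation.

```python
import re
from pathlib import Path

def read_links(path="links"):
    """Return the page order from the links file, skipping comments and blanks."""
    pages = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            pages.append(line)
    return pages

def relink(pages):
    """Rewrite the NEXT/PREV anchors in each page to match the given order."""
    for i, page in enumerate(pages):
        html = Path(page + ".html").read_text()
        if i + 1 < len(pages):
            # Point every NEXT icon link at the following page.
            html = re.sub(r'<a href="[^"]*">(\s*<img src="icons/next\.gif")',
                          f'<a href="{pages[i + 1]}.html">\\1', html)
        if i > 0:
            # Point every PREV icon link at the preceding page.
            html = re.sub(r'<a href="[^"]*">(\s*<img src="icons/prev\.gif")',
                          f'<a href="{pages[i - 1]}.html">\\1', html)
        Path(page + ".html").write_text(html)
```

Because the substitution keys off the next.gif and prev.gif icons that appear in every page, a single pass over the files is enough to repair all the links after any reordering, whether a page was added, deleted, or moved.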