Scripting GNU in the 21st Century

Scripting in the GNU environment and parsing HTML in bash.
Parsing HTML

HTML is a nested data format and often doesn't lend itself to the sort of tabular data processing at which shell tools excel. Each tag or chunk of data comes wrapped in a surrounding context, requiring more programming work to analyze the structure.

Fortunately, a tool already exists that represents nested structures in a format that's easy for shell scripts to manage: the find utility. Given a tree of directories and files, it prints output like the following:


	work/
	work/tmp
	work/NOTES
	work/outgoing
	work/outgoing/e-mail
	work/outgoing/done.txt
	work/incoming
	work/incoming/TODO
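
Because each node appears on a single line bearing its full path, ordinary line-oriented filters apply directly to the tree. For example, with the work/ directory above:

	find work/ | grep outgoing

prints only the three outgoing paths.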

Dan Egnor has written a similar pair of tools for HTML and XML, distributed together as the xml2 package; the HTML-reading variant used below is called html2. Given a stream of HTML tags such as <html><body><a href="http://linuxjournal.com">Linux Journal</a></body></html>, it prints the following output:


	/html/body/a/@href=http://linuxjournal.com
	/html/body/a=Linux Journal
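
To try this yourself (assuming the xml2 package is installed), feed the markup to html2 on standard input:

	echo '<html><body><a href="http://linuxjournal.com">Linux Journal</a></body></html>' | html2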

Being Selective

I temporarily put submitform | html2 at the bottom of the script to take a look at the resulting data. I looked for the names of stations, times and other bits of information I wished to display. As luck would have it, the HTML was nicely uniform, so it was easy to separate out the important data.

The interesting data is all in table data cells within table rows within a table within a div tag within the body of the document. This means that running


	submitform | html2 | grep /html/body/div/table/tr/td=

prints something like the following for each train:


	/html/body/div/table/tr/td=Rockridge
	/html/body/div/table/tr/td=at 4:34 pm
	/html/body/div/table/tr/td=San Francisco Int'l Airport train
	/html/body/div/table/tr/td=Embarcadero Station
	/html/body/div/table/tr/td=at 4:54 pm
	/html/body/div/table/tr/td=Bikes Allowed

Separating the HTML context from the actual data was as simple as piping the result through cut -d = -f 2, which discards everything up to and including the first = and keeps the second =-delimited field. (If a value could itself contain an = sign, cut -d = -f 2- would be the safer spelling; this schedule data doesn't.)
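
For example:

	echo '/html/body/div/table/tr/td=Rockridge' | cut -d = -f 2

prints just Rockridge.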

The final data extraction function is as follows:


	function extractdata {
		submitform | html2 2> /dev/null | \
		  grep /html/body/div/table/tr/td= | cut -d = -f 2
	}

Formatting Output

In an earlier version of this script, I relied on an external awk program to format the data. The awk language is nice for these kinds of situations, because it has a structure in which you specify a regular expression, or other pattern, and the code to execute when a line of input matches that pattern. Thus, I could write a routine that runs whenever a certain time was encountered or when a note about bicycle rules appeared.
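
For instance, the pattern-action idea fits in a single line of awk (the sample line is taken from the schedule data):

	echo "at 4:34 pm" | awk '/^at / { print "time:", $2, $3 }'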

The Bourne shell--yes, even the old classic one--provides us with an awk-like construct that is useful in this situation: the case statement. Combining a while loop and a case test can provide somewhat awk-like scripting features, especially when combined with bash's more advanced string manipulation.

The basic format is a series of shell glob patterns, separated by pipes (|) and ending with a right parenthesis. Then comes a set of shell commands, terminated with a double semicolon (;;), after which the next pattern can be specified.
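
Here is a minimal sketch of the construct, with patterns and messages invented purely for illustration:

	while read line
	do
		case $line in
			hello|hi)
				echo "a greeting";;
			*.txt)
				echo "a text file name";;
			*)
				echo "something else: $line";;
		esac
	done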

Let's look at the formatting function:


	function formatdata {
		echo -n "Current time:	$(date +'%l:%M%p')"
		echo " (note that first train listed may be in the past)"
		board="Board:"
		while read i
		do
			case $i in
				*train)
					# The train's name arrives after the boarding
					# station and time, so snapshot them now.
					train=$i
					departure=$arrival
					beginning=$destination;;
				at\ *)
					# "at 4:34 pm" -- keep only the time.
					arrival=${i#at };;
				Timed|Transfer)
					# Transfer notice: skip the next line and
					# relabel this record as a transfer.
					read junk
					board="Xfer:";;
				*\ Allowed)
					# The bicycle rule is the last cell, so
					# print the assembled record.
					echo -n "${board}	"
					echo -n "(${departure}) ${beginning} to ${destination} (${arrival}) "
					echo "[${train}] (${i% Allowed})";;
				*)
					# Any other cell is a station name.
					destination=$i;;
			esac
		done
	}

In addition to the while loop and the case statement, this portion of the script uses an advanced feature of bash that I learned from Jim Dennis during an SVLUG meeting. ${VARIABLE#PATTERN} cuts off the left side of VARIABLE if it matches PATTERN, and ${VARIABLE%PATTERN} cuts off the right side. The trick to remembering which is which, as Jim Dennis told me, is that the # symbol (Shift+3) is to the left of the % symbol (Shift+5) on a US keyboard. This allows us to strip out unneeded text from our printout without shelling out to sed or awk.
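
Both expansions in action, on values taken from the schedule data:

	i="at 4:34 pm"
	echo "${i#at }"       # prints: 4:34 pm
	i="Bikes Allowed"
	echo "${i% Allowed}"  # prints: Bikes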

Putting extractdata | formatdata at the bottom of the script verifies that our base functionality is working as it should.
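
Given the sample rows shown earlier, the result should look something like this (the reported current time will vary):

	Current time:	 4:30PM (note that first train listed may be in the past)
	Board:	(4:34 pm) Rockridge to Embarcadero Station (4:54 pm) [San Francisco Int'l Airport train] (Bikes)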

______________________

Comments


Re: Scripting GNU in the 21st Century


It should be noted that the LSB is POSIX-based.

Re: Scripting GNU in the 21st Century


Overall, for someone learning bash, this is probably a reasonable example. I have many similar ones myself (grabbing satellite wildfeed data, for example).

However, as a means of introducing a newcomer to bash, or as a convincing description of why the newcomer should be using bash, I feel it falls short.

For example, some simple timing shows that his "$(basename $0)" construct is almost 100x slower than using "${0##*/}", although he does use another version of the same construct later, meaning that he is aware of it!
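
A rough way to measure this (the path and loop count here are arbitrary):

	p=/usr/local/bin/frobnicate
	time for i in $(seq 1000); do basename "$p" >/dev/null; done
	time for i in $(seq 1000); do echo "${p##*/}" >/dev/null; done

The first loop forks an external process on every pass; the second stays entirely inside the shell.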

His repeated use of the backslash character as a line continuation does not improve the readability of the script; in fact, it makes it worse. Leave it out! Yes, it still works -- if the line ends with a character which indicates there's more needed, the line continuation character is redundant. The pipe symbol (vertical bar) is such a character.

In general, all variable usage should be enclosed in double quotes (i.e., "Dollar signs in Double quotes"). This technique is only wrong 1 time out of 100, so the programmer will be correct 99% of the time. :) Yes, it may mean the double quotes are redundant in some cases, but there's a lot to be said for consistency, and hence, readability. Only when dealing with word splitting (where you want the words split) will the double quotes be incorrect.
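
For instance, with a value containing a space:

	f="two words"
	touch "$f"   # creates a single file named "two words"
	touch $f     # creates two files, "two" and "words"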

He appears to use brackets in "if" statements when the POSIX (?) technique would be double parentheses (brackets are for string comparisons, parens are for numeric comparisons, and the doubled form of each is recommended since they turn off I/O redirection, wildcarding, and word splitting). Maybe he doesn't know?
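
The distinction, in brief ([ is the portable POSIX test, while [[ and (( are bash/ksh extensions):

	n=5 s=hello
	if [ "$n" -gt 3 ]; then echo "POSIX numeric test"; fi
	if (( n > 3 )); then echo "bash arithmetic test"; fi
	if [[ $s == h* ]]; then echo "bash pattern match"; fi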

Lastly, the "liststations" function seems overly complex to me. First, is it really necessary to specify the entire XPath all the way from the "html" element down to the "select" element and its attribute?? I haven't seen the data, but I'd be willing to bet that just the "select" element and attribute would be enough (since select is non-functional outside of forms anyway, and the attribute specifies the name of the select element!). Regardless of whether simplification is possible, change the delimiter of the regexpr! Use something other than a slash and avoid LTS ("leaning toothpick syndrome", per Larry Wall). With the text thus cleaned up visually, maybe let sed also do the elimination of text up to and including the equals sign? That eliminates the need for cut, although it may hurt readability. Additionally, sed can also replace the while loop; tell sed to match on the "select" element and attribute, then read three more lines into the holding space, appending them to what's already there. Now run a substitution on the hold space and print the result. (Or if you're not comfortable with sed, use awk, and you still eliminate the cut and while loop.) YMMV.
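
The delimiter change alone might look like this, folding cut's job into the same sed call (a sketch, not code from the article):

	submitform | html2 2>/dev/null |
	  sed -n 's,^/html/body/div/table/tr/td=,,p'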

Overall, I agree with the other comment posted here that the standard for scripts should be POSIX, not a particular tool. Of course, POSIX has its own problems (quite a few, actually!), but that's a decision that individual organizations need to make: portability vs. speed/usability.

The sh POSIX standard


The sha-bang (#!) at the head of a script tells your system that this file is a set of commands to be fed to the command interpreter indicated. The #! is actually a two-byte [1] "magic number", a special marker that designates a file type, or in this case an executable shell script (see man magic for more details on this fascinating topic). Immediately following the sha-bang is a path name. This is the path to the program that interprets the commands in the script, whether it be a shell, a programming language, or a utility. This command interpreter then executes the commands in the script, starting at the top (line 1 of the script), ignoring comments. [2]

#!/bin/sh
#!/bin/bash
#!/usr/bin/perl
#!/usr/bin/tcl
#!/bin/sed -f
#!/usr/bin/awk -f

Each of the above script header lines calls a different command interpreter, be it /bin/sh, the default shell (bash in a Linux system) or otherwise. [3] Using #!/bin/sh, the default Bourne Shell in most commercial variants of Unix, makes the script portable to non-Linux machines, though you may have to sacrifice a few Bash-specific features. The script will, however, conform to the POSIX [4] sh standard.

REF:

1) Advanced Bash-Scripting Guide
2) The Single UNIX Specification, Version 3

The Horror


While it's a neat hack to parse HTML using bash, and I respect the author's significant contributions to Free Software (LNX-BBC, GAR - "We're not worthy!"), isn't this really a sign that scripting activities on GNU/Linux (and UNIX systems, if you must) should really be employing proper languages like Python and [insert favourite "agile" language here]?

Re: The Horror


No. These days anything goes and Bash is appropriately qualified. Respect is due to people who enjoy time-tested languages.

Re: Scripting GNU in the 21st Century


Somebody go tell those who are making the 'GNU' autoconf and automake??

Re: Scripting GNU in the 21st Century


... of course, the entire point of GNU autotools is to enable you to code programs such that they compile regardless of what is and isn't available on the build, host and target platforms, and if we start assuming they're fully GNU compatible it sort of defeats the point a bit, no?

Caution: Theater-Wide Monitor Required (NT)


