XSLT Powers a New Wave of Web Applications
Suppose you're responsible for a web site of tens of thousands of pages. You maintain those pages in an organization-specific XML vocabulary that strips out formatting information and HTML blemishes; your documents hold only the logical content specific to each page. Visitors need HTML, of course, but you generate that automatically, along with standard headers, frames, navigation bars, footnotes and all the other decorations we've come to expect on the Web. XSLT gives you the ability to update site style instantaneously for all the thousands of managed documents. Moreover, it partitions responsibility nicely between XML content files and XSLT stylesheets, so that different specialists can collaborate effectively.
That executive-level description masks quite a bit of implementation variability. Where and when does the XSLT transformation take place? You might have a back end of XML documents that you periodically process with a command-line XSLT interpreter to generate static HTML documents served up by a conventional web server. You might keep the XML sources in a database, from which they're retrieved as raw XML, as transformed HTML or even as full-blown HTTP sessions; various application servers, content managers and even XML databases provide each of these interfaces. Yet another variation: you might keep only the XML sources on your server and, with the right combination of stylesheet processing instructions and a capable browser, direct the browser itself to interpret the XSLT you pass along. You can make each of these steps as dynamic as you like, with caching to improve performance, customization to match browser or reader characteristics and so on.
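The first, batch-oriented variation (periodically regenerating static HTML from XML sources) can be sketched in a few lines. Here is an illustrative Python sketch using the third-party lxml library; the `<page>` vocabulary, the `publish` helper and the navbar and footer decorations are all invented for the example, not taken from any product described here:

```python
from pathlib import Path

from lxml import etree  # third-party library; assumed available

# Hypothetical site stylesheet: wraps each <page> source's logical content
# in a standard HTML shell with navigation bar and footer.
STYLESHEET = b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <div class="navbar">Home | News | About</div>
        <h1><xsl:value-of select="title"/></h1>
        <xsl:copy-of select="body/*"/>
        <div class="footer">Example Corp.</div>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
"""

def publish(src_dir, out_dir):
    """Transform every .xml file in src_dir into a .html file in out_dir.

    Returns the number of pages generated.  Editing STYLESHEET and
    rerunning this restyles the whole site at once.
    """
    transform = etree.XSLT(etree.XML(STYLESHEET))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for xml_file in sorted(Path(src_dir).glob("*.xml")):
        result = transform(etree.parse(str(xml_file)))
        (out / (xml_file.stem + ".html")).write_bytes(etree.tostring(result))
        count += 1
    return count
```

A cron job running such a script against the document tree is the entirety of the "publishing system" in this variation; everything else is a stock web server.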
This multiplicity of applications makes vendor literature a challenge to read. We adopt different styles of Java programming depending on whether we're working on applets, servlets or beans, even though all of these fit under the label of Java web software. Similarly, it's important to understand clearly what kind of XSLT processing a given product offers.
Neil Madden, an undergraduate at the University of Nottingham, has an XSLT system tuned for especially rapid deployment and maintenance. His scheme is organized around multisection sites, authored by teams of administrators, editors and users. He uses TclKit, an innovative open-source tool that combines database and HTTP functionality in a particularly lightweight, low-maintenance package. TclKit also knows how to interpret Tcl programs, so he wraps up tDOM with standard templates into a scriptable module. With this, he begins site-specific development:
1. Design an XML document structure that captures the content of the site's data.
2. Compose XSL stylesheets to transform that data to meet each client's needs.
3. Repeat steps one and two for each section with special requirements.
4. Add users, sections and pages.
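Steps two and three above amount to keeping a stylesheet per section and dispatching on the section name. A minimal sketch of that idea, again in Python with the third-party lxml library; the section names, the `make_transform` helper and the trivial one-rule stylesheets are hypothetical:

```python
from lxml import etree  # third-party library; assumed available

def make_transform(heading_tag):
    """Build a trivial per-section stylesheet (illustrative only)."""
    sheet = """\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <HEAD_TAG><xsl:value-of select="title"/></HEAD_TAG>
  </xsl:template>
</xsl:stylesheet>""".replace("HEAD_TAG", heading_tag)
    return etree.XSLT(etree.XML(sheet))

# Step 3: each section with special requirements gets its own stylesheet.
# The section names here are invented for the example.
SECTION_STYLES = {
    "news": make_transform("h1"),
    "docs": make_transform("h2"),
}

def render(section, xml_source):
    """Apply the stylesheet registered for a section to one page source."""
    return str(SECTION_STYLES[section](etree.XML(xml_source)))
```

The same dispatch-table shape works whether the registry lives in a Tcl script, a database row or a configuration file; only the lookup key changes.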
Scripted documents encapsulate these bundles of different kinds of data (site structure, XML sources, stylesheets) and make it easy to update a working site and deploy it onto a new server or partition. Madden plans to offer not just web-based editing but also a richer, quicker GUI interface; Tcl's uniformity and scriptability make it practical to deliver the system both ways, as a web service or as a local GUI.
Well-defined module boundaries are essential to the system. Designers maintain stylesheets, administrators manage privileges and editors assign sections without collision. With all of the functionality implemented as tiny scripts that glue together reliable components, it's easy to layer on new functionality. Madden's medium-term ambitions include a Wiki collaborative bulletin board, and XSP and FOP modules for generation of high-quality presentation output. Madden proudly compares his system to Cocoon, the well-known, Apache-based, Java-coded XML publishing framework. At a fraction of the cost in lines of code, his system bests Cocoon's performance by a wide margin.
Even further along in production use of tDOM XSLT is George J. Schlitz of MediaOne. He prepares financial documents with XSLT in a mission-critical web environment. He originally began publication with Xalan, but performance requirements drove him to switch to tDOM.
The fundamental point in all this is to be on the lookout for XML-coded or XML-codable data. Chat logs, legal transcripts, printer jobs, news photographs, screen layouts, genealogical records, game states, application designs, parcel shipments, medical files and much, much more are all candidates for XML-ization. Once in that format, XSLT processing is generally the most reliable and scalable way to render the data for specific uses.
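As a tiny example of such XML-ization, a plain-text chat log (one of the candidates listed above) can be lifted into an XML vocabulary mechanically; once there, any XSLT stylesheet can render it however a reader needs. A stdlib-only Python sketch, with invented element names:

```python
import xml.etree.ElementTree as ET

def chatlog_to_xml(lines):
    """Lift '<nick>: <message>' text lines into a simple XML chat vocabulary.

    The <chatlog>/<msg> element names and the 'from' attribute are
    invented for illustration; a real project would design its own
    vocabulary first, then write stylesheets against it.
    """
    root = ET.Element("chatlog")
    for line in lines:
        nick, _, text = line.partition(": ")
        message = ET.SubElement(root, "msg", {"from": nick})
        message.text = text
    return ET.tostring(root, encoding="unicode")
```

The mechanical part is deliberately dumb: all the intelligence about presentation lives downstream, in stylesheets, where it can be changed without touching the data.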