XSLT Powers a New Wave of Web Applications
Suppose you're responsible for a web site of tens of thousands of pages. You maintain those pages in an organization-specific XML vocabulary that strips out formatting information and HTML blemishes; your documents hold only the logical content specific to each page. Visitors need HTML, of course, but you generate that automatically, along with standard headers, frames, navigation bars, footnotes and all the other decorations we've come to expect on the Web. XSLT gives you the ability to update site style instantaneously for all the thousands of managed documents. Moreover, it partitions responsibility nicely between XML content files and XSLT stylesheets, so that different specialists can collaborate effectively.
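To make that division concrete, here is a minimal sketch of the content/style split in Python, using the third-party lxml library as the XSLT processor (an assumption; any XSLT 1.0 processor would serve). The element names and the stylesheet are invented for illustration:

```python
# Minimal sketch: logical content in org-specific XML, presentation in XSLT.
# Assumes the third-party lxml library; element names are hypothetical.
from lxml import etree

content = etree.XML("""\
<page>
  <title>About Us</title>
  <body>Only the logical content lives here; no formatting.</body>
</page>""")

stylesheet = etree.XML("""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <!-- site-wide headers, navigation bars and footers go here -->
        <h1><xsl:value-of select="title"/></h1>
        <p><xsl:value-of select="body"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)  # compile the stylesheet once
print(str(transform(content)))      # emit the decorated HTML page
```

Updating site style then means editing one stylesheet and rerunning the transform over every document; the XML content files themselves never change.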
That executive-level description masks quite a bit of implementation variability. Where and when does the XSLT transformation take place? You might have a back end of XML documents, which you periodically process with a command-line XSLT interpreter to generate static HTML documents served up by a conventional web server. You might keep the XML sources in a database, from which they're retrieved either as XML, as transformed HTML or even as full-blown HTTP sessions. Various application servers, content managers and even XML databases provide each of these interfaces. Yet another variation: you might keep only the XML sources on your server and, given the right combination of markup and browser support, direct the browser itself to perform the XSLT transformation. You can make each of these steps as dynamic as you like, with caching to improve performance, customization to match browser or reader characteristics and so on.
This multiplicity of applications makes vendor literature a challenge to read. We all adopt different styles of Java programming depending on whether we're working on applets, servlets, beans and so on, even though all these fit the label of Java web software. Similarly, it's important to understand clearly what kind of XSLT processing different products offer.
Neil Madden, an undergraduate at the University of Nottingham, has an XSLT system tuned for especially rapid deployment and maintenance. His scheme is organized around multisection sites, authored by teams of administrators, editors and users. He uses TclKit, an innovative open-source tool that combines database and HTTP functionality in a particularly lightweight, low-maintenance package. TclKit also knows how to interpret Tcl programs, so he wraps up tDOM with standard templates into a scriptable module. With this, he begins site-specific development:
1. Design an XML document structure that captures the content of the site's data.
2. Compose XSL stylesheets to transform the data to meet each client's needs (sketched in code after this list).
3. Repeat steps one and two for each section with special requirements.
4. Add users, sections and pages.
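In Python rather than Madden's Tcl, a hypothetical sketch of how steps one through four might fit together; the section names, paths and stylesheet mapping below are invented for illustration, and lxml again stands in for tDOM:

```python
# Hypothetical sketch of the per-section workflow (lxml stands in for tDOM;
# the section names, paths and mapping below are invented for illustration).
from pathlib import Path
from lxml import etree

# Step 2: one stylesheet per section, each tuned to that client's needs.
SECTION_STYLES = {
    "news": "styles/news.xsl",
    "docs": "styles/docs.xsl",
}

def render(section: str, source: Path) -> str:
    """Apply a section's stylesheet to one XML source document."""
    transform = etree.XSLT(etree.parse(SECTION_STYLES[section]))
    return str(transform(etree.parse(str(source))))

# Step 4 then reduces to dropping new XML files into a section's directory.
for page in sorted(Path("sections/news").glob("*.xml")):
    Path("htdocs/news", page.stem + ".html").write_text(render("news", page))
```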
Scripted documents encapsulate these bundles of different kinds of data (site structure, XML sources, stylesheets) and make it easy to update and deploy a working site onto a new server or partition. Madden has plans to offer not just web-based editing but also a richer, quicker GUI interface. Tcl's uniformity and scriptability make it practical to serve both interfaces, web service and local GUI, from the same code.
Well-defined module boundaries are essential to the system. Designers maintain stylesheets, administrators manage privileges and editors assign sections without collision. With all of the functionality implemented as tiny scripts that glue together reliable components, it's easy to layer on new features. Madden's medium-term ambitions include a wiki-style collaborative bulletin board, plus XSP and FOP modules for generating high-quality presentation output. Madden proudly compares his system to Cocoon, the well-known, Apache-based, Java-coded XML publishing framework: at a fraction of the cost in lines of code, his system bests Cocoon's performance by a wide margin.
Even further along in production use of tDOM XSLT is George J. Schlitz of MediaOne, who prepares financial documents with XSLT in a mission-critical web environment. He originally published with Xalan, but performance requirements drove him to switch to tDOM.
The fundamental point in all this is to be on the lookout for XML-coded or XML-codable data. Chat logs, legal transcripts, printer jobs, news photographs, screen layouts, genealogical records, game states, application designs, parcel shipments, medical files and much, much more are all candidates for XML-ization. Once in that format, XSLT processing is generally the most reliable and scalable way to render the data for specific uses.
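As one hypothetical instance, a chat log is only a few lines of Python away from XML-ization; the log format and element names below are invented:

```python
# Hypothetical XML-ization of a line-oriented chat log, after which any
# XSLT stylesheet can render it for a specific use. The format is invented.
from lxml import etree

log = ["12:01 alice: hello", "12:02 bob: hi there"]

root = etree.Element("chatlog")
for line in log:
    stamp, rest = line.split(" ", 1)
    speaker, text = rest.split(": ", 1)
    message = etree.SubElement(root, "message", time=stamp, who=speaker)
    message.text = text

print(etree.tostring(root, pretty_print=True).decode())
```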
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
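As a rough sketch of that composed tool in Python rather than a find/grep pipeline (the directory and search string are examples only):

```python
# Rough sketch of the composed tool described above: find every .log file
# under /home and print lines containing a particular entry.
# The directory and search string are examples only.
from pathlib import Path

PATTERN = "connection refused"

for logfile in Path("/home").rglob("*.log"):
    try:
        with logfile.open(errors="replace") as fh:
            for lineno, line in enumerate(fh, start=1):
                if PATTERN in line:
                    print(f"{logfile}:{lineno}: {line.rstrip()}")
    except OSError:
        pass  # skip files we lack permission to read
```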
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't account for the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide.