World Wide Web Journal
Publisher: O'Reilly & Associates
US$24.95 per issue, US$75.00 per year
Reviewer: Danny Yee
Issue 1 of the World Wide Web Journal contained fifty-nine papers, fifty-seven from the Fourth International World Wide Web Conference (held in Boston in December 1995) and two from regional conferences. The range of topics covered is immense. To list just a few (in no particular order): why the GIF and JPEG formats aren't good enough for really high quality graphics; low-level security in Java; the results from the 3rd WWW Survey; an analysis of Metacrawler use; caching systems; a filtering system to provide restricted access to the Web; a PGP/CCI system for Web security; the Millicent system for financial transactions involving small sums; smart tokens; and better support for real-time video and audio. There are also papers on the use of the Web in education, on cooperative authoring tools, on Web interfaces to database and software systems, and a cornucopia of other things.
Issue 2 was a disappointment. It consisted solely of standards documents: Requests For Comment (RFCs) numbers 1630 (URIs), 1808 (Relative URLs), 1736 (IRL recommendations), 1866 (HTML 2.0), 1867 (Form-Based Upload), and one not yet allocated a number (HTML Tables); Internet drafts on HTTP 1.0, PEP HTTP/1.1, and HTML Internationalization; and W3C drafts on PNG and Cascading Style Sheets. Since all of these documents are freely and easily available on-line and several have already been superseded, this is really of limited value. (Nicely formatted bound versions of standards documents are useful, but only for the standards that have some sort of permanence.)
Though shorter, issues 3 and 4 strike a better balance between background material, standards and technical papers. As background material, issue 3 contains an interview with Tim Berners-Lee and descriptions of other World Wide Web Consortium staff. The technical papers are mostly about Web demographics and “geography”: the Nielsen/CommerceNet, GVU, and White House surveys; systems for statistical analysis of traffic; visualisation of Web connectivity and traffic; and the implementation of national Web cache systems in the United Kingdom and New Zealand. Issue 4 is mostly devoted to HTTP: it contains technical specifications for and informal descriptions of HTTP 1.1, as well as papers on state management (cookies), digest authentication, and future directions for HTTP. There are also papers on PICS, PNG, distributed objects, and distributed authoring.
Though few assume much technical background, the papers in World Wide Web Journal are mostly technical in focus: they are not for everyone who runs a Web server or authors HTML. However, for those concerned with the future of Web technology—because they are directly involved in protocol or system development, because they need to prepare for future applications, or out of simple curiosity—the journal is a good way of keeping up with the most important developments. As a quarterly journal, it fills a niche between books and information sources on the Web itself.
World Wide Web Journal can be sampled on the Web at http://www.w3.org/pub/WWW/Journal/.
Danny Yee receives a complimentary subscription to World Wide Web Journal but has no stake—financial or otherwise—in its success. He can be reached at firstname.lastname@example.org.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
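The tool-chaining described above can be sketched concretely. This is a minimal illustration, not taken from the article itself: the /tmp/demo_logs directory, the file names, and the search string 'error' are all placeholders standing in for /home and "a particular entry".

```shell
# Build a small demo tree, then chain find and grep exactly as the
# text describes: locate every .log file and search each one for an
# entry. The /tmp/demo_logs path and the pattern 'error' are
# illustrative placeholders.
mkdir -p /tmp/demo_logs
echo "disk error on sda"  > /tmp/demo_logs/app.log
echo "all systems normal" > /tmp/demo_logs/cron.log
# find selects the .log files; grep -l prints only the names of the
# files that actually contain the pattern.
find /tmp/demo_logs -name '*.log' -exec grep -l 'error' {} +
```

Run against the demo tree, the pipeline prints only /tmp/demo_logs/app.log, since that is the one .log file containing the entry.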
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
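For reference, the baseline being questioned here is a plain crontab entry. The script path and schedule below are hypothetical, shown only to illustrate the format:

```
# Run the (hypothetical) rotate-logs.sh script every night at 2:30 am.
# Fields: minute hour day-of-month month day-of-week command
30 2 * * * /usr/local/bin/rotate-logs.sh
```

Fixed-time entries like this are cron's whole vocabulary; needs such as dependencies between jobs, retries, and cross-machine coordination are the kinds of pressure that suggest upgrading your scheduling infrastructure.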
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide