HTML: The Definitive Guide, Second Edition
Authors: Chuck Musciano and Bill Kennedy
Publisher: O'Reilly & Associates
Price: $32.95 US
Reviewer: Eric S. Raymond
Given the number of HTML books available, it takes something close to hubris to title a book HTML: The Definitive Guide. When O'Reilly sent me the manuscript of the first edition for review over a year ago, I was skeptical—but that first edition earned its title by presenting the best reference material I have ever seen on HTML. This second edition is a worthy follow-up.
The authors methodically walk you through every feature of HTML 3.2, along with Netscape's and Internet Explorer's extensions. They even cover such recondite topics as cascading style sheets. A handy reference appendix lists all the world's tags.
What is really outstanding about this book is the careful attention to HTML portability issues. Browser-specific tags and tag attributes are prominently marked. Charts like the summary of content-based tags on page 73, which tell you exactly how the tags will render under Netscape, Internet Explorer and Lynx, are alone worth the price of the book. And while non-portable constructions are carefully documented, the book is full of good advice about making your pages browser-independent.
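To give a flavor of the kind of portability pitfall the book flags (this particular snippet is my own illustration, not taken from the book): the proprietary Netscape and Internet Explorer tags below each render in only one browser, while the standard emphasis tag works everywhere, including Lynx.

```html
<!-- Browser-specific markup: <blink> is a Netscape extension,
     <marquee> an Internet Explorer extension; each is ignored
     or mangled by the other browser and by Lynx -->
<blink>Sale ends soon!</blink>
<marquee>Sale ends soon!</marquee>

<!-- Portable alternative: standard HTML emphasis renders
     sensibly in Netscape, Internet Explorer and Lynx alike -->
<em>Sale ends soon!</em>
```

The book's browser-rendering charts let you check exactly this sort of thing tag by tag before committing to a design.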
Not only is this a definitive guide, it may be the only HTML book you'll ever need—at least, until the authors put out the next edition covering HTML 4.0.