Using Caldera OpenLinux, Special Edition
Authors: Allan Smart, Erik Ratcliffe, Tim Bird, David Bandel
Price: $39.99 US
Reviewer: Ben Crowder
If you've ever wanted a thick, comprehensive reference to Linux—particularly for OpenLinux—then Using Caldera OpenLinux is the book for you. This 1200-page volume covers almost everything you would need to know about Linux. Granted, even 1200 pages can't cover everything, but the book does an excellent job with the topics it does address.
The first couple of chapters introduce Linux and OpenLinux and explain how to install OpenLinux (the book comes with an OpenLinux 2.2 CD). There are distribution comparisons, an explanation of the Linux Standard Base project and a complete guide to LIZARD (the Caldera installation program).
The second section, “Using OpenLinux”, introduces KDE, shows you how to navigate the desktop and tweak KDE to your tastes, explains KDM and how to get it set up, briefly covers the hordes of applications that come with KDE and has a chapter on KOffice. In case you were wondering, there is virtually nothing on GNOME—but that makes sense, since Caldera's default desktop is KDE (and when you already have 1200 pages, you don't want yet another chapter). This is a marvelous introduction to KDE, one I would suggest for any KDE user.
“OpenLinux System Administration”, the third section, explains the Linux file-system structure, users, groups and permissions, DOSEMU, the boot process (inittab and friends), how to customize your shell environment (with a few pages on shell programming), printing, RPM and other types of package management and how to build your own RPMs. Other topics include building your own kernel and kernel modules, partitioning your hard drive, mounting and unmounting file systems and LILO. For the most part, this section applies to any Linux distribution, not just OpenLinux; the chapters on recompiling your kernel, for example, are valid on any Linux box.
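To give a flavor of the shell-environment material, here is a minimal sketch of the kind of customization that chapter walks through; these particular settings are illustrative examples of mine, not taken from the book:

```shell
# Illustrative additions to ~/.bashrc (examples, not from the book)
export PATH="$PATH:$HOME/bin"   # also search a personal bin directory
export EDITOR=vi                # editor for programs that consult $EDITOR
alias ll='ls -l'                # shorthand for a long directory listing
PS1='[\u@\h \W]\$ '             # prompt: user@host working-directory
```

A few lines like these in your shell's startup file are usually where per-user customization begins; the book goes on from there into shell programming proper.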
Section four, “Networking with OpenLinux”, is the meat of the book. Three hundred pages are devoted to networking, and rightly so, considering Linux is at heart a networking operating system. There are chapters on TCP/IP fundamentals, network administration, IP aliasing, PPP, e-mail, BIND and DNS, FTP, Apache, IP masquerading and firewalling, TCP wrappers, NFS, NetWare, Samba and other Windows connectivity tools. If you want to learn Linux networking, you should definitely read this section. Even if you aren't using OpenLinux (like section three, this section applies to most Linux distributions), you'll find the information in these chapters highly relevant and useful.
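As a taste of the Windows-connectivity material, the Samba chapter builds up configurations along these lines; the values below are a minimal sketch of my own, not an excerpt from the book:

```
# Illustrative /etc/smb.conf sketch (values are examples, not from the book)
[global]
   # workgroup/NT-domain name to appear under in Network Neighborhood
   workgroup = WORKGROUP
   # authenticate users with a username and password
   security = user

[homes]
   comment = Home Directories
   # don't list the [homes] meta-share itself in browse lists
   browseable = no
   writable = yes
```

Even a short configuration like this is enough to share home directories with Windows machines, which is why Samba gets its own chapter.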
There are a hundred pages on X: setting it up, the beautiful XF86Config file, customizing X and X resources. The final section is for miscellaneous topics, with two chapters on encryption and multimedia. The appendices include a list of commonly used commands, a hardware compatibility list, Linux module information and other Linux resources.
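For flavor, the kind of stanza those X chapters spend time dissecting looks like this; the device and monitor names here are placeholders of mine, not values from the book:

```
# Illustrative excerpt from an XFree86 3.x /etc/XF86Config
Section "Screen"
    # tie the SVGA server to a card and monitor defined elsewhere in the file
    Driver      "svga"
    Device      "My Video Card"
    Monitor     "My Monitor"
    Subsection "Display"
        Depth       16                      # 16-bit color
        Modes       "1024x768" "800x600"    # preferred resolutions, in order
    EndSubsection
EndSection
```

Hand-editing sections like this was a rite of passage at the time, so a hundred pages on X is not as excessive as it might sound.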
So is this book a must-buy? Yes, yes, yes, ten times over. I was very impressed with it, even though I'm not running OpenLinux (most of what I read, I was able to use on my Red Hat machines). This is one of the best Linux books—in fact, make that computer books—I've read in a long time. It's clear and concise, and (perhaps most importantly) humorous at appropriate points. It's geared more toward intermediate and advanced users, but beginners can learn much from it as well.