Linux Journal Press Announces the Release of "The Linux Cookbook"
The Linux Cookbook's step-by-step format makes it easy for readers to find what they need fast. In over 1,500 "recipes", author Michael Stutz shows readers how to accomplish everyday tasks using all the free, Open Source software that comes with Linux. Readers learn how to:
- Connect to the Internet, manage email, and chat online
- Produce professional-quality typeset documents and create posters and large banners
- Schedule automated reminders for appointments
- Browse the Web, archive entire Web sites, and write HTML with powerful Linux tools
- Send and receive faxes, prepare print files, and read and write data across platforms
- Use spelling and grammar checkers, word counters, and powerful dictionary tools
- Scan images, extract PhotoCD graphics, and capture screen shots
- Record and play sound, apply sound effects, make MP3 files, and play audio CDs
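To give a flavor of the recipe style, here is a minimal sketch of one such everyday task, counting the words in a file with the standard `wc` tool (the file name and its contents are hypothetical, invented for this example):

```shell
# Create a small sample file (hypothetical content).
printf 'free software, free society\n' > sample.txt

# Count the words in it with wc, as in the book's word-counter recipes.
wc -w < sample.txt    # word count: 4
```

Each recipe in the book pairs a task like this with the stock command-line tools that accomplish it.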
Available in bookstores or from Linux Journal Press (http://store.linuxjournal.com), The Linux Cookbook is the all-in-one introductory guide and desktop reference for using Linux.
About the Author
As a technology correspondent for Wired News, Michael Stutz was one of the first journalists to cover Linux and the Open Source movement in the mainstream press. He has contributed to the GNU Project and the Linux Documentation Project, and he created the Design Science License (DSL), a generalized "copyleft" license designed to fit any work. As applied to The Linux Cookbook, the DSL permits unrestricted redistribution and modification, provided that all copies and derivatives retain the same permissions. Find more information at the author's Web site (http://www.dsl.org).
About Linux Journal Press
Linux Journal Press publishes books on cutting-edge Linux topics that help to advance the acceptance and usability of Linux. An imprint of No Starch Press (http://www.nostarch.com), Linux Journal Press develops its titles in partnership with Linux Journal (http://www.linuxjournal.com).
Contact Amanda Staab at +1 415-863-9900 or firstname.lastname@example.org to schedule an interview or request a review copy.
Media Relations Contacts
Amanda Staab, Publicist
No Starch Press
555 De Haro Street, Ste. 250, San Francisco, CA 94107
Phone: +1 415-863-9900
Email: email@example.com
Rebecca Cassity, Marketing Manager
Specialized Systems Consultants, Inc. (SSC)
PO Box 55549, Seattle, WA 98155
Phone: +1 206-297-8653
Email: firstname.lastname@example.org