Linux Journal Accepts Nominations for 2003 Readers' Choice Awards
"Our awards programs have become very well recognized throughout the community, and this is a great opportunity for companies to make sure their products are in the running," commented VP of Sales and Marketing Carlie Fairchild. "Last year's Readers' Choice Awards received almost 6,000 votes, and we expect at least as many this year."
The 2003 Linux Journal Readers' Choice Awards will be announced in the November 2003 (#115) issue of Linux Journal. On-line voting will be open to readers from June 30 through July 25 on the Linux Journal web site: http://www.linuxjournal.com/rc2003/.
Nominations will be accepted via fax only; more information and product nomination forms are available on the Linux Journal Readers' Choice Awards web site: http://www.linuxjournal.com/rc2003/. Entry forms must be received by June 23, 2003.
About Linux Journal
Linux Journal is the premier Linux magazine, dedicated to serving the Linux community and promoting the use of Linux world-wide. A monthly periodical, Linux Journal is currently celebrating its ninth year of publication. Linux Journal may be purchased at all major bookstores and newsstands and may also be ordered by calling 1-888-66-LINUX, sending e-mail to firstname.lastname@example.org or visiting http://www.linuxjournal.com/. For additional information about Linux Journal, send e-mail to email@example.com.
About the Publisher
SSC Publications is an established leader in the Linux, Open Source and UNIX fields, publishing best-selling books, reference cards and e-zines in these fields since 1983. SSC is headquartered in Seattle, Washington and has been operating since 1968. Visit SSC on the web at http://www.ssc.com/.
Media Relations Contact:
Rebecca Cassity, Marketing Manager
Specialized Systems Consultants, Inc. (SSC)
PO Box 55549, Seattle, WA 98155
Phone: +1 206-297-8653 / Fax: +1 firstname.lastname@example.org