The topic “platforms” is almost as broad as “computers” because anything upon which something else is dependent can be considered a platform. With this month's feature articles we cover both hardware and software.
Frequently, when people think platform, they think processor architecture. And for years, as far as Linux was concerned, that meant x86, despite the fact that some courageous individuals began the work of porting Linux to other architectures very early on. Some of us can remember as far back as 1996, in the precivilized days before 20GB hard drives, when Linux ran on only a few platforms other than x86, such as the Amiga and the Atari. Now Linux has been ported to every major processor (and a whole lot of minor ones).
In his article, “The Trials and Tribulations of LinuxPPC 2000 Q4”, Paul Barry discusses his experiences with the highly touted PPC processor, the one believed by many to have the best chance of taking the “tel” out of “Wintel” (see page 60). While many distros continue to support only Intel, a growing number are offering support for the PPC. Besides the usual PPC-focused offerings—Yellow Dog, MkLinux and LinuxPPC—SuSE, Mandrake and Debian also provide distros for the PPC. In our August 2000 issue we ran an article on installing LinuxPPC, and Barry's article, almost a year later, provides a good measure of how far it's come and the distance still to go.
In our second feature article, “PostgreSQL Performance Tuning”, Bruce Momjian discusses what can be done on the hardware end to improve the performance of tasks involving the PostgreSQL database (see page 66). Momjian illustrates the various memory types and their uses, and shows how to get the most from PostgreSQL by adjusting cache size and sort size and by spreading disk access across drives.
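As a rough illustration (the parameter names below come from the postgresql.conf of PostgreSQL releases of that era, and the values are placeholders rather than recommendations), the kind of tuning Momjian describes amounts to settings like these:

```ini
# postgresql.conf -- illustrative values only; tune to your RAM and workload

# Cache size: shared buffers used to cache table and index pages,
# measured in 8KB blocks (4096 blocks = 32MB)
shared_buffers = 4096

# Sort size: memory (in KB) each sort may use before spilling to disk
sort_mem = 8192
```

Spreading disk access across drives happens at the filesystem level instead, for example by placing the write-ahead log (the pg_xlog directory) on a separate physical disk from the data files.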
Also, see Stephanie Black's book review (page 76) on Momjian's PostgreSQL: Introduction and Concepts. Momjian's book includes tips on maximizing performance through optimizing the queries sent to the database. Between the article and the book, you should be able to get your PostgreSQL running at its maximum potential.
—Richard Vernon, Editor in Chief