Ultimate Linux Box 2005
dbench with 100 simulated clients:
% dbench 100
Throughput 1234.57 MB/sec (NB=1543.21 MB/sec 12345.7 MBit/sec)
Bonnie++ 1.03—a more accurate disk benchmark:
Sequential output by character: 58,577KB/s, 98% CPU
Sequential output by block: 281,032KB/s, 50% CPU
Sequential output, rewrite: 52,603KB/s, 18% CPU
Sequential input by character: 34,717KB/s, 58% CPU
Sequential input by block: 90,097KB/s, 11% CPU
Random seeks: 257.5/s
Sequential create: 5,924 files/s
Random create: 6,056 files/s
Postmark benchmark—Postmark simulates the operations of a busy mail server. For 20,000 base files and 100,000 transactions, we obtained the following results.
46 seconds total
40 seconds of transactions (2,500/s)
70,128 created (1,524/s); creation alone: 20,000 files (5,000/s); mixed with transactions: 50,128 files (1,253/s)
49,656 read (1,241/s)
50,199 appended (1,254/s)
70,128 deleted (1,524/s); deletion alone: 20,256 files (10,128/s); mixed with transactions: 49,872 files (1,246/s)
303.46MB read (6.60MB/s)
436.18MB written (9.48MB/s)
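For reference, a Postmark run is driven by a short command script. A sketch with the parameters quoted above might look like the following; the target directory is a placeholder, not the one used in our testing, and all other settings are left at their defaults:

```
set location /mnt/test
set number 20000
set transactions 100000
run
quit
```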
Kernel compile: 50s
Resources for this article: /article/8330.
Justin Thiessen is a Linux Engineer at Penguin Computing. As head of this year's Ultimate Linux Box Project, he was responsible for system design, construction and testing, and was involved in component selection. When not busy with the Ultimate Linux Box, he works on new product development and improving Linux support for Penguin hardware by contributing to the lm_sensors Project.
Matt Fulvio is a freelance industrial and architectural designer in the Bay Area. He can be found teaching mathematics at the San Francisco Institute of Architecture or at www.mattfulvio.com.
Philip Pokorny is the Director of Engineering for Penguin Computing. He worked with the power supply vendor and machine shop to get the power supply modified for water cooling. When he wasn't doing that, he was standing around watching and asking silly questions like a typical pointy-haired boss.
Trevor Sherard, the craftsman of the case for the ULB, is a San Francisco Bay area freelance sculptor and woodworker. He can be contacted at www.woodentemple.com.
Don Marti is editor in chief of Linux Journal and wrote the text of the article.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
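The find-plus-grep combination described above can be sketched in a few lines of shell. The directory tree, filenames and the "ERROR" pattern here are illustrative stand-ins, not anything from a real system:

```shell
# Build a small throwaway tree to search (illustrative data).
demo=$(mktemp -d)
mkdir -p "$demo/user1" "$demo/user2"
echo "ERROR: disk full" > "$demo/user1/app.log"
echo "INFO: all quiet"  > "$demo/user2/app.log"
echo "ERROR: no route"  > "$demo/user2/notes.txt"   # not a .log file

# Find every .log file under the tree and search each one for "ERROR".
# -H prints the filename next to each match, and "{} +" hands all of
# find's results to a single grep invocation instead of one per file.
find "$demo" -name '*.log' -exec grep -H 'ERROR' {} +
```

Only the matching line from user1/app.log is printed; notes.txt is never searched because find filters it out before grep ever runs.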
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Parsing an RSS News Feed with a Bash Script
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
- Google's SwiftShader Released
- Non-Linux FOSS: Caffeine!
With all the industry talk about the benefits of Linux on Power and the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power may be used in the future. Get the Guide