Go Green, Save Green with Linux
The two most significant recent innovations in Linux regarding power management are tickless idle and virtualization. The various Linux distribution makers deserve credit for supporting these innovations, integrating them into their distributions and pushing forward initiatives like Lesswatts.org.
The idea behind tickless idle is that Linux, starting with kernel 2.6.21 for 32-bit and 2.6.23 for 64-bit machines, keeps track of time in a completely new way in order to take advantage of low-power states in modern processors. The strategy involves keeping the processor in its lowest power state for as long as possible, interrupting that state only when necessary. For instance, on an Intel Core 2 Duo processor, the power states, or C states, vary between 1.2 and 35 Watts—a significant difference. Before kernel 2.6.21, Linux pulled the processor out of the lower C state with a timer tick to inform the processor of the need to perform housekeeping tasks. This tick, occurring every few milliseconds, functionally reduced the usefulness of the lower-power states. Without the tick, Linux now chills out and conserves power until the next timer event is scheduled to occur. Multisecond idle periods now are possible.
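Modern kernels expose per-core C-state information through sysfs, which offers a quick way to see these states on your own machine. The following is a minimal sketch assuming the standard Linux cpuidle sysfs layout; state names, counts and availability vary by processor, kernel version and configuration:

```shell
#!/bin/sh
# List the C-states the kernel knows about for CPU 0, along with the
# cumulative time spent in each (standard cpuidle sysfs layout).
CPUIDLE=/sys/devices/system/cpu/cpu0/cpuidle

if [ -d "$CPUIDLE" ]; then
    for state in "$CPUIDLE"/state*; do
        [ -d "$state" ] || continue
        name=$(cat "$state/name")    # e.g. POLL, C1, C2, ...
        usec=$(cat "$state/time")    # microseconds spent in this state
        echo "$name: ${usec} usec spent idle"
    done
    status=listed
else
    # Older kernels or restricted containers may not expose cpuidle.
    echo "cpuidle sysfs interface not available on this system"
    status=unavailable
fi
```

Deeper C-states save more power but take longer to exit, which is why letting the processor sleep for multisecond stretches matters so much.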
The power savings from tickless idle benefit any type of machine—from longer battery life on brawny notebooks to significantly lower electricity bills for home users and data centers.
Although Intel, through the Lesswatts.org Project, is more public about exploiting the tickless kernel and publicizing its power management tools, representatives at AMD assured me that their less-publicized initiatives and partnerships in the Linux community are just as significant as Intel's, if not more so. Margaret Lewis, AMD Director of Commercial Solutions and Software Strategy, asserted that the tickless-kernel features are fully supported on both AMD's 32-bit and 64-bit processors. Furthermore, Brent Kerby, Product Manager for AMD Opteron, noted that AMD's PowerNOW!, Cool'n'Quiet and CoolCore technologies, including the dynamic adjustment of individual processor-core frequencies (and not just in pairs), all function well and automatically under Linux and contribute greatly to power savings. Lewis added, “These technologies give you a lot more power management control and are cumulatively perhaps more important than the tickless kernel.” AMD also emphasized its green efforts in other areas, such as the Green Grid, a consortium of companies working together to address environmental issues holistically throughout the data center, addressing hardware, software, building design, storage, cooling and more.
Linus Torvalds has stated that work on the tickless kernel is mostly done, meaning Linux now can take advantage of low-power states in processors; however, much remains to be done to maximize its effect. Although Linux gladly would remain dormant, superfluous, busybody processes from various applications keep waking it needlessly. To solve this problem, Intel's Arjan van de Ven created PowerTOP, a tool that identifies culprits in the kernel and user space that wake the processor unnecessarily and reports the energy wasted by those activities. PowerTOP also reports the time spent in each power state.
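You can get a crude sense of what PowerTOP measures by sampling the kernel's interrupt counters yourself. The sketch below assumes a Linux /proc/interrupts containing a LOC: line (the per-CPU local APIC timer counters on x86); PowerTOP itself gives far richer, per-process detail:

```shell
#!/bin/sh
# Estimate how many times per second the local timer wakes the CPUs
# by sampling /proc/interrupts twice, one second apart.
count_loc() {
    awk '$1 == "LOC:" {
        s = 0
        for (i = 2; i <= NF; i++)
            if ($i ~ /^[0-9]+$/) s += $i   # sum the per-CPU counters
        print s
    }' /proc/interrupts
}

if [ -r /proc/interrupts ] && [ -n "$(count_loc)" ]; then
    before=$(count_loc)
    sleep 1
    after=$(count_loc)
    delta=$((after - before))
    echo "Local timer interrupts in the last second: $delta"
else
    # Some architectures and containers label timer interrupts differently.
    delta=-1
    echo "no local timer counters exposed on this system"
fi
```

On a tickless kernel with mostly idle CPUs, the count can fall far below HZ multiplied by the number of cores; on a busy or pre-2.6.21 system, it will not.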
Making more efficient use of existing computing resources through virtualization, such as consolidating multiple virtual servers onto fewer physical machines, has been a major trend in the Linux space. Little do we realize that we are saving a great deal of juice in the process. Not only does consolidation reduce server sprawl and the expense of purchasing and maintaining more machines, it also cuts power consumption by approximately 10–20 Watts per idle virtual machine, according to AMD. As Jon 'maddog' Hall says, “Utilizing fewer systems and sharing the load is goodness.”
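A back-of-the-envelope calculation shows why that per-VM figure adds up. The script below is purely illustrative: the 15-Watt figure is the midpoint of AMD's 10–20 Watt estimate, and the $0.12/kWh electricity rate is an assumed value, not from the article:

```shell
#!/bin/sh
# Rough annual savings from consolidating 30 idle servers as VMs.
vms=30            # servers consolidated onto one physical machine
watts_per_vm=15   # midpoint of AMD's 10-20W-per-idle-VM estimate
hours=8760        # hours in a year
cents_per_kwh=12  # assumed electricity rate: $0.12/kWh

watts=$((vms * watts_per_vm))
kwh=$((watts * hours / 1000))
dollars=$((kwh * cents_per_kwh / 100))

echo "${watts} W saved, ${kwh} kWh/year, roughly \$${dollars}/year"
```

That is before counting the cooling load those Watts would have generated, which typically adds a substantial multiplier in a data center.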
The power savings from virtualization on Linux have been enhanced further by the arrival of tickless idle. Without it, the ticks in each virtual machine would put multiple extra loads on the virtualization platform, greatly reducing efficiency and the number of VMs each machine can host. For instance, if you have 30 VMs on one machine, with each one generating hundreds of ticks per second, a significant load is created before any real work is done.
Beyond virtualization itself, a number of vendors are exploring ways to manage their virtualization strategies to streamline data-center operations and reduce power usage further. One example is Cassatt Corporation, which has released Active Power Management, a platform-agnostic product that turns off servers safely when they are idle or otherwise not needed. Rather than leaving machines running round the clock or relying on manual decision-making, administrators can set priorities and policies that mandate how, where and when to power down idle servers, as well as power them back up. The net result is better management of both virtual and physical infrastructure. Of interest to us Linux lovers, Active Power Management is easy to install and nondisruptive, as it relies on the internal power controllers found in most servers rather than on software installed on the managed servers.
Scalent V/OE offers another approach, namely dynamic server repurposing. V/OE allows administrators to shift their data centers between different configurations or go from dead bare metal to live, running, connected servers in just a few minutes and without physical intervention. Scalent's Director of Marketing, Alana Achterkirchen, pointed out that Pacific Gas & Electric (PG&E), California's largest electric utility, offers rebates to companies that deploy IT virtualization projects that result in the removal of computing equipment. The incentive, says PG&E, “is based on the amount of energy saved, predicted through a calculation model” and ranges from $150–$300 per server. Way to go, California!
James Gray is Products Editor for Linux Journal.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
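As a concrete sketch of that erector-set approach, the snippet below strings find and grep together to answer exactly the question posed above: which .log files under a directory contain a given entry? The sample directory tree and the "ERROR" pattern are invented for illustration:

```shell
#!/bin/sh
# Build a small /home-like tree, then combine find and grep to list
# only the .log files that contain the string "ERROR".
tmp=$(mktemp -d)
mkdir -p "$tmp/home/alice" "$tmp/home/bob"
printf 'all quiet today\n'  > "$tmp/home/alice/app.log"
printf 'ERROR: disk full\n' > "$tmp/home/bob/app.log"

# -type f restricts to regular files; grep -l prints only the names
# of files that match, rather than the matching lines themselves.
matches=$(find "$tmp/home" -type f -name '*.log' -exec grep -l 'ERROR' {} +)
echo "$matches"

rm -rf "$tmp"
```

On a real system you would point find at /home directly; the `{} +` form passes many filenames to a single grep invocation instead of forking one grep per file.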
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
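For readers who have not yet outgrown it, cron's classic interface is a one-line crontab entry. A hypothetical example (the script path is invented); the five leading fields are minute, hour, day of month, month and day of week:

```
# m   h   dom  mon  dow   command
 30   2   *    *    *     /usr/local/bin/rotate-logs.sh >> /var/log/rotate.log 2>&1
```

This runs the script at 2:30am every day and appends both its output and errors to a log file—exactly the kind of simple, single-machine schedule cron handles well, and the kind of setup the webinar asks whether you have outgrown.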
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, nor does it consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide