It would seem that unless you have been otherwise occupied (perhaps busy studying the galaxy or wondering whether that really is water on Mars), you have heard something about going green. So, as a commuter who takes mass transit because it is easier and cheaper, imagine my surprise when one of our subway stations was bedecked in vinyl advertising touting that if you moved to this company's platform, you could go green and reduce your energy consumption by more than 50%. It should be noted that this company claimed, earlier in the year, that you could get back close to 70% of your network bandwidth by switching to its VoIP platform, so I take its numbers with a grain of salt (and a shot of tequila). Still, the issue of going green in the data center caught my eye, not because it was a new trend, but because it was a trend. Going green would seem to be the current buzzword, both in and out of the IT industry. However, like virtualization, security or Y2K, you take one part myth, one part science, one part art, shake until confused and pour over the ice of shrinking IT budgets. What you are left with is the confusion of management as they glaze over with each sip of the vendor's concoction and assign you the task of implementing the current trend.
OK, so maybe I am being dramatic, but when you think about it, in years without a major release from Microsoft, IT focuses on something, usually pushed by the hardware vendors trying to move product, and that something this year seems to be going green.
The myth part of this follows along with Moore's law. You remember Moore, he of the "…number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years." Late last year, as I was preparing to move my data center, I had to count up the power consumption of my systems so that I could make sure there was enough juice to make them go. You would be amazed how fussy these systems can be about having enough power. In the process of computing watts consumed and BTUs generated, a rather startling fact made itself known (OK, perhaps not so startling if you are paying attention). The 1U pizza boxes, with the quad cores that seemed to radiate enough heat to warm your lunch (which they did quite nicely), ounce for ounce, generated less heat and used less power than the 6U bar fridges that had half the computing power and took up six times as much space. Of course, this does make sense. Every year, the systems improve in capacity and processing power, so why not in power consumption and BTUs generated? This is where the myth part comes into play. If you just keep current with your equipment, you are going green and do not even have to work hard to achieve it.
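If you ever find yourself doing the same watts-to-BTUs arithmetic before a data-center move, the conversion is simple: one watt of continuous draw works out to about 3.412 BTU/hr of heat. A minimal one-liner sketch (the 450 W figure below is a hypothetical draw for a single 1U server, not a measurement from this article):

```shell
#!/bin/sh
# Convert a server's power draw in watts to heat output in BTU/hr.
# 1 W of continuous draw ~= 3.412 BTU/hr (standard conversion factor).
watts=450   # hypothetical draw for one 1U quad-core box

btu=$(awk -v w="$watts" 'BEGIN { printf "%.0f", w * 3.412 }')
echo "${watts} W is about ${btu} BTU/hr"
```

Multiply by the number of boxes in the rack and you quickly see why the cooling folks want your numbers before the movers show up.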
But that only gets you so far. Then the science kicks in. One of the more scientific improvements is not so much in the IT systems as in better building maintenance and management. Most of us think of a data center as a huge empty room kept at a temperature just above freezing, where you could store meat and where anyone working needs a parka and gloves to function, but the modern data center is no longer a giant freezer. Cooling in the new data center has gone from whole-room to rack-based, where air is forced around and through the racks and up and down through the plenum, rather than cooling all the empty space in the room. This is the next step in going green. There are other aspects as well: efficient power management in lighting and other electrical systems; improved power cabling, making sure that power goes where it is needed and not where it is not; and environmental changes in building design, materials and structures. These all help keep costs down, and as more building material comes from recycled sources, costs drop further and the facility gets greener.
The art, of course, comes in the melding of all the various components that go into a data center. Budget costs will always drive the components that can be procured and there are always trade-offs. There are never enough dollars for everything we want, and never enough time to install all the little things that will help maximize our dollars spent, despite the current demands of management.
And after all, at the end of the week, after months of planning, a new trend will be reported, maybe right here in these very pages, and the cycle starts all over again. Happy Greening.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
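That find-plus-grep combination is worth seeing on the command line. A minimal sketch (the /tmp/logdemo directory, file names and the "error" pattern are illustrative assumptions, not anything prescribed by the article):

```shell
#!/bin/sh
# Build a small sandbox so the pipeline has something to find.
mkdir -p /tmp/logdemo
printf 'ok\nerror: disk full\n' > /tmp/logdemo/app.log
printf 'all good\n'             > /tmp/logdemo/quiet.log

# The erector-set pipeline: find locates every .log file under the
# directory, and grep -l prints only the names of the files that
# contain the pattern.
find /tmp/logdemo -type f -name '*.log' -exec grep -l 'error' {} +
```

Swap /tmp/logdemo for /home and "error" for whatever you are hunting, and you have the tool described above in one line.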
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide