It would seem that unless you have been actively avoiding the current world (perhaps you are busy studying the galaxy or wondering whether that really is water on Mars), you have heard something about going green. So, as a commuter who takes mass transit because it is easier and cheaper, imagine my surprise when one of our subway stations was bedecked in vinyl advertising touting that if you moved to this company's platform, you could go green and reduce your energy consumption by more than 50%. It should be noted that this same company, earlier in the year, claimed you could get back close to 70% of your network bandwidth by switching to its VoIP platform, so I take its numbers with a grain of salt (and a shot of tequila). Still, the issue of going green in the data center caught my eye, not because it was a new trend, but because it was a trend. Going green would seem to be the current buzzword, both in and out of the IT industry. However, like virtualization, security or Y2K, you take one part myth, one part science and one part art, shake until confused, and pour over the ice of shrinking IT budgets. What you are left with is the confusion of management as they glaze over with each sip of the vendor's concoction and assign you the task of implementing the current trend.
OK, so maybe I am being dramatic, but think about it: in years without a major release from Microsoft, IT focuses on something, usually pushed by the hardware vendors trying to move product, and this year's something seems to be going green.
The myth part of this follows along with Moore's law. You remember Moore, he of the "...number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years." Late last year, as I was preparing to move my data center, I had to tally the power consumption of my systems to make sure there was enough juice to make them go. You would be amazed how fussy these systems can be about having enough power. In the process of computing watts consumed and BTUs generated, a rather startling fact made itself known (OK, perhaps not so startling if you have been paying attention). The 1U pizza boxes with the quad cores that seemed to radiate enough heat to warm your lunch (which they did quite nicely) generated, ounce for ounce, less heat and used less power than the 6U bar fridges that had half the computing power and took up six times as much space. Of course, this does make sense. Every year, the systems improve in capacity and processing power, so why not in power consumption and BTUs generated? This is where the myth comes into play: if you simply keep your equipment current, you are going green and do not even have to work hard to achieve it.
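The back-of-the-envelope arithmetic for that exercise is simple: sum the wattage of each box, then convert watts to BTU/hr (1 watt of draw dissipates roughly 3.412 BTU/hr of heat) to size the cooling. Here is a minimal Python sketch; the inventory names and wattage figures are made-up examples, not numbers from my actual move:

```python
# Tally data-center power draw and heat output.
# 1 watt of continuous draw ~= 3.412 BTU/hr of heat to remove.
BTU_PER_WATT = 3.412

# (form factor, count, watts per unit) -- hypothetical figures for illustration
inventory = [
    ("1U quad-core pizza box", 20, 350),
    ("6U legacy bar fridge",    4, 900),
]

total_watts = sum(count * watts for _, count, watts in inventory)
total_btu_hr = total_watts * BTU_PER_WATT

for name, count, watts in inventory:
    print(f"{name}: {count} x {watts} W = {count * watts} W")
print(f"Total draw: {total_watts} W ({total_btu_hr:.0f} BTU/hr to cool)")
```

Nameplate wattage overstates real draw, so measured numbers from a metered PDU are better input when you have them; the conversion itself stays the same.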
But that only gets you so far. Then the science kicks in. One of the more scientific improvements is not so much in the IT systems as in better building maintenance and management. Most of us think of a data center as a huge empty room kept just above freezing, where you could store meat and where most who work there need parkas and gloves to function. The modern data center is no longer a giant freezer. Cooling has gone from whole-room to rack-based: air is forced around and through the racks and up and down through the plenum, rather than cooling all the empty space in the room. This is the next step in going green. There are other aspects too: efficient power management in lighting and other electrical systems; improved power cabling, making sure power goes where it is needed and not where it is wasted; and changes in building design, materials and structure. These all help keep costs down, and as more building material comes from recycled sources, costs drop further and greater greenness is achieved.
The art, of course, comes in melding all the various components that go into a data center. Budget will always drive which components can be procured, and there are always trade-offs. There are never enough dollars for everything we want, and never enough time to install all the little things that would maximize the dollars we do spend, despite the current demands of management.
And after all, at the end of the week, after months of planning, a new trend will be reported, maybe right here in these very pages, and the cycle starts all over again. Happy Greening.