At a recent Association of Information Technology Professionals data center panel discussion, a seasoned group of IT admins discussed meeting customer power demands, with the consensus that demand is insatiable. Even as budgets seesaw from abundant to sparse, the demand curve never flattens; it keeps climbing. The Jevons Paradox, the nineteenth-century axiom of "the more we produce, the more we consume," looms large in IT for the foreseeable future. Or as my colleagues say, "If you build it, they will fill it."
The first panel warning was that virtualization is not a cure-all for reducing data center power consumption. There's a clear advantage to high-density computing, cramming many virtual machines (VMs) into a single server, but demands for power and cooling still grow with each VM. In many cases, power and cooling costs simply shift: instead of distributing power across lots of small servers, you pour it into cooling red-hot VM-hosting systems.
Switch to variable-speed fans
Recent research found that power consumption drops 30% for every 10% reduction in fan speed. As the name implies, these fans consume power only when needed, running only at the speed required, based on fairly sophisticated thermostatic controls. Because they throttle down during extended periods of low CPU utilization, power usage falls with every reduction in blade speed. And don't stop with servers: check the cooling features of UPS devices and the power supplies of other appliances on the same power grid, plus any other hot spots where a fan may be spinning longer than it needs to.
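That figure lines up with the fan affinity laws, under which fan power scales roughly with the cube of fan speed. The back-of-the-envelope Python sketch below is illustrative only and not tied to any particular fan model, but it shows how quickly the savings stack up.

```python
# Rough estimate of fan power savings using the fan affinity laws,
# where fan power scales approximately with the cube of fan speed.
# Illustrative only; real savings depend on fan curves and duty cycles.

def fan_power_fraction(speed_fraction: float) -> float:
    """Return fan power as a fraction of full-speed power."""
    return speed_fraction ** 3

for pct in (100, 90, 80, 70, 60, 50):
    power = fan_power_fraction(pct / 100)
    print(f"{pct:3d}% speed -> ~{power * 100:4.1f}% power "
          f"({(1 - power) * 100:4.1f}% savings)")
```

At 90% speed, the cube law predicts roughly 27% less fan power, which is in the same ballpark as the research cited above.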
Raise the air temperature
According to data center infrastructure suppliers, modern servers can perform well at up to 77 degrees Fahrenheit. Yet many data centers have cooled servers down to the mid-60s Fahrenheit for years. Raising the ambient air temperature a few degrees produces an immediate drop in power usage by the cooling system with no impact on server performance. There's no overhead or investment needed, although close monitoring and a solid pilot program are advisable to avoid unpleasant surprises. Granted, a slightly warmer server room can be a disconcerting change. For example, the dress code may have to be adjusted to allow lighter clothes in warmer conditions.
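For that pilot program, even a simple script that watches inlet temperatures against the new ceiling goes a long way. The Python sketch below assumes a hypothetical CSV export from your monitoring system, one "rack,temperature in Fahrenheit" pair per line with no header; adapt the input to whatever sensor feed you actually have.

```python
# Minimal pilot-monitoring sketch: flag racks whose inlet temperature
# approaches the new ambient ceiling. Assumes a hypothetical CSV export
# with "rack,temp_f" rows; adapt to your IPMI, SNMP, or DCIM feed.

import csv
import sys

CEILING_F = 77.0   # vendor-rated inlet temperature
MARGIN_F = 3.0     # alert this many degrees before the ceiling

def check_inlet_temps(path: str) -> list[tuple[str, float]]:
    """Return (rack, temp) pairs within MARGIN_F of the ceiling."""
    hot = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 2:
                continue  # skip blank or malformed lines
            rack, temp = row
            temp_f = float(temp)
            if temp_f >= CEILING_F - MARGIN_F:
                hot.append((rack, temp_f))
    return hot

if __name__ == "__main__":
    for rack, temp_f in check_inlet_temps(sys.argv[1]):
        print(f"WARNING: {rack} inlet at {temp_f:.1f} F "
              f"(ceiling {CEILING_F:.0f} F)")
```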
Use bigger, slower drives
Of course, this should not be done for high-demand transactional workloads, such as financial databases or critical 24-hour systems. But by moving a percentage of mostly unused files to a lower tier of storage, big, low-energy-demand drives can replace small, fast units. In turn, fewer drives burn less energy and create less heat. This can be an expensive undertaking, but since most shops build out more storage every quarter, they should see it as a worthwhile investment.
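Identifying those mostly unused files is easy to prototype. The Python sketch below walks a directory tree and lists files untouched for a configurable number of days as candidates for the lower tier; it assumes the filesystem tracks access times (some mounts use noatime), and the actual migration is left to your storage tooling.

```python
# Sketch: list files untouched for N days as candidates for a lower
# storage tier. Relies on filesystem access times (atime), which some
# mounts disable; treat the output as a starting point, not a verdict.

import os
import sys
import time

def cold_files(root: str, days: int = 180):
    """Yield (path, size_bytes) for files not accessed in `days` days."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files we can't stat
            if st.st_atime < cutoff:
                yield path, st.st_size

if __name__ == "__main__":
    total = 0
    for path, size in cold_files(sys.argv[1], days=180):
        total += size
        print(path)
    print(f"Candidate data: {total / 1e9:.1f} GB", file=sys.stderr)
```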
Use hosted services
Although moving IT workloads to a cloud or colocation provider simply shifts the carbon footprint to the host site, many will concede that the big vendors are experts at squeezing the most out of a kilowatt. By using hosted services, you'll be able to focus on delivering better value at a lower cost for your customers.
The risks of data center power consumption projects
IT organizations need to acknowledge the inherent risks in energy-efficiency projects. As one power company director put it, in a high-density, highly efficient environment, the data center can go thermal in seconds. Several recent high-profile outages started as partial interruptions but cascaded until the entire facility went down. The catalyst: overheating that spread from rack to rack until all systems shut down for self-protection.
The final warning: Spell out the risks before implementing changes to the data center, and make sure to get executive support before pursuing any of these tactics for reducing data center power consumption.
By Mark Holt, Contributor, SearchDataCenter.com