It looks like the technology industry has been making some efforts in this area. DC power sounds like an interesting alternative. I wonder what the drawbacks of using 380-volt DC power instead of AC are. The eWEEK article focuses only on the upside; I am sure there is a downside too. Aren't there reasons for preferring AC over DC in power distribution? I can't recall the exact details, but it was something to do with distribution losses.
On a facility-wide basis, a 15% reduction in energy consumption does sound significant. I am looking forward to reading the complete report, if and when it becomes available. I expect that efforts focused on reducing energy consumption at the rack level may produce even more significant results.
This area is expected to be pretty hot (literally) in the near future, considering projections that the cost of powering and cooling equipment will exceed the cost of the equipment itself.
Last year, Luiz André Barroso of Google also published an interesting article on the same topic in ACM Queue (see "The Price of Performance"), where he described the cost trends of large IT infrastructures such as Google's, with a couple of interesting graphs. Some of the key points mentioned are:
- Every gain in performance has been accompanied by a proportional inflation in overall platform power consumption. The result of these trends is that power-related costs are an increasing fraction of the TCO.
- For an x86 server worth $3,000 and consuming 200 watts on average, the energy costs today would already be more than 40 percent of the hardware costs.
- If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin.
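That 40-percent figure is easy to sanity-check with a back-of-the-envelope calculation. The $3,000 price and 200 W draw come from the article; the electricity price, cooling overhead, and server lifetime below are my own assumptions, not figures from the article:

```python
# Sanity check of the "energy costs exceed 40% of hardware costs" claim.
# Server price and average power draw are from the article; the other
# inputs are assumptions for illustration only.

HARDWARE_COST = 3000.0    # $3,000 x86 server (from the article)
AVG_POWER_W = 200.0       # 200 W average draw (from the article)

ELECTRICITY_PRICE = 0.09  # $/kWh -- assumed
COOLING_OVERHEAD = 2.0    # assume cooling roughly doubles the energy bill
LIFETIME_YEARS = 4.0      # assumed server lifetime

# Energy consumed over the server's lifetime, in kWh.
energy_kwh = (AVG_POWER_W / 1000.0) * 24 * 365 * LIFETIME_YEARS

# Total electricity cost including the cooling overhead.
energy_cost = energy_kwh * ELECTRICITY_PRICE * COOLING_OVERHEAD
ratio = energy_cost / HARDWARE_COST

print(f"Lifetime energy: {energy_kwh:.0f} kWh")
print(f"Energy cost: ${energy_cost:.0f} ({ratio:.0%} of hardware cost)")
```

With these assumed inputs the server draws about 7,000 kWh over four years, costing roughly $1,260, or about 42% of the hardware price, which is consistent with the article's claim.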