More Energy Efficiency by IT at Data Centers, Panel Discussion Proposal

I wrote about my first idea for a panel discussion in my previous blog, and here’s my second. When we analyze the power consumption ratio among servers, storage equipment, and networking, we find that servers consume far more power than storage and networking equipment combined. That is why the initial Energy Star effort at energy efficiency in data centers focused on servers. EPA’s first version of the server energy efficiency rating system is complete, and the agency is now working on the second version.


There are several ways to curb server power consumption in data centers:

  1. Turn off unused or unnecessary servers.
  2. Consolidate servers via virtualization.
  3. Implement power management.
  4. Refresh hardware for better energy efficiency.

The second and fourth options above have been given a lot of attention. Is there anything more we could do to reduce consumption?
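As a rough illustration of the second option, consolidation savings are simple arithmetic: compare the total draw of the underutilized physical servers before virtualization with the draw of the fewer hosts that replace them. The wattages and counts below are hypothetical, for illustration only:

```python
# Back-of-the-envelope estimate of power saved by consolidating
# lightly loaded physical servers onto fewer virtualization hosts.
# All wattages and server counts below are hypothetical.

def consolidation_savings_w(n_physical, watts_per_physical,
                            n_hosts, watts_per_host):
    """Return watts saved by replacing n_physical servers with n_hosts."""
    before = n_physical * watts_per_physical
    after = n_hosts * watts_per_host
    return before - after

# Example: 20 underutilized servers drawing ~250 W each,
# consolidated onto 4 larger hosts drawing ~400 W each.
saved = consolidation_savings_w(20, 250, 4, 400)
print(saved)  # 5000 - 1600 = 3400 W saved
```

Real savings depend on how the hosts' power draw scales with the added load, but even this crude comparison shows why consolidation gets so much attention.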


After we got a handle on server energy efficiency, we started looking at storage devices, which consume a sizable amount of power, even though it’s less than servers use. This is reflected by EPA’s recent effort to establish an Energy Star rating system for storage. There are a few energy efficiency technologies for storage devices:

  1. Deduplication: Eliminates redundant copies of data, so less disk capacity must be powered.
  2. MAID (massive array of idle disks): Spins down disks that are not being accessed.
  3. Thin provisioning: Allocates physical capacity on demand rather than up front.

Anything else?

Networking

We have not yet seen any EPA effort on a rating system for networking energy efficiency, but IEEE has started working on one. Some of the energy efficiency technologies in this space are:

  1. IEEE EEE (Energy Efficient Ethernet): A specification that lets Ethernet links drop into a low-power state when little or no data is flowing.
  2. WOL (Wake on LAN): A network message (a “magic packet”) can wake a sleeping computer remotely, so machines can be powered down when idle and woken on demand.
  3. Network I/O Virtualization with Unified Fabric: Fibre Channel can be carried on top of Ethernet.
  4. Network Controller I/O Offload: Offloads networking tasks from central CPUs to offloading engines.
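The WOL item above is easy to demonstrate: the wake-up trigger is a “magic packet,” six bytes of 0xFF followed by the target machine’s MAC address repeated 16 times, usually broadcast over UDP. A minimal sketch (the MAC address below is a placeholder):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

# Example (placeholder MAC, not sent anywhere):
pkt = make_magic_packet("00:11:22:33:44:55")
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```

The target’s NIC must have WOL enabled in firmware for the wake-up to work, but the packet itself is this simple, which is why WOL-based power-down policies are cheap to deploy.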

When we analyze power consumption at data centers, we find that, depending upon the data we quote, cooling consumes 30% to 60% of the power. So we tend to focus on curbing the power consumed by cooling. But when we step back and think hard, we realize that it is the IT equipment that generates the intense heat in the first place. One study indicates that a 1 W IT saving equates to a 2.84 W facilities saving.


So another idea for a panel is to gather people who are in the server, storage, and networking space and discuss how the IT space can be made more energy efficient. One panelist could be a vendor or someone who installs data center equipment. Another panelist could be a designer or planner of IT infrastructure who might be able to quantify the saving. If the quantification is hard for the planner, we could bring in a researcher.

I might ask the panelists:

  • Can you run your equipment at a much higher temperature? If so, how high can it go?
  • Is there any other way to curb (server, storage, or networking) power consumption further, independent of the facilities environment?
  • Can you save more if you coordinate your servers with storage or networking equipment?
  • How can you curb power consumption if you coordinate your IT equipment with the facilities environment?
  • How about DC interfaces to your IT equipment to support DC power distribution?

What is your idea?

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with various functional expertise, including roles in analyst, writer, CTO, VP Engineering, general management, sales, and marketing in diverse high-tech and cleantech industry segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in the area of the IT application to energy, such as smart grid, green IT, building/data center energy efficiency, and cloud computing.

