What’s Going on in the Data Center Market?

This is a summary of my attendance at the recent DatacenterDynamics San Francisco conference. The sessions I attended do not necessarily reflect the whole marketplace, but they give some idea of its current state.

California Cap-and-Trade Impacts on the Data Center Market

The federal cap-and-trade bill died in 2010 in the Senate. But California passed its own and put it into practice as of January 1, 2013. Mark Wenzel, Climate Change Adviser of the California Environmental Protection Agency, gave an overview of the program.

Mark Wenzel

He covered many aspects of the program, but one thing was noteworthy: it applies only to entities that emit more than 25,000 metric tons of CO2 per year. That corresponds roughly to a diesel generator burning 2.5 million gallons of diesel fuel per year. No data center in California consumes that much, so the program does not currently apply to them. But the matter is not settled: work is under way at the CPUC on energy-intensive, trade-exposed industries, and data centers may yet get a separate category in this program.
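As a back-of-the-envelope check of that equivalence, the commonly cited emission factor of roughly 10.21 kg of CO2 per gallon of diesel (my assumption, not a number from the talk) does land 2.5 million gallons near the 25,000-metric-ton threshold:

```python
# Rough check: does 2.5 million gallons of diesel correspond to
# ~25,000 metric tons of CO2? Emission factor is an assumption.
KG_CO2_PER_GALLON = 10.21      # approximate CO2 per gallon of diesel
gallons_per_year = 2_500_000

metric_tons_co2 = gallons_per_year * KG_CO2_PER_GALLON / 1000
print(f"{metric_tons_co2:,.0f} metric tons of CO2 per year")  # about 25,500
```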

A panel following Mark’s session discussed the impacts of this on data centers in California. In addition to Mark, the panelists were Zahl Limbuwala, CEO, Romonet; Kurt Salloux, CEO, Global Energy Innovations; and Nicola Peil-Moelter, Director of Environmental Sustainability, Akamai Technologies.

Although many subjects were discussed, two things stuck with me. The cap-and-trade program does not have a direct impact on data centers in California, but it does have an indirect one: an increase of a few cents in the price of power. The other is that this program alone would not be a reason for a data center to leave California, but latency could be. Nicola added that Akamai will stay in California because its customers are there.


Professor Jonathan Koomey of Stanford University has been researching energy issues as they relate to IT. In his talk, he made the case for modeling a data center's operation, monitoring airflow and temperatures, to minimize lost capacity. Capacity is lost when some section of the white space cannot be used for IT equipment because cooling capacity there is lacking. By simulating IT equipment at several candidate locations, a data center could minimize lost capacity, if not eliminate it completely.

As became clear during the Q&A session, his claim is that modeling is for short-term rather than long-term planning. In other words, he was not advocating using a model to design and construct a data center, but to decide where to place new IT equipment.

Jonathan Koomey
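A toy version of the short-term placement decision Koomey described might filter candidate locations by their remaining cooling headroom; the names and numbers below are made up for illustration, not from his model:

```python
def viable_locations(cooling_headroom_kw: dict, new_load_kw: float) -> list:
    """Return locations whose spare cooling capacity can absorb the new load."""
    return [loc for loc, headroom in cooling_headroom_kw.items()
            if headroom >= new_load_kw]

# Simulated spare cooling capacity per white-space zone, in kW.
zones = {"row 1": 12.0, "row 2": 3.5, "row 3": 8.0}
print(viable_locations(zones, 5.0))  # ['row 1', 'row 3']
```

In practice the simulation would come from airflow and temperature measurements rather than a fixed table, but the decision at the end has this shape: rule out locations where adding load would exceed cooling capacity.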

Dynamic IT Power

Two things caught my attention in the presentation by Donald Mitchell of Schneider Electric.

Proper rack power and cooling for VMs

One was that Schneider Electric has teamed up with Microsoft to manage virtual machines. Schneider uses its StruxureWare to monitor power and cooling at each rack and alert the operator to any problem. If there is a problem, such as lost cooling, Microsoft's virtual machine (VM) manager moves VMs from the faulty rack to one with proper power and cooling.
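As a rough illustration of that behavior (this is not Schneider's or Microsoft's actual API; `Rack` and `evacuate_faulty_racks` are names I made up), the alert-and-migrate loop might look like:

```python
# Sketch of the alert-and-migrate loop: VMs on racks with a power or
# cooling fault are moved to the least-loaded healthy rack.
from dataclasses import dataclass, field

@dataclass
class Rack:
    name: str
    cooling_ok: bool = True
    power_ok: bool = True
    vms: list = field(default_factory=list)

    @property
    def healthy(self) -> bool:
        return self.cooling_ok and self.power_ok

def evacuate_faulty_racks(racks):
    """Move every VM off racks reporting a power or cooling problem."""
    healthy = [r for r in racks if r.healthy]
    for rack in racks:
        if rack.healthy:
            continue
        while rack.vms:
            target = min(healthy, key=lambda r: len(r.vms))  # least loaded
            target.vms.append(rack.vms.pop())

racks = [Rack("A", vms=["vm1"]), Rack("B", cooling_ok=False, vms=["vm2", "vm3"])]
evacuate_faulty_racks(racks)
print([(r.name, r.vms) for r in racks])  # rack B is emptied onto rack A
```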

Actual power consumed by data center equipment

The other is a database they are putting together, called the data center genome project. It contains IT equipment information: system type, make/model, protocols, capacity/dimensions, power consumption and delivery, and airflow. This should help operators prepare for the power and cooling needs of IT equipment.
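One record in such a database might look something like the sketch below; the field names and the sample device are my guesses from the list above, not the project's actual schema:

```python
# Hypothetical record shape for the "data center genome" database.
from dataclasses import dataclass, field

@dataclass
class EquipmentRecord:
    system_type: str                                # e.g. "1U rack server"
    make_model: str
    protocols: list = field(default_factory=list)   # management protocols
    dimensions: str = ""                            # capacity/dimensions
    power_watts: float = 0.0                        # power consumption
    power_delivery: str = ""                        # e.g. "dual 208 V feeds"
    airflow_cfm: float = 0.0                        # front-to-back airflow

server = EquipmentRecord(
    system_type="1U rack server",
    make_model="ExampleCo X100",   # hypothetical make/model
    protocols=["SNMP", "IPMI"],
    dimensions="1U, 43 x 66 cm",
    power_watts=450.0,
    power_delivery="dual 208 V feeds",
    airflow_cfm=55.0,
)
```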

Data Center Trends

Mike Salvador of Belden presented the data center trends summarized below. I agree with all of them.

Source: Michael Salvador of Belden

But I would add a few more to the list, such as DCIM, metrics, and IT utilization, though those may already be subsumed under the trends he listed.


DCIM

Data center infrastructure management (DCIM) tools are becoming the norm rather than a novelty. Although DCIM covers many aspects of data center operations, such as capacity planning, several speakers at the conference used DCIM to mean monitoring, measurement, and asset management.


Metrics

Power usage effectiveness (PUE) has become the standard metric for gauging the energy efficiency of a data center. PUE's shortcomings have been discussed in many places; one is that it says nothing about IT energy efficiency. Alternatives such as CADE and DCeP have been proposed, but none has been as well received as PUE.
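For reference, PUE is simply total facility energy divided by the energy delivered to IT equipment; a minimal calculation:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh to deliver 1,000 kWh to IT has PUE 1.5;
# an ideal facility would approach PUE 1.0.
print(pue(1500.0, 1000.0))  # 1.5
```

This makes the shortcoming above concrete: the IT term is treated as pure useful work, so replacing inefficient servers with efficient ones can leave PUE unchanged or even worsen it.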

IT energy efficiency

PUE compares all the power used at a data center with the power consumed by IT equipment. The efficiency of the IT equipment itself has not been considered, mainly because it is much more difficult to measure.

Software-Defined Data Center

A panel on the software-defined data center was interesting. Ambrose McNevin of DatacenterDynamics moderated, with panelists Mark Monroe of DLB Associates and David Gauthier of Microsoft.


From left: Ambrose McNevin, Mark Monroe, and David Gauthier

“Software-defined XXX” is becoming very popular, as in “software-defined network” and “software-defined data center.” Simply put, you define your data center infrastructure in software, without regard to the actual physical equipment. Both panelists agreed that, in some sense, a software-defined data center (SDDC) is a data center operating system (OS). Before the OS was introduced, we had to manage memory, processes, I/O, and users ourselves, along with other cumbersome tasks, just to run our applications. The OS now takes care of all of these burdensome tasks automatically.

David also noted that DCIM tools act more like subroutines called by the SDDC to feed it information about a target data center: asset information via the asset management function, and environmental information such as temperature and airflow capacity at each rack. When new IT equipment is introduced, the SDDC could find an optimal location to place and provision it by consulting its genome database and an operational model (see the data center genome project and Koomey's modeling sections above), and adjust cooling and power allocation by moving VMs around. If something fails or is about to fail, it detects that and takes appropriate action to keep operations going without disruption.

How does this relate to cloud computing? Is the SDDC equivalent to or a base for cloud computing? At this stage, the SDDC seems to be considered only for one single data center. But by extending the concept, multiple data centers could be defined by software to function as a single virtual data center.

As Yevgeniy Sverdlik wrote in his article (page 52), server virtualization is well under way, with storage and network virtualization following. To fully realize the SDDC, we also need to virtualize cooling and power. Can we do it? I will cover this issue in future blogs.

Some Comments

It is always good to attend a conference to find out what's happening in the marketplace, in addition to meeting new and old friends. As I said above, I wanted to see more discussion of DCIM, metrics, and IT energy efficiency. Also, there was no discussion of how to integrate facilities and IT functions for seamless operation, a topic covered at a past DatacenterDynamics conference.

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with broad functional expertise, including roles as analyst, writer, CTO, VP of Engineering, general manager, and sales and marketing lead in diverse high-tech and cleantech segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in artificial intelligence, including machine/deep learning, Big Data, IoT, and cloud computing.

