IT and Facilities Integration at Data Centers by Future Facilities

I had met Sherman twice before.


Sherman Ikemoto

At that time, Future Facilities’ (FF) main focus was computational fluid dynamics (CFD), which was important then and still is today. But it was not interesting enough for me to write about (sorry, Sherman). In 2011, FF came out with new positioning and a new offering: the virtual facility (delivered through a suite of tools called 6SigmaDC), a digital replica of a real data center. The virtual facility combines information on power, IT loads, and space, in addition to air flow, into a mathematical model and runs simulations on it without actually altering the data center.

At the recent FF conference, I had an opportunity to listen to people who are using this product. As I listened to their talks and had a frank chat with Sherman, I began to think that this replica has good potential to solve a big problem between IT and facilities: disarray in managing high-power-density data centers.

This blog is a summary of my chat with Sherman and the thoughts it triggered. FF did start its business with a focus on air flow (the term DCIM did not even exist then, although CFD is now one of the DCIM categories). Sherman said that in the early days they were brought in by data center facilities folks to clean up the damage done by IT. The use of the word “damage” was interesting, because as a former long-time IT guy I never thought facilities people felt that way. Facilities people tailor air flow to IT needs at the beginning of an IT deployment. But because IT is notorious for changing everything (equipment, rack configurations, and rack layouts) often and on the fly, air flow customized before the changes no longer applies after them, and thus IT does damage to operations across the entire data center.


After seeing this happen again and again, Sherman and his folks realized it would be better to have IT and facilities work together, sharing air flow and other information to head off the problem early rather than fight it later. Earlier in the conference, Hassan Moezzi, director of FF, said that air flow is the single most important factor in managing a data center, because most data centers are cooled by air rather than liquid (such as water). By controlling air flow and optimizing its effect on cooling, most problems could be solved.

I think I knew this, but until it was put that way I did not fully appreciate it. Another thing I came to appreciate anew concerns IT and facilities integration. Since I began writing about the data center segment, many people have told me that the difficulty of managing data centers stems primarily from the cultural differences between IT and facilities and their lack of close collaboration. Some remedies have been suggested, such as having both IT and facilities report to the same boss and/or making IT responsible for the power bill. Those are fine, but they are pitched at too high a level. What can we actually do? Sherman and FF advocate creating a digital replica (a mathematical model) of a physical data center. The model is used to test multiple data center configurations and find the best one before the real IT infrastructure is put in place. This makes sense. I have toured many newly constructed data centers. Standing on an empty floor, I often wondered how they would lay out IT equipment to manage the entire data center in an energy-efficient way. They do not know in advance how the IT equipment will be laid out or how the electrical and mechanical systems can support it. Come to think of it, that is a scary thing.

Now for my next question. Developing a mathematical model is fine if we are talking about new construction. Granted, many new data centers are popping up everywhere, including in Silicon Valley, but there is a far greater number of existing data centers. If the model cannot be applied to existing ones, FF’s solution is very limited. But if it can, that means a great business opportunity. FF is often called in to find a solution for an existing data center that, in theory, has extra capacity to host more IT equipment but cannot expand further for some reason, such as hot spots. This is called stranded capacity. By constructing a virtual facility and analyzing it, they can diagnose the root cause and fix the problem.
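To make the idea of stranded capacity concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers, including the per-rack cooling headroom figures, are purely hypothetical and are not taken from FF’s tools; the point is only that the capacity left on paper (power and space) can be much larger than the capacity that cooling actually lets you use.

```python
# Toy illustration of stranded capacity (hypothetical numbers, not FF's method).
# A room may have power and space left on paper, yet cooling limits at the rack
# level keep that capacity from being usable.

design_power_kw = 1000.0   # IT load the facility was designed to support (hypothetical)
deployed_power_kw = 650.0  # IT load currently installed (hypothetical)

# Additional load each rack could absorb before overheating, as an airflow model
# might estimate it (hypothetical values in kW).
rack_cooling_headroom_kw = [0.5, 0.0, 4.0, 1.5, 0.0, 3.0]

nominal_headroom_kw = design_power_kw - deployed_power_kw
usable_headroom_kw = sum(rack_cooling_headroom_kw)
stranded_kw = nominal_headroom_kw - usable_headroom_kw

print(f"Nominal headroom : {nominal_headroom_kw:.0f} kW")
print(f"Usable headroom  : {usable_headroom_kw:.0f} kW (limited by hot spots)")
print(f"Stranded capacity: {stranded_kw:.0f} kW")
```

In this toy case, 350 kW of capacity remains on paper, but hot spots leave only 9 kW usable; the rest is stranded until the root cause is found and fixed.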

This is great, but there is no ready-made mathematical model for existing data centers, which consist of hundreds or thousands of pieces of IT and facilities equipment. How do you collect the list of equipment and logical connections needed to construct a model? Initially, FF collected and entered the information by hand, a time-consuming and error-prone process. Later, they created an interface to bring in data automatically from multiple sources, such as IT configuration databases that might be produced by someone like Asset Point with their autoscanning of IT equipment. With this interface, FF could work with a company like Nlyte.

A natural question is whether there exists a standard communications protocol and data format for sharing the data created by each DCIM tool. Unfortunately, at this point there is none, although FF uses XML as a base. Even with XML, each vendor could still define its own data format, although conversion might be easier because XML is ASCII based. In any event, FF developed their own interface and data formats, which they share with partners like Intel, Nlyte, Aperture, RF Code, and SynapSense. This allows asset and monitoring information to flow into the virtual facility model.
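To give a feel for what such an XML-based exchange might look like, here is a hypothetical sketch. The element and attribute names are my own invention for illustration only, not FF’s actual (unpublished) schema, and the sensor sources named in the sample are simply borrowed from the partner list above.

```python
# Hypothetical example of an XML asset/monitoring record being read into a simple
# Python structure. Element names are invented and do not reflect FF's real schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<assets>
  <server id="srv-0042" rack="R12" u_position="24">
    <model>Generic 1U Server</model>
    <nameplate_power_w>450</nameplate_power_w>
    <measured_power_w source="RF Code tag">312</measured_power_w>
    <inlet_temp_c source="SynapSense sensor">24.5</inlet_temp_c>
  </server>
</assets>
"""

root = ET.fromstring(SAMPLE)
for server in root.findall("server"):
    record = {
        "id": server.get("id"),
        "rack": server.get("rack"),
        "u_position": int(server.get("u_position")),
        "measured_power_w": float(server.findtext("measured_power_w")),
        "inlet_temp_c": float(server.findtext("inlet_temp_c")),
    }
    print(record)
```

Because XML is plain ASCII text, translating another vendor’s format into a structure like this is mostly a matter of mapping element names, which is exactly why an agreed-upon schema would remove most of the remaining friction.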

Well, this is interesting. It would be great if FF, or whoever leads the standardization of data formats, could integrate many more DCIM tools into their virtual facility platform and accelerate the adoption of DCIM. I explored this in my previous blog.

FF is working with Intel as a development partner, and their solution interacts with Intel’s Data Center Manager (DCM). Intel has established an interface for data coming from servers and is working with FF to merge FF’s interface with it. Since the DCIM market is in its infancy, there are no standards. Cooling and electrical solution providers like Schneider and Liebert-Emerson, among others, have their own interfaces and data formats. I know Intel is big and that more than 80% of the servers in data centers run Intel chips. Is Intel powerful enough to force a standard that unites DCIM tools? After all, we need to convince facilities types to agree on a standard, and they are not used to standards.

Sherman thinks that the most important thing for truly optimizing data center efficiency is to understand data from the servers, which are the real culprit, not the cooling or electrical systems. “If Intel controls such data, why not?” he continued. It would be IT, not facilities, that would set the standard, he said.

This argument is convincing, but my skeptical nature forces me to wonder whether facilities types would go for a standard. In the BMS market, vendors were forced to support a Web interface because the Web revolution was so powerful that they had no choice but to adopt Web/IP protocols. We need a force of similar magnitude to drive the standardization of data formats so that every DCIM tool can share information on a single platform like FF’s. I have no idea what that force would be. A power crunch, I wonder?

How about adoption? FF has roughly two types of customers: Web/Internet and mission critical. The former includes Intel, Facebook, Google, and Microsoft. The latter includes Bank of America, which will soon announce its adoption of FF’s solution, and JPMorgan Chase. FF is also targeting medium-size data centers, expecting them to get the same benefits as the large data center players. The company originally came from Europe, and its presence there is solid. But it has yet to penetrate the Asian market, although it has customers there who use its tools to design server boxes.

As for channels to resell its products and services, EYP/HP might be the closest to being certified; FF is in discussions with them.

As Chuck Rego of Intel mentioned to me, we need to cover both the monitoring and the capacity planning sides of DCIM. If FF can somehow standardize the data for DCIM and unite both sides, DCIM will go mainstream, and much of the “damage done by IT” may be avoided.

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with broad functional expertise, including roles as analyst, writer, CTO, and VP of Engineering, and in general management, sales, and marketing, across diverse high-tech and cleantech industry segments such as software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in applying IT to energy, including smart grid, green IT, building/data center energy efficiency, and cloud computing.
