The Integration of Facilities and IT Data at Data Centers

As I mentioned before, I am getting better at speaking the facilities language: not fluently, but well enough to fool people for ten minutes or so.

The subject of integrating facilities and IT data to monitor and control energy use at a data center is attracting a lot of attention. A talk by Kevin Malik, CIO of IO Data Center, titled “DCIM: The Future of the Data Center,” at the recent Uptime Institute Symposium drew a large audience, with many people standing. Kevin said that IO developed its own DCIM tool and applied it to more than 30 data centers, and that it worked fine in spite of the many different building management systems involved.

I had always wondered how they integrate data from both sides. After talking to a guy at the IO Data Center booth and to Kevin Brown of Schneider Electric, I think I finally get it.

Like a typical IT person, I still look at a data center from the viewpoint of IT rather than facilities. What do IT folks do at a data center? They want to make sure the applications needed to conduct business run smoothly, so they need to make sure servers and other ICT equipment are in good working condition. IT folks therefore watch and monitor the behavior of their IT gear, using SNMP and other IP-based protocols. But they do not care about, or do not know how to pay attention to, the power consumption of the ICT gear and related parameters such as voltage and frequency fluctuations, temperature, and power interruptions.
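To make that concrete, here is a minimal sketch of the IT-side view: polling a server's SNMP agent over IP. It uses the pysnmp library with a hypothetical host address, community string, and a couple of standard MIB-II objects; none of this is tied to any particular DCIM product, and the exact hlapi calls differ slightly between pysnmp releases.

```python
# Minimal sketch: how IT-side monitoring typically reaches ICT gear, i.e.,
# over an IP-based protocol (SNMP v2c here) using the pysnmp library.
# The host address and community string are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def poll_server_health(host, community="public"):
    """Fetch basic identity/health objects from a server's SNMP agent."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),      # mpModel=1 selects SNMP v2c
            UdpTransportTarget((host, 161)),          # standard SNMP UDP port
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
        )
    )
    if error_indication or error_status:
        raise RuntimeError("SNMP query failed: %s" % (error_indication or error_status))
    return {str(name): str(value) for name, value in var_binds}

if __name__ == "__main__":
    print(poll_server_health("192.0.2.10"))  # placeholder address
```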

On the other hand, facilities folks pay attention to power consumption and availability, adequate cooling, and the supporting equipment of a data center. They care less about ICT gear behavior. So most of the time, DCIM with data from IT means power and temperature information about the ICT gear, obtained via the Intelligent Platform Management Interface (IPMI). This interface works out of band, so you do not need the server's operating system or its IP-based management agents to get what facilities folks want.
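For contrast, here is a minimal sketch of the facilities-side view of the same gear, read through the baseboard management controller with the ipmitool command-line utility. The host, credentials, and choice of subcommands are assumptions for illustration; in particular, DCMI power readings are only available if the BMC implements them.

```python
# Minimal sketch: reading facilities-relevant data (temperature, power draw)
# from a server's BMC over IPMI by shelling out to the ipmitool CLI.
# Host and credentials are placeholders.
import subprocess

def ipmi(host, user, password, *args):
    """Run one ipmitool command against a remote BMC over the LAN interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    host, user, password = "10.0.0.42", "admin", "secret"    # placeholders
    # Temperature sensors from the sensor data repository (SDR).
    print(ipmi(host, user, password, "sdr", "type", "Temperature"))
    # Chassis power draw, if the BMC supports DCMI power monitoring.
    print(ipmi(host, user, password, "dcmi", "power", "reading"))
```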

IPMI is defined by Wikipedia as:

System administrators can use IPMI messaging to monitor platform status (e.g. system temperatures, voltages, fans, power supplies and chassis intrusion); to query inventory information; to review hardware logs of out-of-range conditions; or to perform recovery procedures such as issuing requests from a remote console through the same connections e.g. system power-down and rebooting, or configuring watchdog timers. The standard also defines an alerting mechanism for the system to send a simple network management protocol (SNMP) platform event trap (PET).
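The recovery side of that description, remote power-down and rebooting, maps onto IPMI chassis commands. Here is a small sketch in the same style, again with placeholder host and credentials and using ipmitool rather than any vendor-specific DCIM tool.

```python
# Minimal sketch of IPMI "recovery procedures": querying and changing the
# chassis power state remotely via the ipmitool CLI. Placeholders throughout.
import subprocess

def chassis_power(host, user, password, action="status"):
    """Issue an IPMI chassis power command: status, on, off, cycle, or reset."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
           "chassis", "power", action]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(chassis_power("10.0.0.42", "admin", "secret"))       # report current state
    # chassis_power("10.0.0.42", "admin", "secret", "cycle")   # would power-cycle the box
```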

So when most people talk about the integration of facilities and IT data at their data center, they are not talking about integrating higher-level data, such as general server health, the applications running on a server, and software utilization, which is usually the big deal for IT folks. At the SVLG DCEE conference held at the IBM Almaden Research Center last year, a session on DCIM was geared more toward IT: Cisco and IBM presented their systems for consolidating data from both IT and facilities. I am sure HP’s OpenView integrates these two types of data as well.

But some vendors I talked to said that the two types of data can be stored in one logical database consisting of two separate physical databases, one for facilities data and the other for IT data. At the moment this seems to be a reasonable compromise. It looks like more research and work are needed in this area.
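A compromise like that can be pictured with something as small as SQLite: two separate physical database files, one per domain, attached into a single logical view and joined on a shared asset identifier. The file names, table layouts, sample values, and the asset_id key below are purely hypothetical.

```python
# Minimal sketch: one "logical" database spanning two physical ones,
# joined on a shared asset identifier. Files, schemas, and data are hypothetical.
import sqlite3

conn = sqlite3.connect("it_data.db")                           # IT-side database
conn.execute("ATTACH DATABASE 'facilities_data.db' AS fac")    # facilities-side database

conn.execute("CREATE TABLE IF NOT EXISTS servers "
             "(asset_id TEXT PRIMARY KEY, hostname TEXT, cpu_util REAL)")
conn.execute("CREATE TABLE IF NOT EXISTS fac.readings "
             "(asset_id TEXT, watts REAL, inlet_temp_c REAL)")

conn.execute("INSERT OR REPLACE INTO servers VALUES ('R42-U07', 'web01', 35.0)")
conn.execute("INSERT INTO fac.readings VALUES ('R42-U07', 310.0, 24.5)")

# A single query spans both physical databases as if they were one.
row = conn.execute("""
    SELECT s.hostname, s.cpu_util, f.watts, f.inlet_temp_c
    FROM servers AS s
    JOIN fac.readings AS f USING (asset_id)
""").fetchone()
print(row)   # ('web01', 35.0, 310.0, 24.5)
conn.close()
```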

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with a range of functional expertise, having served as analyst, writer, CTO, and VP of Engineering, and in general management, sales, and marketing, across diverse high-tech and cleantech industry segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in applying IT to energy, such as smart grid, green IT, building and data center energy efficiency, and cloud computing.
