Coming from IT, I find a data center a fascinating place where facilities and IT coexist, but the two sides could use closer communication. When I started covering the data center segment some four years ago, people I met at data center conferences and meetings asked me whether I was on the mechanical or the electrical side of a data center. Straight from IT, I couldn’t make much sense of the question. “IT,” my answer at the time, puzzled people as if they were talking to someone who had just landed from Mars. My unscientific data tells me that 70–80% of the people who attend data center conferences and meetings are facilities folks, not IT folks. So chances were that I was talking to the 70–80% rather than the 20–30%.
So when a CIO from a data center talks, I do not want to miss the chance to hear what he has to say. In May, Kevin Malik, CIO of IO, gave a presentation on IO’s DCIM system, IO.OS, at the Uptime Symposium.
It was a very interesting talk, and I went to their booth to see a demo. I was very interested in what they were doing in terms of facilities and IT integration. See, coming from IT, I still look at a data center from the IT point of view, and when it comes to monitoring and control, it is still system management. Leading companies like HP (OpenView), IBM (Tivoli), CA (Unicenter), and BMC (Patrol) provide system management tools. System management tools deal with the higher layers of the computing stack, such as server health (online status), applications, utilization, and response time, using IP as the dominant protocol. They do not usually report infrastructure data like power consumption, temperature, and fan speed. DCIM, on the other hand, deals with the infrastructure data for both facilities and IT (via the Intelligent Platform Management Interface). From the computing system’s view, DCIM deals with the lower layers of the stack. The following figure depicts this.
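In code terms, the split between the two tool families might look like the following sketch. This is my own illustration, not any vendor’s actual data model; the metric names and values are assumptions.

```python
# A minimal sketch of the stack split described above: system
# management tools read higher-layer metrics over IP, while DCIM
# tools read infrastructure metrics such as those exposed via IPMI.
# Metric names and values are invented for illustration.

SYSTEM_MANAGEMENT_METRICS = {       # e.g., OpenView, Tivoli, Unicenter, Patrol
    "server_online": True,          # server health (online status)
    "app_response_time_ms": 120,    # application response time
    "cpu_utilization_pct": 35,      # utilization
}

DCIM_METRICS = {                    # infrastructure data, e.g., via IPMI
    "power_consumption_w": 420,
    "inlet_temperature_c": 24.5,
    "fan_speed_rpm": 6200,
}

def full_stack_view(system, infra):
    """Combine both layers into one top-to-bottom view of a server."""
    return {"system": system, "infrastructure": infra}

view = full_stack_view(SYSTEM_MANAGEMENT_METRICS, DCIM_METRICS)
```

The point of the sketch is simply that neither dictionary alone describes the whole machine; integrating the two gives the full picture.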
I wanted to talk to Kevin because of his practical knowledge of DCIM and his background as a software person.
The following is an edited version of my conversation with Kevin.
Kevin is CIO of IO and general manager of IO labs. As CIO he does what CIOs do at non–data center companies: he supports back-office functions such as HR and finance. That is fine, but his work as general manager of IO labs is more interesting: he provides the software side of what IO does. More precisely, he is in charge of software and works closely with William Slessman, cofounder and CTO, who is responsible for the hardware side of the data center. Many experts advise having a single management structure oversee both IT and facilities for optimized data center operations, so before I even asked about their effort to integrate IT and facilities, I knew they were serious about it. Their first goal is to control power consumption, because data center power consumption has been skyrocketing and needs to be reined in. The best way to do that is to combine hardware and software into one solution. After all, the ultimate purpose of a data center is to run IT equipment for the business, and a data center needs to work harmoniously with IT to optimize its operation for the best IT and business results.
As for the operations side of the business, Kevin may not influence site selection per se, but he makes sure that networking is available at the selected sites and that those sites are connected to major network providers such as Level 3 and AboveNet. He also makes sure that each and every modular component shipped is hooked up to the proper software so that it is installed correctly for accurate monitoring and control.
Their DCIM tool is called IO.OS and is available to all of their colocation customers. Customers can plug into IO’s infrastructure, and all the relevant information, including power usage, temperature, and PDU data, becomes available through a web interface. Customers have the option to alter their environment; for example, raising the temperature set point, if they think they can, relaxes the cooling requirement (and thus costs less money). This is done on a customer-by-customer basis. At this point, however, as a service provider IO has no say about which IT equipment its customers purchase or how they configure their gear in the colocated space. IO wants to work with customers to optimize the IT equipment to work seamlessly with the facilities infrastructure. For example, by monitoring the power requirements of two servers running side by side, IO can see that both servers are under-utilized despite a large power allocation, and consolidate the two servers into one to reduce the power requirement.
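The consolidation idea above can be sketched as a simple decision rule. This is a hypothetical illustration, not IO’s actual algorithm; the utilization threshold and power figures are assumptions.

```python
def can_consolidate(servers, util_threshold=0.3):
    """Return True if every server is under-utilized (below an
    assumed threshold) and their combined load would still fit on
    a single server. Purely illustrative, not IO's algorithm."""
    total_util = sum(s["utilization"] for s in servers)
    return (all(s["utilization"] < util_threshold for s in servers)
            and total_util <= 1.0)

# Two side-by-side servers, each lightly loaded but each holding
# a large (assumed 500 W) power allocation.
servers = [
    {"name": "srv-a", "utilization": 0.15, "allocated_power_w": 500},
    {"name": "srv-b", "utilization": 0.20, "allocated_power_w": 500},
]

freed_w = 0
if can_consolidate(servers):
    # One server can absorb both workloads, freeing the other's
    # entire power allocation.
    freed_w = servers[1]["allocated_power_w"]
```

Here both servers sit below the threshold with a combined load of 0.35, so consolidating onto one machine frees the other’s 500 W allocation.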
I touched above on the issue of integrating DCIM with system management tools. IO is working on integrating the two types of tools at this time; IO.OS has a very robust web services layer for integrating products. Additionally, IO.OS supports integration with VMware, allowing seamless communication between the data center and the virtualized stack. Any change to the data center, such as a door opening (which may raise the temperature), is treated as a stimulus and captured by the software, which then takes the necessary action, such as moving virtual machines from one server to another, to best balance the power usage resulting from the stimulus. Kevin mentioned that applications are not yet part of the optimization formula; system management tools deal with applications. If applications were taken into consideration, the whole data center, from top to bottom, could be optimized. So Kevin agreed that some integration between the two types of tools would help a data center operate optimally.
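A toy sketch of that stimulus-and-response loop follows. The event name, host data, and balancing rule are my own assumptions for illustration, not IO.OS internals or the VMware API.

```python
def rebalance_on_stimulus(stimulus, hosts):
    """On a stimulus (e.g. a door opening that raises temperature),
    move VMs off the hottest host to the coolest until their VM
    counts are roughly balanced. Purely illustrative logic; a real
    DCIM/VMware integration would weigh power, not VM counts."""
    if stimulus != "door_opened":   # assumed event name
        return hosts
    hottest = max(hosts, key=lambda h: h["temp_c"])
    coolest = min(hosts, key=lambda h: h["temp_c"])
    while hottest["vms"] and len(hottest["vms"]) > len(coolest["vms"]) + 1:
        coolest["vms"].append(hottest["vms"].pop())
    return hosts

hosts = [
    {"name": "host-1", "temp_c": 31.0, "vms": ["vm1", "vm2", "vm3", "vm4"]},
    {"name": "host-2", "temp_c": 24.0, "vms": ["vm5"]},
]
rebalance_on_stimulus("door_opened", hosts)
```

After the stimulus, the load that was concentrated on the warm host is spread more evenly, which is the shape of the behavior described in the paragraph above.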
Finally, I was curious about what they do with renewable energy in their operations. I was expecting an answer like solar and/or wind, but Kevin’s answer was somewhat unexpected. Pumped hydro storage is one form of energy storage: during the night, when the cost of power is lowest, water is pumped up to higher ground, and during the day, when the power cost is higher, the water is released to generate power (hydropower). In IO’s case, ice is used in place of pumped hydro storage for cooling data centers. (Web Hosting reports some details along with a picture of such ice.) During the night, when power is cheap, they produce a large amount of ice, and during the day they use it to cool the data centers. This may not be an application of renewable energy, but the idea is interesting and certainly shifts power usage away from peak times.
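A back-of-the-envelope calculation shows why shifting cooling load to the night pays off. All of the tariff, load, and efficiency figures below are made up for illustration; they are not IO’s numbers.

```python
# Assumed figures, for illustration only (not IO's actual numbers).
cooling_load_kwh = 1000      # daily cooling energy needed
peak_rate = 0.15             # $/kWh during the day
off_peak_rate = 0.05         # $/kWh at night
storage_efficiency = 0.8     # fraction of stored cooling recovered from the ice

# Option 1: run the chillers directly during the day.
daytime_cost = cooling_load_kwh * peak_rate

# Option 2: make ice at night, melt it during the day. Storage
# losses mean more energy must be put in than is recovered.
nighttime_cost = (cooling_load_kwh / storage_efficiency) * off_peak_rate

savings = daytime_cost - nighttime_cost
```

Even after paying a storage-loss penalty, the night-rate ice comes out far cheaper under these assumed rates, and, just as importantly, the grid draw moves off the peak hours.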