An IT Guy’s Take on ASHRAE’s Recent Guidelines for Data Center ICT Equipment

ASHRAE is a professional organization, and according to its website:

ASHRAE, founded in 1894, is a building technology society with more than 50,000 members worldwide. The Society and its members focus on building systems, energy efficiency, indoor air quality and sustainability within the industry. Through research, standards writing, publishing and continuing education, ASHRAE shapes tomorrow’s built environment today.

ASHRAE has a big impact on data center operations. At the recent DatacenterDynamics conference in San Francisco, I had a chance to chair a track that covered ASHRAE's thermal guidelines. Judging from past attendance at ASHRAE guideline sessions, I was expecting a large turnout, and indeed both sessions on the subject were packed.

The first presentation was by Don Beaty, ASHRAE Fellow and president of DLB Associates.

Don Beaty

He was the cofounder and first chair of ASHRAE Technical Committee TC 9.9 (Mission Critical Facilities, Technology Spaces and Electronic Equipment). TC 9.9 is very relevant to data center operations because it sets guidelines for operating data centers, covering both facilities and ICT equipment. When I attend a conference session, I usually record it for accuracy and memory's sake, but that was hard to do as the chair, so I am recalling from memory and some of the details are a bit fuzzy.

One thing Don kept emphasizing during the talk was that what matters for data center cooling is the temperature of the inlet airflow to a server, not the temperature in the room. In the past, CRAC units on the data center floor checked the temperature of the returned air and used it to approximate the inlet airflow temperature at the servers. Obviously, that usually did not reflect the actual inlet temperature. With raised-floor cooling, the inlet airflow temperature varies widely depending on proximity to the CRAC units, so it is imperative to measure and monitor the temperature at the inlet of each server or rack. At some point, raised-floor cooling may not function well at all; for example, a rack that consumes 10 kW may not be cooled effectively this way. Furthermore, even though it is desirable to have uniform power consumption and heat dissipation from each rack, ICT equipment configuration requirements and other constraints do not always make that possible.

Don presented a guideline for server inlet temperatures titled 2011 Thermal Environments – Expanded Data Center Classes and Usage Guidance, and I extracted the table and a graph from pages 8 and 9 of the document, respectively, for reference purposes.
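Don's point about measuring at the server inlet rather than at the CRAC return can be boiled down to a few lines. Here is a minimal sketch, assuming hypothetical per-rack inlet sensor readings; the 27°C figure is the commonly published top of ASHRAE's recommended envelope, and 35°C is the A2 allowable maximum discussed below:

```python
# Minimal sketch: check measured *inlet* temperatures per rack instead of
# relying on the CRAC return-air reading. Rack names and readings are
# hypothetical.

RECOMMENDED_MAX_C = 27.0   # top of ASHRAE's recommended envelope
A2_ALLOWABLE_MAX_C = 35.0  # A2 allowable maximum

inlet_temps_c = {
    "rack-01": 24.5,
    "rack-02": 29.8,   # a warm spot far from the CRAC units
    "rack-03": 36.1,   # outside the A2 allowable range
}

for rack, temp in inlet_temps_c.items():
    if temp > A2_ALLOWABLE_MAX_C:
        print(f"{rack}: {temp}°C exceeds the A2 allowable maximum")
    elif temp > RECOMMENDED_MAX_C:
        print(f"{rack}: {temp}°C is above the recommended envelope")
```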

[Data center class table and psychrometric chart extracted from the ASHRAE whitepaper]

A psychrometric chart describes temperature and humidity and is used to set a proper range for the combination of the two in a data center. This chart shows A1 through A4 ranges, along with the recommended envelope.

Current servers can operate fine (with the server vendor's warranty intact) under the A2 guideline shown above, which sets the high temperature at 35°C (95°F). But the new guidelines expand the acceptable range to 40°C (104°F) with A3 and 45°C (113°F) with A4. With this widely expanded range, almost any data center in the world could take advantage of free cooling, such as an airside economizer. Incidentally, Christian Belady of Microsoft has said that developing a server that tolerates higher temperatures might raise the production cost (and thus the purchase price), but spending a few thousand dollars more per server on this type of IT equipment could save millions of cooling dollars.
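To keep the classes straight, here is a small sketch that maps each class to its allowable maximum and reports which classes would cover a given peak inlet temperature. The A2, A3, and A4 figures are from the talk; the 32°C figure for A1 is the commonly published value:

```python
# Allowable dry-bulb maxima (°C) for the ASHRAE classes discussed above.
ALLOWABLE_MAX_C = {"A1": 32.0, "A2": 35.0, "A3": 40.0, "A4": 45.0}

def classes_allowing(peak_inlet_c):
    """Return the classes whose allowable range covers the given peak inlet temperature."""
    return [cls for cls, limit in ALLOWABLE_MAX_C.items() if peak_inlet_c <= limit]

# Hypothetical example: a site whose supply air can peak at 38°C
# needs A3- or A4-rated equipment.
print(classes_allowing(38.0))   # ['A3', 'A4']
```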

So what is holding up the production of servers built to the A3 and A4 guidelines? Next up were Dr. Tahir Cader, Distinguished Technologist, Industry Standard Servers (ISS), and David Chetham-Strode, Worldwide Product Manager, both of Hewlett-Packard. Tahir is on the board of directors of The Green Grid, a member of ASHRAE TC 9.9, and a liaison between ASHRAE and The Green Grid. They shared some very interesting results.

Dr. Tahir Cader

Again, I do not have their presentations and unfortunately cannot refer to specific data. They examined power consumption at various geographical locations under the A1 through A4 guidelines. According to their findings, you may not need to employ A3 or A4 at all, depending upon your location: in many cases there was little or no difference between A3 and A4, and sometimes there were some savings in going from A2 to A3, but that too depended upon the geographical location.
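The kind of location analysis they described can be pictured with a toy calculation: count how many hours per year outside air would exceed each class limit if it were supplied to the servers directly. The hourly temperature series below is made up; a real study would use actual weather data for each site:

```python
# Hypothetical hourly dry-bulb temperatures for one year at one site (made up).
hourly_ambient_c = [18.0] * 8000 + [30.0] * 700 + [37.0] * 55 + [41.0] * 5

limits_c = {"A2": 35.0, "A3": 40.0, "A4": 45.0}
for cls, limit in limits_c.items():
    hours_over = sum(1 for t in hourly_ambient_c if t > limit)
    print(f"{cls}: {hours_over} hours/year above {limit}°C")

# For this made-up climate, the A3 limit is exceeded only 5 hours a year and
# the A4 limit never, which mirrors the finding that A4 often adds little over A3.
```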

When we consider the temperature in a data center, we also need to consider the people who work there. Even though the primary purpose of a data center is to host ICT equipment, and the temperature at the server inlet could be raised as high as 45°C, doing so also raises the temperature at other locations on the floor. Anything above 30°C may not be very comfortable for people to work in for a long time.

In the past it was relatively easy to pick a server using well-defined data such as CPU speed, number of cores, memory size, disk capacity, and the number and speed of networking ports. Even with data centers at locations throughout the world, you might buy only a few server types and get a good discount from particular vendors for all of your data centers. But the next time you refresh your servers, another factor may be added: an analysis of the inlet airflow temperature to a server versus its power consumption. If you are big and sophisticated like HP, you could run your own analysis to decide which server (supporting A1 through A4) to use, but the analysis seems fairly complex and may not be easy for everyone.

As chair I needed to keep the Q&A session under control, but I did get the chance to ask one simple question: can a server vendor like HP provide a service to pick the right type of server for your geography? The answer was yes. That is good to know.
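To give a feel for why that analysis is complex, here is a toy model with entirely assumed curves: raising the inlet set point cuts cooling power but makes server fans work harder, so the total has a sweet spot that depends on both curves (and, in reality, on local climate, humidity, and electricity prices):

```python
def it_power_kw(inlet_c):
    # Assumed: 100 kW of IT load whose fan power grows once the inlet passes 25°C.
    return 100.0 + max(0.0, inlet_c - 25.0) * 0.8

def cooling_power_kw(inlet_c):
    # Assumed: cooling overhead shrinks as the allowed inlet temperature rises.
    return max(5.0, 40.0 - (inlet_c - 20.0) * 2.0)

best = min(range(20, 46), key=lambda t: it_power_kw(t) + cooling_power_kw(t))
print(f"Lowest total power at about {best}°C inlet (for these assumed curves)")
```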

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with broad functional expertise, including roles as analyst, writer, CTO, and VP of Engineering, along with general management, sales, and marketing, across diverse high-tech and cleantech industry segments such as software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in the application of IT to energy, including smart grid, green IT, building and data center energy efficiency, and cloud computing.

