The US is a big country, and each state has great independence; that is sometimes good and sometimes not. Each region is unique in many ways: energy resources, climates, taxes, populations, laws and regulations, and environmental awareness all differ vastly. Japan, where I originally came from, is in general a uniform, small country. Yes, climates differ from north to south, but not as much as in the US.
The power situation in Japan has not improved dramatically. With all 50 nuclear power plants halted after the big earthquake in 2011, nearly all power is generated from thermal energy, mostly natural gas. To secure enough power after the earthquake, even old thermal power plants that had been dormant and in line to be demolished because of their age were brought back to meet power demand. Many could not be shut down for major overhauls, because power was urgently needed. On top of that, power from renewables is very limited compared with the US.
With power costs and taxes being almost uniformly the same wherever you go, the deciding factors for site selection in Japan are ease of access, human resource availability, and market size. Tokyo far surpasses Osaka and Nagoya in this department. The majority of data centers are in Tokyo because of its concentration of large corporations, and hence of the market, in spite of the lack of space and the fear of an imminent earthquake. People have talked about siting data centers elsewhere because of the earthquake risk, but very little progress has been made so far.
The site selection picture is drastically different in the US. This has been discussed in many conferences and by many people. The basic requirements for a data center are the same as in Japan: two must-haves are power and network connectivity. Other factors include tax breaks, market, proximity to your business, and human resource availability.
In the “Data Globalization: The Management of Site Selection, Incentive and Increased Sustainability” session at the Motivated to Influence Data Center Conference, four knowledgeable data center practitioners discussed the factors and trends that determine optimal data center site selection.
Even though the topic has been discussed in the past, some requirements draw more attention now than they did before, and it makes sense to keep abreast of current trends.
- KC Mares, President & CEO, MegaWatt Consulting
- Sterling Adamson, Critical Operations Manager, PG&E
- Mark Monroe, CTO & VP, DLB Associates
- Mark Thiele, EVP, Ecosystem Evangelism, Switch Supernap
Between them, the panelists have data center experience at HP, Sun, VMware, GreenGrid, Yahoo, Apple, Exodus, and PG&E, and they are actively involved in data center site selection today.
From left: KC Mares, Sterling Adamson, Mark Monroe, Mark Thiele, and Ron Vokoun
On-premise vs. outsourced
The first question posed by Vokoun was which solution to choose among internal, colocation, and cloud data centers. Since I am not a purist, I do not distinguish between colo and cloud; the question is really own versus outsource. Monroe’s reply was straightforward. You can compare the two options by throwing in a lot of metrics, but he said that monthly recurring cost is an easy one: you are likely to know the internal cost of having your own data center and running it. Total cost of ownership (TCO) is a bit harder because it requires knowing the facility’s lifetime costs.
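To make the comparison concrete, here is a minimal sketch of the monthly-recurring-cost view Monroe described. All figures (rack price, capex, lifetime, opex) are hypothetical illustrations, not numbers from the panel:

```python
# Hypothetical figures for illustration only -- not from the article.

def monthly_cost_colo(racks, price_per_rack_month):
    """Monthly recurring cost of a colocation contract."""
    return racks * price_per_rack_month

def monthly_cost_owned(capex, lifetime_years, monthly_opex):
    """Owned data center: capital cost amortized evenly over the
    facility's lifetime, plus monthly operating cost (power, staff,
    maintenance). A crude stand-in for a full TCO model."""
    return capex / (lifetime_years * 12) + monthly_opex

colo = monthly_cost_colo(racks=40, price_per_rack_month=1500)
owned = monthly_cost_owned(capex=9_000_000, lifetime_years=15,
                           monthly_opex=25_000)

print(f"colo:  ${colo:,.0f}/month")   # $60,000/month
print(f"owned: ${owned:,.0f}/month")  # $75,000/month
```

The point of the metric is exactly this simplicity: one number per option per month. The harder TCO question is hidden in the inputs, especially how long the owned facility really lasts.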
Other panelists voiced opinions sparked by Monroe’s answer. Power cost towers over the many other factors to consider, such as power availability, ease of access, connectivity, taxes, market, and human resource availability. But deciding solely on the basis of power cost may come back to bite you. At some later time you may need to change the energy mix behind your power, to become greener or to meet regulatory changes, and if the local power provider cannot address your needs, you are stuck. It is not easy to shut down a data center and move it somewhere else.
Predict the future
KC added to this discussion by saying that we need to be careful about the current power price. It usually takes up to five years for a data center to become operational, from conception through design and construction to commissioning, and once commissioned, a data center lasts 20 years or more. So it is necessary to look into the future when weighing the options: outsourced, on premise, public cloud, private cloud, or colo. Some factors, such as energy cost trends, regulatory requirements, and investments, are easier to predict; others, such as sales taxes, are harder.
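A 20-plus-year lifetime is why today’s power price can mislead. A quick compounding sketch shows how much an assumed modest escalation moves the price over that horizon (the $0.08/kWh starting rate and 3% growth are my own illustrative assumptions, not figures from the panel):

```python
# Hypothetical numbers for illustration only -- not from the panel.

def projected_price(price_today, annual_growth, years):
    """Compound an energy price forward at a constant annual growth rate."""
    return price_today * (1 + annual_growth) ** years

# A $0.08/kWh rate growing 3% per year over a 20-year facility lifetime:
future = projected_price(0.08, 0.03, 20)
print(f"${future:.3f}/kWh")  # roughly $0.144/kWh, about 80% higher
```

Even a small growth-rate assumption compounds into a very different cost picture by the end of the facility’s life, which is why the energy mix and regulatory trajectory matter as much as the sticker price.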
A follow-on question concerned the predictability of the prices of coal and renewables. Renewables are easier to predict because of long-term contracts; coal is harder. Regulation is one reason: the EPA is applying pressure against the use of coal.
Data center proximity and renewables
Sterling pointed out another factor for site selection: neighboring facilities, such as airports and railroad tracks. An accident of any magnitude may disrupt the operation of data centers. He continued with PG&E’s use of renewables, especially hydropower generation and solar PV. But he also pointed out the downside of renewables. Those renewable sources need constant management and care, due to high wind, cloudy weather, flying birds, and so on. So his conclusion was to look at the whole picture and decide which resources to employ.
PG&E has an Emergency Management Advancement Program (EMAP), which links many entities with fiber networks. Its fiber networks are the biggest owned by a private party in California. There are many fault lines in California, and those network cables cross many of them, so redundancy and diversity are very important; repairing such cables is not only costly but also takes days. Thiele gave an example from his own experience of how important connectivity is. He was working at HP when the power failed. An Idaho data center hosted the corporate email system, and he was at the Santa Clara site. When the power failure was over, both sites came back up, but edge equipment, such as routers, remained unpowered. Even though the email system was working at both the Idaho and Santa Clara sites, with all the lights on, he could not send or receive email.
PG&E, like other utility companies, has a rebate program to reward those who retrocommission and/or improve energy efficiency. The money is set aside, much like an escrow account. If you want to improve your data center’s efficiency, you need to get PG&E involved from the design phase. Sterling said it was an easy thing to do, but some panelists disagreed, saying that when people are building a data center, the rebate is not on their minds in the first place. Besides, you need to demonstrate the design change by presenting at least two designs (good and bad). Sterling defended the program, saying PG&E knew about the criticism and had made it much easier to apply and get reimbursed for the investment.
Another point raised was chillerless design: since such a design never uses a chiller in the first place, it is hard to demonstrate an improvement against a baseline. Another panelist said that smaller data center operators may not be interested in pursuing the rebate because it may be too small to justify the effort.
There are a few other considerations. What is available now as part of benefits like sales tax breaks may not last forever. A new administration in the region may take it away. Economic, political, and social stability are necessary considerations for international locations, even in stable countries like the US and Canada.
Others include the price of power and taxes such as sales, income, and property taxes. Eleven states do not assess taxes on equipment and furniture: Delaware, Illinois, Iowa, Kansas, Minnesota, New Jersey, New York, North Dakota, Ohio, Pennsylvania, and South Dakota. In addition, income taxes can make a big difference.
At first, there were many data centers scattered about almost at random, often as a result of acquisitions. Then consolidation happened, and only a few core data centers remained. Now people want smaller data centers close by for ease of access and rapid response, so data center locations are planned, built, or acquired to suit needs. These may be called portfolio data centers, and in a sense they increase resiliency should any one data center fail.
Thiele agreed that in five to seven years it will be very important to have redundancy and resilience, such as dynamically balancing or moving loads across data centers. But applications running in one data center today are fairly static, and they are not rehosted in another data center when capacity overflows. KC added that as software-defined networks (SDN) allow virtualization of networks, it will become easier to load balance between data centers. Incidentally, SDN has mostly been used intra-datacenter, not inter-datacenter; a group of Japanese companies is proposing a project named O3 for inter-datacenter SDN.