In Part 1, I discussed power availability and pricing as they relate to data center operations. My conclusions were that we can only control (reduce) power demands because we cannot control the power supply. We reduce power loads when the supply is insufficient and/or when more favorable power prices are available in other data centers. We can reduce demands by power shedding, power shifting, and/or power migration.
With the introduction of “software-defined data center” (SDDC) and “software-defined network” (SDN), “software-defined something” seems to be gaining the kind of popularity that “something as a service” once enjoyed. So what else can be defined at data centers? Server, storage, and networking equipment have progressed at different paces in virtualization. See Yevgeniy Sverdlik’s article on page 52. In my previous blog, I showed that cooling can be virtualized as well.
What is left to consider is power. Can we have “software-defined power”? I hit Google for this term, expecting to find a Wikipedia entry. Instead, I found a ton of references to Clemens Pfeiffer of Power Assure, who coined the term. OK, I needed to talk to Clemens to find out what exactly “software-defined power” is.
Software-Defined Power according to Power Assure
The following summary of my chat with Clemens should explain software-defined power from 30,000 feet. For those who want to get closer to the ground, read some of his articles on the subject, like this one.
What are the major pieces in a data center? They are IT resources (server, storage, and networking equipment) and facilities resources (cooling and power equipment). Which of these pieces can be easily defined with software? Software control is possible when a physical entity can be abstracted with reasonable ease. Via virtualization, IT resources can be abstracted, and the corresponding virtual entities can be created, moved, and deleted. Although cooling cannot be virtualized in the same way as computing resources, it can be abstracted by turning fans on or off, controlling their speed, and adjusting chiller temperatures. See my previous blog for more detail.
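To make the idea of abstracting cooling concrete, here is a minimal sketch of what a software layer over fans and chillers might look like. All class and method names here are hypothetical, invented for illustration; the point is only that once the physical knobs (fan speed, chiller setpoint) are wrapped in a uniform interface, higher layers can treat cooling as one controllable resource.

```python
# Hypothetical sketch: abstracting cooling equipment behind a
# software interface, so cooling can be "software controlled."

class Fan:
    def __init__(self):
        self.on = False
        self.speed_pct = 0

    def set_speed(self, pct):
        # A fan is "on" whenever it is spinning at any speed.
        self.on = pct > 0
        self.speed_pct = pct

class Chiller:
    def __init__(self, setpoint_c=22.0):
        self.setpoint_c = setpoint_c

    def set_setpoint(self, temp_c):
        self.setpoint_c = temp_c

class CoolingController:
    """Treat a group of fans plus a chiller as one resource."""
    def __init__(self, fans, chiller):
        self.fans = fans
        self.chiller = chiller

    def scale(self, demand_pct):
        # More IT load: spin fans faster and lower the chiller
        # setpoint. The linear mapping is purely illustrative.
        for fan in self.fans:
            fan.set_speed(demand_pct)
        self.chiller.set_setpoint(26.0 - 4.0 * demand_pct / 100)

ctrl = CoolingController([Fan(), Fan()], Chiller())
ctrl.scale(75)   # 75% IT load: fans at 75%, setpoint lowered to 23.0 C
```

A real building-management system would of course expose far richer controls; the sketch only illustrates the abstraction step that makes software definition possible.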
In other words, those four elements (server, storage, networking, and cooling) can be abstracted and therefore can be software controlled, or software defined.
Can We Define Power with Software?
Power is a unique element, explained Clemens. The very reason a data center exists is to run applications that support business needs, so keeping applications running reliably should be the first priority. Applications depend on the four elements above for reliable operation, and those four elements in turn cannot function without power. The following figure, which Clemens drew (it also appears on his whiteboard in the picture above), captures this concept very well.
Power, then, is the most important element of a data center. If cooling can be virtualized, can power be as well? Not necessarily so, says Clemens.
Software-defined power (SDP) is not defined in the same way as IT equipment or even cooling resources. After discussing it with Clemens and reviewing his materials, I understand SDP this way: SDP is for controlling loads at a data center and migrating them as needed to other data centers by considering power availability and power prices. This complex decision and process is performed with software control, hence software-defined power.
Let’s apply my test for virtualization, or software definability, from Part 1 to power. That is, if something is software definable, it can be easily created, deleted, and moved dynamically. Note that we cannot do anything to power directly: we cannot create, delete, or move it on demand. This is quite different from IT and cooling equipment virtualization, where we can instantiate each element directly. In those cases, the physical resources exist but no virtual substance exists before activation; a virtualized image comes alive only when it is activated.
Power, on the other hand, is already a virtual entity with a finite capacity. It always exists, allocated as electrical capacity, but it produces nothing resembling a virtual server image until some piece of equipment is turned on. When that happens, a portion of the power is used to drive that equipment. So if we equate power creation with consumption, the creation test makes sense: a power instance of the appropriate size is drawn (created) from the pool and used to drive the equipment, and that instance remains active as long as the equipment is running. The increase in quantity also works as we turn on more equipment.
By the same token, the deletion test works as well. When some equipment is turned off, the power instance that drove it is deleted and absorbed back into the pool, so the decrease in quantity also works. The move test, however, may not work very well. As long as we confine the discussion to a single feeder/circuit area, it may hold. But between two sections served by two different feeders/circuits, it does not. Consider Section A and Section B. When Section B is running out of power, can Section A send it more, even if A has a surplus? The answer is no, because a fixed amount of capacity has been preallocated to Section B, and no power beyond that can be delivered.
But if we extend the idea across data centers, the move test makes sense. Within a single data center, we cannot necessarily move power from one section to another. But if we move loads from one data center to another, power consumption drops at the source data center and rises at the destination. The effect is the same as if power had moved from the source (less consumption) to the destination (more consumption).
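The creation, deletion, and move tests above can be sketched as a toy model. The section names and capacity figures below are invented for illustration; the point is that each feeder/circuit section has a preallocated capacity, so the creation and deletion tests pass within a section, while the move test fails between sections.

```python
# Toy model of the creation/deletion/move tests for power.
# Each section (feeder/circuit area) has a fixed, preallocated
# capacity. Turning equipment on "creates" a power instance from
# that section's pool; turning it off "deletes" the instance.

class Section:
    def __init__(self, name, capacity_kw):
        self.name = name
        self.capacity_kw = capacity_kw
        self.used_kw = 0.0

    def power_on(self, load_kw):
        """Creation test: draw a power instance from this pool."""
        if self.used_kw + load_kw > self.capacity_kw:
            raise RuntimeError(self.name + ": over preallocated capacity")
        self.used_kw += load_kw

    def power_off(self, load_kw):
        """Deletion test: absorb the instance back into the pool."""
        self.used_kw = max(0.0, self.used_kw - load_kw)

    @property
    def headroom_kw(self):
        return self.capacity_kw - self.used_kw

a = Section("Section A", capacity_kw=100)
b = Section("Section B", capacity_kw=50)
a.power_on(20)   # creation: a 20 kW instance drawn from A's pool
b.power_on(50)   # B is now at its preallocated limit

# Move test fails within a data center: A's 80 kW of headroom
# cannot be sent to B, because B's allocation is fixed at 50 kW.
move_failed = False
try:
    b.power_on(10)
except RuntimeError:
    move_failed = True
```

Moving the load itself to another data center, rather than the power, is what sidesteps this limitation, as described above.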
How Is SDP Implemented?
What Power Assure advocates is controlling power demands at a data center. Clemens stressed that power control really means controlling demand; that is the only way to control power. Demand response (DR) is a method for controlling loads: power utility companies send a signal to their consumers to curtail power consumption when they see demand getting close to the limit of available supply.
I have not heard much about applying DR to data centers. A report on DR and data centers describes three methods: load shedding, load shifting, and load migration. Load shedding is simply turning off some equipment to shed power consumption, while load shifting moves the use of some equipment outside a designated time period, like peak time. Both are useful ways to curtail power consumption and are relatively simple to implement, even manually. Load migration moves loads from one data center to another. This is the key point of what Power Assure is advocating, as shown in the following figure.
Load migration from one data center to another to exploit power availability and/or pricing.
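The three methods from the DR report can be sketched as a simple decision routine. Everything here is hypothetical, including the load attributes and the policy itself: the sketch just encodes the idea that non-critical loads are shed, deferrable loads are shifted past the peak window, and the rest are candidates for migration when another data center has capacity.

```python
# Hypothetical sketch of the three load-control methods:
# shedding, shifting, and migration. Load names, attributes,
# and the decision policy are all invented for illustration.

def handle_dr_event(loads, peak_window, remote_site_has_capacity):
    """Return a plan mapping each load's name to an action."""
    plan = {}
    for load in loads:
        if not load["critical"]:
            # Load shedding: simply turn non-critical equipment off.
            plan[load["name"]] = "shed"
        elif load["deferrable"]:
            # Load shifting: run it outside the peak period.
            plan[load["name"]] = "shift past " + peak_window
        elif remote_site_has_capacity:
            # Load migration: move it to another data center.
            plan[load["name"]] = "migrate"
        else:
            plan[load["name"]] = "keep"
    return plan

loads = [
    {"name": "batch-analytics", "critical": False, "deferrable": True},
    {"name": "nightly-backup",  "critical": True,  "deferrable": True},
    {"name": "web-frontend",    "critical": True,  "deferrable": False},
]
plan = handle_dr_event(loads, "14:00-18:00", remote_site_has_capacity=True)
```

Shedding and shifting, as noted above, are simple enough to do manually; it is the migration branch that demands the kind of automation Power Assure is advocating.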
Load migration may take place when power supply cannot meet demand and/or when the power price is very high, such as during peak time and on peak-price days. In the former case, migration may be triggered by DR signals from the utility. In the latter case, it would be very helpful to have real-time access to power pricing information where your data centers are. This requires complex decisions and processing. Power Assure has developed a system for software-defined power (PA SDP) to support this with software. It consists of a dashboard, automation, and access to power pricing.
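The two triggers just described, a DR signal from the utility and an unfavorable real-time price, can be combined into a single migration check. This is a minimal sketch, not Power Assure's actual logic; the overhead factor and the price figures are made up, standing in for whatever cost model would account for the overhead of moving loads.

```python
# Hedged sketch of a migration trigger: migrate when a DR signal
# arrives, or when the local power price exceeds a remote site's
# price by enough to cover migration overhead. The overhead_factor
# and all prices ($/kWh) are invented for illustration.

def should_migrate(dr_signal_active, local_price, remote_price,
                   overhead_factor=1.1):
    """Return True if loads should move to the remote data center."""
    if dr_signal_active:
        return True   # supply-driven: the utility asked us to curtail
    # Price-driven: remote must be cheaper even after a 10% overhead.
    return local_price > remote_price * overhead_factor

# Peak-price day: 0.18 > 0.12 * 1.1, so migrate.
assert should_migrate(False, local_price=0.18, remote_price=0.12)
# Small price gap: not worth the overhead, so stay.
assert not should_migrate(False, local_price=0.13, remote_price=0.12)
```

An actual SDP system would weigh many more factors (network cost, data gravity, SLA constraints), which is precisely why the decision needs software support.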
Load migration may be very difficult to implement because it assumes several conditions in advance. It requires:
- basic infrastructure
- interoperable mechanisms
- access to power pricing information
The basic-infrastructure assumption is that the operator has data centers spread across geographic areas and that each location has some mechanism akin to disaster recovery. Such sites can exploit their disaster recovery infrastructure because they are already networked to send the information and data needed to back up the primary data center. Since most major data centers have a disaster recovery program, this assumption is reasonable.
It is ideal to have a compatible environment among the data centers for load migration, but the technologies, data formats, protocols, and operational methods used at each data center tend to be very different from each other, even when they are owned by the same operator. PA SDP solves this problem by providing a method to absorb incompatibilities. The systems and technologies it supports are shown in the following figure.
Load migration consists of many complex, tedious steps and requires some kind of automation. If the environments of the two data centers are incompatible in any way, PA SDP bridges and absorbs the differences and smooths out the migration. PA SDP’s dashboard gives you an overview of the task.
Access to Power Pricing
Power pricing varies from one region to another and from one utility to another. It also fluctuates by time of day. To determine the least expensive way to power your data centers, it is vital to have access to nationwide or worldwide power pricing information. PA SDP incorporates such access.
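As a rough illustration of what such pricing access enables, the snippet below picks the cheapest site from a pricing feed. The feed dictionary is a stand-in for whatever real-time pricing source an SDP system would consume; the site names and $/kWh values are invented.

```python
# Illustrative sketch: given a real-time pricing feed (here faked
# as a dict of invented site -> $/kWh values), find the cheapest
# data center to run loads in right now.

def cheapest_site(price_feed):
    """Return the (site, price) pair with the lowest $/kWh."""
    return min(price_feed.items(), key=lambda kv: kv[1])

price_feed = {"us-east": 0.14, "us-west": 0.11, "eu-central": 0.19}
site, price = cheapest_site(price_feed)   # ("us-west", 0.11)
```

Because prices fluctuate by time of day, this comparison has to run continuously, which again argues for doing it in software rather than by hand.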
The two parts of this blog have discussed power issues at data centers and one solution to them, software-defined power, but only at a very high level. I plan to write about these topics in more detail in upcoming blogs.