Game Changer? Beyond Realizing Hybrid Clouds—Part 3

This is a continuation of my previous blogs on hybrid clouds. In part 1 and part 2, I discussed CloudVelocity and its technologies for implementing a hybrid cloud. Now that we know a hybrid cloud can be successfully implemented, what does that mean for us? How does it change the IT world? Note that the following discussion assumes a perfect hybrid cloud can be implemented; the rant that follows is not based solely on the current or future technologies of CloudVelocity.

What does it mean?

How does the IT scene change with the implementation of hybrid cloud computing? First, let's consider private clouds only. In what follows, I will use "enterprise data center" and "private cloud" interchangeably for ease of discussion, although not all data centers have been converted to private clouds yet. Some companies have several data centers (and therefore private clouds) in the US, or even worldwide, across multiple time zones. So even before talking about hybrid clouds, this technology lets us combine those physical data centers into a single logical private cloud: a set of physical private clouds (data centers) that can be treated as one entity.

Logical private cloud

With a logical private cloud, using some technologies from CloudVelocity, we can move applications that may consist of physical machines (PMs: not virtualized) and virtual machines (VMs) anywhere and anytime we choose. In the following figure, we can pass PMs and VMs back and forth seamlessly between our home cloud and any other private cloud of our company. Although the figure shows only a subset of the possible interactions, we can potentially move PMs and VMs in any way that makes sense under some predetermined criteria. It may be that one PM or VM is passed to another cloud, then to a third, and so on. Managing your PMs and VMs under such a new paradigm would become pretty complex.

PMs and VMs move around only among private clouds owned by the same organization. A set of such private clouds may be considered as one logical private cloud.

This means we can finally implement several things discussed in part 1, including:

Follow the sun

In a given workday, access to software applications and utilities running on servers and other IT equipment (and therefore clouds) fluctuates. Access starts to grow as people begin their day's activities in the morning, hits a peak, then subsides towards the evening; it is lowest during the night. So you might want to move your PMs and VMs to other time zones where the sun still shines and more load needs to be processed. We can expect better response times when loads and processing units are close to each other.
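As a minimal sketch of the idea (the data center names, time zones, and business hours here are all invented for illustration), a follow-the-sun scheduler might simply ask which data centers are currently in business hours:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical data centers and their time zones (illustrative only).
DATA_CENTERS = {
    "us-west": "America/Los_Angeles",
    "eu-central": "Europe/Berlin",
    "ap-tokyo": "Asia/Tokyo",
}

def business_hour_centers(now_utc=None, start=9, end=18):
    """Return the data centers whose local time falls in business hours,
    i.e. the candidates that should receive PMs and VMs right now."""
    now_utc = now_utc or datetime.now(timezone.utc)
    return [
        name for name, tz in DATA_CENTERS.items()
        if start <= now_utc.astimezone(ZoneInfo(tz)).hour < end
    ]

# e.g. at 10:00 UTC on 2024-01-15 it is 11:00 in Berlin,
# 02:00 in Los Angeles, and 19:00 in Tokyo:
# business_hour_centers(datetime(2024, 1, 15, 10, 0, tzinfo=timezone.utc))
# → ["eu-central"]
```

A real system would of course weigh current load and migration cost as well, not just the clock.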

Follow the moon

In many countries, power is cheaper during off-hours (normally nights, hence "follow the moon"). Sending your PMs and VMs to such time zones may reduce your operating cost. Additionally, even within the US, power cost can fluctuate hourly where a variable power pricing model is applied to data centers. By shifting your VMs to a data center in the region with the lowest power cost, you may save on running costs.
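One wrinkle is that moving PMs and VMs is not free, so a follow-the-moon policy should only migrate when the projected power savings beat the cost of the move. Here is a sketch under that assumption (prices, load, and migration cost are all invented numbers):

```python
def follow_the_moon(current, prices, load_kw, horizon_h, migration_cost):
    """Pick the region with the cheapest power, but move the PMs/VMs only
    if the projected savings over the planning horizon exceed the one-time
    migration cost. Prices are in $/kWh; load is in kW; horizon in hours."""
    target = min(prices, key=prices.get)
    savings = (prices[current] - prices[target]) * load_kw * horizon_h
    return target if savings > migration_cost else current

# Hypothetical hourly prices: moving 100 kW for 8 hours saves
# (0.18 - 0.09) * 100 * 8 = $72, which beats a $50 migration cost.
prices = {"us-west": 0.18, "us-east": 0.12, "eu-north": 0.09}
# follow_the_moon("us-west", prices, 100, 8, 50) → "eu-north"
```

With hourly variable pricing, the same check would simply rerun each hour as the price table updates.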


Load balancing

Just as we load-balance among servers within a data center, we may want to distribute loads across several private clouds. That way, when one data center gets very busy, loads can be passed to other data centers to share the burden. How PMs and VMs are moved should be governed by predefined metrics that optimize operations for factors such as operating cost, response time, and throughput. Each organization has its own operational goals, and the metrics should be tailored accordingly.
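To make "predefined metrics" concrete, here is one possible sketch (the cloud names, metric values, and weights are all invented): score each cloud by a weighted combination of normalized cost, response time, and throughput, then place the load on the best-scoring cloud.

```python
def score(metrics, weights):
    """Weighted score for one cloud; lower is better. Cost and latency
    are penalties; throughput is a benefit, so it enters negatively.
    All metrics are assumed pre-normalized to the 0..1 range."""
    return (weights["cost"] * metrics["cost"]
            + weights["latency"] * metrics["latency"]
            - weights["throughput"] * metrics["throughput"])

def best_cloud(clouds, weights):
    """Pick the cloud with the lowest weighted score."""
    return min(clouds, key=lambda name: score(clouds[name], weights))

# Hypothetical normalized metrics for two data centers:
clouds = {
    "dc-sf":  {"cost": 0.9, "latency": 0.2, "throughput": 0.7},
    "dc-sac": {"cost": 0.4, "latency": 0.5, "throughput": 0.6},
}
weights = {"cost": 0.5, "latency": 0.3, "throughput": 0.2}
# best_cloud(clouds, weights) → "dc-sac" (0.23 vs. 0.37 for dc-sf)
```

Tailoring the policy to an organization's goals then amounts to adjusting the weights: a cost-sensitive shop raises the cost weight, a latency-sensitive one raises the latency weight.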

Cloud bursting

Cloud bursting is related to load balancing, although it is not the same. When load increases in a private cloud, we may want to move all or part of it to a public cloud for on-demand processing; this is known as cloud bursting. The PMs and VMs processing the load can be moved to a public cloud to continue processing there, and when the load subsides, the PMs and VMs on the public cloud can be disabled. There has been a lot of talk about cloud bursting, but now it can become a reality. We need a good automated system to move PMs and VMs and to enable and disable them as needed, along with a good policy to drive it.
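The enable/disable cycle above can be sketched as a simple hysteresis controller (the thresholds are illustrative, not a recommendation). Using separate high and low water marks, rather than one threshold, keeps the system from flapping public-cloud capacity on and off when load hovers near the trigger point:

```python
class BurstController:
    """Burst to the public cloud above a high-water mark of private-cloud
    utilization, and release the public-cloud PMs/VMs again only after
    utilization drops below a lower low-water mark (hysteresis)."""

    def __init__(self, high=0.8, low=0.5):
        self.high, self.low = high, low
        self.bursting = False

    def update(self, utilization):
        """Feed in current private-cloud utilization (0.0..1.0);
        returns True while public-cloud capacity should be active."""
        if not self.bursting and utilization > self.high:
            self.bursting = True   # spin up PMs/VMs in the public cloud
        elif self.bursting and utilization < self.low:
            self.bursting = False  # release the public-cloud capacity
        return self.bursting
```

In a real deployment the `update` calls would be driven by a monitoring loop, and the enable/disable actions would invoke whatever migration machinery the hybrid cloud provides.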

Fail-over/disaster recovery

The San Francisco Bay Area will have a major earthquake some day for sure, and when it happens, much of the existing infrastructure, including data centers, will be unusable. If we have a way of duplicating what we are running in our primary data centers at a secondary site far enough away (such as the Sacramento area, a little more than 80 miles from the Bay Area) and transferring execution state information intact to the distant site, processing could proceed without interruption.

Super logical private cloud

With this technology, we do not have to consider the boundary between private and public clouds either. So the logical private cloud can include public clouds, becoming a super logical private cloud, or what I call a supercloud.

A green oval depicts a private cloud, and a light-blue one represents a public cloud.

This configuration would make managing PMs, VMs, and clouds much more complex. We can move our PMs and VMs between private clouds, between private and public clouds, and among public clouds. We will no longer be restricted to moving between one cloud and another (a one-to-one move) but can implement one-to-many and many-to-many moves as well. It will then become necessary to develop a system that allows automation. As we involve many private and public clouds of various implementations, we will not be able to easily track how to optimize such moves. For that, we will probably need a policy based on predefined metrics. Cost may be the number one factor, but at the same time we want to minimize response time and maximize the productivity of developers scattered around the globe.

Also, note that many superclouds may share the same private and public clouds. This means that loads at each private and public cloud could fluctuate over time. So depending upon how busy each cloud is, we may want to dynamically alter how we form a super logical private cloud for optimization.

By the way, when a supercloud is developed and deployed, will we call it a supercloud or simply a cloud? The IT folks who follow us may take it for granted and consider it a normal IT deployment and execution environment. Throughout IT history, when a technology or method becomes transparent as part of an overall system, that is when we can say it has really matured.

Who uses hybrid clouds and benefits from them?

I can think of three parties, although there may be more.

Enterprise end-users

Enterprises that have their own private clouds can extend them to public clouds, producing hybrid clouds that exploit the benefits discussed above.

Data center providers

If you are a colo provider, you can sell extra services at your center to realize hybrid clouds for your clients. There are different levels of providers: some may simply rent out space, while others provide both equipment and services. Some may provide both private and public clouds at the same data center. For them, this is a perfect tool to increase revenue.

Third-party consultants/service companies

If a colo provider does not want to provide any service other than space, third parties with hybrid cloud technology can help end-users implement hybrid clouds.

Energy consideration

Finally, my blog always ends with a question about what the subject means for energy efficiency. Although inconclusive, there has been some discussion about whether cloud computing is more energy efficient than its predecessors. I think it depends on whose view you take. If you are a user, you pass some or all of your computing needs, along with support staff, software, hardware, power, cooling, water, and other things, to your cloud provider on an on-demand basis. Since you can reduce your investment in these, it is certainly energy efficient for you. It may or may not be for your provider. A provider whose facilities are poorly utilized may not be profitable or energy efficient at all: it still has to maintain a large staff, a large space, dedicated IT and facilities equipment, facilities support such as cooling, and so on. That cannot be very energy efficient.

When a hybrid cloud becomes a supercloud and energy becomes scarcer, we may need to look at energy consumption and energy efficiency at the supercloud level without distinguishing private from public clouds. That may sound silly at this point, because the US seems to be doing fine for the foreseeable future with shale gas and oil, but who knows what may happen next?

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with various functional expertise, including roles in analyst, writer, CTO, VP Engineering, general management, sales, and marketing in diverse high-tech and cleantech industry segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in the area of the IT application to energy, such as smart grid, green IT, building/data center energy efficiency, and cloud computing.

