Game Changer? Beyond Realizing Hybrid Clouds—Part 1

When cloud computing was first introduced, I did not expect it to develop to the point where it would change the IT world so greatly. Public cloud came first, then private cloud, and then hybrid cloud became the center of discussion.

Some people project 2013 will be the year of the cloud, and hybrid clouds are talked of as one of the trends for the year to come. See here, here, here, and many other places.

As I said before, much of hybrid cloud is just talk and not reality, and there have been several showstoppers before now.

Many of the factors making it hard to implement hybrid clouds are technical:

Technical problems

Virtual machine (VM) file format

  1. Public cloud: Amazon Web Services (AWS) was the first to implement a public cloud, and AWS is now the de facto standard for public cloud. It uses its own proprietary file format (Amazon Machine Image, a.k.a. AMI) for running virtual machines on the Xen hypervisor. The AMI format is not the same as the original Xen VM format, so even if you are running the Xen hypervisor for your cloud, you cannot enjoy interoperability with AWS without converting your VM’s file format. For example, Citrix’s virtualization environment is based on Xen, but its file format is the virtual hard disk (VHD), which is also the file format for Microsoft’s virtual machines.
  2. Private cloud: In the enterprise market (private cloud), VMware’s VM file format (VMDK) is the de facto standard.
  3. Hybrid cloud is an attempt to use both private and public clouds to meet IT demands, optimizing between in-house and outsourced infrastructure as needed. So when we want to move VMs back and forth between public and private clouds, we need a translation each time we cross the cloud boundary. That may not be very hard, because translation tools are readily available from vendors like Amazon and VMware (vmkfstools). Moving VMs that are not in execution may be straightforward, but VMs in execution are generally hard to move with their execution state intact. See the next section.
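The format translation described above can be scripted with the open-source qemu-img tool, which reads VMDK and writes VHD (qemu-img calls that format "vpc"), among others. A minimal sketch, assuming qemu-img is installed; the wrapper functions and file names are illustrative, not any vendor's API:

```python
# Sketch: converting a VM disk image across formats with qemu-img.
# qemu-img is a real open-source tool; the wrapper below is an
# illustrative assumption, not part of any vendor toolchain.
import subprocess

# Map friendly names to qemu-img format names: "vmdk" (VMware),
# "vpc" (VHD, used by Citrix/Microsoft), "raw" (classic Xen-style images).
FORMATS = {"vmdk": "vmdk", "vhd": "vpc", "raw": "raw"}

def convert_cmd(src_path, src_fmt, dst_path, dst_fmt):
    """Build the qemu-img command line for a disk-image conversion."""
    return ["qemu-img", "convert",
            "-f", FORMATS[src_fmt],   # source format
            "-O", FORMATS[dst_fmt],   # output format
            src_path, dst_path]

def convert(src_path, src_fmt, dst_path, dst_fmt):
    """Run the conversion (requires qemu-img on the PATH)."""
    subprocess.run(convert_cmd(src_path, src_fmt, dst_path, dst_fmt),
                   check=True)
```

Note that this handles only the disk image; metadata such as network configuration still has to be mapped separately for each cloud.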

Physical movement of VMs

  • If we want to exploit public and private clouds for an application in execution, that execution instance may be transported between two or more clouds to find the most suitable execution environment. One big issue is the distance between clouds. VMware’s vMotion allows you to transport your VM up to something like 100 km (about 62 miles) but no farther. With this physical restriction, what you can do with hybrid cloud may be limited by the distance between clouds.

Various support environments

  • Cloud is not just virtualization but needs a comprehensive environment, such as management and support, including tools and security considerations. Each cloud tends to come with its own environment and idiosyncrasies, so what you can do easily in one cloud may not be as easy in another cloud. This would make managing a hybrid cloud cumbersome.

To date, most discussions on hybrid cloud have been at a very abstract level and not at all concrete. People have talked about what we could do with hybrid cloud without referring to any concrete implementation. Recently, I came across yet another brand-new cloud company that claims to have solved the aforementioned problems. Greg Ness recently sent me an email with a press release and wanted to show what CloudVelocity, his new company, is doing in the area of hybrid cloud.

I am by no means an expert in hybrid cloud computing, or any kind of cloud computing for that matter, but let me try to review how hybrid computing is implemented with their technologies. To support hybrid cloud, VMs need to move back and forth between private and public clouds. How can we implement such a move? Because an execution space is not shared between a public and a private cloud, we cannot literally move a VM across clouds. What we do instead is make a copy of a VM executing in one cloud and transport its execution state to the cloned VM in the other cloud. Then we disable the original VM and enable the clone. If a VM is not in execution, this is not that hard; if it is in execution, it is much harder.
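The copy-and-switch move just described can be sketched in a few lines. Everything here (the `VM` class, its fields, and `move_vm`) is a hypothetical illustration of the idea, not any vendor's API:

```python
# Sketch of the copy-and-switch "move": clone the VM on the target
# cloud, copy its execution state, then cut over. All names here are
# hypothetical, for illustration only.
class VM:
    def __init__(self, name, state=None):
        self.name = name
        self.state = state or {}   # memory/disk state, abstracted away
        self.running = False

def move_vm(original, target_cloud):
    """'Move' a running VM across a cloud boundary via clone-and-cutover."""
    clone = VM(original.name + "@" + target_cloud)
    clone.state = dict(original.state)  # transport execution state
    original.running = False            # disable the original...
    clone.running = True                # ...and enable the clone
    return clone
```

The hard part, glossed over in the single `dict(...)` copy, is capturing a *running* VM's state consistently while it keeps executing.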

If both private and public clouds are implemented with the same technologies and the distance is less than, say, 100 km, the same VM could be transported with a utility like vMotion. But in most cases, two cloud environments are not the same (see the technical problems described above), and the distance could be greater. Also, you can move only virtualized applications but not traditionally maintained applications, because you cannot assume all the applications have been virtualized into a VM format.

We need to have carbon copies of VMs and non-VM versions of applications (that need to be virtualized) on the other side. That means you need to have carbon copies of your applications running on a public cloud. This sounds like a disaster recovery (DR) system.

Disaster recovery/fail-over system

In such a system, you duplicate the applications running at the primary location and operate them at a secondary location in one of several configurations, including active-active and active-passive. Active-active means that the machines (and thus the applications) are live at both the primary and the secondary locations at the same time, with data being copied from the primary to the secondary site. In this scenario, when the primary location can no longer operate for any reason, the secondary location can take over seamlessly. An active-passive configuration may not guarantee complete synchronization, because the passive copy at the secondary location does not run until the primary location can no longer support the applications.
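The active-passive behavior can be sketched as a loop that keeps serving from the primary until a health check fails and then promotes the secondary. The function and its injected health checks are illustrative assumptions (in practice the check would be a heartbeat or ping):

```python
# Minimal active-passive failover sketch. The health checks are passed
# in as booleans for testability; a real system would probe the
# primary over the network.
def run_until_failover(health_checks):
    """Return which site is active after processing the health checks."""
    active = "primary"
    for healthy in health_checks:
        if active == "primary" and not healthy:
            active = "secondary"   # promote the passive secondary
    return active
```

Note that nothing here re-promotes the primary; failing back is a separate (and usually manual) operation in most DR setups.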

In any event, if we duplicate the whole thing for the secondary site, as in the case of DR in an active-active fashion, the duplicated copies are always in the secondary site with dedicated servers. This situation is the farthest from cloud computing in spirit, especially for public clouds.

What we need is a solution like this:

  • Copies on the other side made only when needed (on-demand).
  • Noninteroperability problems overcome:
      • Resolve VM file format and other incompatibilities among major cloud systems, such as AWS, Rackspace, Microsoft, and OpenShift.
      • Handle physical vs. virtual applications in an IaaS cloud environment.

Now back to CloudVelocity. I visited Greg Ness and Rajeev Chawla, CEO, at their headquarters in Santa Clara. They claim to have implemented a solution to solve the problems discussed above.

From left: Rajeev Chawla (CEO) and Greg Ness (VP Marketing). See here for their bios.

They have developed a comprehensive system for implementing hybrid cloud that they call One Hybrid Cloud Platform (OHCP), which is depicted in the following picture. Applications move across the cloud boundary in five steps:

  1. Host discovery—Inventory your private cloud (data center), which consists of all the pertinent IT hardware and software.
  2. Blueprinting—Create a database of how the discovered components are put together.
  3. Cloud provisioning—Duplicate and create VMs on the target cloud (translating VMs and virtualizing physical applications if necessary).
  4. Synchronization—Synchronize VMs between the two clouds.
  5. Service initiation—Let the duplicated VMs take over and disable the original VMs.
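The five steps above could be orchestrated as a simple pipeline. Every function in this sketch is a hypothetical stand-in for illustration; none of this is CloudVelocity's actual API:

```python
# Sketch of the five-step migration pipeline described above.
# All functions are hypothetical stand-ins, not CloudVelocity's API.

def discover_hosts(datacenter):
    # 1. Host discovery: inventory the private cloud.
    return list(datacenter["hosts"])

def build_blueprint(hosts):
    # 2. Blueprinting: record how the discovered components fit together.
    return {"hosts": hosts, "dependencies": []}

def provision(blueprint, target_cloud):
    # 3. Cloud provisioning: create matching VMs on the target cloud
    #    (converting formats / virtualizing physical hosts as needed).
    return [f"{h}@{target_cloud}" for h in blueprint["hosts"]]

def synchronize(hosts, clones):
    # 4. Synchronization: keep each clone current with its original.
    return dict(zip(hosts, clones))

def cut_over(hosts, clones):
    # 5. Service initiation: clones take over; originals are disabled.
    return {"active": clones, "disabled": hosts}

def migrate(datacenter, target_cloud):
    hosts = discover_hosts(datacenter)
    clones = provision(build_blueprint(hosts), target_cloud)
    mapping = synchronize(hosts, clones)
    return cut_over(hosts, list(mapping.values()))
```

The interesting engineering is hidden inside steps 3 and 4, which is where the format-translation and state-synchronization problems from earlier in this post actually get solved.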


CloudVelocity’s comprehensive One Hybrid Cloud Platform.

This sounds easy. How do they do this? That will be covered in Part 2.

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with various functional expertise, including roles in analyst, writer, CTO, VP Engineering, general management, sales, and marketing in diverse high-tech and cleantech industry segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in the area of the IT application to energy, such as smart grid, green IT, building/data center energy efficiency, and cloud computing.
