Game Changer? Beyond Realizing Hybrid Clouds—Part 2

This continues the discussion of CloudVelocity’s hybrid cloud technology. In this blog, I would like to talk about what’s under the hood.

Some technical details

As a former technologist, I wanted to open the hood and find out more about the underlying technologies. For this, Anand Iyengar, CloudVelocity’s founder and CTO, gave me a chalk talk.

Anand Iyengar

Because this is not a white paper detailing the technology, I only describe it at my layman’s level. However, it is such an intriguing technology that I’m accepting Anand’s offer for further discussion and will write more about it in the future.

Anand elaborated on the details, but I made a simpler diagram to fit the space. It is not that much different from the picture above.

Virtual machines (VMs) move between a typical enterprise private cloud (mostly VMware-based) and a public cloud (typically Amazon AWS). (Source: CloudVelocity)


Let’s take a quick look at the architecture:

Private cloud

  • We first look at your own data center or colocation facility (the private cloud). In modern software systems, an application rarely runs on a single server; instead, it spans multiple physical and virtual machines, so we call it a multisystem application. The configuration differs according to usage and design, but it typically consists of load balancers, web servers, application servers, and sometimes a cluster of other servers.
  • This is illustrated in the figure above. To save space, I drew only two machines, S1 and S2. The multisystem application typically uses a database, file systems mounted from a closed-box NFS server system (NFS1), and services from an LDAP server (LDAP). Everything in the public cloud is a copy of what is in the private cloud, including NFS1. Note that NFS serves files locally but not across the cloud boundary. Moreover, the private cloud may contain a server, such as an LDAP server, that one does not want copied to the public cloud but kept in the private cloud for security reasons.
  • There are virtual appliances (CloudVelocity Nexus Site Manager for the private cloud and CloudVelocity Cloud Manager for the public cloud) that together keep the cloud site images synchronized with the most recent changes to systems in the private cloud. CloudVelocity uses the term appliance to emphasize its dedicated function. CloudVelocity Nexus may run on a physical server, while CloudVelocity Cloud Manager runs as a virtual machine.
  • Let’s further assume that S1 (in the VMDK file format) is virtualized, but neither S2 nor DB1 is virtualized.

Public cloud

  • Everything in the public cloud is a mirror image of what is in the private cloud. The public cloud is populated by copying what is in the private cloud. An initial copy is made for each system, and updates are sent afterwards.

 A. System S1, which is virtualized, needs to be copied to the public cloud. S1 is copied via the link to the public cloud; if a copy is left over from a previous run, only the differences are copied. It is converted to the AMI format automatically. S2 must likewise be copied via the link to the public cloud. Like S1, unless a copy is left over from a previous run, it gets virtualized to run in the AMI format.

B. Systems DB1 and NFS1, which are physical servers, go through the same process. They too are automatically virtualized to run on AWS in the AMI format.

  • The two clouds are linked over the Internet or a dedicated connection, secured with SSL.
  • When any of the systems are no longer necessary, they can be disabled and deleted, or retained for future use. The copy may be retained to minimize copying time in the future.
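The "initial copy plus differences" idea above can be sketched as a block-hash comparison, rsync-style: hash fixed-size blocks of each image version and ship only the blocks whose hashes changed. This is my own illustration of the concept, not CloudVelocity's actual protocol; the block size and function names are hypothetical.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of a disk image."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Return indices of blocks that differ between two image versions.

    Only these blocks need to cross the link to the public cloud;
    the initial copy sends everything, later syncs send the deltas.
    """
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

# Example: a 4-block image where only block 2 changed
old_img = bytes(BLOCK_SIZE * 4)
new_img = bytearray(old_img)
new_img[2 * BLOCK_SIZE] = 0xFF
print(changed_blocks(old_img, bytes(new_img)))  # → [2]
```

The same comparison explains why retaining an old copy in the public cloud saves time: a retained copy turns the next full transfer into a small delta transfer.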

A high-level description of how those components work together follows. The actual workings are much more complex, but I have simplified them for this presentation.


  • CloudVelocity Nexus inventories all the pertinent information regarding computing power in the private cloud, including applications and supporting servers, such as file systems and databases. The configuration information is stored in a proprietary file format.
  • Inventory information is passed to the CloudVelocity Cloud Manager in the public cloud. This appliance runs continuously on AWS as a virtual machine (in the AMI format). Storage and computing time for this appliance are charged per AWS pricing, but its size is negligible at several hundred kilobytes, so it does not cost much. Once Cloud Manager receives the configuration information, storage volumes are allocated and populated for each system without starting it. This reduces activation time for the public cloud counterparts. EC2 charges more for computing than for storage, so this design is a good compromise: it reduces copying time while saving on EC2 computing charges.
  • Starting the systems in the public cloud typically takes three to five minutes, which is the time required to boot up a VM in the AWS cloud. They are started in parallel.
  • The systems may be disabled when not needed in the public cloud. If the user expects to need them again soon, a copy can be kept around; otherwise it can be deleted to save AWS storage charges.
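The parallel bring-up described above can be sketched with a thread pool: because all pre-populated systems boot at the same time, the total bring-up time is roughly one VM boot, not the sum of all boots. Here `start_system` is a hypothetical stand-in for a cloud provider's launch call, and the 0.1 s sleep stands in for the 3–5 minute boot:

```python
import concurrent.futures
import time

def start_system(name: str) -> str:
    """Placeholder for booting one pre-populated system in the
    public cloud; the real call would be the provider's launch API."""
    time.sleep(0.1)  # stand-in for the 3–5 minute VM boot
    return f"{name}: running"

systems = ["S1", "S2", "DB1", "NFS1"]

t0 = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(start_system, systems))
elapsed = time.time() - t0

print(results)
print(f"parallel ≈ {elapsed:.1f}s vs serial ≈ {0.1 * len(systems):.1f}s")
```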

Application areas:

      1. Cloud fail-over: If the private cloud goes down for any reason but the operation cannot be halted, a full, earlier copy of the application systems may be started in the public cloud to take over the operation. This is called cloud fail-over and can be used for disaster recovery and for implementing features like follow-the-sun and follow-the-moon.
      2. Development and testing sandboxes: More than one full copy of the application can be started simultaneously in the public cloud, while the application is still running in the private cloud. These copies are fully sandboxed and can be used for development or testing.
      3. Complete move: Because of data center space constraints or for other reasons, the systems in the private cloud may be cloned to the public cloud and the private-cloud originals disabled.
      4. Cloudbursting: This allows extending computing power in the private cloud by enabling and cloning computing power in the public cloud, if a load surge takes place. This can be accomplished without losing data integrity in the private cloud, because two appliances can tunnel update requests back to the local site. Any changes made on the public cloud are constantly sent back to the private cloud for data consolidation, so when the load surge subsides and the copies in the public cloud are taken down, data integrity is maintained.
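The write-back behind cloudbursting can be sketched in a few lines: the public-cloud copy queues its changes, and the tunnel between the two appliances replays them against the private-cloud store, so nothing is lost when the burst copies are taken down. The names and data structures here are illustrative stand-ins, not CloudVelocity's API.

```python
# Authoritative copy in the private cloud, and its burst-time clone.
private_store: dict = {"orders": 100}
public_store: dict = dict(private_store)
change_log: list[tuple] = []  # updates made in the public cloud

def public_write(key, value):
    """A write handled in the public cloud during a load surge."""
    public_store[key] = value
    change_log.append((key, value))  # queued for the tunnel

def consolidate():
    """Replay queued public-cloud changes onto the private cloud."""
    while change_log:
        key, value = change_log.pop(0)
        private_store[key] = value

public_write("orders", 150)
public_write("refunds", 3)
consolidate()
print(private_store)  # → {'orders': 150, 'refunds': 3}
```

After `consolidate`, the private cloud holds every change made during the surge, so the public-cloud copies can be deleted without losing data integrity.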

Patent-pending technology

Anand said that two technologies in One Hybrid Cloud Platform (OHCP) are unique, and CloudVelocity is applying for a patent for each.

The first has to do with synchronizing two data stores via the two appliances, which hold the inventory of computing equipment in both clouds. I will not go into detail, but according to Anand, replicating the data and keeping the two sides synchronized takes real work. Compare this with vMotion: during a switchover between the primary and secondary copies of a VM, pages dirtied on the primary are continuously sent to the secondary for synchronization. This requires fast (about 5 to 10 ms) communication between the primary and the secondary, but it allows, say, a game running on one server to continue running on another server after the move. OHCP instead sends all the changes at once in the form of a file, which makes it possible to use a slower, SSL-encrypted connection such as the Internet. OHCP does not, however, support moving a running game this way.
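A deliberately crude back-of-the-envelope model shows why file-based batching tolerates a slow link while page-level synchronization does not. The numbers below are my own illustrative figures, not vendor data, and the model charges each dirty page one round trip, which overstates real page-sync cost but captures the latency sensitivity:

```python
pages_dirtied = 10_000            # pages changed during a switchover
page_size = 4096                  # bytes per page
rtt_lan, rtt_wan = 0.005, 0.080   # 5 ms LAN vs 80 ms Internet RTT
wan_bandwidth = 10e6 / 8          # 10 Mbit/s Internet link, in bytes/s

# Page-by-page sync: crudely, each dirty page costs one round trip,
# so total time scales with latency.
page_sync_lan = pages_dirtied * rtt_lan
page_sync_wan = pages_dirtied * rtt_wan

# One batched change file: a single round trip plus transfer time,
# so total time scales with bandwidth instead.
change_file = pages_dirtied * page_size
batched_wan = rtt_wan + change_file / wan_bandwidth

print(f"page sync, LAN: {page_sync_lan:7.1f} s")
print(f"page sync, WAN: {page_sync_wan:7.1f} s")
print(f"batched,   WAN: {batched_wan:7.1f} s")
```

Under these assumptions, the batched transfer over the Internet beats even the LAN page-sync figure, which is the design point OHCP exploits by sending changes as a file.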

The second concerns letting the duplicated copies of VMs in the public cloud access, over the connection, servers such as LDAP that remain in the private cloud. As noted before, for security reasons some servers and databases may not be duplicated in the public cloud, so VMs in the public cloud need to reach them in the private cloud.
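The idea can be sketched as a proxy chain: instead of copying the sensitive server, each cloned VM talks to a local forwarder that relays requests over the secured link back to the private cloud. The classes below are illustrative stand-ins for the pieces involved, not CloudVelocity's design.

```python
class PrivateLdap:
    """The real directory, which never leaves the private cloud."""
    def __init__(self):
        self._users = {"alice": "Engineering", "bob": "Finance"}

    def lookup(self, user):
        return self._users.get(user)

class SecureLink:
    """Stand-in for the SSL tunnel between the two appliances."""
    def __init__(self, backend):
        self._backend = backend

    def forward(self, request):
        return self._backend.lookup(request)

class CloudSideProxy:
    """What a cloned VM in the public cloud actually connects to;
    it looks like a local LDAP server but holds no directory data."""
    def __init__(self, link):
        self._link = link

    def lookup(self, user):
        return self._link.forward(user)

proxy = CloudSideProxy(SecureLink(PrivateLdap()))
print(proxy.lookup("alice"))  # → Engineering
```

The cloned VMs need no reconfiguration: they query the proxy exactly as they would the real server, while the directory data itself stays in the private cloud.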

OHCP vs. vMotion

After discussion with Anand, I came to understand that vMotion and OHCP address different problems but may overlap in some functionality. Both technologies move running systems from one cloud to another, but there is more to it. I summarize the differences in the following table.



  • Cloud requirements: OHCP works between heterogeneous physical or virtual systems and clouds; vMotion requires both clouds to run VMware.
  • Unit of synchronization: OHCP sends all changes at once as a file; vMotion synchronizes main memory pages and block storage.
  • Bring-up time: OHCP takes 3–5 minutes (VM boot time on AWS); vMotion takes a few seconds.
  • Connection requirements: OHCP needs nothing in particular (the Internet with SSL suffices); vMotion needs latency under 5 ms or distance under 200 km, with a fast, dedicated connection preferred.
  • Application areas: OHCP suits cloud fail-over, development/testing, migration, and cloudbursting; vMotion suits applications keen on quick switchover, within the same data center or over a relatively short distance.

Looking at the table above, it appears that the two technologies are not competing but can be complementary to each other. I will dig into them more in my future blogs.

By the way, I can try out their system free of charge. But wait! I am not ready. I do not have a reasonably sized private cloud myself, let alone an AWS setup. I probably need to consult some of my friends involved in the Silicon Valley Cloud Center.

(Continued in part 3, which will discuss energy efficiency through cloud computing and what it means to have a hybrid cloud.)

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with varied functional expertise, including roles as analyst, writer, CTO, and VP of Engineering, and in general management, sales, and marketing, across diverse high-tech and cleantech industry segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in applying IT to energy, including smart grid, green IT, building and data center energy efficiency, and cloud computing.

