What CIOs Need to Know about Cloud Computing

It is not too difficult to understand what a public cloud is and why it is useful. You outsource all of your computing and storage needs to somewhere else, and you pay only for what you use: more when you have a large load, less when your load subsides.

When I first heard the term private cloud, I did not get it. You do not outsource computing or storage resources but use your own. Even if a server or storage array sits idle, you still keep it in your building, and that costs you money. Something seemed to be missing. Fundamentally, public and private clouds are different beasts, although they may use the same technologies. In a way, the emergence of the private cloud, in contrast with the public cloud, has made the notion of cloud computing clearer.

Let’s look at this from a different point of view. If you look back at the progression of your data center in terms of energy efficiency, you used to dedicate one server to one application to make sure that application ran reliably and securely. Then came virtualization. Virtualizing your applications let you host multiple applications on a single server, consolidating many servers into one for higher utilization. Before virtualization, server utilization tended to be in the low teens, and the extra servers drove up power usage and costs. The next phase is automation of control. A bunch of virtualized servers does not form a cloud, but automation does: with proper management and automation, your virtualized data center becomes a cloud data center.
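
To make the consolidation argument concrete, here is a minimal back-of-the-envelope sketch in Python. All the numbers (10 servers, 12% and 60% utilization, 400 W per server) are my own assumptions for illustration, not figures from the panel.

```python
import math

# Illustrative consolidation arithmetic with assumed numbers (not from the article):
# 10 dedicated servers running at "low teens" utilization vs. the same work
# packed onto fewer virtualized hosts.
legacy_servers = 10
legacy_utilization = 0.12       # assumed per-server utilization before virtualization
target_utilization = 0.60       # assumed utilization target after virtualization
watts_per_server = 400          # assumed average power draw per physical server

# The useful work is unchanged; consolidation just packs it onto fewer hosts.
useful_work = legacy_servers * legacy_utilization                    # 1.2 server-equivalents
virtualized_servers = math.ceil(useful_work / target_utilization)    # 2 hosts

legacy_power = legacy_servers * watts_per_server                     # 4,000 W
virtualized_power = virtualized_servers * watts_per_server           # 800 W

print(f"hosts: {legacy_servers} -> {virtualized_servers}")
print(f"power: {legacy_power} W -> {virtualized_power} W "
      f"({1 - virtualized_power / legacy_power:.0%} reduction)")
```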

While I was thinking about this, Greg Ness, Chief Marketing Officer at Vantage Data Centers, sent me a link to a panel he moderated on this very topic. The video runs a little less than 30 minutes. Usually, you cannot do much in 30 minutes, but Greg covered several topics, including the definition of a cloud, public vs. private clouds, security, economic issues, and the future direction of IT as it relates to clouds. When you pack so much into a short session, the audience usually loses track of what’s going on and does not get much out of it. But Greg did a good job of moderating, and the panel conveyed a lot of useful information.

The following were on that panel:

Moderator:

  • Greg Ness, VP Marketing, Vantage Data Centers

Panelists:

  • James Barrese, CTO, PayPal
  • Winston Damarillo, Cofounder and CEO, Morphlabs
  • David Nelson, Chief Strategist, Cloud Computing, Boeing
  • Paul Strong, CTO, Global Field and Customer Initiatives, VMware
  • Don Pickering, CEO, OneOcean

I will summarize the panel discussion with my comments below. If you want, you can watch the video here; Greg’s session is listed at the bottom of the page. Unless indicated specifically by “ZK,” the opinions below are the panel’s.

Greg started the session by quoting Gartner’s observation that cloud computing is now moving from the height of inflated expectations into the trough of disillusionment in its hype cycle. (ZK: Gartner’s analysis was published in late 2010. The hype on cloud computing started with public cloud and software as a service (SaaS). In a way, public cloud is IaaS [computing first followed by storage]. Then came private clouds and platform as a service (PaaS). In 2010, neither private clouds nor PaaS were given much attention, though.)

The definition of clouds: The first topic was the definition of cloud computing. (ZK: In many past panel sessions on cloud computing, the discussion ended right here.) Cloud computing features include fast provisioning, pay-as-you-go, and instant scale-up and scale-down. The consensus of the panel was that cloud computing is not a new technology but a new business model for the delivery of IT services. (ZK: No disagreements, and so far so good.)

Public vs. private clouds: The next question was the difference between public and private clouds. Simply put, private clouds are an attempt to bring the technologies and operating model of public clouds, which sit outside the enterprise, inside the enterprise. What private clouds change includes faster provisioning and instant capacity adjustment; for example, provisioning time is cut from, say, nine months down to an instant. Once a new operating model is adopted and works better than the previous one, no one wants to go back. The economics play a role in the adoption of private clouds as well. Add the process changes that come with the new operating model, which make both developers and the people in charge of architecture more productive, and you understand why enterprises are embracing private clouds.

(ZK: In this segment, they did a good job of listing the merits of private clouds, and most people would probably agree with their opinions.) Then the discussion got more interesting and covered why and how enterprises move their computing from public clouds to private clouds, as Zynga recently did. (ZK: This was something new to me, and I found their opinion very interesting.) The bottom line is ROI and the utilization of the asset in question. If your utilization of the asset is high, you should own it; if not, you want to outsource it. A good analogy is car ownership. Most suburbanites like me own a car because they use it daily, while city dwellers have other means of transportation and may not. So it makes sense for suburbanites to own a car and for city dwellers to take a taxi when needed. This is reasonable and easy to understand.
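
The panel’s ROI argument reduces to a simple break-even calculation. The sketch below is my own illustration with made-up prices (OWN_MONTHLY_COST, RENT_HOURLY_RATE are assumptions, not the panel’s figures); the point is only that a high-utilization asset favors ownership and a low-utilization one favors pay-per-use.

```python
# A toy own-vs-rent break-even, analogous to the car example above.
# All prices are made-up assumptions for illustration only.
OWN_MONTHLY_COST = 600.0    # assumed amortized monthly cost of owning the asset
RENT_HOURLY_RATE = 2.0      # assumed pay-per-use rate (a taxi, or a public cloud)
HOURS_PER_MONTH = 730

def cheaper_option(utilization: float) -> str:
    """Return 'own' or 'rent' for a utilization fraction between 0.0 and 1.0."""
    rent_cost = utilization * HOURS_PER_MONTH * RENT_HOURLY_RATE
    return "own" if OWN_MONTHLY_COST < rent_cost else "rent"

for u in (0.05, 0.20, 0.40, 0.80):
    print(f"utilization {u:.0%}: {cheaper_option(u)}")
# Break-even sits at OWN_MONTHLY_COST / (HOURS_PER_MONTH * RENT_HOURLY_RATE) ~ 41%.
```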

Unfortunately, our life is not that simple. How do we handle spiky load increases, as in the case of disaster recovery or training, which may not happen very often? That is why we need a hybrid approach that uses both public and private clouds. The panelists advised using private clouds for predictable loads, because it is less expensive to do so, and public clouds for spiky and dynamic loads. (ZK: This sounds good, but here’s a problem. When people talk about a hybrid cloud, they talk as if virtual machines (VMs) running on a private cloud can be easily and seamlessly moved to a public cloud. As far as I know, it is not that simple. VMware solutions dominate the enterprise market, and Amazon’s AWS dominates the public cloud segment. There are a few variants of virtual machine image formats, and those used by VMware and Amazon AWS are not compatible. Unless the formats are translated, moving VMs between public and private clouds remains mostly theoretical.) That is why one of the panelists mentioned the need for cloud standardization.
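
As a minimal sketch of the placement policy the panel recommended (private cloud for the predictable baseline, public cloud for the spikes), here is a toy scheduler in Python. The capacity and demand numbers are hypothetical, and it deliberately ignores the VM image-format problem I raise above; it only illustrates the bursting logic.

```python
# Keep the predictable baseline on the private cloud and burst only the
# overflow to the public cloud. Figures are hypothetical.
PRIVATE_CAPACITY = 100  # assumed steady-state private-cloud capacity (arbitrary units)

def place_workloads(demand_by_hour):
    """For each hour, return (units kept private, units burst to public)."""
    placement = []
    for demand in demand_by_hour:
        private = min(demand, PRIVATE_CAPACITY)
        public = demand - private           # only the overflow goes public
        placement.append((private, public))
    return placement

# Predictable baseline around 80-95 units, with one spike (say, a DR drill).
demand = [80, 85, 90, 250, 95, 80]
for hour, (priv, pub) in enumerate(place_workloads(demand)):
    print(f"hour {hour}: private={priv:>3}, public burst={pub:>3}")
```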

Another interesting point was the simplification and specialization of computing patterns. A typical large enterprise runs something like 6,000 applications, and each application needs to be:

  • sourced
  • designed
  • integrated
  • deployed
  • managed
  • monitored
  • archived

To do this massive task over the past 25 to 30 years, 80–85% of the cost has gone to software and the people who manage it. Cloud changes this. For example, companies like eBay and PayPal have only a few patterns, which cuts the cost of maintaining them. When you do this, your opex is primarily energy, and the issue becomes where you run your loads and how you optimize the power/infrastructure ratio.

At this point, Greg interjected an interesting fact drawn from his discussions with area experts. When your power consumption is less than 500 kW, it makes sense to outsource your computing to a public cloud. But once your consumption goes over 500 kW, it becomes more economical to have your own infrastructure (i.e., a private cloud) so that you can tune it for your computing needs.
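
Greg’s 500 kW rule of thumb is essentially a fixed-vs-variable-cost break-even. The following sketch is my own illustration: the dollar rates are invented so that the crossover lands near 500 kW, which shows the shape of the argument rather than the experts’ actual figures.

```python
# A rough sketch of the 500 kW rule of thumb as a fixed-vs-variable break-even.
# All rates below are assumptions chosen for illustration only.
PUBLIC_RATE = 0.35           # assumed all-in public cloud cost, $ per kWh of load
PRIVATE_FIXED = 60_000       # assumed monthly fixed cost of owning a facility, $
PRIVATE_RATE = 0.18          # assumed marginal cost per kWh once you own it, $
HOURS_PER_MONTH = 730

def public_cost(load_kw):
    return load_kw * HOURS_PER_MONTH * PUBLIC_RATE

def private_cost(load_kw):
    return PRIVATE_FIXED + load_kw * HOURS_PER_MONTH * PRIVATE_RATE

for kw in (100, 300, 500, 800):
    pub, priv = public_cost(kw), private_cost(kw)
    winner = "private" if priv < pub else "public"
    print(f"{kw:>4} kW: public ${pub:>9,.0f}  private ${priv:>9,.0f}  -> {winner}")
# With these assumptions the crossover is at
# PRIVATE_FIXED / ((PUBLIC_RATE - PRIVATE_RATE) * HOURS_PER_MONTH) ~ 483 kW,
# i.e., roughly the 500 kW threshold mentioned above.
```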

Security: (ZK: In almost any cloud discussion, security is mentioned as the #1 inhibitor of cloud adoption.) Is cloud security a solved matter? One reason some people prefer a private cloud is that it appears to be more secure than a public cloud. Cloud security really comes down to data security, and it also has to address regulatory requirements that vary by geographic location; those are being addressed. Still, clouds need to be more secure. The panel’s analogy was peanut butter on a slice of bread: the wider you spread it, the thinner it gets. If security effort is spread across many organizations whose business is not security, it gets thin; if it is concentrated in one spot (a cloud) run by experts who do security as their core business every day, it works better.

The future of cloud: Vertical clouds with specializations such as financial services, health care, specific compliance requirements (e.g., aviation), and government data will emerge. Who will dominate the cloud market, big companies like Oracle or small startups? So many different patterns are required to satisfy the variety of users that one company probably cannot accommodate such a diverse set of requirements. For that reason, open-source clouds such as OpenStack need to be taken into consideration. Although OpenStack may be only 80% complete, it supports multiple technologies, such as hypervisors, work processes, and patterns.

In the future, when an enterprise user indicates what its core business is, a service provider like VMware will provide everything that is not strategic to that business and let the company focus on its core. PaaS will be a differentiator once both SaaS and IaaS become commodities with little differentiation, and an ecosystem will form around PaaS with specific intellectual property.

In 5 to 10 years, a lot could happen: massive amounts of memory and storage, low-power servers (e.g., 10 W computing), software-defined networks, support for multiple devices and platforms, global software development collaboration via pipelines, and standardization. These will surely change the way software is designed and deployed with PaaS.

(ZK: Even if you have read this far, I still recommend watching the video, because I omitted some points and paraphrased some nuances. The video covers a lot of current subjects and gives you a good sense of where cloud computing stands at this point. Moreover, I sometimes moderate panel sessions myself, and I learned a lot from this panel about how to do that.)

Zen Kishimoto

About Zen Kishimoto

Seasoned research and technology executive with varied functional expertise, including roles as analyst, writer, CTO, and VP of Engineering, as well as general management, sales, and marketing, in diverse high-tech and cleantech industry segments, including software, mobile embedded systems, Web technologies, and networking. Current focus and expertise are in applying IT to energy, such as smart grid, green IT, building/data center energy efficiency, and cloud computing.


