Cloud Readiness – Part 2: Understanding Public, Private, and Hybrid Cloud

John Cavanaugh
Vice President, Chief Technology Officer

Many organizations are rushing headfirst into the cloud without ever establishing or documenting their reasons for the move. Understanding your options, assessing your needs, and building a cloud strategy before migrating to a solution are critical to long-term success. The first step is understanding the available options: public, private, and hybrid cloud solutions.

Public Cloud

There are many good reasons to move to a cloud infrastructure and we have seen many of these through our interaction with large clients.  Chief among these are the following:

  1. Data centers (DCs) periodically require significant upgrades due to security, compliance, and regulatory concerns, while large amounts of new data create performance issues
  2. Maintaining and managing data centers is not a core competency
  3. Enterprises need to react swiftly to market demand and need flexibility in hosting services
  4. Their client base is growing beyond their existing geographic footprint, and building new DCs in distant geographies seems ill-advised

With the cloud, an enterprise offloads DC management and creates an environment where the hardware underlay is managed by the cloud or SaaS vendors. This simplifies facilities planning and moves the enterprise from a CapEx model to an OpEx model.

Private Cloud

At its simplest, a private cloud could be defined as a company’s on-premises DCs or colocation facilities. However, when you consider the reasons for going to public cloud (shown above), a private cloud built on legacy DCs seems contraindicated.

OpenStack, Red Hat OpenShift, and VMware became popular methods for in-house virtualization, but these did not initially address any kind of hybrid (public/private) use of public clouds. Furthermore, on-site and cloud toolsets are often completely different.

Hybrid Cloud

Hybrid cloud solutions emerged to address the ways in which public-cloud-only deployments became problematic. As enterprises rushed into the cloud, they relieved themselves of many burdens related to infrastructure, but were sometimes surprised by cloud sprawl and, in some cases, excessive and unexpected OpEx costs.

George E.P. Box once postulated that, “All models are wrong, but some are useful.” In essence, he argued that no solution is perfect. Experience has taught us that public cloud concepts can be problematic from a cost or performance standpoint in specific configurations.

A practical approach is to create a hybrid setup that addresses the same issues public cloud was trying to solve. Viewed this way, a hyperconverged systems architecture that seamlessly connects to the public cloud becomes very attractive.

VMware’s acquisition of Pivotal and subsequent integration of Pivotal Cloud Foundry is an example of a path from the enterprise to the cloud. An organization can develop and test on-prem or in the cloud, then model an application’s costs on-prem versus across various public cloud services. For consistency, VMware is supported directly in the public cloud, but not everyone is happy with the VMware licensing and cost model.

Cloud vendors such as AWS compete with services such as AWS Outposts, where AWS places equipment in an enterprise’s DC or colocation facility, but it remains centrally managed through AWS and, as a result, uses consistent toolsets.

Choosing between Cloud Only and a Hybrid Cloud Architecture

You need to assess your data sources, locations, and processing requirements to determine the optimal cost model.

  • Where is your data collected, stored, and processed?
  • Who uses the data?
  • What is it used for?

As an example:

Imagine that your operations team decided their best cost model was to push all security logs into a cloud repository. But they are only looking at their own team’s costs.

Then your threat team decides that they need to examine those logs with an on-premises system (it’s best-of-breed from their perspective).

This results in a very expensive run rate with the cloud service provider (CSP), because CSPs charge for data egress. A Fortune 500 firm can produce petabytes of logs per month, and the commercial rate for sending that data back to the security operations center (SOC) can easily exceed $100K per petabyte.

The hybrid approach would look at this issue holistically and could involve hosting the SOC systems in the cloud, or setting up an AWS Outposts deployment for the SOC so that it sees the raw data locally. Moving data back and forth between a CSP’s managed hybrid offering and its own upstream storage is often cost-advantaged over self-managed approaches.
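The trade-off above can be sketched with simple arithmetic. The following Python snippet is a hypothetical back-of-the-envelope model, not published CSP pricing: the egress rate echoes the roughly $100K-per-petabyte figure mentioned above, and the in-cloud analytics rate is purely an illustrative assumption.

```python
# Hypothetical cost sketch: exfiltrating all logs to an on-prem SOC
# vs. hosting SOC analytics alongside the data in the cloud.
# Both rates below are illustrative assumptions, not real CSP pricing.

EGRESS_RATE_PER_PB = 100_000        # assumed egress cost, $/PB (per the figure above)
IN_CLOUD_ANALYTICS_PER_PB = 30_000  # assumed cost to process logs in place, $/PB

def monthly_egress_cost(log_volume_pb: float) -> float:
    """Monthly cost of sending all logs back to an on-prem SOC."""
    return log_volume_pb * EGRESS_RATE_PER_PB

def monthly_in_cloud_cost(log_volume_pb: float) -> float:
    """Monthly cost of analyzing the same logs where they already live."""
    return log_volume_pb * IN_CLOUD_ANALYTICS_PER_PB

if __name__ == "__main__":
    volume = 3.0  # petabytes of logs per month
    print(f"Egress to on-prem SOC: ${monthly_egress_cost(volume):,.0f}/month")
    print(f"SOC hosted in-cloud:   ${monthly_in_cloud_cost(volume):,.0f}/month")
```

Even with these rough assumptions, the point is that the decision should be made on the combined run rate across teams, not on any one team’s budget in isolation.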

Other examples abound for hybrid solutions such as AWS Outposts:

  • Low latency compute
  • Local processing (even with cloud bursting – this can be a cost play)
  • Data residency (storing data locally for regulatory purposes)

Migrating to the cloud is very complex. Application teams are often spread across many constituencies, and, even when firms have them, enterprise and cloud architects rarely have the technical networking and security skills needed to see the whole picture. Having the right partner who can help you navigate all the choices is key.

To learn more, contact us about the assessments we can perform to address these concerns and improve your infrastructure.

Further posts in this series will explore these subjects and illustrate solutions.