# ExpressRoute

## The Concept of Cloud Adjacency

![Datacenter](datacenter.jpg)

Several years ago, when we first started theorizing about hybrid cloud, it was clear that to do our definition of hybrid cloud properly, you needed a fast, low-latency connection between your private cloud and the public clouds you were using. After talking to major enterprise customers, we identified five cases where this type of arrangement made sense and actually accelerated cloud adoption.

  1. Moving Workloads but Not Data - If you expect to move workloads between cloud providers, and between public and private clouds, to take advantage of capacity and price changes, moving the workloads quickly and efficiently means, in some cases, not moving the data.
  2. Regulatory and Compliance Reasons - Say the data is sensitive and must remain under corporate physical ownership while at rest, but you still want to use the cloud for applications around that data (including possibly front-ending it).
  3. Application Support - A case where you have a core application that can't move to the cloud due to support requirements (EPIC Health is an example of this type of application), but you have other surrounding applications that can move to the cloud yet need a fast, low-latency connection back to this main application.
  4. Technical Compatibility - A case where there is a technical requirement, say real-time storage replication to a DR site or very high IOPS, and Azure can't for one reason or another handle the scenario. In this case, that data can be placed on traditional storage infrastructure and presented to the cloud over the high-bandwidth, low-latency connection.
  5. Cost – There are some workloads that today are significantly more expensive to run in the cloud than on existing large on-premises servers. These are mostly very large compute platforms that scale up rather than out, especially in cases where a customer already owns this type of equipment and it isn't fully depreciated. Again, it may make sense to run the surrounding systems that fit on commodity two-socket boxes in the public cloud, while retaining the large four-socket, high-memory machines until they are fully depreciated.


All of these scenarios require a low-latency, high-bandwidth connection to the cloud, as well as a solid connection back to the corporate network. In these cases, you have a couple of options for getting this type of connection.

  1. Place the equipment in your own datacenter and buy high-bandwidth Azure ExpressRoute and, if needed, Amazon Direct Connect circuits from an established carrier like AT&T, Level 3, or Verizon (a provisioning sketch for the ExpressRoute side follows this list).
  2. Place the equipment in a carrier colocation facility and create a high-bandwidth connection from there back to your premises. This places the equipment closer to the cloud but introduces latency between you and the equipment. This works because most applications assume latency between the user and the application, but are far less tolerant of latency between the application tier and the database tier, or between the database and its underlying storage. Additionally, you can place a traditional WAN accelerator (Riverbed or the like) on the link between the colocation facility and your premises.
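
Either way, the ExpressRoute circuit itself is easy to script. Below is a minimal sketch using the Azure SDK for Python (azure-mgmt-network); the subscription, resource group, circuit name, provider, peering location, and bandwidth are all hypothetical placeholders for whatever your own carrier arrangement looks like.

```python
# A minimal sketch of provisioning an ExpressRoute circuit with the Azure
# SDK for Python (pip install azure-identity azure-mgmt-network).
# Every name and value below is an illustrative placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.express_route_circuits.begin_create_or_update(
    "rg-hybrid-lab",   # hypothetical resource group
    "er-colo-10g",     # hypothetical circuit name
    {
        "location": "westus",
        "sku": {
            "name": "Premium_MeteredData",
            "tier": "Premium",
            "family": "MeteredData",
        },
        "service_provider_properties": {
            "service_provider_name": "Equinix",    # your carrier or exchange
            "peering_location": "Silicon Valley",  # your peering location
            "bandwidth_in_mbps": 10000,            # a 10 Gb circuit
        },
    },
)
circuit = poller.result()

# The service key is what you hand to the connectivity provider so they
# can complete provisioning on their side of the circuit.
print(circuit.service_key)
```

Once the provider marks the circuit provisioned, you configure private peering and link it to a virtual network gateway; from that point, traffic to your VNets stays off the public internet.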


For simply connecting your network to Azure, both options work equally well. If you have one of the five scenarios above, however, the latter option (colocation) is better. Depending on the colocation provider, the latency will be low to very low (2-40ms depending on location). All of the major carriers offer a colocation arrangement that is close to the cloud edge.

At Avanade, I run a fairly large hybrid cloud lab environment, which allows us to perform real-world testing and demonstrations of these concepts. In our case we've chosen to host the lab with one of these colocation providers, Equinix. Equinix has an interesting story in that they are a carrier-neutral facility. In other words, they run the datacenter and provide a brokerage (The Cloud Exchange), but don't actually sell private line circuits; you connect through their brokerage directly to the other party. This is interesting because it means I don't pay a 10 Gb local loop fee, I simply pay for access to the exchange. Right now I buy two 10 Gb ports to the exchange, and over those I have 10 Gb of ExpressRoute and 10 Gb of Amazon Direct Connect delivered to my edge routers. The latency between a VM running in my private cloud in the lab and a VM in Amazon or Azure is sub-2ms. This is incredibly fast, fast enough to allow me to showcase each of the above scenarios.
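
That sub-2ms figure is easy to verify for yourself. The sketch below is a simple round-trip probe, not our lab tooling: it times a TCP send-and-echo against a listener you run on the far-side VM, and the host, port, and sample count are placeholders for your own endpoints.

```python
# Minimal TCP round-trip latency probe. Run a simple echo listener on the
# far-side VM first (e.g. `ncat -l 9000 --keep-open --exec /bin/cat`),
# then point this script at it. Host/port are hypothetical placeholders.
import socket
import statistics
import time

HOST, PORT, SAMPLES = "10.0.0.4", 9000, 50  # hypothetical far-side VM

rtts = []
with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send immediately
    for _ in range(SAMPLES):
        start = time.perf_counter()
        s.sendall(b"x")
        s.recv(1)  # wait for the one-byte echo to come back
        rtts.append((time.perf_counter() - start) * 1000)

print(f"median rtt: {statistics.median(rtts):.2f} ms, "
      f"p95: {sorted(rtts)[int(0.95 * SAMPLES)]:.2f} ms")
```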


We routinely move workloads between providers without moving the underlying data by presenting the data disks over iSCSI from our NetApp FAS to the VM running at the cloud provider. When we move the VM, we migrate the OS and system state, but the data disk is simply detached and reattached at the new location. This is possible due to the low-latency connection and is a capability that NetApp sells as NetApp Private Storage (NPS). This capability is also useful for the regulatory story of keeping data at rest under our control: the data doesn't live with the cloud provider, and we can physically pinpoint its location at all times. Further, this meets some of the technical compatibility scenario. Because my data is back-ended on a capable SAN with proper SAN-based replication and performance characteristics, I can run workloads that I may not otherwise have been able to run in the cloud due to IOPS, cost-for-performance, or feature challenges with cloud storage.
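
To make the detach/reattach step concrete, here is roughly what the reattach looks like from the destination VM. This is a sketch wrapping the standard Linux open-iscsi CLI, not NetApp's NPS automation, and the portal address and target IQN are placeholders for your own FAS configuration.

```python
# Sketch of reattaching an iSCSI data disk from a cloud VM using the
# standard Linux open-iscsi CLI (iscsiadm). Portal and IQN below are
# hypothetical; NetApp NPS wraps this workflow with its own tooling.
import subprocess

PORTAL = "10.0.1.10:3260"                     # hypothetical FAS iSCSI portal
TARGET = "iqn.1992-08.com.netapp:sn.0123456"  # hypothetical target IQN

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Discover the targets the array exposes over the low-latency link.
run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)

# Log in to the target; the LUN then appears as a local block device
# (e.g. /dev/sdX), ready for the migrated workload to mount.
run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login")

# The mirror-image step on the source VM before migration is simply:
#   iscsiadm -m node -T <target> -p <portal> --logout
```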


Second, I have compute located in this adjacent space. Some of it consists of very large quad-socket machines with multiple terabytes of RAM running very large line-of-business applications. In this case those LOB applications have considerable surrounding applications that can run just fine on public cloud compute, but the big one isn't cost effective to run in the public cloud since I've already invested in this hardware, or perhaps I'm not allowed to move it to public cloud due to support constraints. Another example of this is non-x86 platforms. Let's say I have an IBM POWER or mainframe platform. I don't want to retire it or re-platform it due to cost, but I have a number of x86-based surrounding applications that I'd like to move to the cloud. I can place my mainframe within the cloud-adjacent space and access those resources as if they too were running in the public cloud.


As you can see, cloud-adjacent space opens up a number of truly hybrid scenarios. We're big supporters of the concept when it makes sense, and while there are complexities, it can unlock moving additional workloads to the public cloud.