Need for Speed
“No man is an island.” – John Donne
The line comes from Meditation XVII of Devotions upon Emergent Occasions, a prose work John Donne published in 1624. It was meant to convey the connected nature of mankind: that we are each part of a larger whole.
What does this have to do with cloud readiness? Everything. Clouds are about hosting applications, and in most enterprises application flows and interdependencies are poorly understood.
Legacy applications are rarely stand-alone systems. In enterprises, these applications were built over a period of years and are highly connected and interdependent. Mapping a set of application flows can be complicated and the resulting diagrams can look like a Rube Goldberg device.
Separately, there is another issue — cloud connectivity. A CIO once asked me about a performance problem one of his teams was having with Amazon: “I have an internet connection and it’s not saturated; why is Amazon blaming our network?”
Could it be the network?
Yes, because sufficient bandwidth is not enough. Even basic connectivity requires analysis. His company was connected to a small regional provider, who in turn connected to a pair of larger providers, who connected to tier 1 providers, who were connected to Amazon.
Do you see the issue? They had a service-level agreement only with the provider they were paying for connectivity: the small regional one. The chain of connectivity from that point to Amazon was outside their control.
What else could it be?
A problem with application flows can look like a network issue. To explain, consider that as we migrated to virtualization and containerization within the enterprise, the successful projects were often written for the new environment. But for a moment, let’s look at the failures.
Failed projects took a piece of an application and virtualized it separately from the rest of the components. That is not terrible in and of itself, but what if the virtualized environment sits in a data center geographically remote from the components left behind in a legacy data center?
The failed implementations required packet flows between the old and new environments that had previously been collocated. Depending on the distance and the number of round trips involved, the delay could add up quickly.
In one case, the issue was performance in the (partially) virtualized system, which supported an online web-based application. The application was several seconds slower. When I pointed out the latency issue, it was initially dismissed; after all, the data centers involved were only 40 ms apart.
However, detailed investigation showed that the number of packets involved was far larger than initially thought, and the data transfer was using TCP. TCP can send only a window's worth of data before it must wait for an acknowledgement (or retransmit the current window), so each window costs at least one round trip. This can be exacerbated by poor MTU management, link quality issues, and other errors.
Because the application was only partially virtualized, the packet flow repeatedly entered and left the data center where the virtualized system resided. This "trombone effect" in the flow was killing overall performance.
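Back-of-the-envelope arithmetic shows why windowed transfers over a 40 ms path hurt. The figures below are illustrative assumptions, not measurements from the case above:

```python
# Rough estimate of transfer time when a windowed protocol must wait
# one round trip per window. All figures are illustrative assumptions.

def transfer_time_s(data_bytes: int, window_bytes: int, rtt_s: float) -> float:
    """Time to move data_bytes if each window costs one round trip."""
    windows = -(-data_bytes // window_bytes)  # ceiling division
    return windows * rtt_s

rtt = 0.040                  # 40 ms between the two data centers
window = 64 * 1024           # a common TCP window size
payload = 50 * 1024 * 1024   # hypothetical 50 MB moved per job

print(f"{transfer_time_s(payload, window, rtt):.1f} s")  # 32.0 s
```

The same payload inside a single data center, at well under 1 ms round trip, would incur less than a second of protocol wait: the distance, not the bandwidth, is the bottleneck.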
The moral of the story: when we discuss moving items to the cloud, we must remember that while the term is an abstraction, the systems supporting our apps live on real physical servers and infrastructure somewhere.
Where that somewhere is located and how we connect to it are important. These are details that cannot be abstracted.
If we solved the connectivity issues with the cloud, what could be moved there on day 1?
Software as a service
Applications such as Salesforce run in the cloud. Salesforce specifically has its own cloud infrastructure that supports its distributed application suite on several geographically diverse data centers. Each of these is interconnected with a variety of mobile and internet providers. The result is that many companies run Salesforce completely separate from their internal IT infrastructure.
Many examples of this exist: Firms are taking their internal ERP and CRM systems offline in favor of NetSuite, moving email to Office 365 or Google, etc.
There are also app providers who develop and host apps on general purpose cloud platforms, such as Amazon’s AWS offering.
This form of cloud could be viewed as a hosted application model. It permits companies to start removing the internal apps that are not core to their business (likely candidates are payroll, HR, CRM, ERP and even email).
These are special-purpose applications with no interdependencies with other enterprise applications. The exception could be one-time flows, such as the use of a single sign-on system for credential management, but the rest of the user's application flow should occur entirely within the cloud.
Intact application suites
These are, as the name implies, a set of applications that work as a unit. Think of a typical financial management suite: general ledger, accounts receivable, and accounts payable. Each of these major systems may itself be made up of components; the AP system may include a check-writing application as well as connectivity to banking payment systems.
An intact system is a grouping of these component applications that works together as a unit and collectively behaves as a stand-alone application.
So how do I know if I am cloud ready?
You need to assess your systems. An initial cloud readiness assessment examines your connectivity (bandwidth, latency, and the provider path between you and the cloud) and identifies the stand-alone applications and SaaS candidates in your portfolio.
This would be sufficient for analyzing and remediating the deployment of SaaS and stand-alone applications. Getting past this stage requires an application cloud readiness assessment, which must map the full set of flows between all components and subcomponents in an application suite.
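The core of that mapping exercise can be sketched in a few lines: list the flows between components, mark which components a proposed migration would move, and see which flows end up crossing the cloud boundary. All of the names and figures here are hypothetical:

```python
# Toy flow map: components, the flows between them, and which flows
# would cross the cloud boundary under a proposed migration.
# All component names and call counts are hypothetical.

flows = {  # (source, destination): calls per transaction
    ("web", "app"): 1,
    ("app", "client_db"): 120,
    ("app", "reports"): 1,
}

migrating = {"web", "app", "reports"}  # components proposed for the cloud

# A flow crosses the boundary when exactly one endpoint migrates.
crossing = {flow: calls for flow, calls in flows.items()
            if (flow[0] in migrating) != (flow[1] in migrating)}

print(crossing)  # {('app', 'client_db'): 120}
```

Even this toy version surfaces the dangerous case: a chatty flow to a component that stays behind.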
Imagine a large, complex legacy app that migrates 99 percent of its components to the cloud. Sounds ideal? Leaving 1 percent behind might be fine if that component carried a small part of the data flow, but not if it were a client database requiring significant flows at several stages of the process.
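A chatty application makes the point concrete. Suppose, hypothetically, that the one component left behind is a client database queried many times per transaction over the 40 ms path discussed earlier:

```python
# Hypothetical figures: each transaction makes many sequential small
# queries to a database left behind in the legacy data center.

rtt_s = 0.040          # round trip between cloud and legacy site
queries_per_txn = 120  # a chatty app issuing sequential queries

added_latency_s = queries_per_txn * rtt_s
print(f"Added latency per transaction: {added_latency_s:.1f} s")  # 4.8 s
```

Nearly five seconds of pure round-trip wait per transaction, before a single byte of payload is counted, and entirely invisible to a bandwidth check.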
This subject is very complex and is often problematic, because application teams are spread across many constituencies and, even when firms have them, enterprise architects rarely have the technical networking skills needed to look at the whole picture. Having the right partner who can help you navigate your way through all of the choices is key. Learn more about the assessments we can perform to address concerns and improve your network.
Further posts in this series will explore these subjects and illustrate solutions.
John is our CTO and the practice lead for a talented team of consultants focused on designing and delivering scalable and secure infrastructure solutions to customers across multiple industry verticals and technologies. Previously he has held several positions including Executive Director/Chief Architect for Global Network Services at JPMorgan Chase. In that capacity, he led a team managing network architecture and services. Prior to his role at JPMorgan Chase, John was a Distinguished Engineer at Cisco working across a number of verticals including Higher Education, Finance, Retail, Government, and Health Care.
He is an expert at working with groups to identify business needs and align technology strategies with business strategies, building in agility and scalability to allow for future change. John is experienced in the architecture and design of highly available, secure network infrastructure and data centers, and has worked on projects worldwide. He has worked in both business and regulatory environments on the design and deployment of complex IT infrastructures.