Need for Speed
The cloud has been getting a lot of attention for years now. And there is evidence that actual cloud use is really starting to pick up.
I just presented at the Cisco Mid-Atlantic Users Group (C-MUG) on the network impact of the Cloud. At the beginning of my presentation, I asked for a show of hands as to who was doing each of the four primary forms of cloud: SaaS (software-as-a-service), private cloud, public cloud, and hybrid cloud.
There were a lot of hands in the air! With everybody getting into the act, I thought it would be valuable to warn of common cloud-related pitfalls to avoid.
The first big mistake is not doing the cloud at all. Sure, the cloud has risks and challenges, but if you’re not trying to use it, you’re handing potential cost and agility advantages to your competitors.
Actually, chances are you are already doing the cloud to some degree. SaaS platforms are gaining in “shadow IT” use for human resources (hiring, payroll, etc.), sales (Salesforce), and even data sharing (marketing content, slideware, etc.). Which leads to…
The cloud is a great enabler. However, you need to balance departments’ self-service agility against corporate needs. For example:
Departments can usually manage their own budget and SaaS costs. It’s rarely a problem unless there’s a bad hand-off to IT’s budget, or if the departments don’t manage their cloud services well.
Some bigger concerns revolve around cost discipline and lifecycle management.
If the development team is working in the public cloud, you might save money by stopping instances that are not in active use, disposing of instances when done with them, and disposing of no-longer-needed storage. On the one hand, we don’t want to tie up staff over petty costs. On the other, leaving 300 server instances running for a “big data” project when no one is actually doing anything with them – that is worth the time to shut down. Using the cloud vendor’s API or other tools to automate the start/stop process can make this efficient.
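To make the automation idea concrete, here is a minimal sketch of the selection step. The field names (`id`, `state`, `last_activity`) and the 24-hour threshold are my own illustrative assumptions, not any vendor’s schema; in practice you would map them from your provider’s API response.

```python
from datetime import datetime, timedelta

def instances_to_stop(instances, max_idle_hours=24):
    """Return IDs of running instances idle longer than the threshold.

    `instances` is a list of dicts with hypothetical keys 'id', 'state',
    and 'last_activity' (a datetime) - map these from your cloud
    provider's actual API response.
    """
    cutoff = datetime.utcnow() - timedelta(hours=max_idle_hours)
    return [inst["id"] for inst in instances
            if inst["state"] == "running" and inst["last_activity"] < cutoff]

# The resulting IDs would then be fed to the vendor's stop call -
# on AWS, for example, boto3's ec2.stop_instances(InstanceIds=ids).
```

Run on a schedule (a nightly cron job, say), a filter like this is the core of a “stop what nobody is using” policy.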
A related area of risk is programming against the APIs that spin up cloud instances. Some care is required to ensure a trail of spun-up instances is not left behind, costing money. The story of the Sorcerer’s Apprentice comes to mind as a vivid example of what you do not want your developers doing when they are new to the Amazon API.
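One defensive pattern against the Sorcerer’s Apprentice problem is to tie instance lifetime to a scoped block, so teardown runs even when the job fails. This is a generic sketch using Python’s standard `contextlib`; the `provision` and `terminate` callables are placeholders for whatever your vendor SDK provides.

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_instances(provision, terminate, count):
    """Spin up `count` instances and guarantee teardown on exit.

    `provision` returns an instance ID; `terminate` takes one.
    Both are hypothetical stand-ins for real vendor SDK calls.
    """
    ids = [provision() for _ in range(count)]
    try:
        yield ids
    finally:
        # Runs on success, exception, or early return - no orphans.
        for instance_id in ids:
            terminate(instance_id)
```

The design choice here is simply that cleanup lives next to creation, so a crashing batch job cannot leave a trail of billable instances behind.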
Lifecycle is a big concern. I’ve been in datacenters where 10% of the physical servers were powered on with no network connections. People tend to forget about old servers. After all, there are more pressing new applications to spin up. But then the old physical servers never seem to get decommissioned.
That’s in a datacenter where you can see them, if you physically audit the racks. Lights on, no network connection – that’s a real giveaway. Decommissioning server instances in the cloud, and whole virtual networks of servers in the cloud, deserves the same attention. Is someone managing those cloud costs? How do you spot “cloud zombie server instances”?
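A common way to make zombies visible is a tagging policy: every instance must carry an owner and an expiry date, and anything out of compliance gets flagged. The tag names below (`owner`, `expires`) are illustrative assumptions, not a standard.

```python
def find_zombies(instances, today):
    """Flag instances with no owner tag or whose expiry date has passed.

    `instances` is a list of dicts with hypothetical 'id' and 'tags'
    keys; `today` is an ISO date string (lexicographic comparison
    works for ISO dates, e.g. '2025-06-01').
    """
    zombies = []
    for inst in instances:
        tags = inst.get("tags", {})
        # Missing owner, or an 'expires' tag in the past, marks a zombie.
        if "owner" not in tags or tags.get("expires", "9999-12-31") < today:
            zombies.append(inst["id"])
    return zombies
```

A scheduled audit like this turns “does anyone still need this?” from an email thread into a report.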
To put it concisely, $(lease) > $(own).
That is, the cloud is great for agility, cloud-bursting (adding large numbers of servers in a hurry to accommodate demand), and for hedging risk. But it costs more than a properly used physical datacenter. You pay the premium when speed matters.
For example, buying a costly high-end server chassis for a six-month development effort incurs cost that has no justification after six months. Furthermore, it means waiting for procurement to buy the server, waiting for delivery, getting it racked, configured, and networked – lots of delay. Using the cloud could be a lot faster. The cost comparison I’ll leave to the reader.
Buying a server that’s going to get three or more years of solid use might be cheaper than using the cloud.
Using the cloud allows you to ramp up capacity, and ramp it back down quickly – perfect for those whose businesses are subject to unpredictable spikes and dips in demand. I believe one online company got caught building out a physical datacenter at great cost, just as its business dropped off in a major way. Having a fire sale on slightly used servers is unlikely to recover the purchase costs, let alone installation and datacenter costs.
One also has to understand Service Level Agreements (SLAs), or lack thereof. You control a server in your datacenter. That’s usually a good thing in terms of solid uptime, security, etc. If you stand up a server instance in the cloud, it may be competing for use of the hardware of the server it is running on, and it may be competing for network bandwidth. Uptime and security are probably pretty good, but are there guarantees? If there’s a problem, does the guarantee do anything for you, other than getting you an apology and a pittance off your next bill?
You may not want cloud vendor lock-in — the Hotel California syndrome of your data checking in and being too expensive to get back out.
A similar form of lock-in is building applications using a particular vendor’s tools. How portable is that going to be?
This is where I see Cisco InterCloud as being advantageous, or at least an interesting forward-thinking approach. If you can use OpenStack and other tools to avoid being locked into a particular hypervisor or cloud vendor’s approach, that might be advantageous. And with hybrid cloud, Cisco is pushing the notion of readily moving applications and components around between in-house, public, and private cloud. Of course, one could then argue that you’re locked into Cisco’s InterCloud tools instead.
In a subsequent blog, we’ll continue our discussion of common mistakes that companies make in the cloud, starting with their typical errors with keeping their data secure. In the meantime, for a deeper conversation about how to use the cloud to maximum advantage for your organization, just reach out.
Comments are welcome, whether in agreement or informative disagreement with the above. Thanks in advance!
Hashtags: #cloud, #mistakes, #intercloud, #CiscoChampion
Nick has over 20 years of experience in Security Operations and Security Sales. He is an avid student of cybersecurity and regularly engages with the Infosec community at events like BSides, RVASec, Derbycon and more. The son of an FBI forensics director, Nick holds a B.S. in Criminal Justice and is one of Cisco’s Fire Jumper Elite members. When he’s not working, he writes cyberpunk and punches aliens on his PlayStation.
Virgilio “Bong” has sixteen years of professional experience in the IT industry, spanning academia, technical and customer support, pre-sales, post-sales, project management, training, and enablement. He has worked in the Cisco Technical Assistance Center (TAC) as a member of the WAN and LAN Switching team. Bong now works for Tech Data as a Field Solutions Architect with a focus on Cisco Security, and holds several Cisco certifications including Fire Jumper Elite.
John is our CTO and the practice lead for a talented team of consultants focused on designing and delivering scalable and secure infrastructure solutions to customers across multiple industry verticals and technologies. Previously he has held several positions including Executive Director/Chief Architect for Global Network Services at JPMorgan Chase. In that capacity, he led a team managing network architecture and services. Prior to his role at JPMorgan Chase, John was a Distinguished Engineer at Cisco working across a number of verticals including Higher Education, Finance, Retail, Government, and Health Care.
He is an expert in working with groups to identify business needs and align technology strategies with them, building in agility and scalability to allow for future changes. John is experienced in the architecture and design of highly available, secure network infrastructure and data centers, and has worked on projects worldwide. He has worked in both business and regulatory environments on the design and deployment of complex IT infrastructures.