Biggest Mistakes Companies Make in the Cloud, Part 2

Peter Welcher
Architect, Operations Technical Advisor

Last month we started a discussion of five of the Biggest Mistakes Companies Make in the Cloud, including failing to use cloud technologies at all and failing to manage cloud usage, costs, and life cycles. Here are nine more mistakes I see organizations making all the time, along with some advice on how to avoid them.


As a startup learned painfully a year or so ago, guard the keys to the cloud well.

It’s very easy for someone with the right admin privileges to delete all of your server instances and associated data.

If your only backups are also in the cloud and use the same or equally compromised credentials, your company can vanish in minutes. Out of business, game over.

Maintain at least two sets of credentials, and compartmentalize responsibility so that no one person could wipe everything out, either by accident or maliciously. Consider dual authorization for drastic changes — if you can find a provider who offers that feature.
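The compartmentalization idea above can be sketched in code. This is a minimal, hypothetical illustration of a dual-authorization gate (the class and method names are invented for this example, not any cloud provider's real API): a drastic action proceeds only after two distinct admins have signed off.

```python
# Hypothetical sketch of dual authorization for destructive cloud
# operations. All names here are illustrative; real providers expose
# this differently (e.g. MFA-protected or multi-party approval flows).

class ApprovalGate:
    """Requires sign-off from two distinct admins before a drastic
    action (such as mass instance deletion) is allowed to proceed."""

    def __init__(self, required_approvers=2):
        self.required = required_approvers
        self.approvals = {}  # action_id -> set of approver names

    def approve(self, action_id, admin):
        # A set means the same admin approving twice has no effect.
        self.approvals.setdefault(action_id, set()).add(admin)

    def is_authorized(self, action_id):
        return len(self.approvals.get(action_id, set())) >= self.required


gate = ApprovalGate()
gate.approve("delete-prod-fleet", "alice")
print(gate.is_authorized("delete-prod-fleet"))  # False: one approver
gate.approve("delete-prod-fleet", "alice")      # same admin again: no effect
gate.approve("delete-prod-fleet", "bob")
print(gate.is_authorized("delete-prod-fleet"))  # True: two distinct admins
```

The point is the invariant, not the implementation: no single credential, however privileged, should be sufficient to wipe everything out.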

Worth pondering.


Security cuts two ways. You can use it unwisely, as an excuse for not doing cloud. Or you can take positive steps to get the security you need.

You do need to know the sensitivity of the data and apps you’ll be placing in the cloud. And take reasonable precautions, and install reasonable controls, for the sensitivity of that data and those apps.

Security in the cloud is just like other security issues. What are your business, legal, and other requirements? Are there adequate tools and controls in the cloud, or not?

Does FedRAMP or another cloud security-certification program meet your needs, by certifying the provider passed a certain level of security standard?

In short, where there’s a will, there’s a way:  If you tackle security one concern at a time, you can likely make a fact-based determination of how to use public cloud securely — or know what the shortcomings are.


Private cloud is likely using physical cabling with some configuration, possibly automated.

Public cloud shares the physical server, and uses virtual cabling whose configuration is almost certainly automated.

Which one is more likely to suffer from human error?

I would be inclined to bet that the big automated provider is likely to be more secure. You should do your own due diligence though.


Here in the Washington, D.C. area, private cloud is big business, likely because of security and control concerns. So it’s good that people are at least trying to do cloud, and learning.

If what you want is yet another datacenter without having to build it out, that's where basic private cloud comes in: colocation servers without the buildout. If you can get adequate control and security, private cloud may end up amounting to little more than rented colocation space.

That may solve some immediate problems for you. It provides some agility, at a price. It doesn’t provide the same degree of agility as public cloud, because you may have to commit to a longer-term contract, wait for provider build-out, and so on.

We suggest that you still try to make real use of Amazon, Google, or one of the other public cloud providers. And if nothing else, take a look at their service catalogs.

If you’re doing private cloud, do you have a strategy for automating self-service, collection of costing information, and chargeback? How about automation of server instance lifecycles as well?

If you don’t, you’re missing some of the power of cloud. IT staffs keep acquiring more and more responsibilities, with constant or shrinking budgets. Automation and self-service offload some of the routine work, freeing up staff time.
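To make the lifecycle-automation point concrete, here is a minimal sketch of an expiry sweep. The instance records and the "expires" tag are assumptions made up for illustration; in practice you would pull inventory from your cloud platform's API and feed the expired IDs to a reclaim or notification step.

```python
from datetime import datetime, timezone

# Hypothetical lifecycle sweep: find server instances whose "expires"
# tag has passed, so they can be reclaimed automatically instead of
# running (and costing money) forever. Record format is illustrative.

def expired_instances(instances, now):
    """Return the IDs of instances whose expiry timestamp has passed."""
    return [inst["id"] for inst in instances
            if datetime.fromisoformat(inst["tags"]["expires"]) <= now]


inventory = [
    {"id": "dev-01", "tags": {"expires": "2015-01-01T00:00:00+00:00"}},
    {"id": "dev-02", "tags": {"expires": "2030-01-01T00:00:00+00:00"}},
]
now = datetime(2020, 6, 1, tzinfo=timezone.utc)
print(expired_instances(inventory, now))  # ['dev-01']
```

Even a simple sweep like this, run on a schedule, captures much of the self-service lifecycle discipline that public cloud providers bake in.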


Are you thinking about the impact of cloud on your network? It may not be huge, but how could your network work more efficiently with the cloud? See also my posted presentation, The Four Kinds of Clouds and How Each Affects Your Business.


It’s best to have a dashboard showing cost, maybe with some (development) instances subject to being cut off if you’re exceeding cost thresholds. If your cell phone provides a meter showing how many of your monthly bytes have been consumed, don’t you want the same for server instance minutes, storage consumption, and of course, greenbacks?
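The cutoff idea above can be sketched as a simple guardrail. The fleet records, spend figures, and field names below are made up for illustration; the point is that once month-to-date spend crosses the budget, development instances get flagged for shutdown while production stays up.

```python
# Hypothetical cost guardrail: flag development instances for shutdown
# once month-to-date spend exceeds a budget threshold. All figures and
# field names are illustrative, not from any real billing API.

def instances_to_stop(instances, month_to_date_spend, budget):
    """If spend has exceeded the budget, return the IDs of
    non-production instances that are safe to cut off."""
    if month_to_date_spend <= budget:
        return []
    return [inst["id"] for inst in instances if inst["env"] == "dev"]


fleet = [
    {"id": "web-prod-1", "env": "prod"},
    {"id": "build-dev-1", "env": "dev"},
    {"id": "test-dev-2", "env": "dev"},
]
print(instances_to_stop(fleet, month_to_date_spend=5200, budget=5000))
# ['build-dev-1', 'test-dev-2']
```

A dashboard plus a rule like this is the server-instance equivalent of your cell phone's data meter: visibility first, then an automatic brake.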


To maximize the value, performance, and robustness of your web apps, new coding approaches are needed. So while you may be able to shift current applications to the cloud, new application development offers the opportunity to understand new ways of doing things. You can send staff to training, but there’s nothing that tops bringing in one or two people who have been there and done that.

We view this as a variation of leveraging consultants and outside talent wisely. If you don’t bring in outside points of view, your in-house staff can become very stale, and not even realize they’re set in ways that became obsolete five or 10 years ago. We’re not about to espouse the “Chairman Mao theory of staff management” here – but intentionally providing new people, new approaches, and change may be the best way to get staff to open up their minds and to build new skills.


For executives, a cloud security problem could be career limiting. Failure to leverage cloud technology for agility and cost-savings could be a problem as well, albeit a less potentially spectacular one!

Networking professionals might think about the job implications. If you’re a rack-and-stack kind of person, you may find your skills less needed by the average enterprise if and when most servers run in the cloud, although perhaps you’ll still be needed by large private and public cloud datacenters. You did notice that, over time, WLAN may replace much of the user-side cabling plant, didn’t you?

If you understand networking concepts well, you may have a future role even as networks become virtualized or move to the cloud. People with a solid understanding of fundamentals, particularly across boundaries – e.g. VMware plus storage, or Cisco plus VMware networking – will likely be in demand for a while. A similarly good skill set might be understanding hybrid cloud techniques, and how the various cloud or hypervisor approaches to virtual networking align or differ.


If your datacenter and most of your 2,000 or 10,000 users are within 10 to 20 miles of each other, you need to pay special attention!

Putting an application into the cloud means you may be adding latency: the cloud datacenter is probably farther away from your users than your in-house datacenter is. That may not matter much in urban areas. However, adding a skoosh of latency is a great way to find poorly written applications, i.e. chatty applications that require a lot of back-and-forth traffic to get the job done. In-house latency validation of applications could help.
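The back-of-the-envelope math behind the "chatty application" problem is worth seeing once. The round-trip counts and latency figures below are illustrative assumptions, but the multiplication is the whole story: per-transaction network wait grows linearly with round trips times round-trip time.

```python
# Illustrative latency arithmetic: a chatty app making many small
# round trips suffers far more from added WAN latency than an app
# that batches its requests. All numbers are made up for the example.

def transaction_time_ms(round_trips, rtt_ms):
    """Total network wait for one user transaction, in milliseconds."""
    return round_trips * rtt_ms


LAN_RTT_MS = 1     # e.g. users and datacenter in the same metro area
CLOUD_RTT_MS = 30  # e.g. a cloud region several states away

# Chatty app: 200 round trips per screen refresh.
print(transaction_time_ms(200, LAN_RTT_MS))    # 200 ms in-house: tolerable
print(transaction_time_ms(200, CLOUD_RTT_MS))  # 6000 ms in the cloud: painful

# Well-batched app: 5 round trips for the same work.
print(transaction_time_ms(5, CLOUD_RTT_MS))    # 150 ms: barely noticeable
```

Notice that the same 30 ms of added latency is invisible to the batched app and disastrous for the chatty one, which is why in-house latency testing finds these applications before your users do.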

If you’re considering geographic diversity within the cloud, bear in mind that running your applications in Texas rather than Virginia may be rather noticeable to your Washington D.C. / Virginia users. Testing can spare you a hard lesson learned. You don’t want to be known as the IT Manager Who Redefined Slow!

The cloud is a great tool for managing data cost-effectively but it’s important to be careful about the way you move into it. For a longer conversation about common cloud mistakes, feel free to reach out.


Comments are welcome, whether in agreement or informative disagreement with the above, and especially good questions to ask the NFD9 vendors! Thanks in advance!

Hashtags: #cloud, #mistakes, #intercloud, #CiscoChampion

Twitter: @pjwelcher

Disclosure Statement
Cisco Certified 15 Years
Cisco Champion 2014



