Data Center Monitoring

Author
Terry Slattery
Principal Architect

I’ve been doing data center design recently, and customers are interested in what to monitor and why. At first glance, a data center is just another set of edge devices, albeit servers, connected to a set of access switches. But the traffic patterns and usage in the data center are much different from those at user edge ports. Historically, traffic flowed between clients and servers, a pattern often called ‘north-south’. But with applications distributed across web front ends, middleware servers, and back-end databases, there is frequently a lot of server-to-server traffic, called ‘east-west’.

I think it is important to know the most active servers, particularly as VM usage allows multiple applications to run on each server. In this context, a server is a hardware platform that has some number of CPU sockets, each with some number of cores, memory and I/O capability. Each blade in a blade server is what I would call a server (an ESX Host) because the sockets and cores on the blade share I/O, just as they would if it were a standalone chassis.

As VMs move from server to server, the mix of disk I/O and network I/O changes, depending on the applications running on the VMs hosted there. If you don’t monitor each server’s network port, how do you know whether the mix of applications is congesting the network interface? Alternatively, how do you determine that a particular server is hosting applications (VMs) that are no longer in use? Decommissioning unused servers has always been a problem in large data centers, and with VMs it is becoming even harder to identify the unused application VMs. Yet these unused VMs consume resources, adding to data center growth and increasing the server management load.
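As a concrete illustration, candidate idle VMs can be flagged by comparing each VM's observed traffic over a review window against a small floor. This is a minimal sketch; the VM names, byte counts, and floor value below are all hypothetical and would need tuning for a real environment:

```python
# Flag VMs whose traffic over a review window stays below a small floor.
# All names and counter values here are made up for illustration.
IDLE_FLOOR_BYTES = 10_000_000  # per week; hypothetical threshold

weekly_bytes = {
    "erp-prod-01": 8_200_000_000,
    "legacy-app-03": 120_000,      # barely any traffic: decommission candidate
    "web-frontend-02": 950_000_000,
}

idle = [vm for vm, total in weekly_bytes.items() if total < IDLE_FLOOR_BYTES]
```

A real implementation would pull per-VM counters from the hypervisor or from flow data rather than a hand-built dictionary, but the decision logic stays this simple.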

What should be included in data center monitoring?

  • Server ports. Know when a server’s link is saturated. Basic interface monitoring is a start, but follow it with flow monitoring to report which traffic flows are the largest. Be careful – some applications, such as backups, are designed to run at full link speed. This is how you detect a server that is hosting too many VMs – its network interface will be congested, and the mix of traffic will be to/from the various VMs hosted on that server. Monitoring server ports can also be used to inventory unused or underutilized ports. Server ports, unlike most other edge ports, need to be monitored at a higher frequency so that traffic microbursts can be identified, helping you understand the true network utilization profile. Polling at 10-minute or longer intervals just doesn’t work for this level of visibility. The minimum I want for server port monitoring is 5-minute data, and I would prefer 1-minute polling. This is certainly possible, as some vendors have demonstrated.
  • Uplinks. Most data centers will be designed with some level of oversubscription of the uplinks, and knowing when an uplink is oversubscribed will help you move VMs around the data center to maximize overall system throughput. In the longer term, it can help you determine where to add higher-speed links or additional parallel links (EtherChannel or equal-cost multipath).
  • Interface errors. All interface errors should be reported. In addition, report on discards, which indicate interface congestion. Other errors may indicate bad cabling or connectors. For fiber connections, look for bad cables and connectors or dirty connections. This is also where you find duplex mismatches on interfaces where the server and network teams don’t agree on the speed/duplex settings.
  • Traffic distribution on parallel links. Make sure that EtherChannels and equal-cost multipath links are carrying reasonably even traffic levels. An incorrect traffic distribution algorithm can significantly reduce network throughput, impacting business productivity.
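The port-utilization and parallel-link checks above reduce to simple counter math. This is a minimal sketch, assuming 64-bit IF-MIB octet counters (ifHCInOctets/ifHCOutOctets) polled at a fixed interval; all counter values below are hypothetical:

```python
def utilization_pct(octets_t0, octets_t1, interval_s, speed_bps):
    """Percent utilization of one direction over one polling interval."""
    delta = (octets_t1 - octets_t0) % 2**64   # tolerate 64-bit counter wrap
    return (delta * 8) / (interval_s * speed_bps) * 100

def lag_imbalance(member_octet_deltas):
    """Traffic spread across parallel links: max share minus min share.

    0.0 means perfectly even; values near 1.0 mean one member carries
    nearly all the traffic (a distribution-algorithm problem).
    """
    total = sum(member_octet_deltas)
    shares = [d / total for d in member_octet_deltas]
    return max(shares) - min(shares)

# A 10 Gb/s server port that moved 67.5 GB in a 60-second poll is at
# 90% utilization -- worth an alert before it saturates.
util = utilization_pct(0, 67_500_000_000, 60, 10_000_000_000)

# A two-member EtherChannel where one link carries 90% of the bytes.
skew = lag_imbalance([90_000_000, 10_000_000])
```

The shorter the polling interval, the less a microburst gets averaged away, which is why 1-minute polling reveals saturation that 10-minute polling hides.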

Get a baseline of the above information so you know what it should look like. Refresh the baseline at least once per quarter (monthly or weekly is even better), allowing you to do trend analysis to predict bottlenecks and act to eliminate them. Add thresholds to generate alerts when traffic levels or errors exceed suitable levels. With regular baselines, you can identify new applications as they are rolled out on the network. You may even be able to identify that an application will not work well in a certain environment (e.g., over the WAN) before it is extended there. And don’t forget that the baseline may allow you to more easily identify malware and security problems and their origins.
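A baseline-and-threshold check can be as simple as the mean and standard deviation of the collected samples, alerting when a new sample lands well outside that range. A minimal sketch with hypothetical utilization percentages:

```python
# Build a baseline (mean and stddev of per-interval utilization samples)
# and alert when a new sample exceeds baseline + 3 sigma.
# The sample values below are hypothetical.
from statistics import mean, stdev

def build_baseline(samples):
    return {"mean": mean(samples), "stdev": stdev(samples)}

def exceeds_baseline(baseline, sample, sigmas=3):
    return sample > baseline["mean"] + sigmas * baseline["stdev"]

quarter = [22, 25, 24, 23, 26, 25, 24, 23]   # one quarter of samples (%)
baseline = build_baseline(quarter)           # mean 24, stdev ~1.3
```

With this baseline, a reading of 27% stays quiet while 40% raises an alert; refreshing the baseline each quarter keeps the threshold tracking normal growth.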

I like the idea of using flow analysis (NetFlow, sFlow, IPFIX) to identify server traffic patterns. Which servers are talking with which other servers, and what is the traffic volume? Are some servers talking with servers in a backup data center, implying that the application is really distributed across multiple data centers? In this case, is there really a backup data center, or is it simply a distributed data center? What happens when (not ‘if’) connectivity is lost to the other data center? Does the application stop running, and if it does, what is its impact on the business?
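Aggregating flow records by conversation is the core of that analysis. A sketch using made-up NetFlow-style tuples (the server names and byte counts are hypothetical):

```python
# Aggregate flow records by (src, dst) pair to find the top east-west
# conversations. Records below are hypothetical.
from collections import Counter

flows = [  # (src, dst, bytes)
    ("web-01", "db-01", 4_000_000),
    ("web-01", "db-01", 6_000_000),
    ("web-02", "mw-01", 2_000_000),
    ("db-01", "backup-dc.example", 9_000_000),
]

by_pair = Counter()
for src, dst, nbytes in flows:
    by_pair[(src, dst)] += nbytes

top = by_pair.most_common(2)
```

Here the busiest conversation is web-01 to db-01, while the db-01 flow toward the other site is exactly the kind of cross-data-center traffic that suggests a distributed, rather than purely backup, application.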

Data center monitoring is an important function of network management. While many of the above functions can be accomplished with today’s tools, there are some functions that cannot. The capability to do all of these functions is not far off. If we ask the vendors for these capabilities, all of which are possible with existing technology, then we will all benefit.

-Terry

_____________________________________________________________________________________________

Re-posted with Permission 

NetCraftsmen would like to acknowledge Infoblox for their permission to re-post this article which originally appeared in the Applied Infrastructure blog under http://www.infoblox.com/en/communities/blogs.html

