Designing a Logging Solution

Terry Slattery
Principal Architect

The tutorial by Marcus Ranum is quite long – over 200 pages – but it contains a lot of really useful information about how to process log files (syslog in particular). While his viewpoint is that of security analysis, he makes many points that also apply to network management.

I like his philosophy:

Logs are just data… Processed and analyzed, they become information.

This is something I’ve been saying about network management in general: an NMS collects more data than anyone can possibly view, and analysis is required to turn that data into information. Another point he makes is that the information should be actionable intelligence, meaning information of immediate and practical importance. Said another way, the information that is created should be used to take some action, such as replacing a bad power supply, investigating and correcting an interface that is showing high errors, or identifying memory parity errors that indicate a pending hard failure and mean the board or device needs to be replaced.

His tutorial lists eight common mistakes right up front, which lets you quickly learn what he has found doesn’t work. I really like #4: only looking for what you know you want to find, instead of just looking to see what you find. When I’m working with a customer and we’ve implemented the syslog summary script, I like to review the summary results periodically; a daily review is best. I use this process to find events that would otherwise be missed. The result is that I am able to find and correct problems, which leads to a continually improving network.
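As a rough sketch of what such a syslog summary can do (this is a hypothetical illustration, not the actual script mentioned above), counting messages by their Cisco-style mnemonic and listing the rarest first makes unexpected events stand out:

```python
# Hypothetical daily syslog summary: count messages by mnemonic
# (e.g. %LINK-3-UPDOWN) and list the rarest first, since rare
# messages are the ones most likely to be missed.
import re
from collections import Counter

MNEMONIC = re.compile(r"%[A-Z0-9_]+-\d-[A-Z0-9_]+")

def summarize(lines):
    """Count Cisco-style mnemonics; lines without one count as OTHER."""
    counts = Counter()
    for line in lines:
        m = MNEMONIC.search(line)
        counts[m.group(0) if m else "OTHER"] += 1
    # Reverse most_common() so the rarest messages come first
    return counts.most_common()[::-1]

sample = [
    "rtr1: %LINK-3-UPDOWN: Interface Gi0/1, changed state to down",
    "rtr1: %LINK-3-UPDOWN: Interface Gi0/1, changed state to up",
    "rtr2: %SYS-2-MALLOCFAIL: Memory allocation failure",
]
for mnemonic, count in summarize(sample):
    print(count, mnemonic)
```

Reviewing a list like this daily surfaces the one-off messages (here, the malloc failure) that a raw log scroll would bury.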

Marcus suggests performing log analysis on a variety of systems, starting with firewalls, web servers, and the like. These systems are recommended because they are good sources of security data, making this tutorial worth sharing with your server and security teams. For network management, I like to send the router and switch logs to both the security log server and an NMS server. That way the network team doesn’t have to dig through the large volume of server and security events when looking for network problems. I also find that the security team often doesn’t want the network team using their systems.

One of the interesting logging sources that Marcus recommends is the DHCP server. The reason is that it records where each DHCP device was located in the network. That also suggests that the network design should incorporate a solid DHCP server architecture that has an accurate clock so that DHCP request log messages have accurate timestamps. (My recommendation/advertisement: Infoblox DHCP servers make great NTP servers.)

Marcus says that the default syslog program on most Unix/Linux systems is not very useful, so he describes three available replacement logging packages. The one I prefer is syslog-ng, which is very functional, easy to configure for simple processing, and available in both a free and a ‘Professional’ (i.e. supported for pay) version. Its ability to send data to multiple destinations is how I deliver router and switch log messages to both the NMS and the security logging server.
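As a minimal sketch of that fan-out, a syslog-ng configuration along these lines forwards everything received from network devices to two servers (the IP addresses are placeholders, and exact driver syntax varies by syslog-ng version):

```
# Collect messages arriving from routers and switches on UDP 514
source s_network { udp(ip(0.0.0.0) port(514)); };

# Placeholder destinations: NMS server and security log server
destination d_nms      { udp("192.0.2.10" port(514)); };
destination d_security { udp("192.0.2.20" port(514)); };

# One log path, two destinations: every message goes to both
log { source(s_network); destination(d_nms); destination(d_security); };
```

Listing both destinations in a single log path is what duplicates each message, so neither team depends on the other’s server.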

The description of different tools and methods for analyzing the collected log data is extremely useful. He describes how the process of artificial ignorance works on p. 142 of the tutorial. It is pretty clever, and he shows an example of what it can yield.
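The core of artificial ignorance, as Ranum describes it, is to throw away every log line matching a pattern you have already decided is uninteresting and then look at whatever is left. A minimal sketch (the patterns and log lines here are made up for illustration):

```python
# Artificial ignorance: filter out known-boring log lines and
# inspect the remainder -- the things you didn't know to look for.
import re

# Hypothetical list of patterns already judged uninteresting
IGNORE = [r"session (opened|closed) for user", r"CRON"]

def interesting(lines, ignore=IGNORE):
    """Return only the lines matching none of the ignore patterns."""
    patterns = [re.compile(p) for p in ignore]
    return [line for line in lines
            if not any(p.search(line) for p in patterns)]

log = [
    "sshd: session opened for user admin",
    "CRON[123]: job started",
    "kernel: memory parity error on board 2",
]
print(interesting(log))  # only the parity error survives
```

As you review the survivors each day, anything genuinely routine gets promoted into the ignore list, so over time only the surprises remain.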

Finally, Marcus wraps up with a set of ‘laws’ that he has developed. I really like his last one, which I’ll have to use at sites that insist on real-time notification of events:

Ranum’s Fourth Law of Logging and IDS – It doesn’t matter how real-time your IDS is if you don’t have real-time system administrators.

Look at your processes and see if you really have real-time processes. If you don’t, there is no real need for real-time notification of problems.



Re-posted with Permission 

NetCraftsmen would like to acknowledge Infoblox for their permission to re-post this article which originally appeared in the Applied Infrastructure blog under

