Log collection is essential to managing an IT department because it allows administrators to research historical events throughout a network. Therefore, it’s critical to understand a few key points about collecting logs: the why and the what. We’ll look at a few specific examples of collecting log events efficiently, like incorporating threat modeling to enhance our collection. Implementing log collection policies and procedures is about as fun as watching anti-phishing videos, but the effort put in at the beginning will be worth it in the end.
The why and what
When it comes down to it, log collection is vital in determining what has happened on your network when a cyber incident occurs. Events occur every day as a result of routine computer use: a user printing a document and a user saving a file to a network share are both events. Incidents are events that have unintended consequences: a user clicks a link in an email, and a remote connection is made to the computer three minutes later. All incidents are events, but not all events are incidents. Therefore, it’s crucial to capture a complete set of event logs so you can understand the full story of the events that led to an incident, as well as those that happened afterward.
When you analyze the logs within your environment, you’ll quickly find it similar to trying to sip from a firehose. A lot of events occur on your network, and manually analyzing every one for signs of malicious activity would be a grueling, time-consuming task. That’s where threat modeling comes in. Just as you do when developing an incident response plan for your infrastructure, you can build threat models of how an attacker might gain access to your network, and then ensure the relevant logs would be collected if that scenario played out for real.
For example, many employees email and communicate with the general public daily. Clicking a link in an email can end in a few different ways, including a remote connection being established to your computer. But this connection doesn’t occur in a vacuum: it passes through the firewall and is forwarded to the computer via Network Address Translation (NAT). So not only would there be a connection log on the firewall, but the computer would also log the remote connection.
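A scenario like this is exactly where having both log sources pays off: you can correlate the firewall’s connection record with the host-side event. Here’s a minimal sketch of that correlation, using hypothetical, pre-parsed record formats (real firewall and Windows logs will need their own parsers, and the field names here are assumptions):

```python
from datetime import datetime, timedelta

# Hypothetical, simplified records; a real pipeline would parse these
# out of firewall exports and Windows event logs.
firewall_events = [
    {"time": datetime(2023, 5, 1, 9, 14, 2), "src": "203.0.113.7",
     "dst": "10.0.0.21", "dport": 443},
]
host_events = [
    {"time": datetime(2023, 5, 1, 9, 14, 3), "host": "10.0.0.21",
     "remote": "203.0.113.7", "event_id": 5156},
]

def correlate(fw_events, host_events, window=timedelta(seconds=30)):
    """Pair firewall connections with host-side events involving the
    same remote address within `window` of each other."""
    matches = []
    for fw in fw_events:
        for ev in host_events:
            if (ev["remote"] == fw["src"]
                    and ev["host"] == fw["dst"]
                    and abs(ev["time"] - fw["time"]) <= window):
                matches.append((fw, ev))
    return matches

for fw, ev in correlate(firewall_events, host_events):
    print(f"{fw['src']} -> {fw['dst']}: firewall log at {fw['time']}, "
          f"host event {ev['event_id']} at {ev['time']}")
```

Seeing the same connection from two vantage points is what lets you distinguish “the firewall allowed something” from “the endpoint actually accepted it.”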
Following a similar thread, say a user plugs in a USB drive from home that contains a virus that autoruns when plugged in. You should collect logs that let you determine the infection vector for the virus. However, Windows doesn’t provide a native mechanism for logging when USB devices are mounted on the filesystem, so you’ll need a third-party logging solution and a way to incorporate those logs into your analysis. By examining different potential scenarios within your environment, you uncover additional requirements. As you expose these requirements, it’s important to document them in the security policy as you go. Some conditions may dictate how other requirements are implemented; for example, compliance frameworks like PCI DSS require firewall connection logs to be maintained for at least a year.
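Incorporating a third-party log source usually just means writing a small parser for its output. As a sketch, suppose a hypothetical USB-monitoring agent writes one CSV line per device action (the format and field names here are invented for illustration):

```python
import csv
import io

# Hypothetical CSV output from a third-party USB-monitoring agent:
# timestamp,hostname,device_serial,action
sample_log = """\
2023-05-01T09:12:44,WS-042,4C530001230503,mounted
2023-05-01T09:40:10,WS-042,4C530001230503,unmounted
"""

def usb_mounts(log_text):
    """Return (timestamp, hostname, serial) for each mount event."""
    reader = csv.reader(io.StringIO(log_text))
    return [(ts, host, serial)
            for ts, host, serial, action in reader
            if action == "mounted"]

# Each tuple is a device mount to place on the incident timeline.
print(usb_mounts(sample_log))
```

Once parsed into a common shape, these records can be merged into the same timeline as your Windows and firewall events.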
Optimal log collection settings
There are many tools out there that can collect and send logs (ahem, NXLog), and then there are tools that can ingest those logs, analyze them for threats, alert based on that analysis, and automate a response. While these tools are out of scope for this post, understand that they are only as effective as the information fed into them. That’s why manufacturers like Microsoft give administrators tools to enrich their log entries.
Microsoft has released a set of baseline security configurations for domain-joined computers and servers, as well as for specific applications; check out the Microsoft blog for more information. For Windows Server 2016, Microsoft released a set of baseline security Group Policy Objects that administrators can deploy within their environment. These objects contain hundreds of recommended security enhancements that align workstations and servers with best practices. Among those enhancements is enabling audit and security logging, which provides detailed information about certain security events that occur within the environment.
Prioritize log flows
There are generally two problems facing investigators when they begin researching events: either not enough data was collected, or too much. Microsoft has published a list of Event IDs that you should plan to monitor. These include a potential replay attack, the audit log being cleared, account logons and logoffs, and similar events. Since this is a list of events recommended for a Windows Server environment, it is generally safe to monitor for these IDs and drop everything else. This is one example of reducing log volume without impacting your response effectiveness.
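In practice, this kind of reduction is just an allowlist filter applied before events are forwarded or stored. A minimal sketch, seeded with a few IDs from Microsoft’s recommended list (1102 = audit log cleared, 4624/4634 = logon/logoff, 4649 = a replay attack was detected); a real deployment would carry the full list:

```python
# Allowlist of Windows Event IDs worth keeping (a small subset of
# Microsoft's "Events to Monitor" recommendations).
MONITORED_EVENT_IDS = {1102, 4624, 4634, 4649}

def keep_event(event):
    """Drop any event whose ID isn't on the monitoring list."""
    return event.get("EventID") in MONITORED_EVENT_IDS

events = [
    {"EventID": 4624, "msg": "An account was successfully logged on"},
    {"EventID": 5058, "msg": "Key file operation"},   # routine noise
    {"EventID": 1102, "msg": "The audit log was cleared"},
]

filtered = [e for e in events if keep_event(e)]
print([e["EventID"] for e in filtered])  # the noise event is dropped
```

Filtering at the collector keeps the noise from ever reaching your analysis platform, which is where storage and licensing costs usually bite.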
Other devices on the network generate traffic too, so it’s important to consider collecting logs from switches and internal firewalls as well. These devices can be monitored by sending their logs to syslog servers, which can then analyze and alert based on the data contained within them. Almost all log collection platforms offer some way to collect log records by receiving them on a dedicated port; this is an agentless method of collecting logs, as opposed to an agent-based one.
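At its core, that agentless path is just a listener on a well-known port. Here’s a bare-bones sketch of receiving syslog messages over UDP; traditional syslog uses UDP/514, which requires root to bind, so the port is a parameter here and everything a real collector adds (parsing, TLS, buffering, reliability) is left out:

```python
import socket

def collect_syslog(port=514, host="0.0.0.0", max_msgs=1):
    """Receive up to max_msgs syslog datagrams and return
    (sender_ip, raw_message) tuples."""
    messages = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        while len(messages) < max_msgs:
            # One UDP datagram carries one syslog message.
            data, addr = sock.recvfrom(4096)
            messages.append((addr[0], data.decode(errors="replace")))
    return messages
```

You would then point your switches and firewalls at this port and forward the records into your analysis platform; dedicated collectors such as NXLog handle the hard parts of this pipeline for you.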
Implementing log collection policies is an essential step in an organization’s journey along its maturity model, though each journey is unique. Events are routine and are generated for any number of innocuous reasons, whereas incidents are adverse events; all incidents are events, but not all events are incidents, and it’s essential to keep this in mind when collecting and analyzing log data. By modeling threats in your environment, you can anticipate specific requirements when collecting logs, discover caveats and pitfalls along the way, and ensure you’re logging only the data you need for future investigations. Optimizing your log collection settings and prioritizing your log flows ensures your logging infrastructure is sound enough to satisfy any regulatory requirements.