There seems to have been a resurgence of noise around “Next Gen SIEM,” a term that has been bandied around for a few years. Which might make you wonder if the “Next Next Gen SIEM” is imminent. But all joking aside, it makes sense that SIEM (Security Information and Event Management) vendors are pushing this concept, even though it is self-serving. There are a lot of invested parties: compliance departments, security teams who have centralized their security incident response workflows on SIEMs and have been trained to contribute to and manage an ever-growing library of rules, and, of course, the vendors themselves. It’s a huge market that has unfortunately not lived up to its potential, with an understandable interest in reinventing it in order to stay relevant.
Rather than blindly follow the herd, this is the time to step back and ask: Is the Next Gen SIEM the future of security monitoring, or is there a better option?
As you know, SIEMs generate alerts to notify security teams about issues by aggregating logs from many sources and performing simple correlation. Therein lies the problem. The information in logs is limited. It’s like a phone bill that tells you when a phone call was made, to which number and for how long. But it doesn’t tell you about the conversation. The conversation tells you whether there is a security event worth spending time on, and what your short-staffed security team should focus on. SIEMs are good at aggregating modest amounts of data from many disparate systems. Security teams can write basic rules to correlate on known indicators and then report on the results. SIEMs are not good at detecting unknown attacks, analyzing massive amounts of data or understanding network and user behaviors.
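To make the limitation concrete, here is a minimal sketch of the kind of basic correlation rule described above: count matching log events per source and alert when a threshold is crossed inside a time window. The event format, field names and thresholds are all illustrative assumptions, not taken from any particular SIEM product.

```python
from collections import defaultdict

# Hypothetical log events as (timestamp_seconds, source_ip, event_type) tuples.
events = [
    (0, "10.0.0.5", "login_failure"),
    (10, "10.0.0.5", "login_failure"),
    (20, "10.0.0.5", "login_failure"),
    (25, "10.0.0.5", "login_failure"),
    (30, "10.0.0.5", "login_failure"),
    (40, "10.0.0.9", "login_failure"),
]

def correlate(events, threshold=5, window=60):
    """Alert when one source IP produces >= threshold failures inside the window."""
    alerts = []
    failures = defaultdict(list)
    for ts, ip, etype in events:
        if etype != "login_failure":
            continue
        failures[ip].append(ts)
        # Keep only the failures that fall inside the sliding window.
        failures[ip] = [t for t in failures[ip] if ts - t <= window]
        if len(failures[ip]) >= threshold:
            alerts.append((ts, ip))
    return alerts

print(correlate(events))  # → [(30, '10.0.0.5')]
```

Note what the rule requires: you must already know that repeated login failures are the indicator worth counting. An attacker doing something you never wrote a rule for produces no alert at all, which is exactly the gap the rest of this piece is about.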
Network traffic is the conversation in this analogy. It has a wealth of information that can be used to ensure the security team is focusing on the issues that matter. While logs provide useful context, you shouldn’t assume they provide the complete picture or are accurate. How much or how little is written to a log is configurable, which directly impacts the useful security information you can deduce from it. And just because there are no clues in the logs doesn’t mean an attack didn’t happen. The Destover malware (best known for erasing data across Sony workstations) changes file timestamps and erases logs. Only a full forensic analysis would have revealed the log wiping, and that’s not something you can do with your SIEM.
The network doesn’t lie and should be your source of truth. Now, I’ll grant that there used to be hurdles to getting insights from network traffic. Legacy enterprise solutions that perform capture and analysis were (well, they still are) challenging to deploy and expensive. Their appliance-based approach required organizations to restrict the deployment to only a few network segments within an enterprise, which in turn limited visibility. And the cost to retain full PCAP data was prohibitive. Logs were a much easier starting point for getting security insights. All appliances generate logs, which are easy to collect and don’t take up much space.
But what if that were no longer the case? What if it were easy to capture network traffic and retain it for as long as you needed to? That would change the whole equation. You wouldn’t be making inferences from the logs of a phone call but rather the entire conversation. You can understand what was said and make better decisions.
What about an approach that makes it easy to use PCAP data to deliver on the unfulfilled promises of a SIEM? What if you could analyze network traffic for workloads not only within the traditional enterprise, but also in cloud architectures — and what if you could capture and store indefinitely without breaking the bank? Suddenly the possibilities are endless. Threat hunting becomes much easier. And a whole new variety of detection techniques can be used to find threats: both “known” threats, which can be identified via rules and signatures, and “unknown” threats, which can’t be identified by rules and signatures alone and instead require behavior analysis (e.g., an advanced persistent threat).
Which brings me back to threat detection and SIEMs. SIEMs require rules to be written for correlation, which means they only detect threats you know about. There are a number of smaller companies performing behavioral analytics on logs to detect unknown threats. UBA or UEBA is the acronym du jour for this approach, and there is a good amount of interest in it. If history is any indicator, it’s very likely that these companies will be acquired by SIEM vendors as they beef up their offerings. But once again, detection accuracy is limited by the data source used: logs. Applications write just enough log information to support diagnostics. And even if the logging level is made verbose, a significant amount of information (which can be found in packets) is missing. So you end up with an incomplete view of reality.
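The behavioral-analytics idea mentioned above can be sketched in a few lines: build a per-user baseline from historical activity and flag large deviations from it. This is only a toy illustration of the principle — the user names, counts and z-score threshold are all invented for the example, and real UEBA products use far richer features and models.

```python
import statistics

# Illustrative daily login counts per user; names and values are made up.
history = {"alice": [4, 5, 6, 5, 4, 5, 6], "bob": [2, 3, 2, 2, 3, 2, 3]}
today = {"alice": 5, "bob": 40}  # bob suddenly logs in far more than usual

def anomalies(history, today, z_threshold=3.0):
    """Flag users whose activity deviates strongly from their own baseline."""
    flagged = []
    for user, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
        z = (today[user] - mean) / stdev
        if abs(z) >= z_threshold:
            flagged.append(user)
    return flagged

print(anomalies(history, today))  # → ['bob']
```

Notice that no rule ever named “bob” or “40 logins” — the anomaly falls out of the baseline. But the quality of the baseline is only as good as the data feeding it, which is the article’s point: if that data is logs, the model sees an incomplete picture.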
Some SIEMs are using PCAPs to do rudimentary analysis when their rules trigger a detection. I’ll ignore the acquisition and management costs associated with the appliance bloat needed to store PCAP data and focus on another issue. SIEMs came about two decades ago. Making them process the massive volume of network data will highlight the performance shortcomings of their architectures.
SIEMs are ideal for coordinating the security operations workflow. However, what is needed is a solution that provides a clear view into the modern attack landscape and the related network and user behaviors. SIEMs, even next gen ones, cannot deliver that. The benchmark against which you should evaluate all solutions should be the visibility that is possible by using network traffic. Change can be unnerving, but it can result in a dramatically better way of detecting threats.
(from IDG Communications, Inc.)