Rethinking SIEM requires rethinking visibility

Security professionals now generally recognize that siloed security tools and systems have undercut efforts to find active attacks more quickly and efficiently.


Information security began decades ago with a layered approach, often relying on a heterogeneous mix of vendors: desktop or endpoint solutions were separate from, and frequently came from different manufacturers than, those for the gateway or cloud. While the underlying tenets of not relying on a single vendor and taking advantage of best-of-breed expertise for each system or tool are still valid, it has become obvious that data needs to be combined to understand the complete attack surface and the progression of the kill chain.

SIEM was created over fifteen years ago to integrate security data and provide real-time analysis of the security alerts generated by applications and network hardware. Admittedly, SIEMs relied too heavily on log data and never offered a complete enough representation of the attack surface or the assets being protected, but they have provided significant value. Still, they have not solved the escalating problem of attacks and breaches, nor the flood of false positives that makes SOC operations inefficient and leaves serious gaps for attackers to exploit.

Other siloed security solutions have come to market to address these deficiencies, including Extended Detection and Response (XDR), Network Traffic Analysis (NTA) and User and Entity Behavior Analytics (UEBA). At the same time, companies are rethinking the SIEM to make it more effective. While each of these represents progress, each is contingent on getting real-time or near-real-time data from across the entire organization. The principle echoes the early computing maxim of garbage in, garbage out: these systems can only be as good as the data they ingest.

Visibility is the key, but visibility is not a one-dimensional consideration; there are multiple aspects to weigh. The first is breadth, or coverage. It is important to be able to see the signs of any East-West and North-South activity, especially the reconnaissance or lateral movement involved in an active attack; a full-dimensional view is required to expose the entire range of the physical and virtual attack surface. Command-and-control communication and exfiltration, of course, indicate an attack that has already advanced, and both can be cleverly disguised and difficult to spot. Exfiltration typically occurs so far into the attack process that it may be too late to mitigate or to stop the theft or damage.
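As a concrete illustration of East-West breadth, the sketch below flags an internal host that contacts an unusually large number of internal peers within one observation window, a common sign of reconnaissance. It is a minimal sketch, not any particular product's detector; the internal address range, the flow-record shape and the threshold are all illustrative assumptions.

```python
# Minimal East-West reconnaissance check over flow records.
# INTERNAL range, record format and PEER_THRESHOLD are assumptions.
from collections import defaultdict
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")   # assumed internal address range
PEER_THRESHOLD = 50                   # assumed: distinct peers per window

def find_scanners(flows):
    """flows: iterable of (src_ip, dst_ip) strings seen in one window."""
    peers = defaultdict(set)
    for src, dst in flows:
        if ip_address(src) in INTERNAL and ip_address(dst) in INTERNAL:
            peers[src].add(dst)       # record an East-West contact
    return [host for host, p in peers.items() if len(p) > PEER_THRESHOLD]

# A host sweeping a /24 subnet would cross the threshold within minutes.
```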

Second, signs of reconnaissance and lateral movement often must come from multiple sources to show progression and to improve the accuracy and speed of the findings. Data will likely come from multiple tools and directly from one or more points in the network or extended network. For these purposes, traffic is usually captured out-of-band through SPAN or RSPAN ports, a network TAP, or a virtual TAP; each option can be selected based on the infrastructure architecture.
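As a rough illustration of out-of-band collection, the following sketch passively reads packets from a mirror interface fed by a SPAN port or TAP. It assumes a Linux host running scapy with root privileges and a hypothetical interface name of mirror0; a real deployment would forward summarized records to the SIEM or analytics tier rather than print them.

```python
# Passive out-of-band capture sketch; "mirror0" is a hypothetical
# interface receiving SPAN/TAP traffic. Requires scapy and root.
from scapy.all import sniff, IP

def summarize(pkt):
    """Emit a one-line flow summary for each observed IP packet."""
    ip = pkt[IP]
    print(f"{ip.src} -> {ip.dst} proto={ip.proto} len={ip.len}")

# store=False keeps memory flat; capture is read-only, so nothing
# here can disturb the production traffic path.
sniff(iface="mirror0", filter="ip", prn=summarize, store=False)
```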

Third, data context can be critical for correlation and for producing improved insight and more focused traffic analysis. Data delivered through a vendor's API may lack the details necessary to boost fidelity through a better understanding of context. Historical detail is also valuable for judging an alert against the baseline behavior that was defined as normal.
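A small sketch can make the value of context concrete: the same raw alert scores very differently once asset criticality and a historical baseline are attached. The lookup tables, field names and scoring weights below are illustrative assumptions, not a real CMDB or any vendor's schema.

```python
# Context-enrichment sketch; all tables and weights are assumptions.
ASSET_CONTEXT = {                     # assumed CMDB-style asset context
    "10.0.5.20": {"role": "domain-controller", "criticality": "high"},
}
BASELINE_LOGINS = {"10.0.5.20": 3.0}  # assumed mean failed logins/hour

def enrich_and_score(alert):
    """alert: dict with 'host' and 'failed_logins' fields (illustrative)."""
    ctx = ASSET_CONTEXT.get(alert["host"], {"criticality": "low"})
    baseline = BASELINE_LOGINS.get(alert["host"], 1.0)
    deviation = alert["failed_logins"] / baseline
    # Weight the anomaly by how critical the asset is to the business.
    score = deviation * (3 if ctx["criticality"] == "high" else 1)
    return {**alert, **ctx, "deviation": deviation, "score": score}

print(enrich_and_score({"host": "10.0.5.20", "failed_logins": 30}))
```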

Sometimes full packet data is necessary, but often header information or extracted metadata is sufficient. Decryption of encrypted traffic is often essential, but ideally it is performed according to policy to avoid compliance issues, exposing data to insiders, or creating liability. Again, having a variety of options is important, so that the depth of capture can be matched to what the analysis actually requires.
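The sketch below illustrates header-only extraction with scapy on the same hypothetical mirror0 interface as above: it keeps the fields useful for correlation and deliberately never reads the payload, so encrypted or sensitive content stays at the capture point. The field selection is an illustrative assumption, not a standard schema.

```python
# Header-only metadata extraction sketch; payload is never touched.
from scapy.all import sniff, IP, TCP

def header_record(pkt):
    """Build a compact, payload-free record from L3/L4 headers."""
    if IP in pkt and TCP in pkt:
        ip, tcp = pkt[IP], pkt[TCP]
        record = {
            "src": ip.src, "dst": ip.dst,
            "sport": tcp.sport, "dport": tcp.dport,
            "flags": str(tcp.flags), "bytes": len(pkt),
        }
        print(record)  # a real pipeline would ship this to the SIEM

sniff(iface="mirror0", filter="tcp", prn=header_record, store=False)
```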

A fourth aspect is speed and capacity. Getting data in real time or near-real time is important; batch uploads from point solutions may be too slow to stop an attack at the first opportunity. One value of integrating and correlating data within a SIEM or an alternative processing center is that small signals, which alone may seem inconsequential, can be combined to provide better insight and higher accuracy, so something that might have been overlooked does not slip through. This requires having all relevant data available quickly and in the same time frame. Capacity matters as well: traffic can spike above the processing power and network interface speeds of the tools, and those spikes can leave the SIEM or other tools unable to inspect the data and correlate it across the entire chain of events.
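To show how weak signals can compound, the following sketch sums individually low scores for the same host inside a sliding time window and raises an alert only when the combined score crosses a threshold. The scores, window length and threshold are illustrative assumptions.

```python
# Weak-signal correlation sketch; all numeric values are assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 300
ALERT_THRESHOLD = 10

signals = defaultdict(deque)          # host -> deque of (timestamp, score)

def ingest(host, timestamp, score):
    q = signals[host]
    q.append((timestamp, score))
    while q and timestamp - q[0][0] > WINDOW_SECONDS:
        q.popleft()                   # expire events outside the window
    total = sum(s for _, s in q)
    if total >= ALERT_THRESHOLD:
        print(f"ALERT {host}: combined score {total} within window")

# Three minor events in five minutes, none alarming on its own:
ingest("10.0.5.20", 0, 4)             # e.g., unusual DNS query
ingest("10.0.5.20", 120, 3)           # e.g., new admin tool launched
ingest("10.0.5.20", 240, 4)           # e.g., SMB connection to a new peer
```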

Flexibility is also important, as the network is always changing. Every day likely brings new users, devices, applications, assets and attack vectors or threats. Having a way to get the exact visibility needed without significantly rearranging the infrastructure is helpful; ideally, cabling will not have to be modified or routing infrastructure changed.

New technologies, such as XDR or a next generation of SIEM, offer considerable advances in closing the gap on attacks and attackers. All of them, however, are contingent on visibility: the ability to immediately and accurately uncover the work of an attacker.

In rethinking the SIEM, or in bringing in a new center to integrate, correlate and analyze data from across the network, consider all of these aspects of visibility as well. Visibility must go hand in hand with advanced analytical solutions and the further use of machine learning and artificial intelligence to sustain and improve the defense of a wide attack surface on an ever-changing cyber battlefield.
