Building a stronger SOC through AI augmentation

In this Help Net Security interview, Tim Bramble, Director of Threat Detection and Response at OpenText, discusses how SOC teams are gaining value from AI in detecting and prioritizing threats. By learning what “normal” looks like across users and systems, AI helps surface anomalies that rules-based methods often miss.

Bramble explains that the greatest value comes from human-AI collaboration, with automation providing insights and analysts applying the judgment and context needed for action.


Where exactly are SOCs seeing the most immediate and tangible value from AI? Can you share a real-world example where AI meaningfully improved detection or response?

SOCs are buried in billions of events every day, and if they only look for specific, known signatures, they will miss unknown threats. Attackers often operate within the bounds of what looks “normal,” using valid credentials, familiar workflows and lateral movement that blends in. This is where AI delivers immediate value: spotting subtle deviations in behavior that indicate something may be wrong, even when traditional rules don’t alert.

Applied this way, AI doesn’t just scan for known threats; it learns what is normal for each user, system or workload and flags meaningful deviations from that baseline. For example, if a user who typically accesses a few documents suddenly begins downloading large volumes of sensitive content, or starts using tools they’ve never interacted with before, AI can flag that activity as anomalous, even in the absence of known indicators.
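
To make the baselining idea concrete, here is a minimal Python sketch of that kind of per-user check. The user names, download counts and z-score threshold are invented purely for illustration and are not drawn from any particular product.

```python
# Minimal sketch: flag activity that deviates sharply from a user's own
# historical baseline. All data and thresholds here are hypothetical.
from statistics import mean, stdev

# Hypothetical daily document-download counts per user (historical baseline)
history = {
    "alice": [3, 5, 4, 6, 5, 4, 3, 5, 4, 6],
    "bob": [20, 25, 22, 18, 24, 21, 23, 19, 22, 20],
}
# Today's observed counts
today = {"alice": 180, "bob": 23}

def is_anomalous(user, observed, z_threshold=3.0):
    """Flag activity more than z_threshold standard deviations above the user's baseline."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed > mu  # flat baseline: any increase stands out
    return (observed - mu) / sigma > z_threshold

for user, count in today.items():
    if is_anomalous(user, count):
        print(f"ALERT: {user} downloaded {count} documents today, far above their usual baseline")
```

A real UEBA system models many more signals and correlates them, but the principle is the same: the alert is driven by deviation from the entity's own norm, not by a known-bad signature.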

For SOC teams, the payoff is speed, clarity and scale. Rather than combing through low-fidelity alerts or relying on fixed rules, analysts get a prioritized view of high-risk behavior based on what truly deviates from the norm. This approach is especially valuable for lean teams, surfacing what matters faster so humans can focus on validating intent and taking action.

Which SOC functions are best suited for AI? For example, alert triage, anomaly detection, threat hunting, or incident response? Are there any use cases where AI has consistently underperformed compared to traditional methods or human analysts?

AI is strongest in areas where scale, speed and noise overwhelm humans. Anomaly detection is a prime example: by continuously learning what normal behavior looks like across users, systems and workflows, AI can surface subtle deviations that rules or signatures may miss. Layering risk scoring on top of these insights and correlating behavioral and binary signals gives analysts higher-quality threat leads while automating aspects of threat hunting and alert triage. This can reduce false positives, freeing analysts to focus on deeper, higher-value investigations.

Where AI underperforms typically depends on how it is applied. For example, supervised machine learning models trained on labeled data excel at classifying activity that is known to be malicious but fail to detect novel threats. Unsupervised models that continuously learn from new behaviors are better suited to adapting to new situations and finding advanced, novel threats such as those from insiders. A human adversary with valid credentials may appear legitimate unless the system is constantly updating its understanding of what’s “normal.”
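
As a rough sketch of that unsupervised approach, the example below fits scikit-learn’s IsolationForest on a window of recent activity and scores a new session against it. The feature set and numbers are hypothetical, chosen only to illustrate the idea of a continuously refreshed baseline rather than any vendor’s implementation.

```python
# Illustrative sketch: an unsupervised model learns "normal" from recent
# behavior and flags sessions that deviate, with no labeled attack data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [logins_per_hour, MB_downloaded, distinct_hosts_touched]
recent_activity = rng.normal(loc=[5, 50, 3], scale=[1, 10, 1], size=(500, 3))

# Fit on a rolling window of recent behavior rather than a fixed labeled dataset
model = IsolationForest(contamination=0.01, random_state=0).fit(recent_activity)

# A session with valid credentials, but touching far more data and hosts than usual
suspicious = np.array([[6, 400, 40]])
print(model.predict(suspicious))            # [-1] means flagged as anomalous
print(model.decision_function(suspicious))  # lower scores are more anomalous
```

In practice such a model would be retrained on a schedule so the definition of normal tracks legitimate shifts in behavior, which is the “constantly evaluated, self-learning” property described next.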

That’s why constantly evaluated, self-learning models are so important. They provide useful AI outputs even as behaviors continually shift. At the same time, human judgment is still required in the SOC, especially when it comes to intent. While AI can highlight the “what” and “when” of a situation, humans still need to answer the “why” to determine whether activity is malicious or legitimate.

How concerned should SOC teams be about attackers poisoning models or using adversarial inputs to evade AI-driven defenses? Have you encountered real-world examples of this yet?

Model poisoning and adversarial inputs are real risks: any system that learns from data can be manipulated. Attackers have already shown they can game SIEM thresholds or poison spam filters to slip into systems, and we can expect AI tools to be next.

The right approach is to incorporate AI as just one piece of a multilayered defense. AI has transformative capabilities in the SOC, but it should not be treated as infallible. For example, high-stakes actions such as quarantining content or suspending a user account should still involve human oversight. As with any control, overreliance is the biggest vulnerability; the goal is to combine AI’s speed and pattern detection with human decision-making and context.

AI agents show promise for detecting and remediating threats, but human oversight will be required while agentic approaches earn the trust needed to operate fully autonomously.

In practice, how do you see the relationship between human analysts and AI evolving in the SOC? Are there specific tasks where humans still must remain in the loop, no matter how advanced the AI becomes?

The relationship between analysts and AI is less about replacement and more about augmentation. Threat actors today are often deliberate, moving slowly and patiently, sometimes over weeks or months, and leaving only faint traces of their presence. AI is essential for catching these signals early enough to respond.

Even so, humans will always remain central to security operations because they bring context that AI cannot replicate. They understand business priorities, risk tolerance and the broader impact of an incident. Put another way, human creativity will sit at the center of threat campaigns for the foreseeable future, increasingly aided by AI tools, and it will take human-machine partnerships on the defensive side to detect those campaigns.

The best outcomes result when AI reduces the noise. Instead of forcing analysts to sift through billions of events, AI can provide a prioritized list of the riskiest behaviors and explanations of why they were flagged. This gives humans the space to focus on investigations, response and strategy, not just triage.

For SOC teams considering AI investments, what practical steps should they take to ensure successful deployment and measurable outcomes?

The biggest misconception is that AI can be dropped into any environment and instantly deliver results. For behavioral analytics, as an example, successful deployment requires baselining: understanding what normal behavior looks like across your organization so AI systems can reliably detect what is abnormal. Without that baseline, results will fall short. The same goes for large language models (LLMs): rather than being a panacea, they offer specific value in summarizing activity, putting it in context and offering recommendations. Applied generically, they deliver questionable value and can even obscure results.

A practical approach to incorporating AI is to start small and targeted. Rather than applying AI everywhere at once, security teams should begin with high-signal areas, such as identity and access data, that are most closely related to the threats they care about. From there, it becomes easier to expand into additional telemetry sources and layer machine learning across multiple dimensions.

Just as important is establishing a measurement strategy. Teams need to set goals from the start, such as reducing false positives by a set percentage or cutting mean time to detect by a defined interval. These benchmarks ensure that AI investments are tied to outcomes that matter, from performance metrics to analyst productivity and SOC efficiency.
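
As a simple illustration of tracking such benchmarks, the snippet below computes mean time to detect and the change in false-positive rate from hypothetical incident data; every figure is made up for the example.

```python
# Back-of-the-envelope sketch of the benchmarks mentioned above: MTTD and
# false-positive rate before and after an AI rollout. Numbers are invented.
from datetime import timedelta

def mttd(delays):
    """Mean time to detect across a set of incidents."""
    return sum(delays, timedelta()) / len(delays)

before = [timedelta(hours=h) for h in (30, 48, 12, 72, 20)]
after = [timedelta(hours=h) for h in (6, 10, 4, 14, 8)]
print(f"MTTD before: {mttd(before)}, after: {mttd(after)}")

# False-positive rate: alerts that turned out to be benign / total alerts triaged
fp_before = 920 / 1000
fp_after = 610 / 1000
print(f"False-positive rate cut by {(fp_before - fp_after) / fp_before:.0%}")
```

Whatever the exact metrics, the point is to agree on them before deployment so improvement can be demonstrated rather than asserted.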

Finally, implementation is successful when it fits seamlessly with existing teams. AI should blend into the analyst workflow, not add friction. Whether it’s helping junior analysts prioritize alerts or enabling faster handoffs during triage, successful deployments treat AI as a force multiplier that simplifies analyst work rather than replacing it.
