AI can flag the risk, but only humans can close the loop
In this Help Net Security interview, Dilek Çilingir, Global Forensic & Integrity Services Leader at EY, discusses how AI is transforming third-party assessments and due diligence. She explains how machine learning and behavioral analytics help organizations detect risks earlier, improve compliance, and strengthen accountability. As oversight grows, she also makes the case that human judgment still matters in every AI-supported decision.

When a third-party breach occurs, the forensic investigation often uncovers weak points that AI could have flagged earlier. In your experience, what patterns do you repeatedly see in post-incident analysis that AI could realistically detect or prevent?
Across investigations, we repeatedly see behaviors that EDR/XDR with machine learning can flag much earlier, such as anomalous user activity at unusual hours, execution of applications atypical for a given user or role, connections to unfamiliar or previously unseen external IPs, and unusual, sustained data egress patterns (e.g., regular outbound flows to a new address).
EDR/XDR applies behavioral analytics, signal correlation, and automated response to surface these weak signals in near real time. In our experience, organizations that have implemented these controls detect attacks earlier. So far, we have not observed major incidents among clients who run these tools, because suspicious chains of activity are interrupted before they reach impactful stages.
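To illustrate the kind of egress-baseline check such tooling performs, here is a minimal sketch. The schema (timestamp, host, bytes_out), the hourly aggregation, and the z-score threshold are assumptions for illustration, not a description of any specific EDR/XDR product.

```python
# Minimal sketch of flagging sustained, unusual outbound data volume per host.
# Assumed flow-log schema: timestamp, host, bytes_out. Illustrative only.
import pandas as pd

def flag_unusual_egress(flows: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag hours where a host's outbound volume deviates strongly from its own baseline."""
    hourly = (
        flows.groupby(["host", pd.Grouper(key="timestamp", freq="1h")])["bytes_out"]
        .sum()
        .reset_index()
    )
    baseline = hourly.groupby("host")["bytes_out"].agg(
        baseline_mean="mean", baseline_std="std"
    )
    hourly = hourly.join(baseline, on="host")
    hourly = hourly[hourly["baseline_std"] > 0]  # skip hosts with a flat baseline
    hourly["z_score"] = (hourly["bytes_out"] - hourly["baseline_mean"]) / hourly["baseline_std"]
    return hourly[hourly["z_score"] > z_threshold]

# Usage (hypothetical): alerts = flag_unusual_egress(netflow_df)
```

In practice, a real platform correlates such a signal with endpoint process behavior and user context rather than acting on volume alone.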
Can you give an example where AI helped identify a potential third-party issue before it escalated, such as unusual data transfers, financial irregularities, or behavioral red flags in communications?
In one engagement, the EDR monitoring platform raised an alert for periodic outbound network spikes from a single workstation to a newly observed destination, flagged as “unusual traffic”. During triage, endpoint process behavior, user context, and network telemetry were correlated, confirming that the pattern was inconsistent with the user’s historical activity.
The combination of behavioral anomaly detection and cross-signal correlation led to containment of the threat, preventing data exfiltration and further escalation. Key enablers were broad endpoint coverage, AI-driven real-time behavioral tracking, and disciplined alert review and response, illustrating that the combination of properly implemented AI solutions and knowledgeable users often leads to the desired results.
Many companies struggle with “black box” AI outputs. How can organizations maintain transparency and explainability in AI-driven third-party assessments?
Don’t rely on a single black-box “research agent.” Break up the due diligence process into small, auditable steps. Decide explicitly where AI adds value (e.g., entity resolution, anomaly scoring) and where deterministic logic suffices (e.g., sanctions list checks). Build an evaluation framework (“evals”) with metrics for each step, continuously comparing expected and actual outcomes.
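As an illustration of the “evals” idea, a per-step harness might look like the sketch below; the step names, cases, and pass/fail comparison are assumptions, not a prescribed framework.

```python
# Illustrative per-step evals harness: score each pipeline stage against expected outcomes
# so drift between expected and actual results is visible over time.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class EvalCase:
    step: str        # e.g., "entity_resolution", "sanctions_check" (hypothetical names)
    input: Any
    expected: Any

def run_evals(cases: list[EvalCase], steps: dict[str, Callable[[Any], Any]]) -> dict[str, float]:
    """Return the pass rate per pipeline step."""
    results: dict[str, list[int]] = {}
    for case in cases:
        actual = steps[case.step](case.input)
        results.setdefault(case.step, []).append(int(actual == case.expected))
    return {step: sum(hits) / len(hits) for step, hits in results.items()}
```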
Enforce source-to-result lineage by requiring the AI to carry source citations end to end rather than allowing the model to decide what to show. Add guardrails so the workflow proceeds through deterministic, reviewable stages. For machine learning components, use established explainability techniques (e.g., SHapley Additive exPlanations (SHAP)) to expose feature contributions and support analyst understanding and challenge.
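A hedged sketch of how SHAP can surface feature contributions for a vendor-risk score follows; the synthetic data, feature names, and the stand-in gradient-boosted model are assumptions, and only the explainer call pattern is the point.

```python
# Illustrative only: synthetic vendor features and a stand-in risk model.
# The goal is to show SHAP exposing per-vendor feature contributions for analyst review.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["country_risk", "payment_anomaly", "ownership_changes"]  # assumed features
X = rng.normal(size=(500, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)  # synthetic risk score

model = GradientBoostingRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of feature contributions per vendor

# The summary plot lets an analyst see which features drive scores and challenge the model.
shap.summary_plot(shap_values, X, feature_names=feature_names)
```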
What new governance structures or accountability measures do you think are needed as companies start embedding AI in their vendor risk processes?
Companies embedding AI into vendor risk processes need governance structures that ensure transparency, accountability, and compliance. This includes maintaining an approved sources catalogue and requiring either the system or an analyst to validate findings and document the rationale behind them.
Data minimization should be built into the design by defining what information is always in scope, such as sanctions or embargo lists, and what is contextually relevant, while excluding protected or sensitive attributes under GDPR and configuring AI to ignore them.
Risk assessments should be tiered, calibrating the depth of checks to supplier criticality and geography to avoid unnecessary data collection for low-risk relationships while expanding scope for high-risk scenarios. Human accountability remains essential, with a named individual owning due diligence decisions while AI provides recommendations without replacing human judgment (“human-in-the-loop”).
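To make the tiering concrete, a minimal sketch is shown below; the tier names, check lists, and country set are placeholders for a firm’s own policy, not a recommended standard.

```python
# Sketch of tiered due-diligence depth calibrated to criticality and geography. Illustrative only.
TIER_CHECKS = {
    "low":    ["sanctions_screening"],
    "medium": ["sanctions_screening", "adverse_media", "beneficial_ownership"],
    "high":   ["sanctions_screening", "adverse_media", "beneficial_ownership",
               "site_audit", "enhanced_financial_review"],
}

HIGH_RISK_COUNTRIES = {"CountryA", "CountryB"}  # placeholder for the firm's own geography list

def assign_tier(supplier_criticality: str, country: str) -> str:
    """Escalate depth of checks for critical suppliers or high-risk geographies; never downgrade."""
    if supplier_criticality == "critical" or country in HIGH_RISK_COUNTRIES:
        return "high"
    if supplier_criticality == "important":
        return "medium"
    return "low"

# Example: assign_tier("important", "CountryA") -> "high"
```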
Regulators are beginning to scrutinize the use of AI in compliance and governance. What direction do you think global regulatory bodies will take regarding AI in third-party due diligence?
Regulators are likely to allow AI use if firms establish strong controls and demonstrate effective oversight, as required by frameworks like the EU AI Act. Responsibility remains with individuals or organizations; liability does not transfer to AI itself.
While regulators may struggle to specify detailed technical rules, one clear shift is that “the data volume was too large to review” will no longer be an acceptable defense. Scalable, explainable processes with audit trails will be expected. Ultimately, firms must provide documentation, demonstrable evaluations, and human accountability, with flexibility in how they meet these outcomes.
