You can’t audit how AI thinks, but you can audit what it does

In this Help Net Security interview, Wade Bicknell, Head, IT Security & Operations, CFA Institute, discusses how CISOs can use AI while maintaining security and governance. He explains why AI presents both defensive opportunities and emerging risks, and how leadership must balance innovation, control, and accountability in cybersecurity.


How should a CISO be thinking about using, or guarding against, AI/ML systems internally (for fraud detection, threat hunting)?

It’s a dual-sided challenge. Many emerging companies are focused on integrating AI into their defensive capabilities, from fraud detection to threat hunting, but CISOs must also recognize that adversaries are doing the same. 

We’re still in the early days of AI being used offensively, but that stage is approaching quickly. AI is a force multiplier: it can accelerate defense, but it can also amplify malicious creativity when paired with human intent. 

This means establishing internal boundaries for experimentation, protecting the data fed into AI models, and creating early governance frameworks around use. In short, we must learn to defend with AI while simultaneously defending against it. 

How can an organization ensure AI tools are auditable, explainable, and robust against adversarial attacks?

Traditional audit and oversight models assume linear, explainable logic, but AI doesn’t think in straight lines. It’s like assigning a complex task to an autonomous intern who decides how to get it done. You see the outcome, but not always the process. 

That’s the core challenge: we often can’t unpack how an AI reached a decision. The best we can do is validate that its outputs align with our intent and remain within our ethical and operational boundaries. 

Organizations should combine technical transparency (model documentation, data lineage) with operational oversight: human review boards, AI red-teaming, and continuous testing for drift or adversarial manipulation. 

We may never be able to audit AI’s thought process, but we can, and must, continuously audit its outcomes and impact. 
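
To make that idea concrete, here is a minimal Python sketch of auditing an AI system's outputs rather than its reasoning. This is not CFA Institute's tooling; the function names, windows, and tolerance are illustrative assumptions only.

```python
# Minimal sketch: audit an AI system's *outcomes* for drift, since its internal
# reasoning can't be inspected directly. Names and thresholds are illustrative.

from statistics import mean

def drift_alert(baseline_scores: list[float],
                recent_scores: list[float],
                tolerance: float = 0.10) -> bool:
    """Flag a review when the mean model score in the recent window
    moves more than `tolerance` away from the approved baseline."""
    return abs(mean(recent_scores) - mean(baseline_scores)) > tolerance

# Example: scores from the model's approved validation period vs. last week.
baseline = [0.12, 0.15, 0.11, 0.14, 0.13]
recent = [0.31, 0.28, 0.35, 0.30, 0.29]

if drift_alert(baseline, recent):
    print("Output drift detected: route model for human review and re-testing.")
```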

What standards or controls should be in place when AI is used in investment operations or member services?

Any organization allowing AI to make investment or customer-facing decisions without human oversight is accepting significant risk. AI lacks moral and contextual awareness; it doesn’t intuit “don’t harm,” “don’t mislead,” or “act fairly.” 

In my own experience using AI for coding and analytics, I’ve seen how quickly it can “forget” prior guardrails and revert to earlier behavior. That’s why in financial or member-service contexts, AI must operate under strict governance that includes: 

  • Human-in-the-loop decisioning for all high-impact actions
  • Defined ethical boundaries for what AI can decide or recommend
  • Bias and performance testing at regular intervals
  • Accountability for AI outputs, including override mechanisms

We can’t assume AI understands the intent behind our requests; we must encode that intent and continuously verify it’s being upheld. 
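
As a rough illustration of human-in-the-loop decisioning for high-impact actions, here is a minimal Python sketch. The threshold, action names, and approval mechanism are assumptions for the example, not a description of any firm's actual controls.

```python
# Minimal sketch: gate high-impact AI recommendations behind explicit human
# approval. Thresholds and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "rebalance_portfolio"
    impact_score: float  # 0.0 (trivial) to 1.0 (high impact)
    rationale: str

HIGH_IMPACT_THRESHOLD = 0.5

def execute(rec: Recommendation, human_approved: bool = False) -> str:
    """Low-impact recommendations may proceed; anything above the
    threshold is held until a human decision is recorded."""
    if rec.impact_score >= HIGH_IMPACT_THRESHOLD and not human_approved:
        return f"HELD for human review: {rec.action} ({rec.rationale})"
    return f"EXECUTED: {rec.action}"

print(execute(Recommendation("rebalance_portfolio", 0.8, "model-detected sector drift")))
```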

How should firms prepare for audits of AI systems where transparency might conflict with intellectual property constraints or model complexity?

This is one of the most overlooked risk areas. Many organizations don’t know what their employees are feeding into AI systems; sensitive data, code, or proprietary logic can leak easily through everyday use of generative tools. 

Until a major event forces the issue, many will underestimate the exposure. The best preparation is proactive: 

  • Data classification policies defining what may or may not enter AI systems
  • Internal model registries to track usage, inputs, ownership, and updates
  • Third-party attestations or secure audit mechanisms that protect IP while allowing sufficient transparency
  • Employee education on the risks of AI data misuse or leakage

By doing these things, and documenting that you do them, you achieve the true intent of an audit: proving that controls are in place. 

When governance, transparency, and accountability are embedded in operations, you don’t just prepare for an audit; you make your organization audit-proof by design. 
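
One way to picture the first control on that list, a data classification gate in front of generative tools, is the Python sketch below. The patterns and labels are illustrative assumptions; a real deployment would rely on a proper DLP or classification service rather than regex rules.

```python
# Minimal sketch of a data-classification gate for AI prompts.
# Patterns are illustrative; production controls would use a DLP/classification service.

import re

# Illustrative patterns for material that must not enter external AI tools.
BLOCKED_PATTERNS = {
    "api_key": re.compile(r"(?i)api[_-]?key\s*[:=]"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_label": re.compile(r"(?i)\bconfidential\b"),
}

def allowed_for_ai(prompt: str) -> tuple[bool, list[str]]:
    """Return whether the prompt may be sent, plus the policy rules it violates."""
    violations = [name for name, pattern in BLOCKED_PATTERNS.items()
                  if pattern.search(prompt)]
    return (not violations, violations)

ok, hits = allowed_for_ai("Summarize this CONFIDENTIAL memo: api_key = abc123")
print(ok, hits)  # False ['api_key', 'internal_label']
```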

In anti-money laundering or fraud detection, how do you ensure explanations don’t reveal sensitive data while remaining actionable?

In AML and fraud detection, timing is everything. Explanations are only useful if they help prevent a transaction before it occurs. AI will supercharge both payments and fraud, so prevention must shift to the front of the process.

At the same time, model explainability must balance regulatory and privacy demands. The most effective approach is layered disclosure: give investigators enough insight (patterns, behavioral anomalies, clusters) to act, without revealing unnecessary personal or transactional data. 

The goal is actionable transparency, giving teams the information they need to act decisively and ethically, without compromising privacy or regulatory integrity.
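
A minimal sketch of that layered-disclosure idea, assuming a simple alert record, is shown below in Python. The field names and the pseudonymization scheme are illustrative assumptions, not a description of any real AML system.

```python
# Minimal sketch: an investigator-facing view of a fraud/AML alert that keeps
# behavioral signals but withholds direct identifiers. Fields are illustrative.

import hashlib

def investigator_view(alert: dict) -> dict:
    """Keep the signals investigators need to act, replace direct identifiers
    with a stable pseudonym, and drop raw account details."""
    pseudonym = hashlib.sha256(alert["account_id"].encode()).hexdigest()[:12]
    return {
        "case_ref": pseudonym,                  # stable, but no raw identifier exposed
        "pattern": alert["pattern"],            # e.g. "structuring"
        "anomaly_score": alert["anomaly_score"],
        "cluster_size": alert["cluster_size"],  # related entities, no names exposed
    }

alert = {
    "account_id": "ACC-000123",
    "account_holder": "Jane Doe",   # never shown at this disclosure layer
    "pattern": "structuring",
    "anomaly_score": 0.93,
    "cluster_size": 7,
}
print(investigator_view(alert))
```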
