CISO 3.0: Leading AI governance and security in the boardroom

In this Help Net Security interview, Aaron McCray, Field CISO at CDW, discusses how AI is transforming the CISO role from a tactical cybersecurity guardian into a strategic enterprise risk advisor. With AI now embedded across business functions, CISOs are leading enterprise-wide governance and risk management efforts. He also shares insights on practical challenges, new skillsets, and building AI-fluent security cultures.


With AI now embedded across business functions, how does a CISO’s role evolve to oversee enterprise-wide AI governance and risk management?

The CISO is no longer just the guardian of firewalls and endpoints. The role is shifting from tactical cybersecurity leader to strategic enterprise risk advisor, with a seat at the table advising the board on AI risk. New CDW research shows that 85% of IT leaders believe AI can improve cybersecurity, and nearly three-quarters are already implementing it to do just that.

As AI becomes more embedded, the modern CISO is helping shape governance frameworks that ensure AI is used responsibly and in alignment with compliance and business goals. This is the fundamental shift my colleague Walt Powell describes as moving from “CISO 2.0 to CISO 3.0”: the CISO evolves from cybersecurity steward to strategic leader focused on achieving business outcomes, performing quantitative financial risk management, and advising the C-suite and board in their decision-making.

What are the practical challenges CISOs face when deploying AI-driven security tools, especially around visibility, explainability, or false positives?

As advanced as AI has become, obstacles remain that require human oversight. Ensuring visibility into how AI functions, so that outputs can be audited and translated into easily digestible language, is a central challenge, especially while building trust in the technology is top of mind. AI tools are incredibly powerful, but it’s important to be able to explain how they make decisions, especially in regulated industries. CISOs must balance efficiency with accuracy as AI deployment picks up momentum, and part of this requires transparency into how outputs are produced.
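To make that concrete, here is a minimal sketch of one way to surface a per-alert explanation. Everything in it (the features, data, and model) is a hypothetical stand-in, not any vendor’s actual tooling; production systems typically rely on dedicated explainability methods such as SHAP, but the idea is the same: attribute a score to the inputs that drove it.

```python
# Illustrative sketch: explaining why a model flagged one security event.
# Features, data, and model are hypothetical stand-ins for a real tool.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["failed_logins", "bytes_out_mb", "new_country", "off_hours"]

# Synthetic labeled telemetry: rows are events, columns the features above.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # toy "malicious" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-feature sensitivity: re-score the event with each feature replaced by
# its population average; the drop in score shows what drove the alert.
event = X[:1].copy()
base_score = model.predict_proba(event)[0, 1]
print(f"alert score: {base_score:.3f}")
for i, name in enumerate(feature_names):
    perturbed = event.copy()
    perturbed[0, i] = X[:, i].mean()
    delta = base_score - model.predict_proba(perturbed)[0, 1]
    print(f"  {name}: {delta:+.3f}")
```

Output like this gives an analyst, or an auditor, a plain-language answer to “why was this flagged?” rather than an opaque score.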

Additionally, false positives are a real issue. AI-driven security tools can inundate teams with alerts that turn out to be irrelevant or low-priority. When teams are constantly chasing down false alarms, it creates alert fatigue and diverts attention from genuine threats. This not only slows down response times but can also erode trust in the system itself.
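One common mitigation is to tune the alerting threshold against a precision target rather than accepting a model’s default cutoff. The sketch below assumes scikit-learn and synthetic data; the 0.90 precision target is an arbitrary illustration, not a recommendation.

```python
# Illustrative sketch: raising an alert threshold to cut false positives.
# Data is synthetic; the 0.90 precision target is an arbitrary example.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 2000) > 1.0).astype(int)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_val)[:, 1]

# Instead of alerting at the default 0.5 cutoff, pick the lowest threshold
# that keeps precision above the target, so analysts chase fewer false alarms.
precision, recall, thresholds = precision_recall_curve(y_val, scores)
target_precision = 0.90
idx = np.argmax(precision[:-1] >= target_precision)
# Note: argmax returns 0 if the target is never met; check before deploying.
print(f"threshold={thresholds[idx]:.2f} "
      f"precision={precision[idx]:.2f} recall={recall[idx]:.2f}")
```

Raising the threshold trades some recall for fewer false alarms; where that balance sits is a risk decision, not just an engineering one.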

Integration is another hurdle. These tools need to work with existing infrastructure, and that’s not always straightforward. Many organizations have a complex patchwork of legacy systems, cloud environments, and third-party platforms. Introducing AI into that mix requires careful planning – ensuring compatibility, managing data flows, and maintaining security across all touchpoints. If an AI tool doesn’t align with existing workflows, its effectiveness drops significantly. It’s not just about plugging in a new tool; it’s about orchestrating a seamless, secure ecosystem where AI can deliver value.

How is AI changing the skillset CISOs need, both for themselves and for their teams?

AI is expanding the CISO’s required skillset beyond cybersecurity to include fluency in data science and machine learning fundamentals, along with the ability to evaluate AI models not just technically, but from a governance and risk perspective. Understanding how AI works and how to use it responsibly is essential.

Fortunately, AI has also changed how we train our teams. For example, adaptive learning platforms that personalize content and simulate real-world scenarios are helping close the skills gap more effectively.

Ultimately, to become successful in the AI space, both CISOs and their teams will need to grasp how AI models are trained, the data they rely on, and the risks they may introduce.

How should CISOs evaluate and vet third-party AI tools for security operations? What are the “red flags”?

CISOs should always prioritize accountability and transparency. Red flags to look out for include a lack of explainability or insufficient auditing capabilities, both of which leave companies vulnerable. It’s also important to understand how a tool handles sensitive data, and whether it has proven success in similar environments.

Beyond that, it’s vital to evaluate whether the tool aligns with your governance model, can be audited, and integrates well with your existing systems.
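Auditability does not have to wait on the vendor, either. As a minimal sketch, a thin wrapper can record every input and output a third-party model produces; the vendor model below is a hypothetical stub, not any real product’s API.

```python
# Illustrative sketch: an audit-logging wrapper around a third-party model,
# so every prediction is recorded and reviewable independently of the vendor.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

class AuditedModel:
    """Wraps any scoring model and logs inputs, outputs, and timestamps."""

    def __init__(self, model, model_version: str):
        self.model = model
        self.model_version = model_version

    def predict(self, features: dict) -> float:
        score = self.model.predict(features)  # delegate to the vendor tool
        logging.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "input": features,
            "output": score,
        }))
        return score

class _StubVendorModel:
    """Placeholder scoring logic; a real vendor tool goes here."""
    def predict(self, features: dict) -> float:
        return float(features.get("failed_logins", 0)) / 10

audited = AuditedModel(_StubVendorModel(), model_version="1.0.0")
print(audited.predict({"failed_logins": 7}))
```

Even this simple log gives you an independent record for incident review and compliance, regardless of what the vendor exposes.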

Lastly, overpromised capabilities or an unclear roadmap for support are signs to proceed with caution. If a vendor can’t articulate how they’ll maintain, update, or evolve a tool over time, or if there is no transparency around how issues will be handled post-deployment, your organization is at risk. You need partners who are realistic about what their tools can do today, and who are committed to evolving with you as your needs and the threat landscape change.

What advice would you give CISOs who are trying to build an AI-fluent security culture within their organization?

Start with education. AI can generate course content personalized down to the individual, which makes learning more effective. Training modules can use adaptive learning to monitor how well someone is grasping the content. You could even gamify the process, bringing in simulation strategies to target different learning styles. The resources are there, and they’re evolving fast. The key is to invest the time now to ensure your team has the foundational knowledge and hands-on experience they need to be successful in an AI-driven security landscape.
