Who’s guarding the AI? Even security teams are bypassing oversight

Even security teams, the ones responsible for protecting the business, are adding to AI-related risk. A new survey by AI security company Mindgard, based on responses from over 500 cybersecurity professionals at the RSAC 2025 Conference and Infosecurity Europe 2025, found that many security staff are using AI tools on the job without approval.

AI tools usage by security teams (Source: Mindgard)

This growing use of unapproved AI, often called shadow AI, is becoming a major blind spot inside the teams tasked with defending the organization. Like shadow IT, this unofficial use bypasses standard security checks, but the stakes are higher with AI: these tools can process sensitive code, internal documents, and customer data, increasing the chances of leaks, privacy issues, and compliance violations.

According to the survey, 86% of cybersecurity professionals say they use AI tools, and nearly a quarter do so through personal accounts or unapproved browser extensions. Seventy-six percent believe their coworkers in cybersecurity are also using AI, often to help write detection rules, create training content, or review code.

The kind of data going into these tools adds to the risk. About 30% of respondents said internal documents and emails are being entered into AI systems. Roughly the same number admitted customer data or other confidential business information was also being used. One in five said they’ve entered sensitive information themselves, and 12% weren’t sure what kind of data was being submitted at all.

“Here’s the reality: any upload of sensitive IP to third-party SaaS, whether a code repository, file-sharing tool, or AI assistant, introduces risk. But panic isn’t the solution; governance is. Policies, education on AI data handling, and consistent SaaS controls can make generative AI as secure as other enterprise cloud services we already trust,” Steve Wilson, Project Co-Chair of the OWASP GenAI Security Project, told Help Net Security.

“If we’re rethinking the security perimeter, and we need to, shadow AI isn’t our biggest issue. The real perimeter crisis today centers on identity. Compromised credentials, insider threats, and AI-powered attackers leveraging deepfake spearphishing and AI-accelerated exploits are where our focus must shift,” Wilson added.

Oversight hasn’t kept up. Only 32% of organizations actively monitor AI use. Another 24% rely on informal checks, such as surveys or manager reviews, which often miss what’s really happening. Fourteen percent said no monitoring is in place at all, meaning some companies are flying blind when it comes to AI risk.

The survey also shows that many organizations aren’t clear on who owns AI risk. Thirty-nine percent of respondents said no one is officially in charge. Another 38% said it falls to the security team, while smaller numbers pointed to data science, executive leadership, or legal and compliance. This mix of answers highlights the need for better coordination across teams and a clear plan for who is responsible.
