AI tools are everywhere, and most are off your radar
80% of AI tools used by employees go unmanaged by IT or security teams, according to Zluri's The State of AI in the Workplace 2025 report. AI is popping up all over the workplace, often without anyone noticing. If you're a CISO who wants to avoid blind spots and data risks, you need to know where AI is showing up and what it's doing across the entire organization.
What’s happening and why it matters
Organizations are using dozens, sometimes hundreds, of AI tools across different teams. These tools show up in marketing, sales, engineering, HR, and operations, but most security teams know about fewer than 20% of them. Employees often try out AI apps on their own, without approval or oversight. The result is shadow AI: tools that operate outside the knowledge of IT and security.
When AI systems interact with sensitive internal data or generate outputs that affect business decisions, that lack of oversight becomes risky. Unchecked tools may be connecting to unknown vendors, storing data on public servers, or sharing inputs and outputs without encryption or audit trails. As AI becomes more integrated into daily tasks, the risk compounds.
Risks CISOs must treat as critical
Data leakage is one of the most pressing threats. AI tools not managed by IT may handle sensitive internal information and expose it to the outside world. In regulated industries, this can easily lead to compliance violations, especially when protected health data, financial information, or customer records are involved.

Another growing concern is access sprawl. AI platforms often create service accounts or connect via API keys, and without a central inventory it's easy to lose track of these credentials. That increases the attack surface. There is also the problem of missing audit trails: if data is misused or leaked and there is no log of how it happened, incident response becomes almost impossible.
What CISOs can do
- Get visibility across all AI tools: Invest in discovery platforms that scan networks, identity systems, and SaaS usage to spot AI tools in use.
- Group by risk level: Classify tools based on data access; tools that touch sensitive data warrant greater scrutiny. Build policies accordingly.
- Enforce least privilege: Restrict access rights for AI applications. Audit API keys, manage service accounts centrally, and revoke unused tokens.
- Integrate into governance frameworks: Add AI tools to your asset inventory and require a security review before approval, just as you would for SaaS applications.
- Adopt real-time alerting: Use risk scoring tied to unusual AI usage patterns. If a sensitive document is uploaded to an unknown model, flag it.
- Educate employees: Shadow AI grows fastest when staff are unaware it's a security risk. Run awareness campaigns and set clear usage policies.
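To make the real-time alerting idea concrete, here is a minimal sketch of a risk-scoring rule that flags sensitive documents sent to unapproved AI services. The allowlist domains, sensitivity labels, score weights, and event schema are all illustrative assumptions, not any product's actual API:

```python
# Sketch: flag uploads of sensitive documents to unapproved AI services.
# Domains, labels, weights, and the event schema are hypothetical.

APPROVED_AI_DOMAINS = {"chat.corp-approved-ai.example"}   # assumed allowlist
SENSITIVE_LABELS = {"confidential", "restricted", "pii"}  # assumed DLP labels

def score_upload(event: dict) -> int:
    """Return a simple additive risk score for one upload event."""
    score = 0
    if event["destination_domain"] not in APPROVED_AI_DOMAINS:
        score += 2  # unknown or unapproved AI service
    if event["doc_label"] in SENSITIVE_LABELS:
        score += 3  # sensitive data classification
    return score

def should_alert(event: dict, threshold: int = 4) -> bool:
    """Alert only when both risk factors stack past the threshold."""
    return score_upload(event) >= threshold

# A confidential file sent to an unknown AI domain trips the alert.
event = {"destination_domain": "unknown-llm.example", "doc_label": "confidential"}
print(should_alert(event))  # True
```

Real deployments would feed such a rule from DLP labels and proxy or CASB logs; the point is that combining destination reputation with data sensitivity cuts down on noisy single-signal alerts.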
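The least-privilege step above includes revoking unused tokens, which presumes you can find them. A minimal sketch of that audit, assuming a simple in-house token inventory (the record fields and 90-day staleness policy are made up for illustration):

```python
# Sketch: identify stale AI service tokens for revocation.
# The inventory schema and 90-day policy are assumptions, not a vendor API.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)  # hypothetical policy threshold

def stale_tokens(tokens: list, now: datetime) -> list:
    """Return IDs of tokens not used within the staleness window.
    Each token is a record like {"id": str, "last_used": datetime}."""
    return [t["id"] for t in tokens if now - t["last_used"] > STALE_AFTER]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
inventory = [
    {"id": "svc-marketing-llm", "last_used": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"id": "svc-eng-copilot", "last_used": datetime(2025, 5, 20, tzinfo=timezone.utc)},
]
print(stale_tokens(inventory, now))  # ['svc-marketing-llm']
```

Running such a sweep on a schedule, then revoking or re-approving whatever it surfaces, turns "audit API keys" from a one-off project into a standing control.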