What happens to enterprise data when GenAI shows up everywhere

Generative AI is spreading across enterprise workflows, shaping how employees create, share, and move information between systems. Security teams are working to understand where data ends up, who can access it, and how its use reshapes security assumptions. This article explores how GenAI is increasing data exposure, creating new threats, and outpacing existing policies, controls, and testing.

genAI data exposure

GenAI is exposing sensitive data at scale

Sensitive data is everywhere and growing fast. A new report highlights how unstructured data, duplicate files, and risky sharing practices are creating serious problems for security teams. The findings show how generative AI tools like Microsoft Copilot are adding complexity, while old problems like oversharing and poor data hygiene continue to create exposure.

AI moves fast, but data security must move faster

Generative AI is showing up everywhere in the enterprise, from customer service chatbots to marketing campaigns. It promises speed and innovation, but it also brings new and unfamiliar security risks. As companies rush to adopt these tools, many are discovering that their data protection strategies are not ready for the challenges AI creates.

GenAI is fueling smarter fraud, but broken teamwork is the real problem

Generative AI has made fraud faster, cheaper, and harder to spot. Spoofed logins, vendor impersonation, invoice fraud, and even deepfakes are now combined in sequences that mimic normal workflows. Most defenses remain tied to a single system. Training, manual verification, and email filtering all continue to fail when attacks span multiple platforms. Nearly nine in ten organizations said at least one of their safeguards broke down during a major incident.

Employees race to build custom AI apps despite security risks

There is a 50% increase in GenAI platform usage among enterprise end-users, driven by growing employee demand for tools to develop custom AI applications and agents. Despite an ongoing shift toward safe enablement of SaaS GenAI apps and AI agents, the growth of shadow AI, unsanctioned AI applications in use by employees, continues to compound potential security risks, with over 50% of all current app adoption estimated to be shadow AI.

Your employees uploaded over a gig of files to GenAI tools last quarter

In Q2 2025, an analysis of 1 million GenAI prompts and 20,000 uploaded files across more than 300 GenAI and AI-powered SaaS applications found that sensitive data is being exposed through GenAI tools, something many security leaders fear but find difficult to measure. 22% of files and 4.37% of prompts contain sensitive information, this includes source code, access credentials, proprietary algorithms, M&A documents, customer or employee records and internal financial data.

GenAI is everywhere, but security policies haven’t caught up

Nearly three out of four European IT and cybersecurity professionals say staff are already using generative AI at work, up ten points in a year, but just under a third of organizations have put formal policies in place. 63% are extremely or very concerned that generative AI could be turned against them, while 71% expect deepfakes to grow sharper and more widespread in the year ahead. Despite that, only 18% of organizations are putting money into deepfake-detection tools, a significant security gap. This disconnect leaves businesses exposed at a time when AI-powered threats are evolving fast.

We know GenAI is risky, so why aren’t we fixing its flaws?

Even though GenAI threats are a top concern for both security teams and leadership, the current level of testing and remediation for LLM and AI-powered applications isn’t keeping up with the risks. Only 66% of organizations are regularly testing their GenAI-powered products, leaving a significant portion unprotected. 48% of respondents believe a “strategic pause” is needed to recalibrate defenses against genAI-driven threats. But that pause isn’t coming.

Users lack control as major AI platforms share personal info with third parties

As generative AI becomes a growing part of everyday life, users are often unaware of what personal data these tools collect, how it’s used, and where it ends up. Researchers analyzed leading AI platforms across 11 subcategories in three key areas: how user data is utilized in model training, the transparency of each platform’s privacy practices, and the scope of data collection and third-party sharing.

Many rush into GenAI deployments, frequently without a security net

70% percent of organizations view the pace of AI development, particularly in GenAI, as the leading security concern related to its adoption, followed by lack of data integrity (64%) and trustworthiness (57%). Many organizations are already adopting GenAI, with a third of respondents indicating it is either being integrated or is actively transforming their operations.

Why CISOs are watching the GenAI supply chain shift closely

In supply chain operations, GenAI is gaining traction. Many security leaders remain uneasy about what that means for data protection, legacy tech, and trust in automation. 97% are already using some form of GenAI. But only a third are using tools designed specifically for supply chain tasks. And nearly half (43%) say they worry about how their data is used or shared when applying GenAI. Another 40% don’t trust the answers it gives.

94% of firms say pentesting is essential, but few are doing it right

Organizations are particularly struggling with vulnerabilities within their GenAI LLM web apps. 95% firms have performed pentesting on these apps in the last year with 32% of tests finding vulnerabilities warranting a serious rating. Of those findings, a mere 21% of vulnerabilities were fixed, with risks including prompt injection, model manipulation, and data leakage.

GenAI turning employees into unintentional insider threats

The amount of data being shared by businesses with GenAI apps has exploded, increasing 30x in one year. The average organization now shares more than 7.7GB of data with AI tools per month, a massive jump from just 250MB a year ago. This includes sensitive data such as source code, regulated data, passwords and keys, and intellectual property, significantly increasing the risk of costly breaches, compliance violations, and intellectual property theft. 75% of enterprise users are accessing applications with GenAI features, creating a bigger issue security teams must address: the unintentional insider threat.

8 steps to secure GenAI integration in financial services

GenAI offers financial services institutions enormous opportunities, particularly in unstructured dataset analysis and management, but may also increase security risks. GenAI can organize oceans of information and retrieve insights from it that you can use to improve business operations, maximize your markets, and enhance the customer experience. Those GenAI-analyzed datasets can turn up information about fraud, threats, and risks, which present remarkable security opportunities.

One in ten GenAI prompts puts sensitive data at risk

Despite their potential, many organizations hesitate to fully adopt GenAI tools due to concerns about sensitive data being inadvertently shared and possibly used to train these systems. In the vast majority of cases, employee behavior when using GenAI tools is straightforward. Users commonly ask to summarize a piece of text, edit a blog, or write documentation for code. However, 8.5% of prompts are a concern and put sensitive information at risk.

Malicious actors’ GenAI use has yet to match the hype

Generative AI has helped lower the barrier for entry for malicious actors and has made them more efficient, i.e., quicker at creating convincing deepfakes, mounting phishing campaigns and investment scams. For now, though, it hasn’t made the attackers “smarter” nor completely transformed cyber threats.

Read more:

Stay updated with the latest cybersecurity news. Subscribe here!

Don't miss