AI chatbots are sliding toward a privacy crisis
AI chat tools are taking over offices, but at what cost to privacy? People often feel anonymous in chat interfaces and may share personal data without realizing the risks. Cybercriminals see the same opening, and it may only be a matter of time before information shared in an AI chatbot conversation ends up in a major data leak.

When workplace tools go unchecked
Experts warn that users should stay alert when using platforms such as ChatGPT or Gemini, since what seems like a simple exchange can still leave a lasting data trail. Before sharing personal or sensitive details, it is worth remembering that these conversations may be stored and used to train future models unless the user opts out.
These risks are already visible inside companies. Concentric AI found that GenAI tools such as Microsoft Copilot exposed around three million sensitive records per organization during the first half of 2025. Much of this happened because employees used AI tools outside approved systems, leaving internal data exposed and poorly monitored.
Some reports indicate that as much as 80% of AI tools used by employees operate without oversight from IT or security teams. According to Harmonic Security, enterprises upload roughly 1.3 gigabytes of files to GenAI tools every quarter, and about 20% of those files include sensitive data.
As a result, most organizations now rank GenAI among their top IT concerns, with data leaks and model manipulation heading the list.
From private prompts to public results
The problem reaches beyond internal company systems. Research shows that some of the most used AI platforms collect sensitive user data and share it with third parties. Users have little visibility into how their information is stored or reused, leaving them with limited control over its life cycle.
This raises an obvious question: what happens to the information people share with chatbots?
In one case, ChatGPT conversations appeared in Google search results after users created public share links that search engines later indexed. OpenAI has since removed the feature that made those links searchable, though private sharing remains available. That wasn’t an isolated case. Hundreds of thousands of Grok conversations were also found in Google search results.
A recent study set out to see how much of that information chatbots remember and how accurately they can describe what they’ve learned about a user. The findings showed how easily personal details surface during ordinary exchanges and raised concerns about how long that data persists and how it is reused.
The rise of shadow AI
One of the more worrying trends in business is the growing use of shadow AI, where employees turn to unapproved tools to complete tasks faster. These systems often operate without company supervision, allowing sensitive data to slip into public platforms unnoticed.
Most employees admit to sharing information through these tools without approval, even as IT leaders point to data leaks as the biggest risk. While security teams see shadow AI as a serious problem, employees often view it as low risk or a price worth paying for convenience.
“We’re seeing an even riskier form of shadow AI,” says Tim Morris, Chief Security Advisor at Tanium, “where departments, unhappy with existing GenAI tools, start building their own solutions using open-source models like DeepSeek.”
DeepSeek found its way into both personal and professional use soon after its release. Early on, experts began pointing out the privacy and security risks linked to its use, and the U.S. Navy, for instance, has prohibited its personnel from using it for work-related tasks. What’s particularly alarming is that user data may be stored on servers in China, where different laws on access and data oversight apply.
Accountability begins with awareness
Companies need to do a better job of helping employees understand how to use AI tools safely. This matters most for teams handling sensitive information, whether it’s medical data or intellectual property. Any data leak can cause serious harm, from damaging a company’s reputation to leading to costly fines.
“AI governance only works if it’s actionable,” said Brooke Johnson, Chief Legal Counsel and SVP of HR and Security at Ivanti. “First, acknowledge that AI use is likely taking place across your organization, whether sanctioned or not. Conduct assessments to understand what tools are being used and which ones meet your standards.
“Then, create pragmatic policies on when and how AI can be applied. Equip teams with vetted platforms that are easy to access and secure, reducing reliance on unsanctioned alternatives.”
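Johnson’s first step, finding out which AI tools are actually in use, can start with something as simple as reviewing egress or proxy logs for traffic to public GenAI services. The sketch below illustrates one possible approach; the log format, column names, and domain watchlist are assumptions made for illustration, not a prescribed method or a complete inventory.

```python
"""
Minimal sketch of a shadow-AI discovery pass over web proxy logs.
Assumptions (not from the article): logs are CSV with 'timestamp',
'user' and 'host' columns, and the domain list is only a starting point.
"""
import csv
from collections import Counter

# Hypothetical watchlist of public GenAI endpoints to flag.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
    "chat.deepseek.com",
    "grok.com",
}

def find_genai_usage(log_path: str) -> Counter:
    """Count proxy requests per (user, host) that hit known GenAI domains."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Example: summarize which users reach which GenAI services.
    for (user, host), count in find_genai_usage("proxy.csv").most_common(20):
        print(f"{user:<20} {host:<28} {count} requests")
```

In practice, the same idea would be fed from CASB or SIEM exports and the results compared against the list of sanctioned platforms, which is where the policy and vetting steps Johnson describes come in.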
