Be careful what you share with GenAI tools at work
We use GenAI at work to make tasks easier, but are we aware of the risks? According to Netskope, the average organization now shares more than 7.7GB of data with AI tools per month, and 75% of enterprise users are accessing applications with GenAI features.
The dark side of GenAI
The fact that 89% of organizations have zero visibility into AI usage reveals a gap in oversight and control. On top of that, 71% of GenAI tools are accessed with personal, non-work accounts. Even when company accounts are used, 58% of logins skip Single Sign-On (SSO). This means security teams have no view of the tools employees use or the information being shared.
Samsung employees unintentionally exposed highly confidential data while using ChatGPT to assist with their tasks, prompting the company to ban GenAI tools.
DeepSeek saw massive downloads despite warnings that it stores collected user data on servers located in China and uses it for a range of purposes.
Shadow AI
An even bigger problem is shadow AI — the use of AI tools in an organization without official approval. Compared with traditional shadow IT, AI use leaves fewer traces and can happen on personal devices or in personal browser sessions. Entering sensitive data into AI tools can result in that information being stored on external servers beyond the company’s control, increasing the risk of data leaks and privacy breaches.
Ivanti found that 81% of office workers report they have not been trained on GenAI and 15% are using unsanctioned tools.
“While no single industry is more susceptible to shadow AI risk than another, larger organizations or well-known brands are typically the most likely to face serious reputational damage from its impact,” said Steve Tait, CTO at Skyhigh Security.
Banning GenAI isn’t enough
GenAI is moving faster than most companies can keep up with. Many still have no rules for how employees should use these tools at work.
Some companies respond by banning GenAI outright or by blocking only certain applications. But a ban alone, without helping employees understand why a tool is a potential danger, is not enough, and it drives usage into the shadows.
What organizations should do:
- Implement policies defining when GenAI tools can be used and when human oversight is required to prevent harmful errors.
- Provide staff training and regularly audit usage to ensure responsible and compliant use of GenAI tools.
- Control and restrict the type of information entered into GenAI tools to protect personal data and comply with privacy laws (see the sketch after this list).
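To make that last point concrete, here is a minimal sketch in Python of a redaction filter an organization might run before a prompt leaves its network. The patterns and the redact() helper are illustrative assumptions, not a production DLP product; in practice this kind of control usually lives in a secure web gateway or an API proxy in front of the GenAI tool.

```python
import re

# Illustrative patterns for data that should never reach an external
# GenAI service. A real deployment would use a vetted DLP rule set;
# these regexes only catch the most obvious cases.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the org."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@corp.com, key sk-abcdefghijklmnop1234"
    print(redact(raw))
    # -> Contact [REDACTED EMAIL], key [REDACTED API_KEY]
```

Even a simple filter like this pairs naturally with the training and auditing points above: it blocks the accidental paste of credentials or customer data, while logs of what was redacted show security teams where employees need more guidance.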
Education is key. Employees should be made aware that these applications and tools can cause real harm when used improperly.
The executive blind spot
Organizations can’t afford to turn their back on GenAI technology. As a result, some may feel pressured to implement tools without fully understanding how they work or what risks they carry. Saying “we use AI” might sound good on a call with investors, but it can turn into a breach, a lawsuit, or bad press.
“GenAI certainly looks to be the next major cause of an IT security evolution, and most organizations will be looking for ways to see, control, and protect their users and their data as AI becomes embedded in every process,” noted Rick Caccia, CEO at WitnessAI.
Leaders do not need to be AI experts, but they do need to ask the right questions. What data is the model using? How is it being monitored? Who is responsible if something goes wrong? Without that awareness, adopting AI is not a strategy. It is a gamble.