Healthcare workers regularly upload sensitive data to GenAI, cloud accounts
Healthcare organizations are facing a growing data security challenge from within, according to a new report from Netskope Threat Labs. The analysis reveals that employees in the sector are frequently attempting to upload sensitive information, including potentially protected health data, to unauthorized websites and cloud services. Among the most common destinations are AI tools like ChatGPT and Gemini.
Healthcare GenAI data policy violations
Over the past 12 months, 81% of all data policy violations in healthcare organizations involved regulated healthcare data. This includes information protected by local, national, or international laws, such as sensitive medical and clinical records. The remaining 19% of violations involved other sensitive assets such as passwords and keys, source code, or intellectual property. Many of these incidents stemmed from employees uploading data to personal cloud storage services like Microsoft OneDrive or Google Drive.
Generative AI is widely embedded in healthcare environments, with 88% of organizations reporting usage. Of the data policy violations involving generative AI, 44% included regulated healthcare data, while the rest involved source code (29%), intellectual property (25%), and passwords or keys (2%). The risk is compounded by the prevalence of applications that either use personal data for model training (present in 96% of organizations) or embed generative AI features (98%).
A key issue is the use of personal GenAI accounts in the workplace. More than two-thirds of healthcare employees using these tools send sensitive data to accounts outside organizational control. This undermines visibility for security teams and limits their ability to detect or prevent potential data leaks in real time.
“Personal healthcare data is subject to some of the most significant and stringent regulatory scrutiny. Violations can lead to regulatory investigations, legal action, and substantial fines, including penalties up to €20 million under GDPR or $1.5 million per violation under HIPAA. Beyond financial consequences, breaches erode patient trust and damage organizational credibility with vendors and partners,” Ray Canzanese, Director of Netskope’s Threat Labs, told Help Net Security.
“It is essential that healthcare CISOs implement comprehensive data policies, enforced through DLP tools and strict access controls with ZTNA, both of which enable continuous monitoring over healthcare data. In an environment where generative AI tools become more embedded in clinical and operational workflows, it is also vital that CISOs understand the new potential attack vectors while also preventing inadvertent data loss. As healthcare professionals look to embrace these new technologies to improve their work efficiency, CISOs in healthcare cannot compromise the security of patient data,” Canzanese concluded.
Data protection guardrails
Deploying organization-approved GenAI applications, to centralize GenAI usage in tools the organization approves, monitors, and secures, and to reduce reliance on personal accounts and “shadow AI”. The use of personal GenAI accounts by healthcare workers, while still high, has already declined from 87% to 71% over the past year as organizations increasingly shift toward approved GenAI solutions.
Deploying Data Loss Prevention (DLP) policies to monitor and control access to GenAI applications and to define the types of data that can be shared with them. This provides an added layer of security should workers attempt risky actions. The proportion of healthcare organizations deploying DLP policies for GenAI has increased from 31% to 54% over the past year.
Deploying real-time user coaching, which alerts employees when they are about to take a risky action. For example, if a healthcare worker attempts to upload a file containing patient names to ChatGPT, a prompt asks the user to confirm whether they want to proceed. A separate report shows that a large majority of employees (73%) across all industries do not proceed when presented with coaching prompts.
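To make the DLP-plus-coaching pattern concrete, the sketch below shows the general idea in Python: scan outbound text for sensitive identifiers and, on a hit, prompt the user before allowing the upload. The specific regex patterns, the MRN format, and the function names are illustrative assumptions for this example only; they are not Netskope's implementation, and production DLP engines rely on much richer detection (dictionaries, fingerprinting, entity recognition).

```python
import re

# Illustrative PHI patterns only; real DLP products use far more
# sophisticated detectors. The 8-digit MRN format is an assumption.
PHI_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Medical record number": re.compile(r"\bMRN[:\s]*\d{8}\b", re.IGNORECASE),
    "Date of birth": re.compile(r"\bDOB[:\s]*\d{2}/\d{2}/\d{4}\b", re.IGNORECASE),
}

def scan_for_phi(text: str) -> list[str]:
    """Return the names of any PHI patterns found in the text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def coach_user(text: str, destination: str, confirm=input) -> bool:
    """Coach the user before a risky upload; return True if it may proceed."""
    findings = scan_for_phi(text)
    if not findings:
        return True  # nothing sensitive detected, allow silently
    print(f"Warning: upload to {destination} appears to contain: "
          f"{', '.join(findings)}")
    answer = confirm("Do you want to proceed anyway? [y/N] ")
    return answer.strip().lower() == "y"
```

In line with the 73% figure above, the default answer is "no": the upload proceeds only if the user explicitly confirms after seeing what was detected.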