Sonet.io blocks sensitive data from being pasted into ChatGPT

Sonet.io announced support for data loss prevention (DLP), monitoring, and observability capabilities for the generative AI era.

Sonet.io will be able to detect when sensitive data is downloaded, uploaded, copied, pasted, or typed into generative AI tools, allowing organizations to realize the efficiency gains of these tools without compromising the security of corporate data and IP.

Organizations can grant access to these tools only through Sonet.io, block prohibited actions from taking place, and see exactly what actions users have taken, even replaying recordings of violations to determine exactly what happened.

“Generative AI tools can provide significant increases in productivity, but also come with significant risk as workers inadvertently input sensitive data into them,” said Sonet.io CEO Dharmendra Mohan.

“Some organizations have addressed this by completely shutting down access to generative AI, putting them at a disadvantage against organizations that are increasing worker efficiency with these tools. By restricting the ability to input sensitive data, organizations can realize the cost benefits of AI tools without neglecting data security,” added Mohan.

Sonet.io will allow admins to set fine-grained content inspection policies that block anyone from inputting sensitive data into generative AI tools. Non-confidential content is allowed through, while data that fits sensitive patterns, such as credit card numbers, personally identifiable information (PII), keys, and source code, can be blocked.
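To illustrate the general idea behind pattern-based content inspection (this is a minimal sketch, not Sonet.io's actual implementation; the pattern names and regexes are assumptions for demonstration), a policy engine might scan outbound text against a set of sensitive-data patterns and block the action if any match:

```python
import re

# Illustrative patterns only; a production DLP engine would use far more
# robust detection, including checksum validation and contextual analysis.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def inspect(text: str) -> list[str]:
    """Return the names of sensitive-data categories found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def should_block(text: str) -> bool:
    """Block the paste/upload/typing event if any pattern matches."""
    return bool(inspect(text))
```

A real deployment would intercept the paste or upload event in the browser session, run a check like `should_block` on the content, and either allow it through or block it and notify an admin.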

Real-time notifications can be sent to internal admins to ensure any threats are dealt with immediately, and screen recordings can be used for forensics.
