PrivateGPT enables users to share only necessary information with OpenAI’s chatbot
Private AI launched PrivateGPT, a new product that helps companies safely leverage OpenAI’s chatbot without compromising customer or employee privacy.
“Generative AI will only have a space within our organizations and societies if the right tools exist to make it safe to use,” says Patricia Thaine, CEO of Private AI.
PrivateGPT is an AI-powered tool that redacts 50+ types of Personally Identifiable Information (PII) from user prompts before sending them through to ChatGPT – and then re-populates the PII within the answer for a seamless and secure user experience. Entities can be toggled on or off to provide ChatGPT with the context it needs to successfully answer the query, or privacy mode can be turned off entirely if no sensitive information needs to be filtered out.
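The redact-then-restore flow described above can be sketched in a few lines. The regex patterns, placeholder format, and the stand-in for the model call below are illustrative assumptions, not Private AI's actual detection technology or API:

```python
import re

# Hypothetical illustration of the pattern PrivateGPT describes: detected PII
# is swapped for placeholder tokens before the prompt reaches the LLM, and the
# placeholders in the model's answer are mapped back to the original values.
# Real PII detection is far more sophisticated than these toy regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(prompt):
    """Replace detected PII with numbered placeholders; return text and mapping."""
    mapping = {}
    counts = {}
    def substitute(label, match):
        counts[label] = counts.get(label, 0) + 1
        token = f"[{label}_{counts[label]}]"
        mapping[token] = match.group(0)
        return token
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(lambda m, l=label: substitute(l, m), prompt)
    return prompt, mapping

def restore(answer, mapping):
    """Re-populate the original PII into the model's answer."""
    for token, original in mapping.items():
        answer = answer.replace(token, original)
    return answer

redacted, mapping = redact("Email jane@example.com or call 555-123-4567.")
print(redacted)                        # Email [EMAIL_1] or call [PHONE_1].
answer = "Reply to [EMAIL_1] today."   # stand-in for the LLM's response
print(restore(answer, mapping))        # Reply to jane@example.com today.
```

Because the mapping never leaves the caller's environment, only the placeholder-bearing prompt would be shared with the third-party model.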
“LLMs are not excluded from data protection laws like the GDPR, HIPAA, PCI DSS, or the CPPA. The GDPR, for example, requires companies to get consent for all uses of their users’ personal data and also comply with requests to be forgotten,” says Thaine. “By sharing personal information with third-party organizations, they lose control over how that data is stored and used, putting themselves at serious risk of compliance violations.”
To compound the problem, OpenAI was recently at the center of a major privacy breach when a bug exposed users’ chat histories, potentially revealing all kinds of personal information such as names, addresses, and phone numbers. With PrivateGPT, only necessary information is shared with OpenAI, and companies can rest assured that, beyond remaining compliant with data protection regulations, no personal information is subject to unwanted exposure through bugs or data breaches.
Data privacy concerns are not exclusive to ChatGPT. Generative AI startups and enterprise software companies using Large Language Models are facing the same issues. “We understand the significance of safeguarding the sensitive information of our customers,” says Sunil Rao, CEO of Tribble, a company that has built a platform for automating go-to-market functions that recently partnered with Private AI. “With Private AI, we can build Tribble on a bedrock of trust and integrity, while proving to our stakeholders that using valuable data while still maintaining privacy is possible.”
Private AI uses state-of-the-art technology to detect, redact, and replace over 50 types of PII, PHI, and PCI in 49 languages with unparalleled accuracy. Plus, it’s all deployed within the customer’s own environment, so PII is never shared with any third party, not even Private AI. Their core de-identification product is currently used by customers of all sizes, from startups to large enterprises.
“The last few years have proven that data is the most valuable currency,” says Priyanka Mitra, Partner at M12, Microsoft’s venture arm and Private AI investor. “PrivateGPT is just one more example of Private AI’s consistent ability to develop industry-leading tools for data privacy.”
“Our PrivateGPT solution is just one more step in our commitment to helping companies leverage their data without compromising compliance and consumer trust,” says Thaine.