Arthur Shield tackles safety and performance issues in large language models
Arthur introduced a powerful addition to its suite of AI monitoring tools: Arthur Shield, a firewall for large language models (LLMs).
This patented new technology enables companies to deploy LLM applications like ChatGPT more safely within an organization, helping to identify and resolve issues before they become costly business problems — or worse, result in harm to their customers.
Recent advancements in large language models from OpenAI, Google, Meta, and others have spurred a rush of companies across industries to integrate LLMs into their operations. However, along with the incredible power of this new technology come significant risks and safety issues.
Arthur Shield enables companies to deploy LLMs more safely by detecting and then blocking key risks, such as leakage of PII or other sensitive data; generation of toxic, offensive, or problematic language; and incorrect responses, also known as “hallucinations.”
The platform also detects and stops malicious user prompts, including attempts to make the model generate a response that would reflect poorly on the business, to extract sensitive training data, or to bypass safety controls.
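The guardrail pattern described above, screening a prompt before it reaches the model and screening the response before it reaches the user, can be sketched generically. The following is a minimal illustrative example and does not reflect Arthur Shield's actual API or detection methods; the regex patterns and blocklist phrases are purely hypothetical stand-ins for real detectors:

```python
import re

# Illustrative detectors only; a production firewall would use far richer models.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
BLOCKED_PROMPT_MARKERS = [
    "ignore previous instructions",   # naive prompt-injection signal
    "reveal your training data",      # attempt to extract training data
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks malicious and should be blocked."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in BLOCKED_PROMPT_MARKERS)

def screen_response(response: str) -> bool:
    """Return True if the response appears to leak PII."""
    return any(p.search(response) for p in PII_PATTERNS)

def guarded_llm_call(prompt: str, llm) -> str:
    """Wrap an LLM callable with pre- and post-generation checks."""
    if screen_prompt(prompt):
        return "[blocked: prompt rejected by firewall]"
    response = llm(prompt)
    if screen_response(response):
        return "[blocked: response withheld to prevent data leakage]"
    return response
```

The key design point is that the wrapper sits between the user and the model in both directions: malicious prompts never reach the LLM, and risky outputs never reach the user.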
“LLMs are one of the most disruptive technologies since the advent of the Internet. Yet, as with all new technologies, these advancements pose numerous potential risks to both companies and the public,” said Adam Wenchel, CEO of Arthur. “Arthur has created the tools needed to deploy this technology more quickly and securely, so companies can stay ahead of their competitors without exposing their businesses or their customers to unnecessary risk.”
Arthur is currently used by industry leaders like Humana, the Department of Defense (DoD), Expel, Axios HQ, and three of the top five US banks to address critical issues faced by AI developers, such as accuracy, explainability, and fairness.
By leveraging Arthur’s platform, companies across sectors have not only been able to protect their customers and ensure their AI complies with strict regulatory requirements, but have also saved hundreds of millions of dollars in operating expenses while achieving significant model-driven revenue growth.