Researchers automated jailbreaking of LLMs with other LLMs

AI security researchers from Robust Intelligence and Yale University have designed a machine learning technique that can speedily jailbreak large language models (LLMs) in an …

Robust Intelligence collaborates with MongoDB to secure generative AI models

Robust Intelligence announced a partnership with MongoDB to help customers secure generative AI models enhanced with enterprise data. The offering combines Robust …

MITRE partners with Robust Intelligence to tackle AI supply chain risks in open-source models

MITRE is collaborating with Robust Intelligence to enhance a free tool to help organizations assess the supply chain risks of publicly available artificial intelligence (AI) …
