HackerOne extends Safe Harbor protections to AI testing

HackerOne has unveiled the Good Faith AI Research Safe Harbor, a new industry framework that establishes authorisation and legal protections for researchers testing AI systems in good faith. As AI systems scale rapidly across critical products and services, legal ambiguity around testing can slow responsible research and increase risk. The new safe harbor removes that friction by giving organisations and AI researchers a shared standard to find and fix AI risks faster and with greater impact.

This announcement builds on HackerOne’s Gold Standard Safe Harbor, introduced in 2022 and widely adopted to protect good-faith security research across traditional software. Together, the two frameworks define how organisations should authorise, support, and protect research across both conventional and AI-powered systems.

AI testing often involves techniques and outcomes that don’t fit neatly into traditional vulnerability disclosure frameworks, leaving researchers unsure what is permitted. The Good Faith AI Research Safe Harbor resolves this by defining Good Faith AI Research and clearly authorising responsible AI testing.

“AI testing breaks down when expectations are unclear,” said Ilona Cohen, Chief Legal and Policy Officer at HackerOne. “Organisations want their AI systems tested, but researchers need confidence that doing the right thing won’t put them at risk. The Good Faith AI Research Safe Harbor provides clear, standardised authorisation for AI research, removing uncertainty on both sides.”

Organisations that adopt the Good Faith AI Research Safe Harbor commit to recognising good-faith AI research as authorised activity. This includes refraining from legal action, providing limited exemptions from restrictive terms of service, and supporting researchers if third parties pursue claims related to authorised research. The safe harbor applies only to AI systems owned or controlled by the adopting organisation and is designed to support responsible disclosure and collaboration.

“AI security is ultimately about trust,” said Kara Sprague, CEO of HackerOne. “If AI systems aren’t tested under real-world conditions, trust erodes quickly. By extending safe harbor protections to AI research, HackerOne is defining how responsible testing should work in the AI era. This is how organisations find problems earlier, work productively with researchers, and deploy AI with confidence.”

The Good Faith AI Research Safe Harbor is available to HackerOne customers as a standalone framework that can be adopted alongside the Gold Standard Safe Harbor. Programs that adopt it can signal to researchers that AI testing is welcome, authorised, and protected, driving higher-quality testing and stronger outcomes.
