OpenAI expands its cyber defense program with GPT-5.4-Cyber for vetted researchers
Defending critical software has long depended on the ability to find and fix vulnerabilities faster than attackers can exploit them. OpenAI is expanding a program designed to give professional defenders prioritized access to AI tools built for that purpose.
The company is scaling its Trusted Access for Cyber (TAC) program to thousands of verified individual defenders and hundreds of teams responsible for defending critical software. Alongside that expansion, OpenAI is releasing GPT-5.4-Cyber, a version of GPT-5.4 fine-tuned specifically for defensive cybersecurity work.

What GPT-5.4-Cyber does differently
GPT-5.4-Cyber has a lower refusal boundary for legitimate cybersecurity work than standard GPT-5.4. It adds capabilities aimed at advanced defensive workflows, including binary reverse engineering. That capability lets security professionals analyze compiled software for malicious behavior, vulnerabilities, and overall security robustness without needing access to source code.
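To make the idea concrete, binary reverse engineering typically starts with triage of the compiled file itself. The sketch below is purely illustrative (it is not OpenAI's tooling and is not described in the announcement): it parses the first bytes of an ELF header with Python's standard `struct` module to recover basic facts, such as architecture and file type, that an analyst needs before disassembly, all without any source code.

```python
import struct

def elf_summary(header: bytes) -> dict:
    """Parse the first 20 bytes of an ELF header into basic triage facts."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    # e_ident bytes: EI_CLASS (32/64-bit) and EI_DATA (byte order)
    ei_class = {1: "32-bit", 2: "64-bit"}.get(header[4], "unknown")
    ei_data = {1: "little-endian", 2: "big-endian"}.get(header[5], "unknown")
    endian = "<" if header[5] == 1 else ">"
    # e_type and e_machine are two consecutive 16-bit fields at offset 16
    e_type, e_machine = struct.unpack_from(endian + "HH", header, 16)
    types = {1: "relocatable", 2: "executable", 3: "shared object", 4: "core"}
    machines = {0x03: "x86", 0x28: "ARM", 0x3E: "x86-64", 0xB7: "AArch64"}
    return {
        "class": ei_class,
        "endianness": ei_data,
        "type": types.get(e_type, hex(e_type)),
        "machine": machines.get(e_machine, hex(e_machine)),
    }
```

Real reverse-engineering workflows go far beyond this, into disassembly and control-flow analysis, but the starting point is the same: extracting structure from raw bytes rather than from source.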
OpenAI introduced TAC in February 2026 with automated identity verification for individuals and a limited partnership arrangement for organizations seeking access to more cyber-permissive models. The expanded program adds additional tiers of access for users who authenticate themselves as cybersecurity defenders.
Customers in the highest tiers get access to GPT-5.4-Cyber. These more permissive, cyber-capable models may be restricted in low-visibility settings, particularly under Zero-Data Retention (ZDR), where OpenAI cannot observe how the model is used. That constraint applies especially to developers and organizations accessing OpenAI models through third-party platforms, where OpenAI has less direct visibility into the user, the environment, or the purpose of the request.
How access works
The access process runs through two paths. Individual users can verify their identity at chatgpt.com/cyber. Enterprises can request trusted access for their team through an OpenAI representative.
Customers approved through either path gain access to model versions with reduced friction around safeguards that might otherwise trigger on dual-use cyber activity. Approved uses include security education, defensive programming, and responsible vulnerability research. TAC customers who want to go further and authenticate as cyber defenders can express interest in additional access tiers, including GPT-5.4-Cyber.
Deployment of the more permissive model is starting with a limited, iterative rollout to vetted security vendors, organizations, and researchers.
The access model behind the program
OpenAI’s approach to cyber access rests on three principles. The first is democratized access: using objective criteria and methods, including strong KYC and identity verification, to determine who can access more advanced capabilities, with the goal of making those capabilities available to legitimate actors of all sizes, including those protecting critical infrastructure and public services.
The second is iterative deployment. OpenAI updates models and safety systems as it learns more about the benefits and risks of specific versions, including improving resilience to jailbreaks and adversarial attacks.
The third is ecosystem resilience. That includes targeted grants, contributions to open-source security initiatives, and tools like Codex Security.
Codex Security’s track record so far
Codex Security launched in private beta six months ago and moved to a research preview earlier in 2026. It automatically monitors codebases, validates issues, and proposes fixes. Since its launch, Codex Security has contributed to fixes for over 3,000 critical- and high-severity vulnerabilities, along with lower-severity findings across the ecosystem.
OpenAI also reached over 1,000 open-source projects through Codex for Open Source, which provides free security scanning.
The dual-use problem
OpenAI acknowledges that cyber capabilities are inherently dual-use, meaning risk is not defined solely by the model. It also depends on the user, the trust signals around them, and the level of access they receive. The company’s position is that broad access to general models with safeguards can coexist with more granular controls for higher-risk capabilities, supported by stronger verification, clearer signals of intent, and better visibility into use.
Threat actors are also experimenting with AI. OpenAI notes that sophisticated attackers are already eliciting stronger capabilities from existing models by using more test-time compute, which means safeguards cannot wait for a single future capability threshold to be the trigger for action.