
OpenAI on Tuesday unveiled its latest flagship model, GPT-5.4-Cyber, a variant of GPT-5.4 optimized specifically for defensive cybersecurity use cases, just days after rival Anthropic unveiled its own frontier model, Mythos.
“The progressive use of AI will accelerate defenders, those responsible for keeping systems, data, and users safe, so they can more quickly find and fix problems in the digital infrastructure we all rely on,” OpenAI said.
Alongside the launch, the artificial intelligence (AI) company said it is expanding its Trusted Access for Cyber (TAC) program to thousands of vetted professional defenders and hundreds of teams responsible for the security of critical software.
AI systems are dual-use by nature: technology developed for legitimate applications can be repurposed by malicious actors to achieve their own objectives. One key concern is that attackers could turn models fine-tuned for software defense against it, using them to detect and exploit vulnerabilities in widely used software before patches are available, putting users at significant risk.
OpenAI said its goal is to democratize access to its models while minimizing such abuse and strengthening security measures through planned, iterative deployment. The idea is to enable responsible use at scale as models become more sophisticated, giving defenders a head start while hardening guardrails against jailbreaks and hostile prompt injections.
“As the capabilities of our models advance, our approach is to scale our cyber defenses in lockstep – expanding access for legitimate defenders while continuing to strengthen safeguards,” the company added.
The ChatGPT maker, which launched Codex Security as a way to discover vulnerabilities, verify them, and suggest fixes, has revealed that its AI-powered application security agent was responsible for fixes to more than 3,000 critical vulnerabilities.
OpenAI’s limited release follows a preview of Anthropic’s Mythos, a frontier model being deployed in a controlled manner as part of Project Glasswing. The company said the model has discovered “thousands” of vulnerabilities in operating systems, web browsers, and other software.
“The strongest ecosystems are those that continually identify, verify, and fix security issues as they create software,” OpenAI said. “By integrating advanced coding models and agent capabilities into developer workflows, we can provide immediate, actionable feedback to developers as they build, moving security from one-off audits and static bug inventories to ongoing, tangible risk mitigation.”
