
OpenAI has launched Daybreak, a new cybersecurity initiative that combines its frontier artificial intelligence (AI) models with Codex Security, allowing organizations to identify and patch vulnerabilities before attackers can exploit them.
“Daybreak combines the intelligence of OpenAI models, the scalability of Codex as an agent harness, and partners across the security flywheel to help make the world safer for everyone,” the AI startup said. “Defenders can incorporate secure code reviews, threat modeling, patch validation, dependency risk analysis, detection, and remediation guidance into their daily development loops, making their software more resilient from the start.”
Similar to Anthropic’s Mythos, the idea is to leverage AI to tip the balance in favor of defenders, allowing them to detect and address security issues before they are discovered by bad actors. Access to the tool is currently tightly controlled, and OpenAI encourages interested organizations to request a vulnerability scan or contact its sales team.
Daybreak leverages Codex Security to build editable threat models for specific repositories that focus on realistic attack paths and high-impact code, identify and test vulnerabilities in an isolated environment, and recommend fixes.
This effort is built on the foundation of three models: GPT-5.5 (with standard protections for general-purpose use), GPT-5.5 with Trusted Access for Cyber (for verified defense work in permissive environments), and GPT-5.5-Cyber (a permissive model for red teaming, penetration testing, and controlled verification).
OpenAI said several leading companies, including Akamai, Cisco, Cloudflare, CrowdStrike, Fortinet, Oracle, Palo Alto Networks and Zscaler, have already integrated these capabilities under its Trusted Access for Cyber initiative, adding that it will work with industry and government partners to deploy “more cyber-enabled models” in the future.
The deployment comes as AI tools shrink the time it takes to discover security issues that would otherwise go unnoticed, compressing what was once a slow, labor-intensive process into a matter of hours. As a result, patching pipelines struggle to keep pace even under ideal conditions.
In early March of this year, HackerOne suspended its bug bounty program, citing a growing imbalance between the rate at which vulnerabilities are discovered and the ability of open source maintainers to address them. This is believed to be because AI-enabled research has increased both the volume of new defects and the speed at which they are identified.
The influx has also produced so-called triage fatigue: maintainers must sift through large numbers of vulnerability reports, some of which look plausible but turn out to be entirely hallucinated by the AI model.
As AI lowers the barrier to discovering security flaws, companies like Anthropic, Google, and OpenAI are increasingly positioning AI security agents as a new operational layer to address remediation bottlenecks and protect digital infrastructure from potential exploits.
Security researcher Himanshu Anand said in a post published last week that “the 90-day disclosure policy is dead” because large language models (LLMs) compress disclosure and exploitation timelines to near zero.
“If 10 unrelated researchers can discover the same bug in six weeks, and an AI can turn a patch diff into a working exploit in 30 minutes, whom exactly does a 90-day grace period protect? Nobody,” Anand said.
