OpenAI Launches Daybreak as AI Cybersecurity Race Collides With AI-Powered Hackers

OpenAI has launched Daybreak, a cybersecurity initiative designed to help enterprises detect vulnerabilities, secure software from the outset and respond to cyber threats faster using artificial intelligence.

The move comes as AI companies increasingly compete to dominate enterprise cybersecurity, particularly after rival Anthropic introduced its security-focused Project Glasswing, powered by the Claude Mythos Preview model.

However, OpenAI said Daybreak is built on the premise that cybersecurity should be integrated into software development from the beginning rather than relying solely on post-development fixes.

Through its AI-powered Codex Security agent, the initiative aims to reduce lengthy security investigations from hours to minutes by prioritizing high-risk vulnerabilities, generating patches, testing fixes inside repositories, and returning audit-ready evidence to organizations.

“Daybreak combines the intelligence of OpenAI models, the extensibility of Codex as an agentic harness, and our partners across the security flywheel to help make the world safer for everyone. Defenders can bring secure code review, threat modeling, patch validation, dependency risk analysis, detection, and remediation guidance into the everyday development loop so software becomes more resilient from the start,” said OpenAI in a statement.

Daybreak Enters Growing Competition with Anthropic’s Mythos

The timing of Daybreak’s rollout places it squarely in competition with Anthropic’s Project Glasswing, which gained attention after reports that Claude Mythos helped identify and patch 271 vulnerabilities in the latest version of the Firefox browser, according to Mozilla.

Unlike Anthropic, which restricted Mythos over concerns that advanced cybersecurity models could be misused, OpenAI is positioning Daybreak as a commercially deployable solution for enterprises.

Daybreak relies on multiple OpenAI models, including GPT-5.5, GPT-5.5 with Trusted Access for Cyber, and GPT-5.5-Cyber, which OpenAI said is intended for “Preview access for specialized workflows, including authorized red teaming, penetration testing, and controlled validation.”

OpenAI has already partnered with cybersecurity and cloud firms, including Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle and Akamai Technologies.

AI Is Already Helping Hackers Find New Exploits

OpenAI’s push into cyber defense, like Anthropic’s Mythos before it, arrives as researchers warn that malicious actors are increasingly using AI to develop sophisticated cyberattacks.

According to a report released by Google’s Threat Intelligence Group (GTIG), researchers identified what they described as the first known case of hackers using artificial intelligence to develop a zero-day exploit, a software flaw unknown to developers that can be weaponized before fixes are available.

“For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI,” researchers said.

Google said it had “high confidence” that hackers used an AI model to identify and exploit a vulnerability capable of bypassing two-factor authentication through a Python-based exploit. The company added that the attack was disrupted before it could be used in a broader campaign.

“The criminal threat actor planned to use it in a mass exploitation event but our proactive counter-discovery may have prevented its use,” Google said, noting it did not believe its own Gemini model was involved.

“Cyber crime threat actors leveraged AI to identify and exploit zero-day vulnerability.” (Google Threat Intelligence Group)

An Escalating AI Arms Race in Cybersecurity

Google researchers also reported growing evidence that hacking groups linked to the People’s Republic of China and the Democratic People’s Republic of Korea are experimenting with AI for vulnerability discovery, malware development and phishing operations.

Groups including APT45 and UNC2814 reportedly use large language models to scan for weaknesses and simulate expert auditing behavior.

The findings underscore a widening dilemma facing the cybersecurity industry. As companies like OpenAI and Anthropic develop AI systems to secure software at machine speed, threat actors are increasingly leveraging similar tools to automate attacks.

That tension now sits at the center of the emerging AI cybersecurity race: whether advanced AI can strengthen defenses quickly enough to stay ahead of attackers learning to weaponize the same technology.

Stephanie Irvin