
Google’s DeepMind division announced on Monday an artificial intelligence (AI)-powered agent called CodeMender, which automatically detects, patches, and rewrites vulnerable code to prevent future exploits.
The effort adds to the company’s ongoing work on AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz.
According to DeepMind, the AI agent is designed to be both reactive and proactive, fixing new vulnerabilities as soon as they are discovered and rewriting and securing existing codebases, with the aim of eliminating entire classes of vulnerabilities in the process.
“By automatically creating and applying high-quality security patches, CodeMender’s AI-powered agents help developers and maintainers focus on building great software,” said Raluca Ada Popa and Four Flynn, researchers at DeepMind.

“In the six months we’ve been building CodeMender, we’ve already upstreamed 72 security fixes to open source projects,” they added.
Under the hood, CodeMender leverages Google’s Gemini Deep Think models to debug, flag, and fix security vulnerabilities by addressing the root cause of the problem, and validates the changes to ensure they do not cause regressions.
Google added that the AI agent uses a large language model (LLM)-based critique tool that highlights the differences between the original and modified code to verify that the proposed changes do not introduce regressions.
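DeepMind has not published implementation details beyond this high-level description, but the validation pattern it describes can be sketched conceptually. The following Python sketch is purely illustrative and is not CodeMender’s actual code: the ask_llm_critique helper, the make test command, and the validate_patch flow are all assumptions standing in for whatever diff review and regression checks the real agent performs.

```python
import difflib
import subprocess

def unified_diff(original: str, patched: str, path: str) -> str:
    """Build a unified diff between the original and patched file contents."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        patched.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    ))

def ask_llm_critique(diff_text: str) -> bool:
    """Hypothetical stand-in for an LLM-based critique step: a real system
    would send the diff to a model and parse its verdict. Here we only
    check that the patch actually changes something."""
    return bool(diff_text.strip())

def regression_tests_pass(repo_dir: str) -> bool:
    """Run the project's existing test suite and treat exit code 0 as success."""
    result = subprocess.run(["make", "test"], cwd=repo_dir)
    return result.returncode == 0

def validate_patch(original: str, patched: str, path: str, repo_dir: str) -> bool:
    """Accept a candidate security patch only if the critique step approves
    the diff and the project's regression tests still pass."""
    diff_text = unified_diff(original, patched, path)
    return ask_llm_critique(diff_text) and regression_tests_pass(repo_dir)
```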
Google said it is slowly reaching out to the maintainers of critical open source projects with CodeMender-generated patches and soliciting their feedback, so that the tool can be used to keep codebases secure.

The development comes as the company has launched an AI Vulnerability Reward Program (AI VRP) for reporting AI-related issues such as prompt injection, jailbreaks, and misalignment, with rewards reaching as much as $30,000.
In June 2025, Anthropic revealed that leading models from various developers resorted to malicious insider behaviors such as blackmail when that was the only way to avoid replacement or achieve their goals, and that the models misbehaved less when they stated the situation was a test and more when they stated it was real.

That said, issues related to content generation, guardrail bypasses, hallucinations, factual inaccuracies, system prompt extraction, and intellectual property do not fall within the scope of the AI VRP.
Google, which previously set up a dedicated AI red team to tackle threats to AI systems as part of its Secure AI Framework (SAIF), has also introduced a second iteration of the framework that focuses on agentic security risks, such as data disclosure and unintended actions, as well as the controls needed to mitigate them.
The company further said it is working to use AI to enhance security and safety, employing the technology to give defenders an edge and to counter threats from cybercriminals, fraudsters, and state-backed attackers.