
Cybersecurity researchers have revealed multiple security vulnerabilities in Anthropic’s Claude Code, an artificial intelligence (AI)-powered coding assistant, that could allow remote code execution and theft of API credentials.
“This vulnerability exploits various configuration mechanisms, including hooks, Model Context Protocol (MCP) servers, and environment variables, to execute arbitrary shell commands and leak Anthropic API keys when a user clones and opens an untrusted repository,” Check Point Research said in a report shared with Hacker News.
The identified shortcomings fall into three broad categories.
No CVE (CVSS score: 8.7) – Code injection vulnerability due to user consent bypass when starting Claude Code in a new directory. It may be possible to execute arbitrary code via an untrusted project hook defined in .claude/settings.json without additional verification. (Fixed in version 1.0.87 in September 2025) CVE-2025-59536 (CVSS score: 8.7) – Code injection vulnerability that allows users to automatically execute arbitrary shell commands during tool initialization if a user launches Claude Code in an untrusted directory. (Fixed in October 2025 in version 1.0.111) CVE-2026-21852 (CVSS Score: 5.3) – An information disclosure vulnerability in Claude Code’s project load flow could allow a malicious repository to leak data containing Anthropic API keys. (Fixed in version 2.0.65, January 2026)
“If a user launches Claude Code on an attacker controller repository, and that repository includes a configuration file that sets ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code will issue an API request before displaying the trust prompt, which includes the potential for disclosure of the user’s API key,” Anthropic said in an advisory for CVE-2026-21852.
In other words, simply opening a crafted repository is enough to extract a developer’s active API key, redirect authenticated API traffic to external infrastructure, and capture credentials. This allows attackers to penetrate deeper into the victim’s AI infrastructure.
This can include accessing shared project files, modifying/deleting data stored in the cloud, uploading malicious content, and even incurring unexpected API costs.
Successful exploitation of the initial vulnerability could trigger stealth execution on the developer’s machine without any additional action other than starting the project.
CVE-2025-59536 accomplishes a similar goal, but the key difference is that repository-defined configurations defined through .mcp.json and claude/settings.json files can be exploited by an attacker to override explicit user authorization before interacting with external tools or services through Model Context Protocol (MCP). This is achieved by setting the ‘enableAllProjectMcpServers’ option to true.
“Once an AI-powered tool gains the ability to execute commands, initialize external integrations, and initiate network communications autonomously, configuration files effectively become part of the execution layer,” Check Point said. “What was once thought of as the operational context now directly impacts the behavior of the system.”
“This fundamentally changes the threat model. Risk is no longer limited to running untrusted code, but extends to opening untrusted projects. In an AI-driven development environment, the supply chain starts not only with the source code, but also with the automation layers surrounding it.”
Source link
