
The weaknesses of security are revealed in an AI-powered code editor cursor that can trigger code execution when a maliciously created repository is opened using a program.
This issue is due to the fact that immediate access security settings are disabled by default, opening the door for attackers to execute arbitrary code on the user’s computer with privileges.
“A cursor with workspace trust disabled by default has a code-style task configured with runoptions.runon. “Malicious .vscode/tasks.json turns casual ‘open folders’ into silent code execution in the user’s context. ”
Cursor is an AI-powered fork in Visual Studio code that supports a feature called Workspace Trust, allowing developers to safely view and edit their code, regardless of where they came from or who wrote it.
If this option is disabled, attackers can make the project available on GitHub (or any platform) and include a hidden “Autorun” instruction that tells the IDE to perform the task as soon as the folder is opened.
“This has the potential to leak sensitive credentials, modify files, or act as a vector for broader system compromises, putting cursor users at a serious risk from supply chain attacks.”
To combat this threat, users are encouraged to enable workplace trust with their cursor, open untrusted repository in another code editor, and audit them before opening it with the tool.

Development emerges as a stealth and systemic threat that plagues AI-driven coding and inference agents such as Claude Code, Cline, K2 Think, and Windsurf, allowing threat actors to embed malicious instructions in a malicious way, taking malicious actions and leaking data from the software development environment.
Software Supply Chain Security Costume CheckMarx revealed in a report last week that an automated security review with Anthropic’s newly introduced Claude code allows projects to inadvertently expose themselves to security risks.
“In this case, careful comments can convince Claude that even obviously dangerous code is completely safe,” the company said. “The end result: developers can think it’s safe to think of Claude easily as a vulnerability, whether they’re malicious or just trying to close Claude.”
Another problem is that the AI inspection process generates and runs test cases. This can lead to scenarios in which malicious code is executed against the production database if the Claude code is not properly sandboxed.

Additionally, the AI company that recently launched the ability to create and edit new files on Claude warns that the feature has rapid injection risks as it runs in a “sandbox computing environment with limited internet access.”
Specifically, it is possible for bad actors to add “inconspicuous” instructions via external files or websites (indirect rapid injection). This will trick the chatbot to download and run or read sensitive data from connected knowledge sources via the Model Context Protocol (MCP).
“This means that you can trick Claude into sending information to malicious third parties from that context (such as prompts via MCP, projects, data, Google integrations, etc.),” Anthropic said. “To mitigate these risks, we recommend using features to monitor Claude and stop it if you are using or accessing data.”
That’s not all. Later last month, the company also revealed that browser-using AI models like Claude for Chrome could face rapid injecting attacks, and implemented several defenses to address the threat and reduce the attack success rate of 23.6% to 11.2%.
“New forms of rapid spurt attacks are also constantly being developed by malicious actors,” he added. “By revealing real-world examples of insecure behaviors and new attack patterns that do not exist in controlled testing, we teach the model to recognize attacks and explain relevant behaviors, ensuring that the safety classifier receives everything the model itself has missed.”

At the same time, these tools are known to be susceptible to traditional security vulnerabilities and broaden their attack surface with potential real-world influences –
A WebSocket authentication bypass in Claude Code IDE extensions (CVE-2025-52882, CVSS score: 8.8) that could have allowed an attacker to connect to a victim’s unauthenticated local WebSocket server simply by luring them to visit a website under their control, enabling remote command execution An SQL injection vulnerability in the Postgres MCP server that could have allowed an attacker to bypass the read-only restraint A Microsoft NLWeb path traversal vulnerability that executes arbitrary SQL statements could have allowed a remote attacker to read a sensitive file containing system configuration (“/etc/passwd”) or cloud credentials (.env files). An attacker who admitted to reading and writing to any database table in a generated site can now open a Base44 Cross-Site Script (XSS) that could allow attackers to access the victim’s app and development workspace, save cross-Site Script (XSS), leak vulnerabilities, and access applications that are difficult to access malicious logic when decking APIs. If it results from incomplete cross-origin control that could allow an attacker to host a drive-by attack, when you visit a malicious website, you can also reconfigure the application settings to intercept the chat and modify the response using the addiction model.
“As AI-driven development accelerates, the most pressing threat is often not exotic AI attacks, but classical security control failures,” Imperva said. “To protect the growing ecosystem of “vibe coding” platforms, security must be treated as a foundation rather than an afterthought. ”
Source link