Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

From Svedka to Anthropic, brands are boldly leveraging AI in their Super Bowl ads

Prince Andrew’s advisor encouraged Jeffrey Epstein to invest in EV startups like Lucid Motors

AI agents could become lawyers after all

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » Researchers discover more than 30 flaws in AI coding tools that enable data theft and RCE attacks
Identity

Researchers discover more than 30 flaws in AI coding tools that enable data theft and RCE attacks

userBy userDecember 6, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

December 6, 2025Ravi LakshmananAI security/vulnerabilities

More than 30 security vulnerabilities have been uncovered in various artificial intelligence (AI)-powered integrated development environments (IDEs) that combine prompt injection primitives with legitimate functionality to enable data exfiltration and remote code execution.

These security flaws were collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA). These affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline. Of these, 24 have been assigned a CVE identifier.

“I think the most surprising finding of this study is the fact that multiple universal attack chains affected all AI IDEs tested,” Marzouk told The Hacker News.

“All AI IDEs (and the coding assistants that integrate with them) effectively ignore the underlying threat modeling software (IDE). They treat that functionality as inherently secure because it’s been around for years. But when you add AI agents that can operate autonomously, that same functionality can be weaponized as data leakage or RCE primitives.”

The core of these issues cascades three different vectors that are common to AI-driven IDEs.

Bypass large-scale language model (LLM) guardrails to hijack context and do the attacker’s bidding (aka prompt injection) Perform specific actions without user interaction via automated authorization tool invocations of the AI ​​agent Trigger legitimate functionality in the IDE that allows an attacker to breach security boundaries and leak sensitive data or execute arbitrary commands

The highlighted issue differs from previous attack chains that leverage prompt injection in conjunction with vulnerable tools (or abuse legitimate tools to perform read or write actions) to modify the configuration of an AI agent to execute code or perform other unintended behaviors.

cyber security

What’s notable about IDEsaster is that it requires prompt injection primitives and agent tools and uses them to activate legitimate functionality of the IDE, leading to information disclosure and command execution.

Context hijacking can be accomplished in a myriad of ways, including through user-added context references, which take the form of pasted URLs, text containing hidden characters that are invisible to the human eye but can be parsed by LLM. Alternatively, the context can be contaminated by using a Model Context Protocol (MCP) server through tool poisoning or lag pulling, or when a legitimate MCP server parses attacker-controlled input from an external source.

Some of the attacks confirmed to be enabled by the new exploit chain include:

CVE-2025-49150 (cursor), CVE-2025-53097 (Roo code), CVE-2025-58335 (JetBrains Junie), GitHub Copilot (no CVE), Kiro.dev (no CVE), and Claude Code (addressed with security warning) – Prompt injection can be used to exploit legitimate (‘read_file’) or vulnerable tools. (‘search_files’ or ‘search_project’) and writes a JSON file via a legitimate tool (‘write_file’ or ‘edit_file)’ with a remote JSON schema hosted on an attacker-controlled domain, data is exposed when the IDE makes a GET request CVE-2025-53773 (GitHub Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo code), CVE-2025-55012 (Zed.dev), and Claude code (addressed with security warning) – Uses prompt injection to edit the IDE configuration file (“.vscode/settings.json” or “.idea/workspace.xml”) and execute code depending on the settings. “php.validate.executablePath” or “PATH_TO_GIT” to the path of the executable containing malicious code CVE-2025-64660 (GitHub Copilot), CVE-2025-61590 (Cursor), and CVE-2025-58372 (Roo Code) – Prompt injection using workspace configuration file (*.code-workspace) Edit and override multiroot workspace settings to run code

Note that the last two examples rely on an AI agent configured to auto-approve file writes, which allows an attacker to influence the prompt and write malicious workspace settings. However, this behavior is auto-approved by default for files in the workspace, allowing arbitrary code to execute without requiring user interaction or reopening the workspace.

With a quick injection and jailbreak serving as the first step in the attack chain, Marzouk offers the following recommendations:

Use the AI ​​IDE (and AI agent) only with projects and files you trust. Malicious rule files, instructions hidden in source code or other files (READMEs), and even file names can become prompt injection vectors. Connect only to trusted MCP servers and continuously monitor these servers for changes (even trusted servers can be compromised). Review and understand the data flow of your MCP tools (e.g., legitimate MCP tools may obtain information from attacker-controlled sources such as GitHub PR). Manually check the added sources (e.g. via URL) to check for hidden instructions (e.g. comments in HTML / hidden text in CSS / hidden Unicode characters).

We recommend that developers of AI agents and AI IDEs apply the principle of least privilege in their LLM tools, minimize prompt injection vectors, harden system prompts, use sandboxing to execute commands, and perform security testing for path traversal, information disclosure, and command injection.

This disclosure coincided with the discovery of several vulnerabilities in AI coding tools that could have far-reaching implications.

OpenAI Codex CLI command injection flaw (CVE-2025-61260). It takes advantage of the fact that the program implicitly trusts commands configured via the MCP server entry and executes them at startup without asking for user permission. This could allow arbitrary commands to be executed if a malicious attacker is able to tamper with the repository’s “.env” and “./.codex/config.toml” files. Indirectly prompted injection into Google Antigravity using a tainted web source. Gemini can be manipulated to collect credentials and sensitive code from a user’s IDE and can be used to browse malicious sites and extract information using browser subagents. Google Antigravity contains multiple vulnerabilities that could lead to data disclosure or remote command execution via indirect prompt injection, or a persistent backdoor that could leverage a malicious trusted workspace to execute arbitrary code on each future application launch. A new class of vulnerabilities named PromptPwnd uses prompt injection to target AI agents connected to vulnerable GitHub Actions (or GitLab CI/CD pipelines) to run built-in privileged tools that can lead to information disclosure or code execution.

cyber security

As agent AI tools grow in popularity in enterprise environments, these findings demonstrate how AI tools expand the attack surface of development machines by exploiting LLM’s inability to distinguish between user-provided instructions to complete a task and content that may be ingested from external sources, resulting in potentially embedded malicious prompts.

“Repositories that use AI for issue triage, PR labeling, code suggestions, or automated responses are at risk of prompt injection, command injection, security leaks, repository compromise, and upstream supply chain compromise,” said Aikido researcher Layne Dahlman.

Marzouk also said the findings highlight the importance of “Secure for AI.” This is a new paradigm that researchers have devised to tackle the security challenges posed by AI capabilities, so that products are not only safe by default and secure by design, but are also thought through with an eye toward how AI components can be exploited over time.

“This is another example of why we need the ‘safe for AI’ principle,” Marzouk said. “Connecting an AI agent to an existing application (IDE in my case, GitHub Actions in their case) introduces new risks.”


Source link

#BlockchainIdentity #Cybersecurity #DataProtection #DigitalEthics #DigitalIdentity #Privacy
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous Article3I/ATLAS Photo: NASA and ESA release new images of interstellar comet ahead of approach to Earth
Next Article Creator IShowSpeed ​​has been sued for allegedly punching and choking the viral humanoid Rizzbot.
user
  • Website

Related Posts

The Legal Revolution is Digital: Meet TwinH, Your AI Partner in the Courtroom of the Future

February 6, 2026

China-linked DKnife AitM framework, routers targeted for traffic hijacking and malware distribution

February 6, 2026

CISA orders removal of unsupported edge devices to reduce risk to federal networks

February 6, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

From Svedka to Anthropic, brands are boldly leveraging AI in their Super Bowl ads

Prince Andrew’s advisor encouraged Jeffrey Epstein to invest in EV startups like Lucid Motors

AI agents could become lawyers after all

The Legal Revolution is Digital: Meet TwinH, Your AI Partner in the Courtroom of the Future

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Castilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future

Local Power, Health Innovation: Alcolea de Calatrava Boosts FiveCLM PoC with Community Engagement

The Future of Digital Twins in Healthcare: From Virtual Replicas to Personalized Medical Models

Human Digital Twins: The Next Tech Frontier Set to Transform Healthcare and Beyond

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2026 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.