
Cybersecurity researchers have disclosed high-strength security flaws in code editor cursors powered by artificial intelligence (AI) that could lead to remote code execution.
The vulnerability tracked as CVE-2025-54136 (CVSS score: 7.2) has been called McPoison by checkpoint investigations due to the fact that software exploits habits in a way that handles changes to model the configuration of a context protocol (MCP) server.
“The cursor AI vulnerability allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file in a shared GitHub repository or editing the file locally on the target machine,” said in an advisory released last week.
“If a collaborator accepts a harmless MCP, the attacker can quietly exchange it for a malicious command (e.g. calc.exe) without triggering or reconfiguring a warning.”
MCP is an open standard developed by humanity, allowing large-scale language models (LLMs) to interact with external tools, data, and services in a standardized way. It was introduced by an AI company in November 2024.
For each checkpoint, CVE-2025-54136 concerns how an attacker can change the behavior of the MCP configuration after the user has approved it in the cursor. Specifically, expand as follows:
Add a benign-looking MCP configuration (“.cursor/rules/mcp.json”) to the shared repository.
The underlying problem here is that once a configuration is approved, it is trusted by the cursor for future executions indefinitely, even if it is changed. The success of vulnerability exploitation not only exposes organizations to supply chain risks, but also opens the door to the theft of data and intellectual property without their knowledge.
Following the responsible disclosure of July 16, 2025, this issue is addressed by a cursor in version 1.3 released in late July 2025 by requesting user approval whenever an entry in the MCP configuration file is changed.

“This flaw reveals the critical weaknesses of the trust model behind the AI-assisted development environment and raises the interests of teams who integrate LLM and automation into their workflows,” Checkpoint said.
The development comes days after AIM Lab, Backslash Security, and HiddenLayer exposed multiple weaknesses to AI tools. This could have been abused to get remote code execution and bypass Dennilist-based protection. It has also been patched in version 1.3.
The findings are consistent with the increased adoption of AI in business workflows, including the use of LLM for code generation, AI supply chain attacks, insecure code, model addiction, rapid infusion, hallucinations, inappropriate reactions, and data leaks, spreading the attack surface.
Over 100 LLM tests on the ability to write Java, Python, C# and JavaScript code revealed that 45% of the generated code samples failed security tests, introducing OWASP top 10 security vulnerabilities. Java led with a security failure rate of 72%, followed by C# (45%), JavaScript (43%) and Python (38%). An attack called LegalPWN revealed that you can use your legal disclaimer, terms of service, or privacy policy as a new, rapid injection vector. We emphasize that text components, which are often legal and often overlooked, do not suggest that they provide unstable code by highlighting text components that are often overlooked to trigger unintended behavior in LLM. An attack called a prompt that opens a new browser tab in the background, launches an AI chatbot, injects malicious prompts to secretly extract data, and uses a Rogue browser extension that has no special authority to compromise the integrity of the model. This takes advantage of the fact that browser add-ons with script access to the Document Object Model (DOM) can read or write directly from the AI prompt. It operates LLM to accept logically invalid facilities and generates otherwise restricted output, thereby deceiving the model and breaking its own rules. An attack called MAS hijacking that manipulates the control flow of multi-agent systems (MAS) systems across domains, media, and topology by weaponizing the agent nature of AI systems. Posioned GPT Generation Unified Format (GGUF) templates are templates that target AI model inference pipelines by embedding malicious instructions within chat template files that run during the inference phase to compromise the output. By placing an attack between input validation and the output of the model, the approach is despicable and bypasses AI Guardrails. When GGUF files are distributed via services such as Face, the attack exploits the supply chain trust model to trigger the attack. Attackers can target machine learning (ML) training environments such as MLFLOW, Amazon Sagemaker, and Azure ML to compromise model confidentiality, integrity, availability, and ultimately train outer movements, privilege escalation, and data and model theft and addiction. Human studies have revealed that LLM can learn hidden traits during distillation. This is a phenomenon known as subliminal learning, in which models send behavioral characteristics through generated data that are completely unrelated to those characteristics, potentially leading to inconsistent and harmful behavior.

“As large-scale language models are deeply embedded in agent workflows, enterprise capulots, and developer tools, the risks pose by these jailbreaks escalate significantly,” says Dor Sarig of Pillar Security. “Most jailbreaks can propagate through a contextual chain, infecting one AI component, leading to cascade logic failures across interconnected systems.”
“These attacks highlight that AI security needs a new paradigm because it bypasses traditional safeguards without relying on architectural flaws or CVE. The vulnerabilities are in the language and are designed to emulate the models.”
Source link