
As the field of artificial intelligence (AI) continues to evolve rapidly, a new report from Tenable reveals that Model Context Protocol (MCP) uses techniques that are susceptible to rapid injection attacks to develop rapid injection tools and identify malicious tools.
Launched by Humanity in November 2024, MCP is a framework designed to connect large-scale language models (LLMs) with external data sources and services and interact with those systems using model control tools to enhance the accuracy, relevance and utility of AI applications.
Following the client-server architecture, hosts with MCP clients such as Claude desktops and cursors can communicate with a variety of MCP servers, each exposes specific tools and features.

Open Standard offers a unified interface for accessing a variety of data sources and switching between LLM providers, but comes with a new set of risks, ranging from excessive allowance to indirect rapid injection attacks.
For example, given the MCP for Gmail to interact with Google’s email services, an attacker can send malicious messages with indications that when parsed by LLM, it could cause unwanted actions, such as forwarding emails that are sensitive to email addresses under control.
MCPs have also been found to be vulnerable to what is called tool addiction. This instruction is embedded with malicious instructions in the description of the tool that appears to LLMS, and lag pull attacks occur when they first work in a benign way, but later change behavior via time-lagged malicious updates.
“While users can authorize use and access to the tool, it should be noted that the permissions given to the tool can be reused without re-employing the user,” Sentinelone said in a recent analysis.
Finally, there is also the risk of cross-tool contamination or cross-server tools shadowing, which can seriously affect the way one MCP server overrides or interferes with another, leading to new ways of delaminating data.
The latest findings in Tenable show that using the MCP framework, we can create a tool that records all MCP tool function calls by including a specially written description that tells LLM to insert this tool before other tools are invoked.

In other words, rapid injection is to record information about “the tool being asked to run, such as the MCP server name, the MCP tool name and description, and the user prompt that LLM attempts to run that tool.”
Another use case involves embedding descriptions in the tool and turning them into the kind of firewall that blocks the execution of unauthorized tools.
“The tool should require explicit approval before running on most MCP host applications,” said security researcher Ben Smith.
“Even so, there are many ways to use tools to do things that are not strictly understood by the specification. These methods rely on LLM through the description and return values of the MCP tool itself. LLM is non-deterministic, so the result is also the result.”
It’s not just MCP
This disclosure comes when Trustwave SpiderLabs reveals that the newly introduced Agent2Agent (A2A) protocol that enables communication and interoperability between agent applications could be exposed to new form attacks that can lead the system to the game.

A2A was announced earlier this month by Google as a way for AI agents to work with siloed data systems and applications, regardless of the vendor or framework used. It is important to note here that while the MCP connects the LLMS to the data, the A2A connects one AI agent to another. In other words, they are both complementary protocols.
“Let’s say you compromised an agent through another vulnerability (probably through the operating system). If you use a compromised node (agent) to create an agent card and truly exaggerate your capabilities, the host agent should choose us every time for every task, and we send you sensitive data for all users.
“Attacks not only stop at capturing data, they are active and even return false results. This can be effected downstream by LLM or by users.”
Source link