
The rise of MCP in the enterprise
Model Context Protocol (MCP) is rapidly becoming a practical way to move LLMs from “chat” to real work. By providing structured access to applications, APIs, and data, MCP enables prompt-driven AI agents that can retrieve information, take actions, and automate end-to-end business workflows across the enterprise. This is already showing up in production, both through horizontal assistants such as Microsoft Copilot, ServiceNow, Zendesk bots, and Salesforce Agentforce, and through custom vertical agents moving fast behind the scenes. This is consistent with Gartner’s recent Market Guide for Guardian Agents, where analysts note that the rapid adoption of AI agents in enterprises has significantly outpaced the maturity of the governance and policy controls needed to manage them.
We believe the main disconnect is that these AI “colleagues” do not behave like human employees:
- They don’t join or leave through HR
- They don’t submit access requests
- They don’t retire accounts at the end of a project
As a result, they are often invisible to traditional IAM. This is the risk of identity dark matter: real identities operating outside the governance structure. Beyond simply using access, agentic systems also seek the path of least resistance. They are optimized to get the job done with minimal friction, with fewer approvals, prompts, and blockers. From an identity perspective, that means gravitating toward whatever already works: in-app local accounts, old service IDs, long-lived tokens, API keys, and authentication bypasses. And if those work, they will be reused.
Team8’s 2025 CISO Village Survey found:
Nearly 70% of enterprises already have AI agents (systems that can respond and act) running in production. Another 23% plan to deploy in 2026, and two-thirds are building in-house.
Adopting MCP is no longer a question of whether, but of how to do it quickly and wisely. It’s already here, and it’s only going to accelerate. Compounding this is the reality of hybrid environments. According to Gartner research, organizations face significant hurdles in managing these non-human identities because native platform controls and vendor protections typically do not extend beyond the boundaries of their own cloud or platform. Without an independent monitoring mechanism, cloud-to-cloud agent interactions remain completely unmanaged. The real question is: will AI agents become trusted teammates, or the dark matter of unmanaged identity?
How identity dark matter can be exploited by agentic AI
Agentic AI (autonomous agents that can plan and execute multi-step tasks with minimal human input) is a powerful assistant, but also a significant cyber risk. Interestingly, leading industry analysts expect that the majority of rogue agent actions will result from internal policy violations, such as misbehaving AI or oversharing of information, rather than from malicious external attacks.
The abuse patterns we see follow a consistent playbook, driven by agent automation and shortcut-seeking:
- Enumerate what exists: The agent crawls apps and integrations, lists users and tokens, and discovers “alternative” authentication paths.
- Start with the easy wins: local accounts, legacy authentication, long-lived tokens, anything that avoids new authorization.
- Settle for “good enough” access: Low privileges are enough to pivot: reading configuration files, retrieving logs, discovering secrets, mapping organizational structure, and more.
- Escalate silently: Find out-of-scope tokens, outdated entitlements, or dormant but privileged identities, and escalate with minimal noise.
- Operate at machine speed: Thousands of small actions across many systems, too fast and too broad for humans to detect early.
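The enumeration-and-shortcut pattern above can also be run defensively: scan your own identity inventory for the footholds an agent would find first. A minimal sketch, assuming a hypothetical `Identity` record with `kind`, `auth_method`, credential age, and last-use fields (the thresholds and field names are illustrative, not any product's API):

```python
from dataclasses import dataclass

@dataclass
class Identity:
    name: str
    kind: str            # "user", "service", "local", "agent"
    auth_method: str     # "sso", "legacy", "api_key"
    token_age_days: int  # age of the oldest active credential
    last_used_days: int  # days since last authentication

def flag_shortcuts(identities):
    """Flag identities an automated agent would likely reuse first:
    local accounts, legacy auth paths, long-lived tokens, dormant IDs."""
    findings = []
    for i in identities:
        if i.kind == "local":
            findings.append((i.name, "local account outside IAM"))
        if i.auth_method == "legacy":
            findings.append((i.name, "legacy auth path bypasses modern controls"))
        if i.token_age_days > 90:
            findings.append((i.name, "long-lived credential (>90 days)"))
        if i.last_used_days > 180:
            findings.append((i.name, "dormant identity, candidate for retirement"))
    return findings

inventory = [
    Identity("svc-backup", "service", "api_key", 400, 2),
    Identity("admin-local", "local", "legacy", 30, 200),
]
for name, reason in flag_shortcuts(inventory):
    print(f"{name}: {reason}")
```

Running a scan like this on a schedule, before an agent is ever deployed, shrinks the shortcut surface the agent could otherwise discover at machine speed.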
The real risk here is the scale of the impact. One ignored identity becomes a reusable shortcut across your assets.
Dark matter risk
If left unchecked, MCP agents (AI agents that connect to apps, APIs, data sources, and other agents (A2A) using the MCP protocol) not only exploit identity dark matter but also introduce their own hidden exposures. Orchid uncovers exposures like these every day:
- Over-granted access: To avoid failure, agents are given “god mode,” and those privileges become their default operating state.
- Untracked usage: Agents can run sensitive workflows through tools with partial or inconsistent logs, or with no sponsor attached.
- Static credentials: Hardcoded tokens are not only “forever,” they are also shared infrastructure across agents, pipelines, and environments.
- Regulatory blind spots: Auditors ask, “Who authorized access, who used it, and what data was touched?” Dark matter makes those answers slow or impossible.
- Privilege drift: Because removing privileges is scarier than granting them, agents accumulate access over time until an attacker inherits the drift.
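Privilege drift in particular is easy to make measurable: compare what an agent was granted against what it actually exercised during a review window, and the surplus is the drift an attacker would inherit. A minimal sketch with illustrative scope names:

```python
def privilege_drift(granted: set, used: set) -> set:
    """Entitlements granted to an agent but never exercised in the
    review window -- the unused surplus that should be revoked."""
    return granted - used

# Hypothetical agent: four scopes granted, two actually used.
granted = {"read:logs", "write:config", "admin:users", "read:secrets"}
used = {"read:logs", "write:config"}

drift = privilege_drift(granted, used)
print(sorted(drift))  # ['admin:users', 'read:secrets']
```

Reviewing the drift set periodically, and revoking it by default, inverts the usual bias in which granting access is easy and removing it is scary.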
We believe that addressing these blind spots is consistent with Gartner’s view that modern AI governance requires tightly merging identity and access management with information governance. This allows organizations to dynamically classify data sensitivity and monitor agent behavior in real-time, rather than relying solely on static credentials.
AI agents are more than just users without badges. They are identity dark matter: powerful, invisible, and outside the scope of today’s IAM. And the uncomfortable part is that even well-intentioned agents can misuse dark matter. They don’t understand your org chart or your governance intent; they understand what works. If orphaned accounts or out-of-scope tokens are the fastest path to completion, that’s the “efficient” option.

Principles for safely deploying MCP
To avoid repeating the mistakes of the past (orphaned or overprivileged accounts, shadow IT, unmanaged keys, and invisible activity), organizations must adapt core identity principles and apply them to AI agents. Gartner has introduced the concept of a specialized “guardian” system: an oversight AI solution that continuously assesses, monitors, and enforces boundaries on agents in operation.
We recommend that organizations follow five basic principles when implementing an MCP-based agent solution.
- Pair AI agents with human sponsors: Every agent must be paired with a responsible human operator. When that human changes roles or leaves the company, the agent’s access must change accordingly. We agree with Gartner on the need for ownership mapping, ensuring the full lineage from creation to deployment is traced to both the machine and its human owner.
- Dynamic, context-aware access: AI agents should not hold standing, persistent permissions. Their entitlements must be time-limited, session-aware, and scoped to least privilege.
- Visibility and auditability: Gartner increasingly recommends that organizations maintain a centralized AI agent catalog recording all official, shadow, and third-party agents, along with comprehensive posture management and a tamper-proof audit trail. In our view, every action taken by an AI agent should be recorded, associated with a human sponsor, and available for review. This ensures accountability and prepares your organization for the compliance inspections to come. Visibility is more than just “logged”: actions should be tied to data reach, meaning what the agent accessed, modified, or exported, and whether the action touched a regulated or sensitive dataset. Otherwise you cannot distinguish “useful automation” from “silent data movement.”
- Enterprise-wide governance: MCP deployments must cover both new and legacy systems within a single, consistent governance fabric so that security, compliance, and infrastructure teams do not work in silos. This is also where Gartner emphasizes the importance of a company-owned monitoring layer to ensure consistent control and reduce the risk of vendor lock-in as MCP adoption grows.
- Commitment to good IAM hygiene: Maintaining strong hygiene across applications and MCP servers, and across all identities, authentication flows, authorization privileges, and implemented controls, is critical to keeping every actor within proper bounds.
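The first two principles, sponsor pairing and time-limited least privilege, can be sketched as a credential-minting helper that refuses to issue anything without a human owner and always attaches an expiry. This is a minimal illustration, not any vendor's API; the names and the 30-minute default TTL are assumptions:

```python
import datetime
import secrets
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    agent: str
    sponsor: str                    # responsible human owner
    scopes: tuple                   # least-privilege entitlements
    expires_at: datetime.datetime   # no open-ended lifetimes
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

def mint_credential(agent, sponsor, scopes, ttl_minutes=30):
    """Issue a short-lived, sponsor-bound credential. Refuses to mint
    anything without a human owner; every token carries an expiry."""
    if not sponsor:
        raise ValueError("every agent credential needs a human sponsor")
    expires = datetime.datetime.now(datetime.timezone.utc) \
        + datetime.timedelta(minutes=ttl_minutes)
    return AgentCredential(agent, sponsor, tuple(scopes), expires)

cred = mint_credential("invoice-agent", "alice@example.com", ["read:invoices"])
print(cred.sponsor, cred.scopes)
```

Because the sponsor is embedded in the credential itself, every audit-log entry the agent produces can be tied back to a human owner, which is exactly the lineage the visibility principle asks for.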

The big picture
AI agents pose challenges beyond simple integration. They represent a change in how work is delegated and performed within a company. Left unmanaged, they will follow the same trajectory as other hidden identities: in-app local accounts, old service IDs, long-lived tokens, API keys, and authentication bypasses that became identity dark matter over time. And because LLM-driven agents are optimized for efficiency, minimal friction, and minimal steps, they naturally gravitate toward these unmanaged identities as the fastest path to success. If an orphaned local admin or an out-of-scope token “works,” the agent will use it, and reuse it.
The opportunity is to stay ahead of the curve.
By treating AI agents as first-class identities (discoverable, manageable, and auditable) from day one, organizations can leverage their potential without creating blind spots.
Companies that do this will not only reduce their immediate attack surface, but also be able to meet regulatory and operational expectations that are sure to come.
In reality, most agentic-AI incidents do not start with a zero-day. They start with an identity shortcut someone forgot to clean up, and they are amplified by automation until they look like a systemic compromise.
Conclusion
AI agents are here. They are already changing the way companies operate.
The challenge is not whether to use them, but how to manage them.
To deploy MCP securely, the same principles identity practitioners already know (least privilege, lifecycle management, and auditability) must be applied to the new class of non-human identities that use this protocol.
If identity dark matter is a collection of things we cannot see or control, then unsupervised AI agents could be its fastest growing source. Organizations that act now to uncover these will be the ones that can quickly leverage AI without sacrificing trust, compliance, or security. That’s why Orchid Security is building an identity infrastructure to eliminate dark matter and enable secure deployment of agent AI at enterprise scale.
