
Even as AI co-pilots and assistants become part of daily operations, security teams are still focused on securing the models themselves. However, recent incidents suggest that the greater risk lies elsewhere: in the workflows surrounding these models.
Two Chrome extensions masquerading as AI helpers were recently caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. Separately, researchers demonstrated how prompt injection hidden in a code repository could trick IBM’s AI coding assistant into running malware on a developer’s machine.
Neither attack destroyed the AI algorithm itself.
They exploited the context in which the AI operated. This is a pattern worth paying attention to. When AI systems are integrated into real-world business processes, summarizing documents, drafting emails, retrieving data from internal tools, etc., securing the model is not enough. The target is the workflow itself.
AI models are becoming workflow engines
To understand why this is important, consider how AI is actually used today.
Businesses are now using it to connect apps and automate tasks that were previously done manually. An AI writing assistant could retrieve sensitive documents from SharePoint and condense them into email drafts. A sales chatbot may cross-reference internal CRM records to answer customer questions. In each of these scenarios, the boundaries between applications are blurred and new integration paths are created on the fly.
What makes this dangerous is the way the AI agent operates. They rely on probabilistic decision-making rather than hard-coded rules and generate output based on patterns and context. Carefully written input can prompt an AI to behave in ways its designers did not intend. AI doesn’t have a native concept of trust boundaries, so it follows them.
This means that the attack surface includes all inputs, outputs, and integration points that the model touches.
If an attacker can easily manipulate the context that a model sees or the channels it uses, there is no need to hack the model’s code. The incident mentioned above illustrates this. Prompt injections hidden in repositories hijack the AI’s behavior during routine tasks, allowing malicious extensions to siphon data from the AI’s conversations without ever touching the model.
Why traditional security controls aren’t enough
These workflow threats expose traditional security blind spots. Most traditional defenses were built around deterministic software, stable user roles, and clear boundaries. AI-driven workflows break all three assumptions.
Most popular apps distinguish between trusted code and untrusted input. AI models don’t. To them, everything is just text, so malicious instructions hidden in a PDF are no different from legitimate commands. The payload is not malicious code, so traditional input validation is useless. It’s just natural language. Traditional monitoring detects obvious anomalies such as large downloads or suspicious logins. But an AI reading 1,000 records as part of a routine query looks like normal service-to-service traffic. If that data is summarized and sent to the attacker, technically no rules have been broken. Most common security policies specify what is allowed or blocked. That is, do not allow this user to access that file, block traffic to this server, etc. However, AI behavior is context-dependent. How do you write a rule that says “never expose customer data in output”? Security programs rely on regular reviews and fixed configurations, such as quarterly audits and firewall rules. AI workflows are not static. Integrations may add new functionality after updating or connecting to new data sources. By the time the quarterly review occurs, the tokens may have already been leaked.
Securing AI-driven workflows
So a better approach to all of this is to treat the entire workflow as protected, not just the model.
Start by understanding where AI is actually used, from official tools like Microsoft 365 Copilot to browser extensions that employees install on their own. Understand what data each system can access and what actions it can take. Many organizations are surprised to find that they have dozens of shadow AI services running across their business. If your AI assistant is only for internal summaries, limit sending external emails. Scan output before sensitive data leaves your environment. These guardrails should exist outside of the model itself, i.e. within middleware that checks before actions are performed. Treat AI agents like any other user or service. If your AI only needs read access to one system, don’t give it comprehensive access to everything. Limit the scope of OAuth tokens to the minimum necessary privileges and monitor for anomalies such as AI suddenly accessing data it has never touched before. Finally, it’s also helpful to educate users about the risks of copying unvetted browser extensions and prompts from unknown sources. Thoroughly vet third-party plugins before deploying them and treat tools that interact with AI inputs or outputs as part of your security perimeter.
How a platform like Reco can help
In reality, you cannot scale if you do all this manually. As a result, a new category of tools is emerging: dynamic SaaS security platforms. These platforms act as a real-time guardrail layer on top of AI-powered workflows, learning what normal behavior looks like and flagging anomalies when they occur.
Reco is one of the prime examples.
Figure 1: Reco Generate AI Application Discovery
As shown above, the platform gives security teams visibility into AI usage across the organization, revealing which generative AI applications are being used and how they are connected. From there, you can apply guardrails at the workflow level to catch risky behavior in real time and maintain control without slowing down your business.
Request a Demo: Get started with Reco.
Source link
