Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

NASA astronauts return to Earth after unprecedented medical emergency on ISS

Model security is the wrong framework – the real risk is workflow security

PFAS increase risk of gestational diabetes, major review finds

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » Model security is the wrong framework – the real risk is workflow security
Identity

Model security is the wrong framework – the real risk is workflow security

userBy userJanuary 15, 2026No Comments5 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

January 15, 2026hacker newsData security/artificial intelligence

Even as AI co-pilots and assistants become part of daily operations, security teams are still focused on securing the models themselves. However, recent incidents suggest that the greater risk lies elsewhere: in the workflows surrounding these models.

Two Chrome extensions masquerading as AI helpers were recently caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. Separately, researchers demonstrated how prompt injection hidden in a code repository could trick IBM’s AI coding assistant into running malware on a developer’s machine.

Neither attack destroyed the AI ​​algorithm itself.

They exploited the context in which the AI ​​operated. This is a pattern worth paying attention to. When AI systems are integrated into real-world business processes, summarizing documents, drafting emails, retrieving data from internal tools, etc., securing the model is not enough. The target is the workflow itself.

AI models are becoming workflow engines

To understand why this is important, consider how AI is actually used today.

Businesses are now using it to connect apps and automate tasks that were previously done manually. An AI writing assistant could retrieve sensitive documents from SharePoint and condense them into email drafts. A sales chatbot may cross-reference internal CRM records to answer customer questions. In each of these scenarios, the boundaries between applications are blurred and new integration paths are created on the fly.

What makes this dangerous is the way the AI ​​agent operates. They rely on probabilistic decision-making rather than hard-coded rules and generate output based on patterns and context. Carefully written input can prompt an AI to behave in ways its designers did not intend. AI doesn’t have a native concept of trust boundaries, so it follows them.

This means that the attack surface includes all inputs, outputs, and integration points that the model touches.

If an attacker can easily manipulate the context that a model sees or the channels it uses, there is no need to hack the model’s code. The incident mentioned above illustrates this. Prompt injections hidden in repositories hijack the AI’s behavior during routine tasks, allowing malicious extensions to siphon data from the AI’s conversations without ever touching the model.

Why traditional security controls aren’t enough

These workflow threats expose traditional security blind spots. Most traditional defenses were built around deterministic software, stable user roles, and clear boundaries. AI-driven workflows break all three assumptions.

Most popular apps distinguish between trusted code and untrusted input. AI models don’t. To them, everything is just text, so malicious instructions hidden in a PDF are no different from legitimate commands. The payload is not malicious code, so traditional input validation is useless. It’s just natural language. Traditional monitoring detects obvious anomalies such as large downloads or suspicious logins. But an AI reading 1,000 records as part of a routine query looks like normal service-to-service traffic. If that data is summarized and sent to the attacker, technically no rules have been broken. Most common security policies specify what is allowed or blocked. That is, do not allow this user to access that file, block traffic to this server, etc. However, AI behavior is context-dependent. How do you write a rule that says “never expose customer data in output”? Security programs rely on regular reviews and fixed configurations, such as quarterly audits and firewall rules. AI workflows are not static. Integrations may add new functionality after updating or connecting to new data sources. By the time the quarterly review occurs, the tokens may have already been leaked.

Securing AI-driven workflows

So a better approach to all of this is to treat the entire workflow as protected, not just the model.

Start by understanding where AI is actually used, from official tools like Microsoft 365 Copilot to browser extensions that employees install on their own. Understand what data each system can access and what actions it can take. Many organizations are surprised to find that they have dozens of shadow AI services running across their business. If your AI assistant is only for internal summaries, limit sending external emails. Scan output before sensitive data leaves your environment. These guardrails should exist outside of the model itself, i.e. within middleware that checks before actions are performed. Treat AI agents like any other user or service. If your AI only needs read access to one system, don’t give it comprehensive access to everything. Limit the scope of OAuth tokens to the minimum necessary privileges and monitor for anomalies such as AI suddenly accessing data it has never touched before. Finally, it’s also helpful to educate users about the risks of copying unvetted browser extensions and prompts from unknown sources. Thoroughly vet third-party plugins before deploying them and treat tools that interact with AI inputs or outputs as part of your security perimeter.

How a platform like Reco can help

In reality, you cannot scale if you do all this manually. As a result, a new category of tools is emerging: dynamic SaaS security platforms. These platforms act as a real-time guardrail layer on top of AI-powered workflows, learning what normal behavior looks like and flagging anomalies when they occur.

Reco is one of the prime examples.

Figure 1: Reco Generate AI Application Discovery

As shown above, the platform gives security teams visibility into AI usage across the organization, revealing which generative AI applications are being used and how they are connected. From there, you can apply guardrails at the workflow level to catch risky behavior in real time and maintain control without slowing down your business.

Request a Demo: Get started with Reco.

Was this article interesting? This article is a contribution from one of our valued partners. Follow us on Google News, Twitter, and LinkedIn to read more exclusive content from us.

Source link

#BlockchainIdentity #Cybersecurity #DataProtection #DigitalEthics #DigitalIdentity #Privacy
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticlePFAS increase risk of gestational diabetes, major review finds
Next Article NASA astronauts return to Earth after unprecedented medical emergency on ISS
user
  • Website

Related Posts

4 outdated habits that will destroy your SOC’s MTTR in 2026

January 15, 2026

Microsoft legal action disrupts RedVDS cybercrime infrastructure used for online fraud

January 15, 2026

Palo Alto fixes GlobalProtect DoS flaw that could crash firewall without logging in

January 15, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

NASA astronauts return to Earth after unprecedented medical emergency on ISS

Model security is the wrong framework – the real risk is workflow security

PFAS increase risk of gestational diabetes, major review finds

Dark matter may have started at a much higher temperature than scientists thought

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Castilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future

Local Power, Health Innovation: Alcolea de Calatrava Boosts FiveCLM PoC with Community Engagement

The Future of Digital Twins in Healthcare: From Virtual Replicas to Personalized Medical Models

Human Digital Twins: The Next Tech Frontier Set to Transform Healthcare and Beyond

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2026 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.