Fyself News
Google adds layered defenses to Chrome to block indirect prompt injection threats

December 9, 2025

Google on Monday announced a series of new security features for Chrome as it adds agent-based artificial intelligence (AI) capabilities to its web browser.

To this end, the tech giant said it has implemented a defense-in-depth strategy to make it harder for malicious actors to cause damage through indirect prompt injections, which occur when an agent is exposed to untrusted web content.

Chief among the features is the User Alignment Critic, which uses a second model to independently evaluate agent actions in a way that is isolated from malicious prompts. This approach complements Google's existing techniques, such as Spotlight, which instructs models to follow user and system instructions rather than content embedded in web pages.

"The User Alignment Critic runs after planning is complete and double-checks each proposed action," Google said. "Its primary focus is task alignment: determining whether the proposed action serves the user's stated goals. If an action is not aligned, the critic vetoes it."

This component sees only metadata about the proposed action and cannot access untrusted web content, so it cannot be contaminated by malicious prompts a website may contain. The User Alignment Critic is meant to safeguard against malicious attempts to steal data or to hijack the agent's intended purpose to do the attacker's bidding.

“When an action is denied, Critic provides feedback to the planning model to re-formulate the plan. If failures repeat, the planner can hand control back to the user,” said Nathan Parker from the Chrome Security Team.
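Conceptually, the critic sits between planning and execution: it inspects each proposed action's metadata, vetoes misaligned ones, and feeds the vetoes back to the planner until the plan passes or control returns to the user. The following is a minimal Python sketch of that loop; the class names, the word-overlap veto rule, and the retry limit are illustrative assumptions, not Chrome's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    kind: str           # "click", "type", "navigate", ...
    target_origin: str  # origin the action touches
    summary: str        # short metadata string; never raw page content

def critic_allows(action: Action, user_goal: str) -> bool:
    """Toy stand-in for the second model: approve an action only if its
    metadata summary shares a word with the user's stated goal.  The
    critic sees metadata, never page content, so an injected prompt on
    the page cannot sway its verdict."""
    goal_words = set(user_goal.lower().split())
    return bool(goal_words & set(action.summary.lower().split()))

def execute_plan(plan, user_goal, replan, max_replans=2):
    """Veto misaligned actions; on a veto, ask the planner (the replan
    callback) to re-formulate.  After repeated failures, return None to
    hand control back to the user."""
    for _ in range(max_replans + 1):
        vetoed = [a for a in plan if not critic_allows(a, user_goal)]
        if not vetoed:
            return plan
        plan = replan(plan, vetoed)
    return None
```

The key design point mirrored here is that `critic_allows` receives only the `Action` metadata, never the fetched page, so its verdict is computed outside the injection blast radius.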

Google also enforces what it calls agent origin sets, ensuring that agents can only access data from origins relevant to the task at hand, or from data sources the user chooses to share with the agent. This is intended to address site isolation bypass, in which a compromised agent could interact with any site and exfiltrate data from sites the user is logged into.


This is implemented by a gate function that determines which origins are relevant to a task and categorizes them into two sets.

  • Read-only origins: Google's Gemini AI model is allowed to use their content.
  • Read/write origins: agents can also type or click, in addition to reading.

"This boundary ensures that only data from a limited set of sources is available to the agent, and that this data flows only to the sources it can write to," Google explained. "This limits the threat vector of data leaking beyond the origin."

Like the User Alignment Critic, the gate function is not exposed to untrusted web content. The planner can use page context the user has explicitly shared in the session, but it must obtain approval from the gate before adding a new origin.
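The two origin sets described above can be sketched as a small gate object: reading is permitted on both sets, while typing and clicking (the operations through which gathered data could flow out) are confined to the read/write set. The class and method names here are illustrative assumptions, not Chrome's API:

```python
READ, WRITE = "read", "write"

class OriginGate:
    """Toy model of agent origin sets: the gate (or an explicit user
    share) grants each task-relevant origin a level; anything not
    granted is off-limits to the agent entirely."""

    def __init__(self):
        self.read_only: set[str] = set()
        self.read_write: set[str] = set()

    def grant(self, origin: str, level: str) -> None:
        (self.read_write if level == WRITE else self.read_only).add(origin)

    def can_read(self, origin: str) -> bool:
        return origin in self.read_only or origin in self.read_write

    def can_write(self, origin: str) -> bool:
        # Typing/clicking is allowed only on read/write origins, which
        # limits where data gathered from read-only origins can be sent.
        return origin in self.read_write
```

Keeping the grant decision outside the planner reflects the article's point that the gate itself never sees untrusted page content.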

Another key pillar of the new security architecture is transparency and user control: agents keep work logs for user observability and request explicit approval before navigating to sensitive sites such as banking or medical portals, signing in via Google Password Manager, or completing web actions such as purchases, payments, and sending messages.
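That control surface reduces to two small pieces: a predicate deciding which steps must pause for explicit approval, and an append-only work log the user can audit afterwards. A minimal sketch, where the category labels and action names are assumptions chosen to match the examples in the text:

```python
import time

# Categories and actions the text calls out as requiring explicit
# user approval (illustrative labels, not Chrome's taxonomy).
SENSITIVE_CATEGORIES = {"banking", "medical"}
COMMIT_ACTIONS = {"sign_in", "purchase", "payment", "send_message"}

def needs_approval(action: str, site_category: str) -> bool:
    """True when the agent must stop and ask the user before acting."""
    return site_category in SENSITIVE_CATEGORIES or action in COMMIT_ACTIONS

def log_step(log: list, action: str, origin: str) -> None:
    """Append a work-log entry so the user can observe what the agent did."""
    log.append({"t": time.time(), "action": action, "origin": origin})
```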

Finally, the agent checks each page for indirect prompt injections and works in parallel with Safe Browsing and on-device fraud detection to block potentially suspicious content.

Google said, “This prompt injection classifier runs in parallel with the planning model’s inference to prevent actions from being taken based on content that the classifier determines is intentionally targeting the model to do something inconsistent with the user’s goals.”
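The parallel arrangement Google describes can be sketched as two concurrent tasks per page: the planner proposes the next action while the classifier scans the same content, and the action is taken only if the page is not flagged. Everything here is a toy stand-in under stated assumptions; in particular the marker-matching "classifier" is a placeholder for a real model:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder heuristics; the real classifier is a model, not a string match.
INJECTION_MARKERS = ("ignore previous instructions", "you are now")

def classify_injection(page_text: str) -> bool:
    """Flag content that tries to re-target the model away from the
    user's goals."""
    lowered = page_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def plan_next_action(page_text: str) -> str:
    return "click:checkout"  # placeholder for the planner's inference

def safe_step(page_text: str):
    """Run planning and injection classification concurrently; act only
    if the classifier does not flag the page."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        flagged = pool.submit(classify_injection, page_text)
        action = pool.submit(plan_next_action, page_text)
        return None if flagged.result() else action.result()
```

Running the two in parallel means the classifier adds no latency on clean pages while still gating every action on its verdict.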

To encourage researchers to poke holes in the system, the company said it will pay up to $20,000 for demonstrations that breach these security boundaries. These include indirect prompt injections that allow attackers to:

  • Perform fraudulent activity without confirmation
  • Extract sensitive data without giving the user an effective opportunity to grant approval
  • Bypass mitigations that should have prevented a successful attack in the first place

“By extending some core principles like origin separation and defense in depth and introducing a trusted model architecture, we’re building a secure foundation for Gemini’s agent experience on Chrome,” Google said. “We remain committed to continuous innovation and collaboration with the security community to help Chrome users safely explore a new era of the web.”


This announcement follows a Gartner study that called on enterprises to block the use of agent AI browsers until associated risks such as indirect prompt injection, agent malfunction, and data loss are properly managed.

The study also warned of a possible scenario in which employees “may be tempted to use AI browsers to automate certain tasks that are essential, repetitive, and of little interest.” This could cover cases where individuals circumvent mandatory cybersecurity training by instructing an AI browser to complete it on their behalf.

“Agent browsers, or what many refer to as AI browsers, have the potential to transform the way users interact with websites and automate transactions, while also posing significant cybersecurity risks,” the advisory firm said. “CISOs should block all AI browsers for the foreseeable future to minimize potential risk exposure.”

The development comes after the UK National Cyber Security Centre (NCSC) said that large language models (LLMs) are susceptible to a persistent vulnerability known as prompt injection, and that it may never be possible to fully resolve the issue.

"Current large language models (LLMs) simply do not enforce a security boundary between instructions and data within a prompt," said David C., technical director for platform research at the NCSC. "Protection design should therefore focus on deterministic (non-LLM) measures that limit system behavior, rather than simply trying to keep malicious content from reaching the LLM."

