Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

The dynamic duo that forms the power grid

Critical flaw in n8n (CVSS 9.9) allows arbitrary code execution across thousands of instances

FCC bans foreign-made drones and key components due to US national security risks

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » OpenAI says AI browsers can always be vulnerable to prompt injection attacks
Startups

OpenAI says AI browsers can always be vulnerable to prompt injection attacks

userBy userDecember 22, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

OpenAI is working to harden its Atlas AI browser against cyberattacks and acknowledges that prompt injection is a type of attack that manipulates an AI agent to follow malicious instructions hidden in web pages or emails. This is a risk that isn’t going away anytime soon, raising questions about how securely AI agents can operate on the open web.

“As with fraud and social engineering on the web, instant attacks are unlikely to be fully ‘solved’,” OpenAI said in a blog post on Monday, detailing how the company is hardening Atlas’ defenses to counter the constant attacks. The company acknowledged that ChatGPT Atlas’ “Agent Mode” “expands the surface of security threats.”

OpenAI announced its ChatGPT Atlas browser in October, and security researchers have rushed to release a demo showing that you can change the behavior of the underlying browser by writing a few words in a Google Doc. On the same day, Brave published a blog post explaining how indirect prompt injection is an organizational challenge for AI-powered browsers, including Perplexity’s Comet.

OpenAI isn’t the only company to realize that prompt-based injection isn’t going away. Earlier this month, the UK’s National Cyber ​​Security Center warned that prompt injection attacks on generative AI applications “may not be completely mitigated”, leaving websites at risk of falling victim to a data breach. UK government agencies have advised cyber experts to reduce the risk and impact of immediate injections, rather than thinking they can “stop” an attack.

Regarding OpenAI, the company said, “We believe rapid injection is a long-term AI security challenge, and we need to continually strengthen our defenses against it.”

What is the company’s answer to this Sisyphean-like challenge? The company says its proactive and rapid response cycle is showing early promise in helping discover new attack strategies internally before they are exploited “in the wild.”

This is not entirely different from what competitors like Anthropic and Google claim. This means defenses must be layered and continually stress-tested to combat the persistent risk of prompt-based attacks. For example, recent efforts at Google have focused on architectural and policy-level controls for agent systems.

But what OpenAI does differently is its “LLM-based automated attacker.” The attacker is essentially a bot trained by OpenAI using reinforcement learning to play the role of a hacker looking for a way to secretly send malicious instructions to an AI agent.

Bots can test attacks in a simulation before actually using them, and the simulator shows how the target AI will think and act if it recognizes the attack. The bot can then study that response, fine-tune its attack, and try again and again. In theory, OpenAI’s bots should be able to discover flaws faster than real-world attackers, since insights into the target AI’s internal reasoning are inaccessible to outsiders.

This is a common tactic in AI safety testing. Build an agent to find edge cases and quickly test it in simulation.

“Our [reinforcement learning]”Trained attackers can coax agents into executing long-lasting, sophisticated, and harmful workflows that unfold over dozens (or even hundreds) of steps. We also observed novel attack strategies that had not appeared in human red teaming efforts or external reports,” OpenAI wrote.

Screenshot showing a prompt injection attack on OpenAI browser.
Image credit: OpenAI

In a demo (partially pictured above), OpenAI showed how an automated attacker could sneak a malicious email into a user’s inbox. Later, when the AI ​​agent scanned the inbox, it followed the instructions hidden in the email and sent a resignation message instead of creating an out-of-office reply. However, the company says that after a security update, “Agent Mode” was able to successfully detect the prompt injection attempt and flag the user.

The company says prompt injections are difficult to defend against in a fool-proof manner, but it relies on extensive testing and faster patch cycles to harden systems before they appear in an actual attack.

An OpenAI spokesperson declined to say whether Atlas’ security updates led to a measurable reduction in successful injections, but said the company has been working with third parties to harden Atlas against rapid injections since before its launch.

Rami McCarthy, principal security researcher at cybersecurity firm Wiz, said reinforcement learning is one way to continually adapt to an attacker’s behavior, but it’s only part of the picture.

“A useful way to infer risk in an AI system is to multiply autonomy with access,” McCarthy told TechCrunch.

“Agent browsers tend to be at the difficult end of the spectrum, which is a combination of moderate autonomy and very high access,” McCarthy said. “Many of the current recommendations reflect that trade-off: Restricting login access primarily reduces risk, but requiring review of confirmation requests constrains autonomy.”

These are two of OpenAI’s recommendations to help users reduce their own risks, and a spokesperson said Atlas is also trained to obtain confirmation from users before sending messages or making payments. OpenAI also suggests that users give the agent specific instructions, rather than giving the agent access to their inbox and telling them to “perform the required action.”

According to OpenAI, “wide tolerance makes it easier for hidden or malicious content to impact agents, even when safety measures are in place.”

OpenAI says protecting Atlas users from prompt injections is a top priority, but McCarthy is skeptical about the return on investment for the risk-prone browser.

“For most everyday use cases, agent browsers still don’t provide enough value to justify their current risk profile,” McCarthy told TechCrunch. “Even though that access is what makes them powerful, given their access to sensitive data such as email and payment information, the risks are high. That balance will evolve, but the trade-offs are still very real today.”


Source link

#Aceleradoras #CapitalRiesgo #EcosistemaStartup #Emprendimiento #InnovaciónEmpresarial #Startups
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleAlphabet to acquire Intersect Power to avoid energy grid bottlenecks
Next Article Defining proactive cloud security with new layers of defense
user
  • Website

Related Posts

Alphabet to acquire Intersect Power to avoid energy grid bottlenecks

December 22, 2025

Trump administration suspends 6GW of offshore wind leases again

December 22, 2025

Paramount renews bid for Warner Bros., secures $40 billion backing from Larry Ellison

December 22, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

The dynamic duo that forms the power grid

Critical flaw in n8n (CVSS 9.9) allows arbitrary code execution across thousands of instances

FCC bans foreign-made drones and key components due to US national security risks

Defining proactive cloud security with new layers of defense

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Castilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future

Local Power, Health Innovation: Alcolea de Calatrava Boosts FiveCLM PoC with Community Engagement

The Future of Digital Twins in Healthcare: From Virtual Replicas to Personalized Medical Models

Human Digital Twins: The Next Tech Frontier Set to Transform Healthcare and Beyond

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2025 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.