Iran’s Infy APT resurfaces with new malware activity after years of silence

‘It felt so wrong’: Colin Angle on iRobot, the FTC, and the Amazon deal that never was

New York Governor Kathy Hochul signs RAISE Act regulating AI safety

Science

Popular AI chatbot has alarming encryption flaws, meaning hackers could easily intercept messages

By user | November 26, 2025 | 4 Mins Read

Cybersecurity researchers at Microsoft have identified a critical flaw in modern artificial intelligence (AI) systems: an attacker could snoop on your conversations with a chatbot, effectively sidestepping the encryption that is meant to keep those chats private.

The attack technique, known as Whisper Leak, is a type of “man-in-the-middle” attack in which a hacker intercepts messages as they travel between the user and the server. It works because the attacker can read a message’s metadata and infer its content, without ever breaking the encryption.

The researchers described the attack in a study uploaded to the preprint database arXiv on November 5. They had notified large language model (LLM) providers in June 2025. Some, including Microsoft and ChatGPT developer OpenAI, responded by assessing the risk and deploying fixes. Other providers, however, declined to implement a fix, citing various reasons, and some did not respond to the findings at all, the researchers said, though they would not name which platforms failed to apply a fix.


“I’m not surprised,” cybersecurity analyst Dave Lear told Live Science. “LLMs are a potential treasure trove given the amount of information people put into them, not to mention the amount of medical data they could contain. With hospitals using LLMs to triage test data, sooner or later someone is bound to find a way to extract that information.”

The vulnerability discovered in AI chatbots

Generative AI systems such as ChatGPT are powerful tools that produce responses to a series of prompts, much like the virtual assistants on smartphones. LLMs, a subset of generative AI, are trained on vast amounts of data to generate text-based responses.

Conversations users have with LLMs are typically protected by Transport Layer Security (TLS), an encryption protocol that prevents communications from being read by eavesdroppers. The researchers, however, were able to intercept the traffic between users and chatbots and infer its content from the metadata alone.

Metadata is essentially data about data, such as the size and frequency of messages, and it can often be more revealing than the content itself. Although the content of the messages between users and LLMs remained encrypted, the researchers could infer the subject matter of a conversation by intercepting the traffic and analyzing its metadata.


They accomplished this by analyzing the sizes of the encrypted data packets (small formatted units of data sent over a network) that carry LLM responses. Based on the timing and token-length sequences of those packets, the researchers developed a set of attack techniques that reconstruct plausible sentences from a message without bypassing the encryption.
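To illustrate the idea (this is a minimal sketch, not the researchers’ actual method), the snippet below simulates an eavesdropper who sees only ciphertext packet sizes, never plaintext, yet can still tell apart two invented “topics” whose streamed responses have different size profiles. The topic names, size distributions, and nearest-centroid classifier are all assumptions made for illustration:

```python
import random

random.seed(0)

# Two hypothetical conversation topics with different (assumed) packet-size
# profiles; an observer records only the size of each encrypted packet.
def observe_packets(topic, n=40):
    """Return the list of ciphertext sizes an eavesdropper would record."""
    mean = {"sensitive": 48, "smalltalk": 22}[topic]  # invented profiles
    return [max(1, int(random.gauss(mean, 5))) for _ in range(n)]

def features(sizes):
    """Metadata-only features: mean packet size and packet count."""
    return (sum(sizes) / len(sizes), len(sizes))

# "Training": record labeled traffic and average its features per topic.
centroids = {}
for topic in ("sensitive", "smalltalk"):
    samples = [features(observe_packets(topic)) for _ in range(30)]
    centroids[topic] = tuple(sum(x) / len(x) for x in zip(*samples))

def classify(sizes):
    """Label fresh traffic by its nearest topic centroid."""
    f = features(sizes)
    return min(centroids,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(f, centroids[t])))

print(classify(observe_packets("sensitive")))
```

The point of the sketch is that no decryption happens anywhere: only sizes and counts are used, which is the same class of signal Whisper Leak exploits.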

In many ways, the Whisper Leak attack is a more sophisticated version of the internet-monitoring regime of the UK’s Investigatory Powers Act 2016, under which message content is inferred from the sender, timing, size, and frequency of communications without reading the messages themselves.

“To put this in perspective, if a government agency or internet service provider monitors traffic to popular AI chatbots, they can reliably identify users asking questions about specific sensitive topics, such as money laundering, political dissent, or other targets, even though all traffic is encrypted,” security researchers Jonathan Bar Or and Geoff McDonald wrote in a blog post published by the Microsoft Defender Security Research Team.

LLM providers have various techniques available to reduce this risk. For example, random padding (adding random bytes to a message to thwart inference) can be appended to the response field, lengthening it and distorting the packet sizes so that they become less predictable.
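A minimal sketch of the random-padding idea (a hypothetical illustration, not any provider’s actual implementation): each response chunk is length-prefixed and padded with a random number of bytes before encryption, so the on-the-wire size no longer tracks token length, while the receiver can still strip the padding exactly.

```python
import os
import random

def pad_chunk(payload: bytes, max_pad: int = 32) -> bytes:
    """Length-prefix the real payload, then append 1..max_pad random bytes."""
    pad_len = random.randint(1, max_pad)
    return len(payload).to_bytes(2, "big") + payload + os.urandom(pad_len)

def unpad_chunk(padded: bytes) -> bytes:
    """Recover the real payload using the 2-byte length prefix."""
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

# The same token now produces varying on-the-wire sizes,
# yet round-trips losslessly.
sizes = {len(pad_chunk(b"token")) for _ in range(100)}
print(len(sizes) > 1)
assert unpad_chunk(pad_chunk(b"secret")) == b"secret"
```

The trade-off is bandwidth overhead: padding adds bytes to every chunk in exchange for making packet sizes less predictable, which is the mitigation effect the researchers describe.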

Whisper Leak’s core flaw is an architectural consequence of how LLMs are deployed. Mitigating the vulnerability is not an insurmountable challenge, but the fix has not been universally implemented by all LLM providers, the researchers said.

Until providers address the flaw, the researchers said, users should avoid discussing sensitive topics on untrusted networks and check whether their provider has mitigations in place. Virtual private networks (VPNs) can add a further layer of protection, since they obfuscate a user’s identity and location.

