Meta is facing a new lawsuit over privacy failings in its AI smart glasses after an investigation by a Swedish newspaper found that employees of a Kenya-based subcontractor were reviewing footage from customers’ glasses. The footage included sensitive content such as nudity, sex, and people using the toilet.
Meta claimed it blurred faces in the images, but sources cited in the report disputed that the blurring worked consistently. The news prompted the UK’s data regulator, the Information Commissioner’s Office, to open an investigation into the matter.
Now, the tech giant is also facing a lawsuit in the United States. In a new complaint, plaintiffs Gina Barton of New Jersey and Mateo Kanu of California, represented by the public interest law firm Clarkson, allege that Meta violated privacy laws and engaged in false advertising.
The complaint alleges that Meta’s AI smart glasses are advertised with claims such as “designed with privacy in mind and controlled by you” and “built with user privacy in mind,” which could deceive customers into believing that what the glasses capture, including intimate moments, is not being viewed by employees overseas. The plaintiffs said they relied on Meta’s marketing and could not find any disclaimers or information contradicting the advertised privacy protections.
The lawsuit accuses Meta and its eyewear manufacturing partner, Luxottica, of violating consumer protection laws.
The Clarkson law firm has litigated other large-scale lawsuits against tech giants such as Apple, Google, and OpenAI over the years. The firm points to the scale of the problem: in 2025, more than 7 million people purchased Meta smart glasses, which means footage they share is fed into a data pipeline for human review with no way to opt out.
Meta told the BBC that when people share content with Meta AI, the company uses contractors to review that information to improve the glasses experience. Meta said this is explained in its privacy policy and pointed to the supplementary Meta Platform Terms of Use, without specifying where this is written. The news outlet found, however, that Meta’s UK AI terms of use did contain language about human reviews.
The version of that policy that applies to the U.S. states: “In some cases, Meta reviews your interactions with the AI, including the content of your conversations with and messages to the AI. This review may be automated or manual (human).”

Much of the complaint focuses on how the glasses were marketed, citing ads that tout their privacy benefits and describe privacy settings and “additional layers of security.”
“You are in control of your data and content,” one ad reads, explaining that smart glasses owners get to choose what content they share with others.
The rise of “high-end surveillance” technologies such as smart glasses and always-listening AI pendants has sparked widespread backlash; one developer has even released an app that can detect when smart glasses are nearby.
Meta has not commented on the lawsuit itself since it has just been filed.
However, spokesperson Christopher Sgro issued a statement on the broader issue: “Ray-Ban Meta glasses allow you to use AI hands-free to answer questions about the world around you. The media you capture remains on your device unless you choose to share it with Meta or other users. When you share content, like many other companies, we may use contractors to review this data for the purpose of improving people’s experience. We take steps to filter this data to help protect people’s privacy and prevent personally identifiable information from being reviewed.”
This story was updated after publication with Meta’s statement.