Meta Contractors Viewed Intimate Footage from Ray‑Ban Smart Glasses, Investigation Finds
Swedish investigative reporting published this week found that offshore contractors for Meta were asked to review intimate and sometimes disturbing video captured by AI‑enabled Ray‑Ban smart glasses. The footage, reviewed by workers based in Kenya under the contractor Sama, included bathroom recordings, nudity, and footage containing sensitive personal information. Meta's terms reserve the company's right to send interactions with its AI services to human reviewers, a practice the company cited when contacted. The disclosures arrive as the glasses, first launched in 2023 and upgraded with AI features this September, have surged in popularity: CNBC reported that sales tripled in 2025 to more than 7 million units.
Key Takeaways
- Investigative outlets Svenska Dagbladet and Göteborgs‑Posten reported that Kenyan contractors were asked to label intimate and disturbing video captured by Meta Ray‑Ban smart glasses.
- Reported content included bathroom footage, nudity, sexual content and images showing personal data such as bank accounts, according to worker accounts.
- The work was performed by staff under Sama, a contractor previously named in a class action by content moderators alleging poor conditions.
- Meta's Terms of Service state that the company may route interactions with its AI services, including always‑on features, to human moderators for review.
- Meta launched the Ray‑Ban smart glasses collaboration in 2023 and released an upgraded AI‑powered model in September; CNBC reported sales tripled in 2025 to more than 7 million units.
- Privacy and civil‑liberties advocates warn the device’s always‑on camera and prospective facial recognition raise broader surveillance and consent risks.
Background
Meta partnered with Ray‑Ban to introduce camera‑equipped smart glasses in 2023, positioned as a wearable that integrates social features and AI. The product line evolved into an AI‑enhanced Meta Ray‑Ban Display model released in September with a Neural Band interface and promoted assistant integrations. Those AI features require training data and, in many commercial AI pipelines, human annotators review raw footage to label objects, actions and contextual details so models can learn to recognise them.
Data‑labeling contractors such as Sama supply large pools of human reviewers who annotate audio and video for tech companies; Sama has previously faced litigation and scrutiny related to conditions for content moderators. Meanwhile, the glasses’ popularity accelerated: CNBC reported unit sales tripled in 2025, exceeding 7 million devices, increasing the volume of recorded content entering Meta’s systems. That growth has intensified scrutiny from privacy groups and regulators worried about consent, body‑camera‑style recording in public, and potential downstream use of visual data by law enforcement or third parties.
Main Event
The investigation by Svenska Dagbladet and Göteborgs‑Posten collected testimony from Kenyan workers who said they were instructed to review and label videos captured by Meta’s Ray‑Ban glasses. Workers described reviewing intimate moments that, they said, appeared to have been recorded when subjects were unaware. Examples reported included footage shot in bathrooms, sexually explicit material, and images containing banking details.
Those workers were reportedly employed through Sama, a firm that has previously been at the center of complaints from content moderators alleging exposure to traumatic material and inadequate support. Employees told the Swedish outlets they were expected to complete labeling tasks without questioning the source or nature of the footage, and that asking questions risked job loss. The accounts suggest human reviewers handled content originating from consumer devices designed to record everyday life.
When asked for comment, Meta pointed to its Terms of Service, which reserves the right to route interactions with its AI services to human moderators. The company has previously stated it is developing live AI features and potential facial recognition capabilities for future device updates, which would extend persistent sensing and on‑device analysis. The investigators’ reporting adds a new dimension to ongoing concerns about how wearable cameras feed training datasets used by large AI systems.
Analysis & Implications
The revelation that intimate, potentially nonconsensual footage from consumer wearables is being labeled by human contractors raises immediate privacy and consent questions. If videos captured in private settings are reviewed offsite, individuals may be unaware that intimate moments enter corporate data pipelines. That gap between user expectations and back‑end practices could prompt legal challenges under privacy laws and consumer‑protection statutes in multiple jurisdictions.
For Meta, the findings pose a reputational risk and may accelerate regulatory scrutiny. The company has faced criticism previously over facial recognition and data‑handling practices; adding evidence that sensitive wearable footage is included in human‑review workflows may trigger inquiries from privacy regulators and lawmakers, and could influence pending or future legislation on biometric data and always‑on sensors.
Operationally, reliance on offshore human labeling raises labor and welfare questions. Prior litigation against Sama indicates systemic concerns about the conditions under which traumatic or sensitive content is reviewed. Companies that contract this work will face pressure to demonstrate robust safeguards for reviewers, stricter filtering before human exposure, and clearer user consent mechanisms governing what data may be routed to human teams.
Comparison & Data
| Year | Event | Detail |
|---|---|---|
| 2023 | Meta–Ray‑Ban smart glasses launch | Initial product release |
| 2025 (September) | Upgraded AI‑powered Ray‑Ban Display released | Neural Band interface introduced |
| 2025 | Sales surge reported by CNBC | More than 7 million units sold; sales tripled |
The table highlights the product timeline and public milestones relevant to the reporting. Sales growth in 2025 expanded the installed base of always‑on‑capable recording devices, increasing the volume of footage that could be used to train or evaluate AI systems. That growth magnifies downstream privacy and labor issues linked to data‑labeling practices and content moderation workflows.
Reactions & Quotes
The following short excerpts and context summarise public statements and worker testimony collected by the investigation.
“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work.”
Employee interviewed by Svenska Dagbladet / Göteborgs‑Posten
Workers used that line to describe the tension between recognising the sensitivity of footage and the expectation to continue labeling without objection.
“Meta reserves the right to send users’ interactions with its AI services, including its always‑on live AI features, to human moderators.”
Meta Terms of Service (company policy)
Meta cited this clause when contacted; it defines a policy basis for routing certain AI interactions to human review teams, according to the investigators.
“An investigation found offshore Meta workers in Kenya were asked to analyze intimate and even ‘disturbing’ videos taken by glasses wearers.”
Svenska Dagbladet & Göteborgs‑Posten (investigative report)
The Swedish outlets reported on worker testimony, examples of sensitive footage, and the contractor relationship; their reporting is the primary source for the allegations described here.
Unconfirmed
- The number and nature of the private or nonconsensual clips reviewed have not been independently verified; the investigation relied on worker testimony and document review.
- The extent to which labeled footage from Ray‑Ban devices was used to train Meta’s models has not been independently audited or publicly disclosed by Meta.
- Claims that facial recognition tied to the glasses will be deployed broadly remain plans and proposals rather than confirmed rollouts.
Bottom Line
The reporting exposes a tension at the intersection of consumer wearables, AI training needs and worker protections: popular smart glasses produce large volumes of intimate visual data that may be funneled into human‑review workflows. That linkage creates privacy risks for people recorded, and welfare risks for annotators asked to process sensitive material.
For regulators and companies, the finding underscores the need for clearer consent flows, stronger technical filtering to reduce human exposure to intimate content, transparent disclosures about what footage may be reviewed, and contractual safeguards for labeling workers. Consumers and policymakers should watch for follow‑up audits, corporate clarifications and any regulatory responses in the months ahead.
Sources
- Mashable — news report summarising Swedish investigation (media)
- Svenska Dagbladet — investigative reporting (investigative newspaper)
- Göteborgs‑Posten — investigative reporting (investigative newspaper)
- CNBC — sales reporting on Ray‑Ban devices (business news)
- Sama — contractor information (company site)
- Meta Terms of Service — official company policy (corporate)