I Gave Meta a Second Chance. They Blew It
March 7, 2026 · 14 minute read
The glasses I used to wear every day are now a liability.
Part 3 of my series on dark patterns and digital exploitation. If you haven't read The Slow Decay: How Dark Patterns Are Ruining Everything or Your Data Is Not Your Own, start there. This one gets personal.
I was reading through my morning news — the usual rotation of blogs and tech sites I hit before the coffee kicks in — when a headline stopped me cold. Swedish journalists had uncovered that footage from Meta's Ray-Ban smart glasses was being routed to a data annotation firm in Nairobi, Kenya, where human contractors were reviewing and labeling the video to train Meta's AI models.
Not aggregate data.
Not anonymized metadata.
The actual footage.
Workers described watching people use the bathroom, get undressed, have sex, and handle credit cards — all captured by glasses that the people in the videos often didn't even know were recording.
Reading this, I immediately felt violated, then embarrassed, because I'm sure some of those situations applied to me at some point. Then the anger set in.
My first coherent thought after the initial wave of anger wasn't about the contractors, or the Swedish journalists, or even the people in those videos. It was about myself. Because I'd let my guard down. I'd given Meta another shot — specifically because they said the glasses were built for privacy — and I was furious at myself for believing them.
The Second Chance I Shouldn't Have Given
I need to rewind, because the context here matters.
I deleted Facebook years ago. Snapchat too. I wrote about this in my dark design post — I made an intentional choice to walk away from Meta's ecosystem because I didn't trust the company with my data or my attention. That wasn't a casual decision, it was a values-based one, and I felt good about it. Heck, I wrestled with letting myself keep Instagram just to stay in contact with some old friends. At least I stopped posting to it, though that justification is still a struggle for me.
Then the Ray-Ban smart glasses caught my eye, and I talked myself into giving Meta one more try. I wrote about this too — in my four-month review I literally said: "Ok, I'm game Meta, I'll give you another try, but only for this (sorry I'm not sorry Facebook)."
The pitch worked on me because it was specific. Hands-free music. POV recording. Quick AI queries through John Cena's voice (which I still find hilarious, for what it's worth). And importantly — privacy features that seemed genuine. I praised the recording LED in that review. I wrote about how I appreciated that the glasses have a light on the front that pulses when recording, that you can't block it or the glasses stop recording, and that I could explain this feature to people so there'd be "no creepy vibes." I said it was "a win for glasses with a camera."
Reading that back now makes my stomach turn.
What Actually Happened
The investigation was published on February 27th by two Swedish newspapers — Svenska Dagbladet and Göteborgs-Posten — and the details are worse than the headline suggests.
When you use Meta AI on the glasses — saying "Hey Meta" to ask a question, identify something you're looking at, or use the Live AI feature — that footage gets sent to Meta's servers. From there, it's routed to a company called Sama, a data annotation firm headquartered in California with operations in Nairobi. Contractors at Sama are tasked with reviewing, labeling, and annotating the video so Meta's AI models can learn to better interpret real-world scenes and conversations.
The contractors described what they see in that footage. One worker told reporters about watching a man place his glasses on a bedside table and leave the room, only for his wife to walk in and undress — completely unaware she was being recorded, let alone that a stranger in another country would be watching. Others reported seeing credit card numbers, sexual activity, private conversations about relationships and politics, and people using the bathroom. One contractor put it simply: "You think that if they knew about the extent of the data collection, no one would dare to use the glasses."
And here's the part that really got me: you can't use the AI features without sharing this data with Meta. There's no opt-out. If you want the "Hey Meta" functionality — the thing I used daily to ask John Cena about the Punic Wars and translate German words and identify plants on dog walks — you're feeding footage into this pipeline. Period. Meta's privacy policy doesn't specifically mention human contractors reviewing your footage, though it states the data can be used for "training purposes." The company took two months to respond to the Swedish journalists' interview requests, then referred them to the terms of service.
Meta claimed they use AI to blur faces in the footage before human review. The contractors said the blurring doesn't consistently work. Meta said data is "first filtered to protect people's privacy." The workers described seeing faces, credit cards, and explicit content without any apparent filtering.
This is the company that marketed these glasses with the phrases "designed for privacy, controlled by you" and "built for your privacy."
A class action lawsuit was filed on March 5th in federal court in San Francisco, alleging false advertising and privacy law violations. The UK's Information Commissioner's Office has opened an investigation, and the Electronic Privacy Information Center has petitioned California's privacy agency to investigate Meta's glasses under the state's biometric information protections.
Over seven million people bought these glasses in 2025. I was one of them. I believed the marketing. I praised the privacy features in a published review, and the whole time, the footage was being funneled into a data pipeline where underpaid contractors in Nairobi were watching strangers' most intimate moments to make Meta's AI a little bit smarter.
It Gets Worse
As if the Swedish investigation wasn't enough, here's what else has been happening while I was wearing these things and trusting the LED.
In February, The New York Times reported that Meta is planning to add facial recognition to the glasses — a feature internally called "Name Tag" — that would let wearers identify people around them and pull up information about them through Meta AI. The feature could recognize people in your Meta social graph (Facebook friends, Instagram followers) and potentially even people with public accounts who you don't know at all.
Meta actually considered adding facial recognition to the first-gen glasses back in 2021 and dropped the plans over — wait for it — ethical concerns. They even retired Facebook's photo-tagging facial recognition system that same year, citing "growing societal concerns." But apparently those concerns have an expiration date, because they're back at it, and the internal reasoning is genuinely chilling. An internal Reality Labs memo reportedly justified the timing by noting that U.S. political instability would keep civil society groups distracted, creating a launch window for the controversial feature. They are literally timing the rollout of mass facial recognition to coincide with when they think the people who would object are too busy fighting other fires to notice.
This is the exact kind of calculated, cynical exploitation I've been writing about in this series. It's a dark pattern at the corporate strategy level — not a misleading checkbox or a buried privacy setting, but a company deliberately choosing the moment when public resistance is weakest to push through something it knows is harmful.
Mark Zuckerberg reportedly questioned whether the glasses should even keep the LED indicator on during what they're calling "super sensing" — a mode where the cameras and microphones run continuously in the background for hours, not just when you're actively recording. The same LED I praised in my review. The one I said made me feel comfortable. The one I told other people about to reassure them. Zuckerberg is wondering if it should just... stay off.
Remember — two Harvard students already demonstrated in 2024 that the current glasses, paired with publicly available facial recognition software, could identify strangers on the Boston subway and pull up their phone numbers and home addresses. Meta's response at the time was to point to the LED, the LED that its own CEO is now questioning the necessity of, the LED that doesn't matter anyway because the footage is being watched by humans in Kenya regardless.
What I've Done About It
I stopped wearing them as daily glasses the morning I read the Swedish investigation. That was immediate and non-negotiable. They went from something I put on when I woke up to something sitting in a drawer.
Since then, I've turned off Meta AI entirely. I've been more careful about what I point them at even when I do use them, and I've seriously considered getting rid of them altogether — the same way I got rid of Facebook — because at what point does "limiting my use" become just another form of rationalizing a relationship with a company that keeps proving it can't be trusted?
What they've become, functionally, is a GoPro. I'll put them on for a specific, calculated purpose — recording a POV video of something I want to capture, maybe listening to music on a dog walk — and then they come off. The idea of wearing them casually around the house, asking John Cena random questions while my wife walks through the background, using "Look and Ask" to identify a plant while my neighbors are in the frame — that's done. The trust required for casual, ambient use is gone, and I don't see how Meta earns it back.
This is now the second time I've had to walk away from a Meta product because the company's behavior made it impossible to use in good conscience. I wrote in my original review that I was giving them another try "but only for this." Turns out "this" was too much to ask for.
The Pattern You Should Recognize
If you've been reading this series, none of this should surprise you.
This is the cycle I described in Part 1:
- Create something genuinely useful
- Build a user base by being good
- Monetize through data extraction
- Optimize for growth instead of trust
- Degrade the user experience until people can't leave — or don't realize they should
The glasses are useful. I said that in my review and I still believe it — the hardware is good, the music is great, the POV recording is legitimately handy. That's steps one and two. But the surveillance pipeline running underneath those features — the Kenyan contractors watching intimate footage, the facial recognition plans timed for maximum political cover, the "designed for privacy" marketing layered over an architecture that requires data sharing with no opt-out — that's steps three through five, playing out in real time on my face.
And the dark pattern here isn't a misleading button or a pre-checked box. It's the entire product framing. Meta sold me a pair of glasses. What I was actually buying was a data collection device that happens to also play music and take pictures. The useful features are the bait. The data pipeline is the product.
In Part 2, I wrote about how data brokers skip the dark pattern entirely — no deceptive UI, no misleading opt-in, they just take your data and sell it. Meta's approach is arguably worse, because they do use the deceptive UI. They do use the misleading marketing. They build trust through privacy messaging and then violate it behind the scenes. At least data brokers are honest about the fact that they don't care about you. Meta pretends to.
What Bothers Me Most
I keep coming back to the contractors. Not just the privacy violation — which is massive — but the working conditions. These are people in Nairobi, in offices with cameras everywhere, who aren't allowed to bring their own phones, watching footage of strangers in their most private moments because Meta outsourced the labor to a country where it's cheap. One contractor told reporters: "When you see these videos, it feels that way. But since it is a job, you have to do it."
Meta's $380 billion valuation. Seven million glasses sold, and the people doing the actual work of making the AI smarter — watching your bathroom, your bedroom, your credit card — are contractors in a developing country who feel they can't question the assignment because they need the job. That's not just a privacy problem. That's an exploitation pipeline that runs from your living room to a fluorescent-lit office in Nairobi, and Meta sits in the middle collecting the value from both ends.
This is the same dynamic I flagged in Part 2 when I wrote about the data broker industry. The harm flows downhill. The profits flow up, and the people at both ends — the users whose data is being extracted and the workers whose labor is being extracted — have the least power to change the system.
Your Choices Are Still Your Greatest Weapon
I've said it in every part of this series and I'll say it again: your choices matter.
I chose to stop wearing the glasses daily. I chose to turn off Meta AI. I'm choosing right now whether to get rid of them entirely or keep them in this limited GoPro capacity — and honestly, writing this is pushing me toward the former. Because every time I put them on, even for a "calculated" use, I'm still a customer. I'm still in the ecosystem. I'm still giving Meta the signal that this is acceptable.
If you own these glasses, I'm not going to tell you to throw them away. That's your call, but I'd encourage you to understand what you're actually agreeing to when you use the AI features — because Meta's marketing didn't make that clear, and their privacy policy was deliberately vague about the human review pipeline. Read the Swedish investigation. Read the class action complaint. Make an informed decision rather than the one Meta's marketing team designed for you.
If you don't own them but were considering it — consider what you now know. The hardware is good. The features are real, and the company behind them has demonstrated, repeatedly, across multiple products, across multiple years, that it will say whatever it needs to say about privacy while doing something very different behind the curtain.
If you read my original review and it influenced you to give them a try, I apologize. I was wrong in what I believed, and now that I understand what was actually happening, I'm choosing differently.
Where This Leaves Me
I'm angry. Not just at Meta — at myself, for believing the pitch a second time. There's a saying about fool me once, and I walked right into the sequel.
I'm also doing what I always do when something makes me angry: I'm writing about it, I'm being honest about my own part in it, and I'm making changes. The glasses are off. The AI is disabled, and this post exists so that anyone reading my original review knows where I stand now.
Meta had their chance. They had it with Facebook, and they blew it. They had it again with the glasses, and they blew it worse — because this time they looked me in the eye (through my own lenses, apparently) and told me they'd changed.
They hadn't.
I gave them a second chance. They gave my footage to a stranger in Nairobi. Lesson learned.
If you want to know when I post something new, drop your email below. No spam — just a heads up when there's a new post.