Stand against Dark Design

2026-03-04

The Slow Decay: How Dark Patterns Are Ruining Everything (And What We Can Actually Do About It)

We need to talk about what's happening to the internet—and to every digital tool we've come to rely on.

It started with a conversation about Jonah Hill movies. Specifically, whether his character in Get Him to the Greek was the same character from Forgetting Sarah Marshall. It turns out there's no intentional connection between the two, even though Aldous Snow carries over and Sarah Marshall gets a cameo in Get Him to the Greek. A small missed opportunity for deeper world-building, sure, but it led to a bigger realization: everything is getting worse because companies have stopped caring about depth, quality, and user experience in favor of short-term profit optimization.

Strange trigger for a realization, but that's just how my mind works. I don't necessarily believe the disconnect in Get Him to the Greek was intentional, but the more I thought about it, the more I realized it's not just movies. It's everywhere.

ChatGPT Just Crossed a Line We Should All Care About

In February 2026, OpenAI started testing advertisements in ChatGPT. Not banner ads on the side. Not a sponsored section you can ignore. Ads integrated into the conversation experience itself, matched to what you're discussing right now, optimized based on your chat history and how you've interacted with previous ads.

The official line: "Ads appear below responses, clearly labeled as sponsored, and don't influence the actual answers."

The reality: When advertising becomes a major revenue stream (OpenAI is projecting $25 billion by 2029), the incentive structure changes completely. Internal discussions already mention giving sponsored results "preferential treatment"—meaning the answers themselves could prioritize advertiser content.

This isn't just annoying. It's the corruption of a tool that was supposed to be neutral.

And if you think "well, I'll just pay for the premium version and avoid ads," remember: every platform that introduces ads eventually degrades the free tier to force that exact choice. It's not about giving you options. It's about extracting maximum revenue from every user, one way or another.

The Pattern Is Everywhere

This isn't new. We've watched this exact trajectory play out over and over:

The Cycle:

  1. Create something genuinely useful
  2. Build a user base by being good
  3. Monetize through ads or aggressive upselling
  4. Optimize for engagement and revenue instead of quality
  5. Degrade the user experience until people can't leave (because of lock-in, network effects, or lack of alternatives)

We've seen it with:

  • Google Search - Once a clean, reliable research tool. Now the first page is mostly ads, SEO spam, and AI-generated garbage. Actual useful results are buried.
  • YouTube - Went from occasional skippable ads to multiple unskippable ads per video, mid-roll interruptions, and aggressive Premium upselling.
  • Facebook - Traded chronological feeds for algorithmic manipulation designed to maximize engagement (read: outrage and addiction).
  • Instagram - Same story. Feed algorithm prioritizes sponsored content and "suggested posts" over people you actually follow.
  • Twitter/X - Pay for verification or get buried. Pay for reach. Ads everywhere. The platform is now optimized for revenue extraction, not communication.
  • Netflix - Raised prices repeatedly while canceling shows, fragmenting content across platforms, and introducing ads to a service people paid for specifically to avoid ads.
  • Amazon - Search results are now 50% sponsored products. Reviews are gamed. The entire shopping experience is optimized to confuse you into buying what makes Amazon the most money, not what's best for you.

And now: ChatGPT and AI research tools.

The tools we were supposed to trust for information are being commercialized in real-time. And once that door opens, it doesn't close.

What Dark Patterns Actually Are (And Why You Should Be Furious)

Dark patterns are design choices that manipulate you into doing things that benefit the company at your expense. They're not accidents. They're deliberate, researched, A/B tested strategies to extract more money, data, or engagement from you.

Examples you've definitely encountered:

1. Subscription Traps

  • Free trial that requires a credit card and auto-renews at full price
  • "Cancel anytime!" but the cancellation process requires calling customer service during business hours, navigating six menus, and talking to a retention specialist whose job is to talk you out of it
  • Hiding the "cancel subscription" button or making it deliberately hard to find

2. Fake Urgency

  • "Only 2 left in stock!" (artificial scarcity to pressure impulse purchases)
  • Countdown timers on sales that reset when the timer hits zero
  • "Other people are looking at this right now!" (manufactured FOMO)

3. Pre-Checked Boxes and Hidden Costs

  • Opt-out instead of opt-in for data sharing, email lists, or add-ons
  • "Trip insurance" or "priority boarding" pre-selected in checkout
  • Fees that only appear on the final payment screen

4. Confusing Pricing

  • Showing you a monthly price but charging annually
  • "Save 40%!" compared to an inflated "regular price" that was never real
  • Free tier with intentionally crippled features to force upgrades

5. Infinite Scroll and Autoplay

  • Designed to keep you engaged longer than you intended
  • No natural stopping point, just endless content optimized to hold your attention
  • Algorithms that learn what keeps you specifically hooked

6. Privacy Settings Buried and Reset

  • Default settings that share maximum data
  • Privacy controls hidden in nested menus
  • Settings that reset to defaults after updates

7. Guilt and Shame Tactics

  • "Are you sure you want to unsubscribe? You'll lose access to..."
  • Exit popups: "Wait! Don't you want to save money?"
  • Confirmation buttons like "No thanks, I don't want to save 50%" (making you click the negative option)

These aren't just annoying. They're manipulative, disrespectful, and in many cases, harmful.

They exploit cognitive biases, take advantage of busy or distracted users, and prioritize extraction over experience. And they're legal, which is part of the problem.
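To make the fake-urgency example concrete, here's a minimal sketch (in Python, purely illustrative; the class and names are hypothetical) of the resetting countdown described above: a "sale ends soon!" timer that quietly restarts whenever it reaches zero, so the pressure never actually expires.

```python
import time

class FakeCountdown:
    """Hypothetical sketch of the 'fake urgency' dark pattern:
    a countdown that silently resets instead of ever expiring."""

    def __init__(self, duration_seconds: int):
        self.duration = duration_seconds
        self.deadline = time.monotonic() + duration_seconds

    def seconds_left(self) -> int:
        remaining = self.deadline - time.monotonic()
        if remaining <= 0:
            # The dark pattern: instead of letting the "sale" end,
            # silently restart the clock at full duration.
            self.deadline = time.monotonic() + self.duration
            remaining = self.duration
        return int(remaining)

timer = FakeCountdown(duration_seconds=3600)
print(timer.seconds_left())  # always shows time remaining; the "deadline" never arrives
```

The whole trick is that the displayed deadline has no connection to any real constraint; it exists only to pressure you.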

Why This Matters More Than You Think

"It's just ads. Everyone has ads. Just ignore them."

No. Here's why this is bigger than annoyance:

1. Trust Erosion

When research tools, news platforms, and information sources are compromised by commercial interests, you can't trust what you're seeing. Is this the best answer, or the answer that makes someone money? You shouldn't have to ask that question every time you search for information.

2. Cognitive Load

Every manipulative design choice adds friction to your day. Dismissing popups, declining upsells, unchecking boxes, navigating purposefully confusing menus: it's exhausting, and it's by design. They're hoping you'll give up and just click yes.

3. Normalization

When dark patterns are everywhere, we stop noticing them. We accept that everything is trying to manipulate us. That's not normal, it's not okay, and most importantly, we shouldn't let it become the default.

4. Accessibility and Equity

Dark patterns disproportionately harm people who are less tech-savvy, have cognitive disabilities, are non-native speakers, or are just busy and distracted. It's predatory.

5. Cultural Degradation

When profit optimization trumps quality, we lose depth, nuance, and craftsmanship. Movies become formulaic. Music becomes four chords and 4/4 timing. AI tools become ad platforms. Everything flattens into the lowest common denominator because that's what's safe and profitable.

We're being asked to accept that everything good will eventually be ruined by commercialization. And I, for one, refuse.

The Companies Doing It Right (And Why We Should Support Them)

Not everyone is racing to the bottom. Some companies are building with integrity, prioritizing user experience, and proving that ethical business models can work.

Nielsen Norman Group (NNg)

The gold standard for human-centered design and usability research. They've been advocating against dark patterns for decades, publishing research on ethical design practices, and training designers to prioritize users over manipulation. If you're in UX, product, or design, their work should be required reading.

Anthropic (Claude's creators)

Full disclosure: I use Claude for a reason. Anthropic has consistently prioritized responsible AI development, transparency, and user safety over growth-at-all-costs. They've resisted the urge to shove ads into every interaction. They publish their research openly. They engage with the AI safety community honestly. They also, quite literally, stood up to the Pentagon — and it cost them.

Here's what happened: Anthropic had a $200 million contract with the Department of Defense, and Claude was the only AI model authorized to operate on the military's classified networks. The contract included two restrictions Anthropic insisted on — no mass surveillance of American citizens, and no fully autonomous weapons. Those restrictions, by Anthropic's account, had never actually interfered with a single government mission. Defense Secretary Pete Hegseth didn't care. He wanted Claude available for "all lawful purposes" with no exceptions, gave Anthropic CEO Dario Amodei a Friday 5:01 PM deadline to comply, and threatened to invoke the Defense Production Act — a 1950 wartime law — to force compliance. Amodei's response was straightforward: "We cannot in good conscience accede to their request."

When the deadline passed, things escalated fast. Trump ordered all federal agencies to phase out Anthropic's products within six months, Hegseth designated Anthropic a "supply chain risk to national security" — a label that had never before been applied to an American company and is typically reserved for entities tied to foreign adversaries — and within hours, OpenAI swooped in and announced it had secured the contract Anthropic had just lost. Sam Altman later admitted the timing "looked opportunistic and sloppy," and legal experts are already questioning whether the government followed its own rules in making the designation, noting that it typically requires a completed risk assessment and congressional notification, neither of which appears to have happened.

Anthropic absorbed a $200 million contract loss, a supply chain risk designation that could ripple through its entire enterprise customer base, and potentially catastrophic business consequences — rather than remove two guardrails it believes matter. In a field where every other major AI company agreed to let their models be used without restriction, Anthropic said no. That's not nothing, and it's a big part of why I use Claude.

Could they change course? Of course. That's why it's important to support them now, while they're doing things right, and hold them accountable if they start sliding into dark patterns. That's why I'm weighing heavily right now whether or not to blacklist OpenAI.

Truly, dark design extends into the very fabric of our society in ways that are hard to imagine or see.

Human-Centered Design Advocates

Organizations and individuals fighting for ethical design, accessible interfaces, and user rights. The people pushing for regulations against dark patterns. The designers refusing to implement manipulative features. The researchers documenting harm and proposing alternatives.

These are the good guys. Support them. Share their work. Hold up their examples when companies claim "everyone does it this way."

What You Can Actually Do

I'm not going to prescribe a single solution because this problem is massive and multi-faceted. But your choices matter more than you think.

Here's what I do, and what you might consider:

1. Vote with Your Wallet (And Your Attention)

  • Use tools and services from companies that respect you
  • Pay for products that don't rely on ads or data harvesting (when you can afford it)
  • Cancel subscriptions from companies that manipulate you
  • Leave honest reviews warning others about dark patterns

My choices:

  • I use Claude instead of ChatGPT
  • I deleted Facebook and Snapchat years ago (all social media is terribly sus these days)
  • I keep my digital footprint minimal and intentional
  • I support companies that prioritize ethics over growth

2. Demand Better (And Actually Mean It)

  • Complain publicly when companies introduce manipulative features
  • Contact customer service and explain why you're canceling
  • Use the thumbs-down buttons, feedback forms, and complaint channels
  • Make noise on social media, in reviews, and in forums

Companies track negative sentiment. If enough people push back, they notice.

3. Learn to Recognize and Resist Dark Patterns

  • Before clicking "I agree," read what you're agreeing to
  • Review pre-checked boxes before submitting forms
  • Use browser extensions that block ads and tracking
  • Teach others (especially older family members or less tech-savvy friends) how to spot manipulation

4. Support Regulation and Advocacy

  • Advocate for laws that ban the worst dark patterns
  • Support organizations fighting for digital rights and ethical design
  • Vote for politicians who understand tech policy (or at least try to learn)

5. Build Alternatives

If you're a designer, developer, or creator: build things that don't suck.

Prove that ethical design can work. Show that you don't need to manipulate users to build a successful product. Be the example that others can point to when they're told "this is just how it's done."

Why I'm Writing This

I'm not a digital rights activist or a policy expert. I'm just someone who's tired of watching good things get ruined by short-term profit thinking.

I'm tired of tools I trusted being corrupted by ads.

I'm tired of every app, platform, and service trying to manipulate me into doing what's best for them instead of what's best for me.

I'm tired of the normalization of dark patterns, the race to the bottom, and the cultural flattening that happens when everything is optimized for engagement metrics.

And I refuse to accept that this is just how things have to be.

The companies doing this are counting on your apathy. They're betting that you'll complain for a day and then keep using their products because the alternatives are inconvenient or because you're locked in by network effects.

Prove them wrong.

The Question I Want You to Ask

I'm not telling you to delete all your accounts, go off-grid, or boycott every company that's ever shown you an ad.

I'm asking you to think critically about what you're accepting as normal.

Ask yourself:

  • Is this design choice helping me, or manipulating me?
  • Am I using this product because it's good, or because I'm locked in?
  • Would I recommend this service to someone I care about?
  • If a company is willing to manipulate me in small ways, what does that say about their values?
  • What am I supporting with my time, attention, and money?

And then ask the bigger question:

  • What kind of digital world do I want to live in?

Because the one we're getting is built by the choices we make—or don't make—every single day.

Final Thoughts

This isn't about being perfect. I still use services that annoy me. I still encounter dark patterns daily. I'm not living off-grid with a Faraday cage around my router.

But I'm intentional about my choices. I pay attention to which companies respect me and which ones don't. I switch when better alternatives exist. I speak up when something crosses a line.

Your choices are your greatest weapon. Use them.

Support the companies doing it right. Push back against the ones doing it wrong. Demand better. Expect better. And don't let anyone convince you that "this is just how things are."

It's only "how things are" if we keep accepting it.


If you're a designer, developer, or company building ethical products—keep going. We see you. We appreciate you. And we're rooting for you.

If you're a user frustrated by the same things I am—you're not alone. And your voice matters more than you think.

Let's build a better internet. One intentional choice at a time.

If you want to know when I post something new, drop your email below. No spam — just a heads up when there's a new post.