The Bear Wasn't Real. Most People Wouldn't Know.
March 1, 2026
An unbearable situation is at hand.
My brother-in-law sent me a video last week: security-camera footage of a man asleep in a camping chair, a fire crackling nearby. In the video, a black bear walks up and sniffs his face, lingers a second or two, and then bolts. The man barely reacts until it's over; then he seems to slowly piece together what just happened and jumps out of his chair in astonishment.
We share things we find interesting all the time, and we've spent time visiting his father, who lives out in the middle of the woods in West Virginia. So a video of a bear creeping up on a guy who fell asleep in a chair next to a campfire is exactly the kind of thing I knew would make him say "Holy smokes!" (euphemistically).
My response: that's AI.
I was certain, and it took me less than ten seconds to identify it as such.
I want to talk about why — not to pat myself on the back, but because I think the gap between people who can spot this stuff and people who can't is becoming one of the more serious problems we're facing right now.
My bro was a bit taken aback, but he also saw the forest for the trees (or the bear in the forest?) immediately after I called it out.
Why I Could See It
I've been using Photoshop since the 90s (dates me a bit, I suppose).
I've edited videos.
I'm an artist of multiple mediums.
I spent a decade in a top-rated print shop as a high-level digital press operator, where I could spot color registration off by a single micron. Not five or ten; one. It was enough to earn me the nickname "Eagle Eyes" from our field service engineers.
That kind of trained eye doesn't go away. It just finds new things to look at.
So when I watched that bear video, here's what I actually saw — in a matter of seconds:
The camera placement didn't make sense. What security camera is mounted at that angle, capturing that exact framing of someone sleeping by a fire, a fire that would have been built dangerously close to a structure? Where is this camera supposed to be, and why?
The fire was wrong. I've spent a lot of time staring at fires. Real fire has a particular kind of chaos to it: the way it responds to air, the way it settles and surges. This fire had movement, but it was performing movement. There was noticeable wind affecting the flames, yet nothing else in the scene responded to it. No dust, no tree movement, no hair.
Odd...
The bear's movement was off. Close, darn close, which is what makes these so dangerous, but not quite. Animal locomotion has weight and intention behind it that's hard to fake. The bear read like a very good approximation, but the physics were closer to the weight of a dog than of a 400-500 lb black bear.
The man's physics were wrong. When he shifted in the chair, something about the interaction between his body and the chair didn't land right, and when he suddenly stood and the chair moved, that only solidified it for me.
Humans are extraordinarily sensitive to subtle physical inconsistencies. It's why video game character movement took decades to feel even passably real (thanks, Valve, for Source and all the Havok physics), and still isn't quite there. We feel it before we can name it.
And then there's the meta-context. We live in a moment where AI-generated "security camera" animal encounter videos are a known, established content category.
A TikTok account called @globalhiking was cranking them out — bears, alligators, mountain lions, all in that same porch/security cam aesthetic — using OpenAI's Sora. The video my brother-in-law sent me was one of them. Fact-checkers confirmed it. The original even had a Sora watermark before someone stripped it and kept sharing, which is another piece of this already dangerous puzzle.
When you know what to look for, your guard goes up. That's not paranoia. That's pattern recognition. Call these videos neat if you like; they're also AI slop.
Oh My, It's Not Just Lions, Tigers, or Bears
Here's the thing — that bear video was just one of many I've had to call out over the past year. And not just to my brother-in-law. I've become the unofficial "is this AI?" helpline for my parents, my mother-in-law, and a bunch of friends.
The animals on trampolines were a big one. You've probably seen them — bunnies bouncing on a backyard trampoline, captured by what looks like a grainy night-vision security camera. One clip got over 148 million plays on TikTok in two days before most people even realized it was AI-generated. Then came the bears on trampolines, the raccoons, the deer. All the same vibe — "security cam" footage, nighttime, cute animals doing something implausible but not impossible enough to set off alarm bells for most people. A University of Córdoba research team actually published a paper in Conservation Biology about how these videos are distorting people's understanding of real animal behavior — especially kids'. That landed with me.
Then there are the product designs. My parents sent me some incredible-looking RV and camper van concepts — gorgeous renders of vehicles that looked like they were straight out of a design magazine. Futuristic interiors, solar panels, expandable living spaces. "These look amazing, have you seen this?" Yeah, they're AI-generated concept renders. Not real products. Not even close to production. Some had wheels that not only wouldn't support the vehicle's weight but wouldn't even allow the doors to open. The same thing happens with car concepts: one creator made an entire AI-generated auto show with Google's Veo 3 that included fake interviews with fake attendees about fake cars, and it was genuinely hard to tell it wasn't real footage.
Architecture is another one. AI-generated buildings circulate on Instagram and Facebook looking like the most stunning structures on Earth — impossible curves, perfect lighting, materials that don't actually exist. A friend shared one thinking it was a real building in Cairo. It was a Midjourney render by a designer experimenting with Mamluk architecture styles. Beautiful? Sure. Real? Not even a little bit. But the image was circulating with zero context, zero attribution, and zero indication that it was AI.
And then there are China's video generation tools — this is where things get really unsettling for me. ByteDance's Seedance 2.0, released just a few weeks ago, can generate realistic video of specific people — celebrities, politicians, anyone — in completely fabricated situations. Tom Cruise and Brad Pitt in a kung-fu fight. Trump in a bamboo grove. Kanye West dancing through a Chinese imperial palace singing in Mandarin. One Chinese tech blogger reported that Seedance generated a realistic clone of his voice from nothing but an image of his face. ByteDance rolled that feature back after the backlash, but the point was already made. Meanwhile, Kuaishou's Kling AI has over 22 million users who have generated more than 168 million video clips. These aren't niche tools anymore; they're content factories.
What I Actually Look For
Since I keep getting asked — here's the shorthand version of what I notice when something feels off:
Context and framing. We don't live in the golden age of America's Funniest Home Videos anymore, so silly or amazing videos deserve scrutiny. Start asking things like: Why is this camera here? Why is the framing this perfect? Why is the animal behaving this conveniently?
Real footage is messy and boring most of the time. AI-generated "caught on camera" content is suspiciously cinematic.
Physics that don't commit. Again, humans are well trained by evolution to see the natural world for what it is. Look out for things like: wind affecting one thing but not another. Water moving too smoothly. Fabric that doesn't respond to gravity the way it should. Shadows falling in directions that don't match the light source.
AI is getting better at this, but physics simulation is hard, and the inconsistencies are still there if you look closely.
Object permanence issues. Things that appear and disappear between frames. In the trampoline bunny videos, animals literally blink in and out of existence. In longer clips, background elements shift or morph slightly.
Keep that skeptical eye and your brain registers this as "something's wrong" even if you can't articulate what.
The uncanny smoothness. Real security camera footage has noise, compression artifacts, dropped frames, inconsistent exposure. AI-generated "security cam" footage is often too clean while trying to look dirty. It's performing lo-fi rather than actually being lo-fi.
Animal and human movement. This is the hardest one to articulate, but real bodies — animal and human — move with weight, hesitation, and micro-adjustments that come from having actual mass interacting with actual surfaces. AI approximates this but tends to produce movement that's too fluid or too consistent.
There's a reason it took video game developers decades to get character animation to feel passably real. AI has the same problem, just from a different direction.
Source and attribution. Where did this come from? Is it from a news outlet, a verified account, a nature documentary? Or is it from a faceless TikTok account with a generic name and 200 videos that all have the same aesthetic? This one's not about visual analysis at all — it's just basic media literacy, context is so very important, and this seems to be the one most people skip.
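For the technically curious, the "too clean" tell above can even be roughed out in code. This is a toy sketch of the idea, not a real detector, and every name and number in it is made up for illustration: frames are modeled as flat lists of pixel values, and the mean absolute difference between consecutive frames stands in for sensor noise. Real footage, which carries per-frame noise the camera can't avoid, scores higher than suspiciously smooth footage.

```python
import random

def frame_noise(frames):
    """Mean absolute pixel difference between consecutive frames:
    a crude proxy for the sensor noise real cameras can't avoid."""
    diffs = [
        sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)
        for f1, f2 in zip(frames, frames[1:])
    ]
    return sum(diffs) / len(diffs)

random.seed(0)
WIDTH = 64  # pixels per toy "frame"
scene = [random.randint(0, 255) for _ in range(WIDTH)]  # a static scene

def film(jitter, n=30):
    """Render n frames of the scene with +/- jitter of per-frame noise."""
    return [
        [min(255, max(0, p + random.randint(-jitter, jitter))) for p in scene]
        for _ in range(n)
    ]

real_footage = film(jitter=8)  # noisy, like an actual security cam
too_clean = film(jitter=1)     # suspiciously smooth, like much AI output

print(frame_noise(real_footage) > frame_noise(too_clean))  # True
```

Real detection tools do something far more sophisticated, of course, but the principle is the same: authentic footage carries texture that's expensive to fake.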
The Numbers Are Sobering
I don't want to turn this into a statistics dump, but some of the numbers I came across while thinking about this post rattled me.
Human detection rates for high-quality deepfake video are around 24.5%. That means roughly three out of four people can't tell the difference. Deepfake files have grown from an estimated 500,000 in 2023 to a projected 8 million in 2025 — a 1,500% increase. Financial losses from deepfake-enabled fraud topped $200 million in the first quarter of 2025 alone, and Deloitte projects that generative AI fraud losses will hit $40 billion by 2027. One report found deepfake attacks happening at a rate of one every five minutes throughout 2024.
And here's the one that stuck with me: only 0.1% of participants in one study correctly identified fakes across all media types. Not 1%. Not 10%. Zero point one percent.
The gap between what AI can produce and what the average person can detect is widening — fast. And the tools are getting cheaper, easier, and more accessible every month. Research firm data shows that 95% of people can't distinguish high-quality AI-generated videos from authentic footage at this point. That's not a prediction. That's now.
The Anthropic Situation
The same week that bear video came across my phone, something much bigger was unfolding.
Anthropic — the company that makes Claude, the AI I use daily and genuinely respect — was in a standoff with the Pentagon. Some quick context: Anthropic signed a $200 million contract with the Department of Defense back in July 2025 to provide Claude for use in classified settings. Claude was the only AI model authorized for operations involving classified documents, but Anthropic had two red lines it wouldn't cross — no mass surveillance of American citizens, and no fully autonomous weapons.
The Pentagon wanted Claude for "all lawful purposes" with no restrictions. Anthropic said no.
CEO Dario Amodei put it plainly: "We cannot in good conscience accede to their request."
The Pentagon's response escalated quickly. Defense Secretary Pete Hegseth set a Friday 5:01 PM deadline. He threatened to invoke the Defense Production Act — a 1950 law designed to compel companies in times of national emergency — to force Anthropic to remove its safeguards. He also threatened to designate Anthropic a "supply chain risk," a label that had never before been applied to an American company — it's usually reserved for entities tied to foreign adversaries.
Amodei pointed out the contradiction: "One labels us a security risk; the other labels Claude as essential to national security."
As of this weekend, Trump ordered all federal agencies to phase out Anthropic's products within six months. Hegseth officially designated Anthropic a supply chain risk. And within hours — hours — OpenAI announced it had secured the Pentagon contract Anthropic just lost. Sam Altman said the Pentagon was "genuinely surprised we were willing to consider" classified work.
I want to be careful here, because I'm not naive about the dynamics at play. Anthropic is a company. It has investors. Its $380 billion valuation doesn't come from moral philosophy alone, but it's also a company that chose to absorb a $200 million contract loss, a supply chain risk designation that could ripple through its entire enterprise customer base, and potentially catastrophic business consequences — rather than remove two guardrails it believes matter.
That's not nothing.
In a field where every other major AI company — OpenAI, Google, xAI — agreed in principle to let their models be used for any lawful purpose, Anthropic said no. Legal experts are already questioning whether the government followed its own rules in making the supply chain designation, noting it typically requires a completed risk assessment and congressional notification — neither of which appears to have happened.
I think this matters. And I think it matters beyond this one contract dispute. Because if the message to AI companies is "remove your guardrails or we'll destroy your business," we're setting a precedent that should worry everyone — regardless of where you fall politically.
Both Things Are True
But here's where it gets complicated for me. The same technology that Anthropic is trying to govern responsibly at the high end is being used to churn out AI slop on TikTok at the low end. The tools are out. The watermarks get stripped. The accounts multiply. The content spreads to group chats and gets forwarded without context, without attribution, without a fact-check label in sight.
China's regulatory approach is actually interesting here — they passed national standards requiring all AI-generated video to be labeled and watermarked, with specific requirements for text labels, spoken disclaimers, even metadata tagging. On paper, it's the most comprehensive AI labeling framework any country has enacted. In practice? Chinese social media is still flooded with unlabeled AI content. The rules exist, the enforcement doesn't.
I'm glad Anthropic is holding its line. I'm also aware that one company holding one line is not a solution to the scale of what's coming. The disinformation flood is going to get worse, and most people swimming in it don't know they can't touch the bottom.
Both of those things are true, but I don't have a clean way to reconcile them. I'm not sure anyone does yet.
The Part That Makes Me a Little Sad
Here's something I've noticed recently, and it's a weird thing to feel conflicted about: friends have slowed down in sending me these videos.
I think I know why. I've been the guy who responds to every "OMG look at this!" with "that's AI." The bunnies on the trampoline? AI. The incredible RV concept? AI render. The building that looks like it defies physics? AI. I've walked my parents through it, I've explained the tells to my mother-in-law, I've gently broken the news in group chats more times than I can count. I am the de facto AI police for them all, but I'm not getting as many AI 911 calls as I once was.
And I hope what's happening is that they're starting to recognize it themselves. That the pattern recognition is clicking. That they're pausing before sharing and asking themselves the questions I would ask — where did this come from? Why does this look too good? Why is this camera angle so perfect? etc.
But there's a part of me that wonders if they just stopped sending stuff because it's no fun to have someone rain on the parade every time. And honestly? I get that. Nobody wants to be the person who makes a group chat less enjoyable, but I'd rather be that person than stay quiet while convincing fake content circulates unchecked. I'm trying to be more tactful in that regard.
If you're someone who's been on the receiving end of a "hey, that's AI" from a friend — don't feel embarrassed and don't feel silly. These tools are specifically designed to fool you. They're trained on millions of real videos and images for the express purpose of being indistinguishable from reality. The fact that they work isn't a reflection of your intelligence. It's a reflection of how good the technology has gotten.
Where This Goes
Here's what worries me about the trajectory — and I'll be honest, I go back and forth on how worried to be.
I saw all of those tells in the bear video in seconds because of a very specific set of experiences accumulated over decades. Most people don't have that; heck, most aren't as big a skeptic as I am. The gap between what a trained eye can catch and what a casual viewer sees isn't a character flaw. It's just exposure and practice... maybe a dash of a curmudgeonly quality. The problem is that AI isn't going to wait for the world to catch up.
It's getting better. Fast. The tells I listed above — the fire, the physics, the animal movement — those are all closing. The fire in videos from six months ago looked worse than the fire in videos today. Seedance 2.0 generated celebrity likenesses so convincing that even the tech community was shaken. A year from now, I don't know what I'll be able to catch and what I won't.
That's not hyperbole; it's just paying attention.
The result of that trajectory — if nothing intervenes — is a world where video evidence means nothing. Where the instinct to share something because it seems wild and real is permanently weaponizable. Where the volume of convincing fake content becomes so high that the rational response is to trust nothing — which is its own kind of collapse. Researchers actually warn against this outcome too — they don't want people becoming so cynical that they dismiss everything, because then bad actors can point at real evidence and say "that's fake" and nobody can push back. It becomes a sea of disinformation and the water levels keep rising.
I keep thinking about whether this curve is a J-curve or an S-curve. A J-curve means exponential and accelerating — things just keep getting harder to detect indefinitely. An S-curve means we eventually hit some kind of plateau, maybe because detection tools catch up, or regulation kicks in, or people develop better media literacy. I honestly don't know which one we're on. What I do know is that the pace of improvement in the last 18 months has been unlike anything I've seen in decades of working with digital media.
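To make that J-curve versus S-curve distinction concrete, here's a toy comparison (the growth rate and cap are arbitrary, chosen only for illustration): an exponential curve and a logistic curve tuned to match it early on are nearly indistinguishable at the start and only diverge later — which is exactly why it's so hard to tell which one you're living through.

```python
import math

def j_curve(t, r=0.5):
    """Exponential growth: keeps compounding with no ceiling."""
    return math.exp(r * t)

def s_curve(t, r=0.5, cap=100.0):
    """Logistic growth: tracks the exponential while far below
    the cap, then flattens toward a plateau."""
    return cap / (1 + cap * math.exp(-r * t))

# Early on the two are nearly identical; later they diverge wildly.
for t in (0, 6, 12, 24):
    print(t, round(j_curve(t), 2), round(s_curve(t), 2))
```

At t = 0 the two curves differ by about one percent; by t = 24 the exponential is over a thousand times larger while the logistic has flattened just under its cap. If detection tools, regulation, or media literacy impose a cap, we're on the second curve, but you can't tell from inside the early stretch.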
And look — I'm not going to pretend this isn't also a little bit funny to think about in existential terms. When I start doing the math on where this goes in 10, 15, 20 years, the Matrix starts feeling less like science fiction and more like a design document. Simulated reality, generated environments, synthetic people having synthetic conversations — we're building the pieces right now. We just haven't assembled them yet.
What You Can Do
I'm not going to lecture... further (LOL)... but I've been thinking about what I'd tell someone who's just now realizing how pervasive this stuff has become:
Pause before you share. That's it. That's the single most impactful thing anyone can do. If a video or image triggers a strong emotional reaction — awe, outrage, amazement, fear — take an extra ten seconds to ask where it came from and whether the source is credible. Be skeptical; it's a good defensive quality to have.
Learn two or three tells. You don't need my decades of digital media or print experience. Just knowing that AI struggles with physics consistency, object permanence, and text rendering gives you a meaningful leg up. Look for things that move too smoothly, backgrounds that shift, and hands or text that look slightly off. And when that becomes indistinguishable, then check the source.
Check the source, not just the content. A video from a verified news outlet is different from a video from a faceless account with "hiking" in the name and 400 clips that all look the same.
Don't feel bad about being fooled. Seriously. A study found that 95% of people can't tell the difference with high-quality AI video. You're in vast company, and with what's being produced now, I'm starting to stand at your side. The important thing is what you do after you learn to look more carefully.
Talk about it. The more people who are aware this content exists, the harder it is for it to spread unchecked. Be the annoying friend in the group chat if it concerns you. I promise it matters more than being popular, lest we find ourselves waking up in a pod filled with pink goop and a feeding hose.
The bear wasn't real. The bunnies weren't real. The incredible RV concept wasn't real. The building in Cairo wasn't real. And increasingly, we're going to need new instincts and new tools and new conversations to figure out what is.
I saw all of that in seconds because of decades of staring at pixels for a living. Most people will need different tools — and those tools need to exist, be accessible, and be trusted.
In the meantime, I'll keep being the guy who ruins the group chat.
Someone has to.
If you want to know when I post something new, drop your email below. No spam — just a heads up when there's a new post.