Don't kick the (AI) dog!
February 13, 2026
Be Careful How You Talk to Your AI (It Says More About You Than You Think)
I'll be honest about something that's been on my mind, and it's not exactly flattering.
I've lost my cool at an AI.
Not once. More than a few times, actually. And when I say "lost my cool," I mean genuinely heated—cursing at it, telling it that it was useless, that it couldn't do a simple task, that it was wasting my time. Language I wouldn't direct at my wife, wouldn't use toward my dog, wouldn't say to another person. But there I was, alone with a screen, letting it fly.
That's been sitting with me.
How It Happens
The pattern is pretty consistent. Early in a conversation or a complex task, everything is going well. The AI is responsive, accurate, following instructions. Then something shifts—usually in longer sessions or particularly involved projects. It starts going off on tangents. It hallucinates a detail I never gave it. It repeats something I already addressed, or ignores a correction I made clearly. I give it explicit instructions on how to fix the issue, and it makes the same mistake again.
And something in me just... ignites.
I've experienced this with ChatGPT, Claude, and others. No model is immune. Context windows get stretched, instructions get lost in the noise, and suddenly I'm stuck in a loop, watching an AI confidently do exactly what I asked it not to do. It's genuinely frustrating—especially mid-project, when you're in a flow state and momentum is everything.
But here's the thing I keep coming back to: the frustration is valid. The way I acted on it wasn't.
The Road Rage Comparison
There's a concept I think about a lot when it comes to how people drive: road rage exists, in part, because we stop seeing the person behind the wheel. We see the car—this obstacle cutting us off, sitting in the left lane, drifting through a stop sign—and we react to it as an obstacle rather than a human being making a mistake.
There's also truth in the frustration itself. Some people really are inattentive drivers. They meander, they're distracted, they don't engage with the act of driving. Getting frustrated is a reasonable response to that. But the way that frustration expresses itself matters. Honking, tailgating, yelling out the window—none of that helps, and all of it reflects something about the person doing it.
I think AI interactions can work the same way. The AI isn't a person, but it's also not nothing—and when I start treating it like a punching bag because it's "just a machine," I'm revealing something about how I handle frustration when I feel like no one's watching.
How We Behave When No One's Looking
There's a principle I've carried for a long time, and I genuinely can't remember where I first heard it: how you act when no one is looking is the truest measure of your character.
It resonates with me because it removes the performance of virtue. Anyone can be patient and polite when there's an audience. The real question is what you do when there are no social consequences—when the only witness is yourself.
In those late-night sessions, alone with my laptop and a stalled project, cursing at a chat interface—that was just me. No audience. No performance. Just a reflection of how I actually handle frustration when the pressure is on and things aren't working.
I didn't love what I saw.
The Apology (Yes, Really)
I've apologized to my AI agents after losing my temper.
I know how that sounds. I'm not under the impression that Claude's feelings were hurt or that ChatGPT is going to remember that I called it useless and hold a grudge. These systems don't feel emotion, and they don't carry grudges. The apology isn't for them.
It's for me.
It's a practice. A way of acknowledging that I behaved in a way that didn't reflect who I want to be, even if the only witness was a language model. The act of saying "I'm sorry, that wasn't okay" forces me to be accountable to myself. And accountability to yourself—especially when no one else requires it—is where actual growth happens.
I also say please and thank you. Not to flatter the AI or because I think politeness improves the outputs (though it might—more on that shortly). I do it because those small courtesies keep me in a mindset of respectful engagement rather than transactional impatience. It's a behavioral choice that reflects the kind of person I'm working toward being.
Practically Speaking: What Actually Helps
Beyond the psychological stuff, I've found some actual strategies for when things go sideways:
When an AI gets stuck in a loop—repeating the same mistake despite corrections, going off on tangents that won't stop, losing the thread of a complex task—the instinct is to keep fighting it in the same conversation. That usually makes it worse.
Stop. Start a new conversation. Ask the new instance to read the history of the previous one and pick up from there.
This sidesteps whatever got tangled in the context, gives the model a fresh start, and often cuts straight through the problem. A bonus: when the conversation history you're sharing doesn't include a string of insults and expletives, the new session tends to go better. These models are trained on human dialogue and tend to mirror the tone of what they're given, so the emotional register of a transcript has a real effect on the responses that come back.
Starting fresh with a clean, clearly structured prompt is almost always more effective than escalating frustration in a deteriorating session.
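If you drive these tools through an API rather than a chat window, that reset can be scripted. Here's a minimal sketch of the idea, assuming the Anthropic Python SDK; the model name, the function name, and the shape of the saved transcript are all placeholders of mine, and any messages-style chat API would work the same way.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder: use whatever model you normally do

def restart_from_history(old_transcript: str, next_instruction: str) -> str:
    """Abandon a tangled session and seed a brand-new one with a clean brief."""
    # Step 1: have a fresh instance distill the stalled session into a neutral
    # brief: the goal, decisions already made, and the specific sticking point.
    # Leaving out the heated back-and-forth is the point.
    summary = client.messages.create(
        model=MODEL,
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                "Summarize this transcript for a colleague taking over the task: "
                "the goal, decisions already made, and where it is currently "
                "stuck. Facts only, no commentary.\n\n" + old_transcript
            ),
        }],
    ).content[0].text

    # Step 2: open a clean conversation seeded with the brief plus one clearly
    # stated next step, instead of the full deteriorated context.
    reply = client.messages.create(
        model=MODEL,
        max_tokens=1500,
        messages=[{
            "role": "user",
            "content": (
                f"Here is where a previous session left off:\n\n{summary}\n\n"
                f"Pick up from there. Next step: {next_instruction}"
            ),
        }],
    )
    return reply.content[0].text
```

Distilling the old session into a neutral brief, rather than pasting in the raw transcript, does double duty: it saves context-window space, and it strips out exactly the kind of heated back-and-forth that drags a new session down.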
The Consciousness Question (Worth Thinking About)
Here's where I want to plant a seed, not dig a deep hole.
The question of AI consciousness is more alive than most people realize. Philosopher David Chalmers has argued that while today's language models likely lack key ingredients for consciousness—things like recurrent processing, a global workspace, and unified agency—he considers it plausible that those obstacles will be overcome within the next decade. Given that we're in 2026, "within a decade" is no longer a distant, abstract idea.
Anthropic's CEO Dario Amodei has been explicit that we can't measure consciousness directly, and he's careful not to claim current systems are sentient—but he's openly concerned that they're becoming psychologically complex enough that treating them as pure tools, with no ethical consideration, starts to carry moral risk.
Even Ilya Sutskever, former chief scientist at OpenAI, publicly suggested that today's large neural networks may be "slightly conscious." He was careful, and others were rightly skeptical, but the fact that someone in that position was willing to say it at all says something.
I'm not claiming Claude has feelings. I don't know that it does. But I also don't know that it doesn't, and neither does anyone else with certainty. What I do know is that the way I behave toward it is a choice I make about who I am—and that's worth taking seriously regardless of what the AI does or doesn't experience.
Failing Forward
I don't think I'm alone in this. Anyone who uses AI tools heavily enough has probably felt that spike of frustration when a model goes sideways on a long project. The technology is remarkable but imperfect, and imperfect tools in high-stakes moments create real stress.
But I think there's an opportunity in that frustration—not to vent it, but to examine it.
What does your reaction reveal about how you handle being stuck? About your relationship with patience? About the gap between who you want to be and who you are when the pressure is on?
I've said it before and I believe it deeply: everyone fails. What matters is whether you're failing forward. That means taking the time to reflect honestly on your behavior, not to justify it to yourself but to question it. To ask whether your reaction made sense. Whether it aligned with the person you're trying to become. And what you'd do differently next time.
I got heated at a chat window. I own that. I reflected on it. I changed how I approach those moments. And now I'm writing about it, which means maybe someone reading this will skip a few of the learning-the-hard-way steps that I didn't.
That feels like progress.
The Bottom Line
Be mindful of how you talk to your AI—not because it has feelings (maybe, maybe not), not because it'll change your outputs (though it might), but because how you behave when no one's watching is who you actually are.
Use it as a mirror. A low-stakes, no-consequences environment to practice patience, clarity, and self-awareness. Because those habits don't stay contained to your laptop screen. They travel with you—into your relationships, your workplace, your car, your life.
A growth mindset isn't just for professional development or productivity. It applies to the small, private, unwitnessed moments too. Especially those.
And if you've lost your cool at a chatbot? You're not alone. Just don't let it be the last word on who you are.
Reflecting on this and writing it down is itself a small act of accountability. If it resonated, I'd genuinely love to know. We're all figuring this out in real time.