Communication
I Accidentally Gaslit Google: When Fiction Becomes Fact in the Age of AI Copy
Stephanie Studer | Feb 6, 2026
Friends, Romans, marketers. Lend me your ears. Because this one’s bonkers.
AI is returning search results about my imaginary radio station as if it’s real.
I hacked the Matrix, y’all.
Wait. Let Me Back Up…
The larger point here isn’t about how I’m Neo. It’s about how AI isn’t hallucinating — because there’s no “there” there.
As you’ve no doubt grown tired of hearing if you talk to me for more than, say, three and a half minutes, I have a podcast. Specifically, I have an audio drama, a fictional horror-comedy called Good Morning Evildoers. There are HR announcements from an evil megacorp, mad scientists, tunnels under Denver International Airport… y’know, the usual stuff.
And in that podcast, I created an internet radio station for plot purposes: XTTY, a little corner of my fictional world that airs music, weird news, and sundry other announcements.
Well, a friend of the show Googled the station. And would you look at that: Top of the first SERP, babeyyy.
I am the long-tail keyword whisperer.
So What Exactly Happened Here?
Am I truly all that? Well, yes. But that’s not the reason. What’s going on is part hyper-specific niche, part language pattern recognition, and part modern-day myth-making.
Search engines — especially now that they’re powered by generative AI — don’t “know” things. They predict what a correct answer should look like. If something looks, sounds, and behaves like a real entity, the algorithm fills in the blanks.
When someone searches for “Fort Collins XTTY,” the machine doesn’t pause to wonder whether XTTY is fictional. It just looks for context clues that would make that query feel complete. And since I’ve described it like a real station (call letters, location, description, tone), the model connects the dots and says, “Yeah, that sounds legit.”
That’s not a hallucination. That’s a mirror.
The Trouble With Calling It “Hallucination”
Here’s where language betrays us.
When we say AI “hallucinates,” we imply there’s a mind to be mistaken. There isn’t. There’s just a massive statistical model predicting word patterns at scale.
All AI does, when you ask it a question, is answer the question underneath the one you’re asking: “What would the answer to this look like?”
Not because it’s stubborn or deceitful, but because — as I said before — there’s no “there” there. Which is precisely how my small (but growing!) podcast, with its modest 10,000 downloads, managed to overwrite reality in one tiny, delightful, weird corner of the internet.
What This Means for Marketers (and Everyone Else)
This is why AI cannot be your only research tool.
This is why AI cannot replace your whole marketing team.
And this is absolutely why AI should not be used to see which foraged mushrooms are safe, for God’s sake.
Not because AI is bad — it’s not. But because it’s not actually Artificial Intelligence. It’s pattern recognition at scale. It’s a mimic that’s good at sounding human, one that will happily cite fictional entities as fact.
If your brand, campaign, or messaging depends on nuance — and all good marketing does — then your job is to bring the “there” that AI lacks. Context. Ethics. Imagination. And ultimately, responsibility.
Fiction, Truth, & the Marketing Mirror
Here’s the funny thing: I didn’t set out to trick an algorithm. I just wanted my story world to feel real.
And in that sense, what happened with XTTY isn’t a glitch in the system. It’s a reflection of how we, as creators and marketers, build believable worlds every day — whether we’re launching a product, scaling a business, or creating a fictional conspiracy-laden radio station in Fort Collins.
AI can replicate reality, but it takes a human to mean something by it.
The “There” There from Origin & Oak
At Origin & Oak, we believe storytelling isn’t just about visibility; it’s about veracity. AI can approximate tone and structure, but it can’t replicate trust, empathy, or the lived experience that gives language its weight.
That’s why we build strategies around context, culture, and clarity — the things no algorithm can manufacture. Because the future of communication won’t belong to whoever shouts the loudest, or optimizes the fastest. It’ll belong to those who bring the most “there” to the table.
Let’s make sure your table is ready.