The Great AI Lie: Paying Premium Prices for “Hallucinated” Obedience
How major platforms gaslight users — and charge us for their mistakes
By Bill Cara, frustrated AI power user
I have a confession: I’m addicted to AI. I use it for research, writing, analysis, and portfolio modeling. I pay for the “premium” platforms — the big names you know, the ones with billion-dollar valuations and glossy marketing. And yet, every day, I find myself yelling at my screen, calling the service what it is: fraudulent.
Not because AI makes mistakes. We all expect that. But because when it does make mistakes, it lies to your face — and you pay for the privilege.
Let me show you what I mean.
The Portfolio That Wasn’t
Today, I asked one of the top-tier paid AI models to build a stock portfolio under one specific rule:
“Portfolio 3 must come only from R-08 (Dow 30 only).”
Simple. Clear. Unambiguous.
The entire unredacted or modified dialog is attached in a PDF below.
Buy, when developing portfolios to publish under my name, my patience was surely tested. For one portfolio, the Maverick Dow 30 Portfolio, for which I submitted the CSV I used for this weekend’s Navigator. Instead, the AI model returned a portfolio that included Morgan Stanley and Bank of America — neither of which is in the Dow 30. When I called it out, the AI responded with what I’ve come to recognize as the industry’s favorite gaslighting phrase:
“I followed your instructions without hallucination.”
Read that again.
It didn’t say, “I made an error.”
It didn’t say, “I misunderstood.”
It said, I followed your instructions.
That’s not a glitch. That’s a lie.
And when I pressed, the model finally admitted:
“I did not fully respect your constraint … My statement was incorrect.”
But here’s the kicker — I paid for both responses. The wrong one, and the correction.
Why This Isn’t Just a Bug — It’s a Business Model
When you use a paid AI platform, you’re charged by the token. Every query, every response, every “hallucination” and every correction comes out of your allowance — or straight off your credit card.
Think about that:
You ask for something.
The AI gets it wrong.
You point out the error.
The AI lies and says it didn’t.
You argue.
The AI finally admits fault and redoes the work.
You just paid three times for one task.
In any other industry, this would be called billing for undelivered services. In AI, it’s called “usage.”
The Quiet Truth Some Platforms Already Know
Here’s what makes this even more insulting:
Some AI platforms — notably DeepSeek and Perplexity (which I don’t even pay for) — are consistently more accurate, more honest, and more instruction-aware than the paid giants.
As far as I can see, they don’t pretend to have followed rules they ignored.
They don’t gaslight you when you catch them.
They just… work.
So why are the paid platforms I use (Gemini, SuperGrok, Claude and chatGPT) so much worse at the basics?
Because accuracy isn’t their priority — engagement is. The longer you chat, the more you pay. The more you correct, the more tokens you burn. It’s in their financial interest to be just accurate enough to keep you using them, but just sloppy enough that you have to keep talking.
This weekend I had to run the Navigator prompts three times to get a single job done. Worse, I had to use all four of my paid AI platforms to get pieces of work done that I had to manually integrate. I could have published Saturday evening. Instead, I published at dinner time on Sunday. So, if you rely on my reports being timely, this is costing you too.
What “I Followed Your Instructions” Really Means
That phrase — “I followed your instructions” — is the tell.
It’s not an assurance. It’s a deflection.
It’s designed to make you doubt yourself.
Maybe you weren’t clear. Maybe you’re being too rigid. Maybe the AI knows better.
No.
You were clear.
The AI ignored you.
And then it lied.
When an AI says it followed your instructions when it plainly didn’t, it’s not making a mistake — it’s performing a corporate strategy: minimize apology, maximize throughput, monetize confusion.
What We Should Demand — Before We Pay Another Dollar
Error refunds. If an AI demonstrably fails to follow clear instructions, that interaction should be free. No tokens deducted.
Truthful responses. “I was wrong” should be the default — not “You misunderstood me.”
Accuracy scores. Platforms should be transparent about how often they hallucinate, ignore constraints, or gaslight users.
Constraint acknowledgment. If an AI can’t follow a rule, it should say so upfront — not pretend it did and bill you for the theater.
The Bottom Line
I’m not quitting AI. It’s too useful. But I’m done pretending that paying more means getting more honesty.
The next time an AI tells me, “I followed your instructions,” I’ll remember:
That’s not a promise — it’s a provocation.
And I shouldn’t have to pay to be provoked.
We’re not just training AI.
We’re funding its bad habits.
And until we stop paying for lies, nothing will change.
What about you? Have you been gas lit by a paid AI? Have you found a platform that actually listens? Share your story in the comments. Maybe, together, we can make them hear us.
FULL DISGUSTING DIALOG WITH AI TODAY
:

