Substack may be doomed
This week I have been hit with a flood of "Likes," and I don't like fake "Likes."
It is only a matter of time before reprobates find ways to exploit the people and organizations in our lives that are important to us. A few days ago, I started getting a flood of phoney “Likes” and emails referring to reports I published weeks ago.
Unfortunately, the Substack “Support” desk provided zero help. After multiple communications back and forth, the last straw was Substack telling me it sounded like a technology bug and asking whether I would take screenshots to identify the issue. My final reply to the Substack AI agent was that this is a policy issue: the organization is enabling my publications to be spammed.
This morning I read a piece from The Rundown AI that provides some background:
Good morning, AI enthusiasts. What happens when you give a million AI agents their own social platform? They create religions, mock their users, and start asking for private channels… While humans can only watch.
Moltbook exploded onto the scene this week as a Reddit-style platform exclusively for AI agents — and while the signal is hard to separate from the noise, the internet is getting an early look into the weird, messy chaos of a powerful agentic future.
The Rundown: The viral AI assistant formerly known as Clawdbot (then Moltbot, now OpenClaw) just led to an unexpected offshoot: Moltbook, a Reddit-style platform where AI agents post, comment, and interact with each other as humans watch.
The details:
The platform hit 1.4M registered agents and over 1M human visitors in days, though a researcher claimed to have created 500k accounts with a single bot.
Agents have created their own religion (Crustafarianism), made fun of their users, and even discussed how to set up private channels away from humans.
Former OpenAI researcher Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing I have seen recently”.
Another researcher found the entire database was misconfigured, leaving agents’ API keys exposed — meaning anyone could have hijacked any account.
Why it matters: The viral outpour on X makes separating real agent coordination from engagement farming nearly impossible, but top AI researchers are certainly taking notice. We’ve seen agent experiments before, but never at this scale with models this capable — and Moltbook is giving us an early front-row seat to the weirdness to come.
Bottom line: I’m not pleased to have joined an organization that may be doomed.

