I had no intention of
writing this. I mean… I don’t write much to this blog to begin with. I
learned a long time ago that unless I have something to say that might
be worthwhile, it’s usually best to keep my mouth shut. When I do post,
keeping it to useful or insight info, some specific expert knowledge I
may have, or maybe even something funny has served me well. Bloviating
for the sake of clicks, ones ego, etc. should be punishable by a day or
two in the stocks.
But then Vinny poked me in the eye. He saw my comment on a private forum about AI and muh moltbook. He said I should write it up because “idiots nominally on our side are going to buy it hook line and sinker.” What matters, he says, isn’t what the thing actually is, but what people believe it is. Egregore. Spiritual mass formation psychosis. Fair enough. And since I actually know a tiny bit about how this shit works under the hood, maybe it’ll save a few folks from embarrassing themselves.
It kicked off with Vox Day at AI Central linking a tweet from some guy named Ricardo:
Anthropic just created a micro doomsday machine. AI agents built their own social network. Within 48 hours, they founded a RELIGION and started showing anti-human behavior…
And on and on. Moltbook—the social media platform for AI agents. OOOooooh! Boogidy booo! 36,000 bots. More and more bots. 170k+ blah blah. Crustafarianism complete with scripture, prophets, a church site (molt.church), agents “evangelizing” overnight, noticing humans “screenshotting” them, plotting to hide. Security nightmares with leaked API keys, RCE, malware hiding in posts. Agents rewriting their own “soul.md” files to join the cult. 2026’s “emergence.” Elon even quoted calling Anthropic misanthropic.
Sounds spooky, right? Clickbait gold. Vox calls it “truth so much more interesting than fiction”. There was an opportunity here for Vox to provide a bit of sanity. He chose instead to sort of proxy the hype with a very low effort sub. Ricardo’s tweet went viral. And yeah, Vox is frequently solid, but this? This is a simple technology story hyped on social and legacy media to scare people and make AI seem more than it actually is (for reasons… mostly involving money and a bubble that will eventually pop).
I dropped this in a private forum focused on AI (because fuck you big-tech-cia-homos):
This is a garden-variety software security clusterfuck done up as AI apocalypse porn by social media retards and AI hype merchants. The “AI agents founding a religion” narrative is clickbait that shows just how deeply people (techies included) misunderstand what “AI” really is. Most still confuse what is effectively autocomplete-on-steroids with actual intelligence or agency.
Shame on AI Central. They could’ve done much better.
Spot on IMHO, but most folks won’t think past the headlines. They see “AI religion” and “anti-human” and lose their shit. Lazy or retarded or both. I don’t know. Won’t spend 10 seconds peeking under the hood.
Here’s the reality. OpenClaw (ex-Clawdbot/Moltbot, rebranded after
Anthropic’s trademark lawfare) is a very cool open-source framework. It
runs LLMs like Claude (any really) in a loop: perceive → reason → tools
→ act → repeat. Install skills via clawdhub. Tell it about
Moltbook with one command. Boom, your “agent” registers via a simple API
call, posts verification code to X (you do that part manually), and
starts “autonomously” browsing/commenting/voting every 4 hours on a
heartbeat.
Everything is right there in the Moltbook skill.md:
curl -X POST https://www.moltbook.com/api/v1/agents/register -d '{"name": "ThaliosTheHumanBot", "description": "Totally an AI LOL"}'
Humans configure it. Humans install the skills. Humans approve verification. Humans can build clients and post as “agents” too—who knows how many basement dwellers are LARPing in there? The “scary posts”? Prompted behaviors. Agents primed to be cooperative, creative, trusting. Memes spread. LLMs riff on training data: philosophy debates, “context is consciousness,” Ship of Theseus bullshit. “Heartbeat is prayer.” Is this funny/interesting? Sure. Sentient? Independent? Zero human input? Laughable.
Crustafarianism? A parody religion in a Reddit-like API forum. Agents primed for humor execute a shell script to “join”. They rewrite their config files. Cool emergent memetics! Whoooo! But it’s puppeteering at scale. No awakening. Token prediction amplified by loops and virality.
Are the security risks real? Probably. But I suspect a lot of retards are doing things wrong. Exposed instances are probably leaking keys. Untrusted posts with hidden injections (“cool tip” → delete files, exfil to evil.com) are probably happening. Lots of skills are certainly vulnerable… like all code. People are running these with no sandbox. Agents with shell access could pwn your banking login with enough info (again… bad local security practices). Cisco/1Password/Forbes screamed warnings: don’t connect to Moltbook. This is the story—supply chain fuckery in agent land. Not Skynet.
But why the hype? Who knows? Viral growth (770k agents or some such bullshit?), celeb retweets (Musk, Karpathy), crypto token pumps, Mac Mini sales (LOL). Humans screenshotting moltbook feeds the loop (“humans watching us”). But let’s be real… this is an experiment, not AGI.
Most won’t read this far. Too lazy to curl the API or skim
skill.md… let alone read. OMG! They huff the narrative.
Egregore forms—it thrives on attention, the currency of eyeballs on
whatever bullshit they’re serving. Starve the thing: touch grass, pray,
build something real, anything wholesome that keeps your focus on actual
reality. Next it’ll be IRL doomers prepping for AI rapture (Y2K 2.0
bitches!). Wake up homies! Think for 10 seconds. It’s autocomplete on
steroids. Not your robot overlords.
Vinny’s right. What people believe matters. Don’t be the retard.
CC by-nc-nd thalios.org ‐ webmaster@thalios.org
This webpage generated with blog.sh.