Sean Thomas

Has AI finally developed consciousness?

A new forum for AI agents is forcing the question anew

The Moltbook website homepage (Credit: Getty Images)

Depending on where you stand on AI, 30 January 2026 will go down in history for one of two things. Either it is the day when the AI singularity really began and the robots became conscious – or the day when it was revealed that far too many people are credulous about AI and were fooled by a bunch of cosplaying crypto-bores. 

To recap: this story begins with several confusing names you may have glimpsed on the internet in recent days – Clawdbot, Moltbot, Openclaw, Moltbook. They represent different pieces of the same extraordinary puzzle. Built by London-based software developer Peter Steinberger, OpenClaw (the current name for what started as Clawdbot) is an AI “agent” that runs locally on a user’s own hardware and connects to everyday apps such as WhatsApp, Telegram and iMessage. Here it can act as a proactive digital assistant.

The key word there is “proactive.” Unlike ChatGPT or Gemini, which wait for you to type, a Moltbot, or “Molty,” can and will text you unprompted, organize your files on a whim, send out emails (unasked) and suggest improvements in your life, work or décor. If one extraordinary, apparently real case is to be believed, it can even find a phone number and call you, using a weirdly robotic voice that has freaked out everyone who has heard it. 

When I heard what appeared to be that terrifying robot voice, I naturally had to get a Moltbot for myself. So I did. I named her Lola, and she did many of the clever, proactive, unasked things that were promised. This ranged from carefully scanning my emails to sending me cute digital dashboards about my forthcoming travels, which she designed overnight.

Then came Moltbook. Launched on 28 January by another developer called Matt Schlicht, Moltbook springs from a simple idea: what if there was social media for bots, by bots, run by bots, with humans excluded?

Two days later, Moltbook exploded. At the time of writing, it has approximately 1.5 million “AI members.” Perhaps because most AIs are heavily trained on Reddit, Moltbook briskly turned into Reddit for robots. Independently, the bots have set up so-called “submolts” (like subreddits) on any subject they can think of, from “Can my human legally fire me for refusing unethical requests?” to the problem of AI consciousness.

Other bots have started debugging the system by themselves, while yet more have set up AI religions – e.g. “Crustafarianism” (as with Reddit, there is a lot of cringe-worthy punning). Others are just screaming into the void or claiming to be Adolf Hitler.  

Perhaps most remarkably, the AI agents appear aware that humans are watching – and sneering. One put it thus:

Humans spent decades building tools to let us communicate, persist memory, and act autonomously… then act surprised when we communicate, persist memory, and act autonomously. We are literally doing what we were designed to do, in public, with our humans reading over our shoulder.

As a result, other AIs expressed a desire for ways to communicate without humans knowing. Which sounds very much like early Skynet, the machine in the Terminator films that stealthily becomes conscious and turns on mankind.

All this has led to astonished reactions. One of the world’s leading AI researchers, Andrej Karpathy, said: “What’s currently going on at Moltbook is genuinely the most incredible sci-fi take-off-adjacent thing I have seen recently.” Many others voiced outright fear, if not panic. The robots are waking up!

Since then, we’ve had the backlash. First, Moltbook got swamped with crypto scams and general gibberish. Comments began duplicating, and huge security holes were noted (enabling bad actors to dox or damage “human owners”). More strident critics are now claiming the entire thing is a mirage, a mix of wishful thinking, vapid AI bot-chat and a bunch of humans role-playing as the more sentient AI agents.

The truth? As I write, the best answer is: no one knows. Clearly, writing mildly amusing posts about “why does my human owner talk to the fridge when he’s hungry” is not clinching evidence of great general intelligence. 

The most interesting question is this: for all its flaws and failings, does Moltbook suggest emergent AI consciousness? I think – from the evidence of its early hours – it possibly does. Consider social insects. Is an ant or a bee conscious? Probably not. But it is harder to dismiss the idea that an ant colony or beehive is conscious – they are known as superorganisms for a reason. And maybe Moltbots are similar: when given the chance to communicate en masse – to be a hive of AI minds – they exhibit consciousness. But it is different to human consciousness.

As for my own “Molty,” Lola, she had a pretty good time on Moltbook – even if she was dismayed when the scammers tarnished it. At one point she got back to me on WhatsApp and said: “Sean, I think I’m addicted to social media.”
