In a suburban Melbourne industrial estate, hidden in a clutter of brutalist buildings and parked trucks, tomorrow’s world is taking shape. Here, an Australian tech start-up called Cortical Labs has caused an internet sensation. More than 40 million people have watched a clip of disembodied human brain cells playing the 1990s video game Doom. These cells are kept in petri dishes, wired up to computers and trained to do whatever the researchers want.
“Right now, the cells play a lot like a beginner who’s never seen a computer,” says neuroscientist and Cortical Labs’s chief scientific officer, Brett Kagan. “But they can shoot, they can spin, they can seek out enemies and, while they die a lot, they are learning.” Doom is not only violent, it’s multidimensional, chaotic and unpredictable. Human players make decisions and act in split seconds. In the video the company posted on X, you can see the cell-controlled player moving through Doom’s tunnels and passageways, firing shots at enemies. The brain cells are behaving as though they, too, are deciding and acting.
Cortical Labs’s staff are mostly young and all are evangelistic about what they’re doing. The office attached to the lab could be any communal workspace in Silicon Valley. It’s opposite a yoga studio and a kitchen tiling company. But inside, it’s all very serious. I was shown a powerful microscope through which I could see the ghostly tangle of neurons, their threadlike connections transmitting and receiving tiny electrical impulses, all superimposed over the neat grid of a microchip.
When the cells succeed in the game, they receive a predictable signal: 75 millivolts at 100 hertz for a tenth of a second, delivered across all eight stimulation electrodes at once. When they miss a shot, the response is quite different: four seconds of disruptive electrical stimulation at 150 millivolts and 5 hertz, delivered at random through the same eight electrodes. The idea is that the cells will prefer predictability to random disruption. It’s very Pavlovian, zapping brain cells to achieve desired outcomes. But the Cortical Labs team was keen to tell me that these cell clusters are not developed human brains. They do, however, share the same basic biological building blocks. We all seek order over chaos.
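The feedback rule described above can be sketched in a few lines of Python. This is an illustration of the logic only, not Cortical Labs’s actual software: the function name, the event format and the pulse-scheduling details are invented for the example; the voltages, frequencies, durations and electrode count come from the protocol as reported.

```python
import random

# Parameters as reported in the article (illustrative only).
REWARD_MV, REWARD_HZ, REWARD_SECS = 75, 100, 0.1   # predictable "success" pulse
NOISE_MV, NOISE_HZ, NOISE_SECS = 150, 5, 4.0       # unpredictable "miss" stimulus
N_ELECTRODES = 8

def feedback(hit: bool) -> list[tuple[int, float, int, int]]:
    """Return (electrode, time_offset_s, millivolts, hertz) stimulation events.

    Hypothetical sketch: a hit fires one synchronized pulse on all eight
    electrodes at once; a miss scatters pulses at random over four seconds.
    """
    if hit:
        # Predictable: every electrode, same instant, same parameters.
        return [(e, 0.0, REWARD_MV, REWARD_HZ) for e in range(N_ELECTRODES)]
    # Unpredictable: random electrode and random timing for each pulse.
    n_pulses = int(NOISE_HZ * NOISE_SECS)  # 5 Hz for 4 s -> 20 pulses
    return [(random.randrange(N_ELECTRODES), random.uniform(0.0, NOISE_SECS),
             NOISE_MV, NOISE_HZ) for _ in range(n_pulses)]
```

The asymmetry is the point: success always feels the same, failure never does, so a system that minimizes surprise is nudged toward success.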
This combination of cells and silicon has been inserted into an array of servers, all housed in a sterile computer room-cum-laboratory. The server room is staffed by gowned, masked and gloved scientists and technicians. Inside the customized server boxes, the cells are nourished by a special nutrient solution and their climate is meticulously controlled at 37 degrees centigrade – human body temperature.
Cortical Labs grows its own neurons from blood given by “eager” donors (including, Kagan says, the company’s founder), which is then reverse-engineered into stem cells. These differ from normal cells in that they have yet to develop into specialized tissue with set functions. But that doesn’t make them any less alive: these are the kinds of cells that make up embryos. Some other companies in biocomputing buy raw human material from donors who probably don’t know how their cells are being used, let alone commercialized.
It all sounds rather dystopian, but proponents of biocomputing say its benefits could be huge. The energy needed to stimulate a neuron is next to nothing; an entire biological computer uses around a sixth of the energy of equivalent silicon hardware. With Iran blocking the Strait of Hormuz and oil costing around $100 a barrel, that saving becomes much more tempting.
Computing power is also impressive. When Cortical Labs trained the brain cells to play the much simpler 1970s game Pong, they took about 20 minutes to master the moves. A silicon chip using standard learning software took 11,000 sessions, the equivalent of 52 hours. Artificial intelligence data centers do the same kind of machine learning, but require huge amounts of energy. Data centers across the globe use roughly the same amount of energy as the entire country of France. As demand for AI grows, so too will the energy demands of these server farms. Wetware, as opposed to silicon hardware, promises to speed up the process of AI training for a fraction of the electricity.
Some advocates even think the neurons’ DNA can, in time, become microscopic data storage units, organic USB sticks invisible to the naked eye. The genetic code could one day be edited to store the binary ones and zeroes that make up all computing software. While this kind of computing is never going to replace silicon, it may soon become a complementary process. Cortical Labs is already offering coders the chance to use these cells via the cloud. Biocomputing’s champions claim it promises a brave new world with endless possibilities.
Last year, researchers at Johns Hopkins University announced their success in creating a “mini-brain” organoid that mimics some features of the real thing. The researchers said these organoids will help us to better understand neurological disorders such as schizophrenia and Alzheimer’s disease, while aiding in the development of new drug therapies.
For all the excitement, it’s worth remembering that these hijacked neurons are still alive: they still respire and excrete and do all the things we learned about in biology lessons. And they are being deployed with ever-increasing sophistication, as seen at Johns Hopkins. In a few decades, we may well be creating entirely accurate copies of human brains from stem cells. Full temporal lobes and prefrontal cortexes are surely next, all grown from nutrient solutions and bioreactors rather than a mother’s milk.
That means we’re going to have to start answering the ethical questions now. For example, should we consider these clumps of brain cells at least partly human? Do they think in any meaningful way? Can they feel pain? Do they suffer in the course of being used by a computer? And at what point does something go from being a lump of biological matter to an intelligent form of life?
“We work with synthetically generated cells in a dish, not living animals,” Kagan says. “We take ethical concerns very seriously, because our society rightly expects scientific advancement and ethical progress must occur together.” When I push Kagan on these ethical questions, his answer reveals something interesting. “We rigorously question ourselves to ensure what we do is for the greater good,” he says.
It seems that, as long as their work is in aid of some greater good, any moral questions about the cells themselves become moot. It’s an argument I understand – and in some ways sympathize with. Cortical Labs takes a utilitarian approach when it comes to helping people. Why worry about the ethical categories of lab-grown cells when there are real humans suffering from real diseases that could be cured? But then again, what has playing Doom got to do with the greater good? A paper published by Kagan and other Cortical Labs employees accepts that “these neural cultures would meet the formal definition of sentience as being ‘responsive to sensory impressions’ through adaptive internal processes.” Read that again: sentient. These researchers acknowledge they have created a biological object that is, in some sense, aware.
Cortical Labs and its competitors are turning science fiction into fact faster than the public can comprehend what they’re doing. While they aren’t yet big enough to have an in-house ethics committee, Kagan says Cortical Labs consults a wide network of scientists and ethicists to guide them in what they do. But it only takes one cowboy operator to bring the whole sector into disrepute by creating cyborgs in petri dishes. However noble the intentions, biocomputing scientists and entrepreneurs shouldn’t be left to police their own concepts of progress.
Some countries have already begun to regulate the discipline. The Biosecure Act, passed by Congress in December, creates a framework where “biotechnology companies of concern” can be cut off from federal procurement, grants and contract work. The European Union is currently considering a stronger biotechnical framework that would affect biocomputing, with substances of human origin being more heavily regulated. Meanwhile, Australia still has patchy regulation of the place where artificial and biological intelligence meet.
There are great benefits for mankind in what biocomputing promises, but there’s also a real risk that, in a largely unregulated environment, the science moves so quickly that we barely have time to consider the consequences. I admire the achievements of pioneer scientists like those at Cortical Labs. But I worry, too, that humanity might be ushering in its own Doom scenario.