I suck at Pong, but apparently if you take my brain out of my skull, mash it up until it’s nothing more than a pile of cells, and hook it up to a computer, I might actually be good at it. Scientists at Cortical Labs have done just that, sort of, by teaching brain cells to play the simple game Pong.

Dishbrain, the rather unimaginative yet ominous name of the project, consists of 800,000 human brain cells cultivated and grown in a dish. Scientists hooked this horrifying puddle of brain cells and silicon mush up to electrodes and began zapping it until it responded. At least, that's how I understood it. I'm no brain surgeon.

However they managed to stimulate the individual brain cells with electricity, it worked. They used basic Pavlovian methods to teach the gooey brain: if it hit the ball, the cells received a regular electrical stimulus; if it missed, they received random electrical spikes. Dishbrain began learning the game almost immediately.

What surprised the scientists was how quickly Dishbrain picked up on the game. Within five minutes, it was slapping Pong balls around like Serena Williams at Wimbledon. More importantly, it learned as it went, adapting to changes in an eerily human way.

Granted, playing Pong isn’t as impressive as surviving a round of Warzone or advancing through Elden Ring, and the researchers noted the brain was rather bad at the game. But this is a start. 

I've always found it hard to believe that AI could reach human levels of self-consciousness on its own. AI is entirely dependent on algorithms and code, and that's just not how advanced abstract thinking works. But now that Dishbrain has entered the chat, I can see the potential for a massive leap in AI. It's all rather disconcerting, if you ask me.

The team behind Dishbrain said their next step is to test how well the petri dish “thing” can play Pong when alcohol gets applied to the neurons. Great. Now I have self-aware drunken AI to worry about. 
