I get that they're doing it for the meme. But perhaps something getting close to human intelligence, made out of human cells, shouldn't be forced to play a violent video game without any alternative options? Does 'the meme' justify that?
I dunno. Nothing against violent games myself. Just feels like it's starting to get quite questionable, ethically speaking.
It's just "Thou shalt not grow a brain in a test tube and force it to play a 1993 shooter" didn't make any sense to Moses and therefore didn't make the editor's cut.
“I am the Lord thy God. Thou shall not have strange gods before Me.”
I saw this article over the weekend and felt similarly: https://theinnermostloop.substack.com/p/the-first-multi-beha...
> Watch the video closely. What you are seeing is not an animation. It is not a reinforcement learning policy mimicking biology. It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move.
And the simulated world they put it in is a sort of purgatory-like environment.
Still I don't understand why they would invite the extra creepy factor of using human brain cells rather than e.g. mouse brain cells. Surely it makes no difference biologically but it's going to lead to fewer comments like this.
>This is an impressive simulation. But it's just not honest to call this 'brain emulation', a 'brain upload' or to say that this is doing anything like 'sensorimotor loop in simulation'. Aside from the fact that a connectome is not a brain, and so we have no idea whether the parts that have been filled in by ML actually function like a brain, the motor control in this framework is not even driven by the brain simulation. The output from the 'brain' is not a sequence of motor commands. It is a steering mechanism, a 2-dimensional descending signal (essentially, turn left or right, speed up or slow down). That is then fed into a series of CPG oscillators, outside of the brain emulation, that model fly movement in response to that 2-dimensional descending signal. Since outputting a 2-dimensional descending signal is not what a fly brain does, the simulated brain is not operating as a fly's brain does. It's machine learning, clamped into the shape of a fly connectome, that has a resting state of 0Hz, being zapped with simple inputs, not virtual sensory data.
Nevertheless a worrying direction.
I read Sapiens once, and it describes how for thousands of years humanity practised paganism, worshipping amalgamations of different animals.
The book makes this point below an image of one of the things humanity has produced in recent years: a mouse on whose back scientists grew an ear made of cattle cartilage cells. It is an eerie echo of the lion-man statue from the Stadel cave.
Thirty thousand years ago, humans were already fantasising about combining different species. Today, they can actually produce such chimeras.
The image can only be described as an eldritch horror. (p. 449, "Of Mice and Men", Sapiens)
The last line of the book is: "Is there anything more dangerous than dissatisfied and irresponsible gods who don't know what they want?"
I think this last line is what you are resonating with. (I highly recommend Sapiens if you haven't read it. Only Animal Farm and 1984 have hooked me like that.)
And then people are creeped out by 200k neurons that can barely find a target when they're told where it is.
You can probably train an ANN with only a few hundred neurons at most to do the same.
But I don’t think anyone “feeling uneasy” should be an argument once the ethical concerns have been considered and the experiment has been approved.
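For what it's worth, the tiny-ANN claim above is easy to sanity-check on a toy version of the targeting task. Everything here is illustrative and invented — a made-up 1-D "enemy angle" observation and a hand-rolled 8-hidden-unit network, nothing to do with the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy targeting task: enemy angle in [-1, 1] (negative = left of centre).
# Labels: 0 = turn left, 1 = shoot, 2 = turn right.
X = rng.uniform(-1.0, 1.0, size=(1000, 1))
y = np.where(X[:, 0] < -0.1, 0, np.where(X[:, 0] > 0.1, 2, 1))
T = np.eye(3)[y]  # one-hot targets

# Tiny MLP: 1 input -> 8 tanh hidden units -> 3 logits, softmax cross-entropy.
W1 = rng.normal(0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1.0, (8, 3)); b2 = np.zeros(3)
lr = 0.5

for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = (P - T) / len(X)                        # dLoss/dlogits
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
    GH = (G @ W2.T) * (1.0 - H**2)              # backprop through tanh
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

acc = (P.argmax(axis=1) == y).mean()
print(f"accuracy: {acc:.2f}")
```

A dozen or so units already nails this; the hard part in the biological setup is everything around the "network", not the function being learned.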
This seems very, very far-fetched. If I understand correctly, these cell brains just respond to some stimuli; it does not seem more intelligent than any automaton to me, just creepier.
As for being creepy, the things humans do to other actual sentient beings are exponentially more horrifying and creepy than making them play computer games. If the monkeys that Volkswagen tortured with their exhaust gases were made to play Doom, that would be a much better world. And they are much, much closer to human-level intelligence than this chip.
Ethically speaking, it got "questionable" long ago; this is not a valid concern for this project imo.
This isn't getting close to human intelligence. They're using about as many cells as a fruit fly has (of course not actually functioning like an animal brain) processing signals to play Doom. The treatment of a single farm chicken is a few orders of magnitude more worrying than this.
I'm sorry to tell you that you're made out of human cells and I don't think you got consent from each brain cell before firing up the old boomer shooters.
There's no way the technology to make and modify "life" including cloning humans hasn't been secretly used or attempted at least once ever since it was discovered.
Are discussions about petri dishes diverting relevant resources away from building safety initiatives?
Can I be allowed to torture small animals so long as human suffering persists?
It's awesome.
People's ick around bodies, which are machines, has always held us back.
It wasn't until we started cutting them open that modern medicine was developed.
We might have brain uploads already had we not been so averse to sticking brains with electrodes.
I'll go further: had we not been so scared of cloning, we'd probably have cured cancer and every major ailment if we'd begun cloning monoclonal human bodies in labs. Engineered out the antigens and did whole head transplants. You could grow them without consciousness or deencephalize them, rapidly grow them in factories, and have new blood / tissue / organ / body donors for everyone.
New young bodies means no more cancer, no more cardiac or pulmonary age. It's just brain diseases left as the final frontier once we cross that gap. And if we have bodies as computers and labs, we'd probably make quick work on that too.
Too tired to lay out the case / refute, so past discussions:
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Personally, dislike this direction a lot. I don't like that they're using a killing game (I understand the trope, doesn't make me like it any less) and the general idea of this whole thing makes me quite uneasy.
Yeah… That’s quite the smoking gun.
So it’s quite likely then that the neurons are just acting as a bad conductor. The electrodes read a noisy version of the signals that go into the neurons, and they just train a CNN with PPO to remove that noise, get the proper inputs, and learn a half-decent policy for playing the game.
If this worked as advertised they shouldn’t need a CNN decoder at all! The raw neuron readout should be interpreted as game inputs directly.
Besides, they are not streaming the video into the neurons at all. Just the horizontal position of the enemies and the distance, or some variant of that. In that sense it’s barely more than pong isn’t it? If enemy left, rotate left, if enemy right, rotate right, if enemy center shoot. At a stretch, if enemy far, go forward, if enemy close, go back. The rest of the time just move randomly. Indeed, the behavior in the video is essentially that…
While we are at it, the encoded input signal itself is already pretty close to a decent policy if mapped directly to the keys (how much enemy left, center, right), even without any CNN, PPO or neurons.
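To make that concrete, here is what a direct mapping of a hypothetical 3-channel enemy encoding to keys could look like, with no CNN, PPO, or neurons in the loop. All names and the threshold are made up:

```python
def baseline_policy(enemy_left, enemy_center, enemy_right, threshold=0.5):
    """Map a (hypothetical) 3-channel enemy-position encoding straight to
    key presses; no decoder network or neurons involved."""
    if enemy_center > threshold:
        return "SHOOT"
    if enemy_left > enemy_right:
        return "TURN_LEFT"
    if enemy_right > enemy_left:
        return "TURN_RIGHT"
    return "MOVE_RANDOM"   # no enemy signal: wander

# e.g. an enemy mostly in the centre of view:
print(baseline_policy(0.1, 0.9, 0.2))  # SHOOT
```

If a hard-coded lookup like this already plays at a similar level, that is the ablation the learning claims need to beat.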
EDIT: It seems like the readme does address these concerns, and the described setup differs significantly from the description in the critical blogpost. Still not entirely convincing to me, a lot of weights being trained in silicon around the neurons, but it sounds better. I don’t have time right now to look deeper into it. They outline some interesting details though.
> Quote from: https://raw.githubusercontent.com/SeanCole02/doom-neuron/mai...
Isn't the decoder/PPO doing all the learning?
No, this is precisely why there are ablations. The footage you see in the video was taken using a 0-bias full linear readout decoder, meaning that the action selected is a linear function of the output spikes from the CL1; the CL1 is doing the learning. There is a noticeable difference when using the ablation (both random and 0 spikes result in zero learning) versus actual CL1 spikes.
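A zero-bias full linear readout of the kind described might look like this sketch. Channel counts and weights are illustrative, not the real CL1 electrode layout, and the weight matrix would be learned rather than random:

```python
import numpy as np

rng = np.random.default_rng(42)

n_channels, n_actions = 32, 4  # illustrative sizes, not the real hardware
W = rng.normal(0, 0.1, (n_actions, n_channels))  # learned in practice; random here

def decode(spike_counts):
    """Zero-bias linear readout: the chosen action is a linear function
    of the output spike counts; no bias term anywhere."""
    return int(np.argmax(W @ spike_counts))

spikes = rng.poisson(2.0, n_channels).astype(float)
print(decode(spikes))
```

The point of such a thin decoder is that it has nowhere to hide learning: if the spikes carry no task information, a linear map cannot conjure it.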
Isn't the encoder/PPO doing all the learning?
This question largely assumes that the cells are static, which is incorrect; it is not a memory-less feed X in get Y machine. Both the policy and the cells are dynamical systems; biological neurons have an internal state (membrane potential, synaptic weights, adaptation currents). The same stimulation delivered at different points in training will produce different spike patterns, because the neurons have been conditioned by prior feedback. During testing, we froze encoder weights and still observed improvements in the reward.
How is DOOM converted to electrical signals?
We train an encoder in our PPO policy that dictates the stimulation pattern (frequency, amplitude, pulses, and even which channels to stimulate). Because the CL1 spikes are non-differentiable, the encoder is trained through PPO policy gradients using the log-likelihood trick (REINFORCE-style), i.e., by including the encoder’s sampled stimulation log-probs in the PPO objective rather than backpropagating through spikes.
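A minimal sketch of that REINFORCE-style idea, with a black-box stand-in for the non-differentiable spikes. This is a toy with a made-up reward function, not the actual PPO objective (which would add a clipped surrogate, a value baseline, and a far richer stimulation space):

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4                # candidate stimulation patterns (invented)
theta = np.zeros(K)  # encoder logits over patterns
BEST = 2             # pretend pattern 2 elicits the most useful spikes

def culture(pattern):
    """Black-box stand-in for the non-differentiable neural response."""
    return 1.0 if pattern == BEST else 0.0   # reward signal

lr = 0.5
for _ in range(500):
    p = np.exp(theta - theta.max()); p /= p.sum()   # softmax over patterns
    a = rng.choice(K, p=p)                           # sample a stimulation
    r = culture(a)
    grad_logp = -p.copy(); grad_logp[a] += 1.0       # d log p(a) / d theta
    theta += lr * r * grad_logp                      # reward-weighted ascent

p = np.exp(theta - theta.max()); p /= p.sum()
print(int(p.argmax()))  # should settle on the rewarding pattern
```

The key trick is the same as described: gradients flow through the log-probability of the sampled stimulation, never through the spikes themselves.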
yeah!
the whole point was to make neurons BE the neural net
But yes, I agree that they're likely using human brain cells mainly because it's attention-getting.
The Doom project is mostly a bit of fun that demonstrates how quickly someone new to the field can see real results with our platform. Our data scientists believe that biological learning was demonstrated in this case.
(The rat-brain guys repeated the experiment until the plane stopped crashing, but no "learning" was happening; it was expected that once the neurons' activity reached a certain range, the plane would fly level. So they started with neurons outside that range, showed that the plane crashed, then adjusted them until it flew level. But that's not what "rat brain flies plane" implies.)
It's "see this input signal, send these output signals", which seems consistent with the title.
It seems they grow the neural tissue on a chip the neurons can interface with and send out / receive electrical impulses. They let the neurons self assemble, and "train" via reward or punishment signals (unclear to me what those are).
Either way this makes me nauseous in a way I haven't experienced much with tech. The telling thing for me is, all these people are so excited to explain, but not once, ever, in the video speak of ethics or try to mitigate concerns.
We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it? Can we, if we don't understand it ourselves? What are the plans to scale up?
It's legitimately horrifying to me.
If this concern is genuine, I think the first step is to embrace veganism. Because while we don't know the exact cutoff, it's pretty obvious a dog or a pig reaches it.
> What are the plans to scale up?
I don't know, slavery on an unimaginable scale? That's where AI is heading too, by the way. Sooner, rather than later, those two things will be one and the same.
This is a very dark path, and I could not trust the people in charge less.
Check out the venerable fruit fly (Drosophila melanogaster) and its known lifecycle and behavioural traits. They're a high-profile neuroscience research target, I believe; their connectome being fully mapped made big news a few years ago.
Fruit flies have ~140,000 neurons.
The catch is that these brain-on-a-substrate organoids are nothing like actual structured, developed brains. They're more like randomly wired-together transistors than a proper circuit, to use an analogy.
So even though by the numbers they'd definitely have the potential to be your nightmare fuel, I'd be surprised if they're anywhere close in actuality.
We don't need to be experimenting on people, regardless of how many brain cells they may have.
There was a case a few years back about a parasitic twin attached to an Egyptian baby that had to be removed. It had a brain and semblance of a face, but nothing else. But when removing it, they gave it a name, because it was a person.
What do you mean? What is this class of people in your mind? There are tons of people who consider and talk about the ethics behind what they are doing, long before most people would think it remotely relevant (leading AI labs being an example, and I know the same to be true of various geneticists startups).
I do agree that the entire presentation in this case is bewildering.
From the video, my impression was "we have yet to figure out an effective way to reward/punish, this is just a PoC of the interface"
Would you feel any differently if a product from this tech used the user's own neurons grown from their stem cells?
On the contrary, I dislike premature ethics discussion, where you end up wildly speculating what the tech might become and riffing off that, greatly padding whatever relative technical content you had. I don't want every technical paper to turn into that, ethics should be treated as a higher-level overview of concerns in a field, with a study dedicated to the ethical concerns of that field (by domain-specific ethics specialists).
Is your concern weapon automaton, or animal rights?
When the neurons didn't get stimulated by the application, performance did not improve. The only explanation our data science people have is that the neurons began to learn and perform the desired (highly abstracted) task of 'playing Doom'. This was not a surprise, as we've shown this before with a version of Pong using a different platform. We built the CL1 and the CL API to enable rapid iteration on this sort of work.
One benefit to this is that when you have a measurable learning effect, you can measure this before and after exposure to an experimental drug or other molecule. It becomes possible to test the impact on neuron function, not just survival.
I've also seen implementations of realistic neurons, spiking models, etc. In software implementations, what combo of libraries and hardware would equal your 200,000 biological neurons in performance (esp training)? How many GPU's are we talking about?
(Note: If you haven't already, it might be helpful to publish a stack like that so people can experiment with encodings or reinforcement methods at no cost to you.)
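For reference, the simplest common software spiking model — a leaky integrate-and-fire neuron — fits in a few lines. Parameters below are textbook-style values, not calibrated to any real culture, and one neuron says nothing about matching 200,000 of them:

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, r_m=1e8):
    """Leaky integrate-and-fire: dv/dt = (v_rest - v + R_m * I) / tau.
    Returns the time-step indices at which the neuron spikes."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(current):
        v += dt * (v_rest - v + r_m * i_in) / tau   # forward Euler step
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset                              # reset after a spike
    return spikes

# 200 ms of constant 0.3 nA drive produces regular firing.
print(len(simulate_lif(np.full(200, 0.3e-9))))
```

Frameworks like Brian2 or NEST scale this kind of model to large networks; the open question is always the synaptic wiring and plasticity rules, not the point-neuron dynamics.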
We focus on using real neurons, I'm not aware of a software based equivalent. But users can `pip install cl-sdk` to get started with our API. The SDK is still early but supports playing back a recording of real data so applications can be built with a realistic spike frequency. (We'll be releasing a set of recordings for this)
But seeing so many people from the Hacker News community reacting to it as normal or exciting is troubling. This is obviously breaching the limits of ethics.
There is a line somewhere here that I personally feel we should not cross.
We know that neurons can produce subjective experience.
This is the first time in my life that I've felt a scientific avenue of research should shut down.
Next up, we teach it to speed run Getting Over It. What a horrible existence.
Sure, a neuron is a machine.
200,000 neurons connected in a matrix is a brain, albeit a very primitive one. Ants have 250,000 neurons in their brains.
The task is to produce outputs (movement) to centre an input and then produce an output (shoot). Then the cycle repeats.
Typically the input it receives is increasingly regular as it does better. And irregular to mark a failure which the neurons learn to avoid.
It has no idea that the data can also be rendered as a Doom game.
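A sketch of that reward scheme as I read it — regular, predictable stimulation to mark success, randomised intervals to mark failure. All parameters here are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def feedback_stimulus(success, n_pulses=50, base_interval_ms=10.0):
    """Success -> perfectly regular pulse train the culture can predict;
    failure -> randomised intervals it cannot."""
    if success:
        intervals = np.full(n_pulses, base_interval_ms)
    else:
        intervals = rng.uniform(1.0, 2 * base_interval_ms, n_pulses)
    return np.cumsum(intervals)  # pulse times in ms

reward, punish = feedback_stimulus(True), feedback_stimulus(False)
print(np.diff(reward)[:3], np.round(np.diff(punish)[:3], 1))
```

The idea (per the earlier Pong work) is that cultures act to make their inputs predictable, so regularity itself works as the reward signal.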
Most fundamentally, it comes down to the fact that neurons' connections and behaviour can be modelled as minimizing prediction error in their stimuli.
Dopamine encoding an outcome compared to a baseline expectation is very close to temporal-difference learning.
This is basically what slot machines exploit with their variable reward schemes.
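The temporal-difference idea above in miniature — a toy two-state trial, not a model of dopamine:

```python
gamma, alpha = 0.9, 0.1
V = {"cue": 0.0, "delay": 0.0, "end": 0.0}  # "end" is terminal, stays 0

def td_update(s, r, s_next):
    """The dopamine-like signal: delta = r + gamma*V(s') - V(s),
    i.e. outcome compared to the baseline expectation."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# Each trial: cue -> delay (no reward), then delay -> end (reward 1.0).
for _ in range(1000):
    td_update("cue", 0.0, "delay")
    last_delta = td_update("delay", 1.0, "end")

# V(delay) converges near 1.0 and V(cue) near gamma * 1.0 = 0.9, while
# the prediction error shrinks as the reward becomes fully expected.
print(round(V["cue"], 2), round(V["delay"], 2), round(last_delta, 4))
```

A variable-ratio slot machine keeps delta unpredictable and occasionally large, which is exactly what keeps this update rule (and the dopamine system) engaged.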
I'd been up for days and had been cramming for a computer architecture exam when I basically passed out.
I had this very visceral nightmare where I was a compiler and the C code was coming faster than I could translate to ASM. I kept trying to escape but it was like I was locked into the work in this never-ending grinding cycle I couldn't escape from. The dream went on for what felt like hours until I woke up drenched in sweat.
Hopefully these neurons aren't satisfying whatever sufficiency threshold delimits consciousness. Approaching some weird ethical territory in any case for sure.
How did you compile? Were you sitting at a desk getting piles of printed code?
Does anyone have insight into how you would even start to source or grow/create the cells?
Also the machines look very organic and clearly have to keep the cells alive. Do they have to change them out every so often?
Today there are several immortalized neuron cell lines used in research to model neuronal function (like HeLa [1], but neuronal). These are typically derived from tumours (e.g., SH-SY5Y, PC12) or immortalized via genetic modification (e.g., v-myc), like CTX0E03 [2], which was designed to allow continuous growth in the presence of particular reagents.
1. https://en.wikipedia.org/wiki/Henrietta_Lacks
2. https://www.prnewswire.com/news-releases/reneuron-announces-...
If you're in the US, you can buy human neurons online at sciencellonline.com/en/human-neurons/
From their previous work on Pong: https://pmc.ncbi.nlm.nih.gov/articles/PMC9747182/
From what I’ve read elsewhere, our understanding of neurons is still very basic, and we need a lot more fundamental research before reaching results like these. We still don’t even properly know how migraines work, nor can we cure paraplegia, yet somehow we supposedly have the capacity to grow second brains and program them on top of that.
My impression is that this company is offering a product that’s still beyond our technological capabilities, much like the cold‑fusion startups that pop up from time to time.
To my knowledge, we understand how an individual neuron works quite well. We just don’t really understand macro effects in large networks of neurons.
The video seems buzz wordy. Without looking into this too deeply, it seems like they’re using neurons individually or in small groups rather than creating a true “brain”. I would guess they’re using neurons or small groups of them sort of like transistors that do a single basic thing rather than a full “brain” that they just feed images to.
Cells have a metabolism, right? They need to be fed and require a specific environment to survive. They age and can die, and they can be attacked by other microorganisms. Are all of these problems solved and applicable on an industrial scale? I had no idea.
Why aren’t we fixing people’s retinas and paraplegia if we can manipulate neurons with that level of precision?
If you connected electrodes to two different fish, shocked them and interpreted twitching as intelligent output, fish could also play Doom. The interface is doing all the work.
It doesn't sound like the neurons have any concept of the game other than "left input means left output", which is a rather trivial result... It's effectively no different than the pong example.
They don't say anything on how much training is required for this to happen, or if there's any "learning" going on at all. The learning part is "next".
I do understand where it comes from to some extent... some idea that human cells are special, I guess, but it seems very naive to me. We spawn, use, and kill far more complex AI agents millions if not billions of times every second in this society.
No one gives a shit, as those intelligences are not "real" or whatever... or they are not "conscious", but "conscious" is a fictional word without an actual definition. In the end I think it comes down to suffering.
No one knows if some internal part of an LLM is suffering, just as no one knows if a cell culture with brain cells like this can suffer.
Perhaps people fear this leads to a future where we have technology that interacts meaningfully with existing wetware (cybernetically enhanced animals, including people).
Me? I didn't like the idea (then or now), but it would be demagogic to try to fight against it, with so much wrong already existing. The difference between a neuron and a nanostructure is merely the embedded technology.
Back in the 50s and 60s, guided rockets used pigeons. Laika in space. Chimpanzees in orbit. Let's accept that we will have bio-drones and Johnny Mnemonic-style upload interfaces.
What would be surprising is for dead human cells to play anything at all.
Encoder: learns which stimulation patterns tend to improve reward
Biological neurons: adapt to the stimulation and generate spike responses that reinforce certain patterns
Decoder: interprets those spike patterns and converts them into joystick movements
right?
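The three stages above, as a stub closed loop. Every component here is a placeholder I invented, not the real CL1 API:

```python
import numpy as np

rng = np.random.default_rng(1)

def encoder(enemy_angle):
    """Observation -> stimulation pattern (channel, frequency in Hz)."""
    channel = 0 if enemy_angle < 0 else 1
    return channel, 10 + 40 * abs(enemy_angle)

def neurons(stim):
    """Black-box stand-in for the culture: noisy spike counts per channel,
    biased toward the stimulated channel."""
    channel, freq = stim
    counts = rng.poisson(1.0, 2).astype(float)
    counts[channel] += rng.poisson(freq / 10)
    return counts

def decoder(spike_counts):
    """Readout: spike counts -> joystick action."""
    return "LEFT" if spike_counts[0] > spike_counts[1] else "RIGHT"

print(decoder(neurons(encoder(-0.7))))  # enemy to the left
```

Seen this way, the open question is how much of the observed performance lives in the trained encoder/decoder versus in whatever the middle box adapts to.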
We know a human can play Doom, so it kind of makes sense a portion of a human brain can do so in some fashion. But it's way more interesting when an animal that normally doesn't play Doom can, especially if it's just a portion of its brain.
Outside of that, I'm personally not very fond of hardware that can rot or die from malnutrition though. It's fun as an experiment, but as a thing you can actually use I just don't see it. It has a literal limited lifespan, requires more maintenance and imagine trying to debug it ("Turns out it caught some bacteria and it's malfunctioning" kinda scenarios? No thanks.)
Was surprised to see no mention of wetware in the comments.
When someone makes a virtual girlfriend of it, is it really a disembodied person or just a smart answering machine?
A whole lot of ethical and psychological issues open up here.
And when you put that virtual girlfriend's brain into a sex bot, is it rape?
We are potentially moving in the direction of uploading consciousness.
Classic humans.
I was under the impression that the relative intelligence of humans versus other animals was largely a function of brain cell quantity, not quality. Can 200k human brain cells really learn faster than 200k mouse brain cells?
A more cynical take is that they're just using human brain cells for shock value. They chose DOOM because of the "can it run DOOM" meme, so they clearly value publicity a lot.
Still, horrors beyond our comprehension.
Why not tackle Robocop next!
If they can get Doom to run on a pregnancy test surely they could get it running on human brain cells?
PS: It's still very cool but also scary.
We can build out discrete systems of brain cells and use them for the purpose we want. They're not going to have traits like consciousness, and we're able to test and assess for that, and build away from it if there is that risk.
Ah, I'm glad they've worked out what consciousness is.
/s

From their marketing website [2]:
Neural compute on demand: We continuously monitor neural health and performance, ensuring optimal conditions and continuous access to an always-on network of living neurons.
At what size of "neural compute" do we start to call it slavery?

[1] https://www.abc.net.au/news/science/2025-03-05/cortical-labs...