Once I have a group of neurons like this trained to do something, can I actually count on them to continue performing that task until they die? Or is it possible they spontaneously reorganize or "learn" a previously unseen behavioral pattern?
Of course, then you'll need to debug problems by dripping antidepressants (or psychedelics, or stimulants, or depressants, or various hormones, or....) into the Petri dish.
Biology is wonderfully crazy.
But when a task isn't required for a long time, the circuit that turns it on will weaken, at least in the brain. Neurons will also be forming other connections, but that doesn't necessarily mean they'll "forget" the older ones.
https://www.cell.com/cell/fulltext/S0092-8674(14)01362-2
Abstract: “Neuronal plasticity in the brain is greatly enhanced during critical periods early in life and was long thought to be rather limited thereafter. Studies in primary sensory areas of the neocortex have revealed a substantial degree of plasticity in the mature brain, too. Often, plasticity in the adult neocortex lies dormant but can be reactivated by modifications of sensory input or sensory-motor interactions, which alter the level and pattern of activity in cortical circuits. Such interventions, potentially in combination with drugs targeting molecular brakes on plasticity present in the adult brain, might help recovery of function in the injured or diseased brain.”
A question many call center managers ask themselves.
Gee if only we could grow meat to eat in a lab and grow brains on substrate to answer phone calls and generate images, we could do away with the complexity of other humans we might have to interact with. Except... you have even less understanding of what motivates that clump of cells than you do what motivates your office secretary.
Don't get me wrong, it's a super cool idea. I'm just not sure exactly when bioethics went completely out the window.
I believe they certainly will reorganize in case of a (local nutrient) scarcity or damage event. That might result in "unseen" patterns as well, and by "unseen" I mean kinda random.
Artificial NeuroGlial Networks could be really interesting.
Ars Technica has a better article, although they don't describe the dense electrode array correctly: https://arstechnica.com/science/2022/10/a-dish-of-neurons-ma...
“[...] the researchers tested two types of neurons: some dissected from mouse embryos, and others produced by inducing human stem cells to form neurons.”
I suppose the claim to fame of this similar study is the use of the term "organoid", and there's some legitimacy to that. Form and function are intimately tied in the brain, and just a bunch of neurons in a petri dish isn't quite an organ.
It took some time to appreciate that there are some worthwhile ideas in that paper. But it was my first experience with “Academic lies about accomplishment to secure further funding.”
It’s hard to say whether “lie” or “severely exaggerate” is appropriate.
A method existed to measure a signal from a neuron. A second method existed to modify that signal. Whenever the plane crashed, the signal was modified slightly, until the plane flew level.
It didn’t learn to fly. The researcher modified a neuron (steady signal) until it gave the appropriate signal (e.g. zero) to fly level.
Negative signal, plane banks left. Positive signal, plane banks right. If plane crashes, modify neuron until signal is neither positive nor negative. That was the extent of the study.
In that context, do you feel like the neurons learned to fly? Maybe. It’s certainly similar in spirit to reinforcement learning in modern times. But I wouldn’t say that setting a signal to zero is a nice definition of “flew a plane”.
In other words, there was no active feedback; if you pointed the plane in a slightly different direction, it would immediately crash. It wasn’t doing anything more than setting the signal slowly over time to an answer that was known ahead of time. (Keep the plane level by not moving the controls.)
Suppose the neurons learned to draw a straight line. That was essentially what was being demonstrated here. If you substitute “plane crash” with “line becomes a curve”, it becomes much less exciting, to say the least.
“Isn’t that just learning to set a signal to a constant value?” “Yep.” “Will it always become the same constant?” “Yep.” “Can’t we already do that?” “Yep.”
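To make the objection concrete: here is a minimal sketch (hypothetical, not the study's actual code) of the adjustment loop described above. Whenever the plane "crashes", the neuron's output signal is nudged toward zero, the answer known in advance; nothing generalizes, it's just convergence to a constant.

```python
def plane_crashes(signal, tolerance=0.05):
    """The plane stays level only when the control signal is ~zero."""
    return abs(signal) > tolerance

def run_trials(initial_signal, step=0.1, max_trials=1000):
    """Repeat until the plane stops crashing, modifying the signal slightly each time."""
    signal = initial_signal
    trials = 0
    while plane_crashes(signal) and trials < max_trials:
        # "Modify the neuron slightly" in the direction of zero.
        signal -= step if signal > 0 else -step
        trials += 1
    return signal, trials

final, n = run_trials(initial_signal=1.0)
print(final, n)  # signal near 0 after roughly 10 trials
```

Point the plane in a different direction (change what "level" means) and this loop has nothing to transfer; it would have to grind back toward the new constant from scratch, which is the commenter's complaint.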
I was so disillusioned that it took many years to stop believing that academia itself was at fault for misinforming the public so badly. After all, it’s almost two decades later, and people still believe “rat brain flies plane” happened in 2004.
If I could go back in time, I’d tell myself not to worry about it; focus on the academics that are working quietly on the frontier, not the ones trying to raise funding for their lab.
At least the neurons in today’s study actually learned to play something. But if the past is any indication, I’d err on the side of skepticism.
Because the Nature citation on where the work was done is literally "The Discovery Channel".
That's a bit like saying battle bots solved locomotion of robots.
https://www.independent.co.uk/news/science/crows-consciousne...
https://www.smithsonianmag.com/smart-news/do-crows-possess-f...
You don't need to imagine hell, it's already here.
Absolutely not. This shows that you can teach neurons to exhibit a reflex behavior adapted to the given stimuli - which shouldn’t come as a surprise. At best this is jellyfish-like level of intelligence.
Low level: proteins/matrices.
Middle level: ???
High level: it thinks!
Sure, this Petri dish is not intelligent and conscious. But it's just a load of neurons, right? Just like the Petri dish in our head. Are we intelligent and conscious in that case? Or are we just a scaled-up Petri dish that reacts to more varied stimuli?
"Unimaginable suffering in a petri dish" is not something I want even a tiny chance of creating.
It seems doubtful that hundreds of thousands of human neurons would lead to consciousness.
"Although the company calls its system DishBrain, the neurons are a far cry from an actual brain, Kagan says, and show no signs of consciousness."
...then again, would they be able to tell? There's not a lot of I/O in this situation.
I see this research as a red flag "hey, we are approaching something very bad" and while the flag itself may not be a problem we should heed the warning and change direction.
All this said, my understanding is that this research is in part inspired by or associated with the phenomenon of people with hydrocephalus who have greatly diminished brain volumes but are still conscious. This says to me that we can't reliably predict how many neurons are necessary before creating consciousness.
How can you be sure that anyone other than you is conscious? Why shouldn't animals be conscious?
That doesn't seem like a more or less difficult question than asking how I can be sure that human neurons are the substrate responsible for producing consciousness. If it turns out that the other humans I observe only seem like they're conscious, then it could also turn out that my neurons only seem to be the substrate producing my consciousness. Of course, this is just terrible epistemology.
Let the backlash happen and as a society let's put a stop to letting religious idiocy get in the way of science.
There is an entire field of study for ethical science and I stand behind that 100%.
Unethical science can get lost!
I'm not a progress at any costs type of person.
But is this actually unethical or just something a religious idiot doesn't like?
Until recently I thought a neuron is a neuron is a neuron. How are they different between species?
I'm of course not against, in general, "culturing human cells", but I have moral uncertainty/concern about those derived from fetuses, as well as any large quantities (in one group) of human neuron cells (or, also, but to a lesser degree, large quantities in one group of other kinds of nerve cells).
That's not to say that I would be confident that what was done would be wrong if they were using cells derived from fetuses, but, I would think it more likely that there would be a moral problem with it. (And this feeling/belief/concern is in part due to religious belief.)
(I can imagine someone saying, "But seeing as induced pluripotent stem cells seem to behave in effectively the same ways as 'actual stem cells', how do you know that whatever moral problems you think would be present when using 'actual stem cells' don't also apply to the induced pluripotent cells?" And to that imaginary interlocutor, I must admit that they have somewhat of a point, in that I suppose I can't entirely rule that out. But my intuition, which unfortunately is most of what I have to go on because we have not been granted a book detailing precisely every last detail of the metaphysics of personhood, etc., suggests to me that it seems substantially less likely to be a problem.
Even if it might theoretically be possible to use induced pluripotent cells to form a fetus which could develop into a full child which would be a person,
(is this thought to be theoretically possible? I mean, outside of something as general purpose as arbitrarily rearranging the atoms, or using cells from an outside source, and just moving induced pluripotent cells around while treating them with various chemicals and nutrients, is it thought that in theory this could produce a viable fetus?)
it still feels reasonably unlikely that, without actually starting to do that, merely converting some skin cells into a different cell type would constitute creating something with any moral patienthood. One such reason being that, I would think, there wouldn't be a clear answer as to how many such entities would have been created. Would it be one moral patient per cell? That seems implausible, especially assuming that multiple such cells would need to be used together to create a viable fetus. One might point out that identical twins can arise from one zygote, by the group of cells that the zygote becomes splitting into two parts, and so if moral patienthood is incompatible with ambiguity-of-number, then this should also apply to a blastocyst. And, perhaps? Though, in that case, there is still the clear differentiation of "these came from this zygote", so perhaps there could be some reason there.
Again, my position is one of uncertainty about these questions, and associated concern. My position is not that I know for certain that it is less morally problematic to use induced pluripotent stem cells than it is to use stem cells derived e.g. directly from a blastocyst, but that it seems substantially less likely to me that using induced pluripotent cells is a problem than it is that using stem cells from a blastocyst is a problem, though neither is certain.)
For the record I do think such a creation should have personhood. And have the right to learn about and interact with the real world.
https://web.archive.org/web/20041023144731/https://www.wired...
https://web.archive.org/web/20041106064802/http://www.napa.u...
EDIT: it seems the researcher working on this project passed away recently :(
"Achilles Desjardins had always found smart gels a bit creepy. People thought of them as brains in boxes, but they weren't. They didn't have the parts. Forget about the neocortex or the cerebellum—these things had nothing. No hypothalamus, no pineal gland, no sheathing of mammal over reptile over fish. No instincts. No desires. Just a porridge of cultured neurons, really: four-digit IQs that didn't give a rat's ass whether they even lived or died. Somehow they learned through operant conditioning, although they lacked the capacity either to enjoy reward or suffer punishment. Their pathways formed and dissolved with all the colorless indifference of water shaping a river delta."
Much of his work, including Maelstrom, is freely available on his own website: https://rifters.com/real/shorts.htm
I didn't have a copy of Starfish handy, and I wasn't sure if gels had been mentioned there or not.
One can't hope to truly know whether another individual (be it a human or any other animal) is conscious (the hard problem of consciousness). But if it has eyes like us, a mouth like us, plays like us, cries or screams when hurt like us, and even seems to dream like us (sleeping dogs sometimes move as if they are running in their dreams, and when they wake up they look confused and still perturbed by what they were seeing in their minds), it makes perfect sense to assume that it is conscious like us, and it's the ethical thing to do.
"In vitro neurons learn and exhibit sentience when embodied in a simulated game-world".
https://www.sciencedirect.com/science/article/pii/S089662732...
Like, they are similar to transistors but have more states than off (0) and on (1).
Human neural networks raised in a simulation
The neurons exist inside our Biological Intelligence Operating System (biOS). biOS runs the simulation and sends information about their environment, with positive or negative feedback. It interfaces with the neurons directly. As they react, their impulses affect their digital world.
Our first minds
The dishbrain is currently being developed at the CL0 laboratory in Melbourne, AU. We bring these neurons to life, and integrate them into The biOS with a mixture of hard silicon and soft tissue. Our first cohort have learnt to play Pong. They grow, adapt and learn as we do.
Silicon meets neuron
Neurons are cultivated inside a nutrient-rich solution, supplying them with everything they need to be happy and healthy. They grow across a silicon chip with a set of pins that send electrical impulses into the neural structure and receive impulses back in return.
The Ultimate Learning Machine
Those actions have a positive or negative effect in biOS, which the mind perceives, adapting to improve that feedback. The human neuron is self-programming, infinitely flexible, the result of four billion years of evolution. What digital models try to emulate, we begin with.
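The closed loop the copy describes can be sketched roughly as follows. This is a hypothetical illustration, not Cortical Labs' SDK: `SimulatedArray` and `game_step` are invented names, and the reward scheme (predictable stimulation after a hit, unstructured noise after a miss) follows what has been publicly reported about the DishBrain Pong setup.

```python
# Hypothetical closed-loop sketch: array activity drives the paddle,
# and feedback returns to the culture as electrical stimulation.
import random

class SimulatedArray:
    """Stand-in for the multi-electrode array: two recording regions."""
    def read_firing_rates(self):
        # In hardware this would be spike counts per region; here, random stubs.
        return random.random(), random.random()

    def stimulate(self, pattern):
        pass  # would deliver electrical pulses; a no-op in this sketch

def game_step(array, ball_y, paddle_y, gain=1.0):
    up, down = array.read_firing_rates()
    paddle_y += gain * (up - down)          # activity difference moves the paddle
    hit = abs(ball_y - paddle_y) < 0.5      # did the paddle meet the ball?
    if hit:
        array.stimulate([1.0] * 10)         # predictable pulses ("reward")
    else:
        array.stimulate([random.random() for _ in range(10)])  # noise ("punishment")
    return paddle_y, hit
```

The interesting design choice, per the published work, is that "reward" is not a chemical payoff but predictability itself: the culture is hypothesized to reorganize so as to make its sensory input less surprising.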
Why?
There are many advantages to organic-digital intelligence. Lower power costs, more intuition, insight and creativity in our intelligences. But most importantly we are driven by three core questions.
What will we discover if our intelligences train themselves?
We know an organic mind is a better learner than any digital model. It can switch tasks easily, and bring learnings from one task to another. But more important is what we don’t know. What are the limits of a mind connected to infinity? What can it do with data it literally lives in?
What happens if we take a shortcut to generalised intelligence?
Machine Learning algorithms are a poor copy of the way an organic neural network functions. So we’re starting with the neuron, replacing decades of algorithms with millions of years of evolution. What happens as these native intelligences start solving the problems we’d previously left to software?
How can we surpass the limits of silicon?
Silicon is raw, rigid, unchanging. Our organic neural networks sit on top of this raw power, but the way they grow and evolve isn’t limited to the software they run on. There is no software, it's coded in their DNA. How will computing change as we shift from hard silicon to soft tissue?
RFN: Request For Neurons
The dishbrain is learning and growing in biOS today, and soon we’re opening an early access preview for selected developers. The biOS is our simulation environment, where you can program tasks, challenges and objectives for our minds. Join our developer program to get early access to our SDK, and secure training time with our minds.