Maybe our personality and being is just a result of our training data, with a few genetic quirks thrown in. And what am I really doing when I “decide” or “create” something? Is it that much different from an LLM’s generation of language? Or an AI image generator?
It should be no surprise. I'd have thought most HN readers have at some point attempted to kill their own ego.
Maybe it started when you were younger and learned that society used to believe the Earth was the center of the universe. Perhaps you then considered that humans too are nothing special — maybe not created in God's image after all? Instead, just the latest result of a long, complex evolutionary process.
The dashing of geocentrism and anthropocentrism should have made many of us skeptical of anything considered special.
Why should consciousness be inscrutable, magical?
I've been sure we're little more than imperfect machines for some time now.
None of this struck me as a revelation, it just felt like growing up.
It's honestly a little sad that something so simple as an LLM is causing so many people to come up short against these basic philosophical assumptions. The human brain weighs about 1.4kg (most of that being metabolic or structural support), runs on about 12 watts, and it takes only 3.2 billion base pairs to make a new one with a default template and peripheral systems already included. It is not that special.
Cosmologically, the world can be full of special, beautiful, unique things AND be merely the result of some computation. That is what a fractal is, no?
So if your concern is with your mind being special, that's not a problem that needed to be addressed by saying "nothing else in nature possesses this particular fractal boundary". You merely have to be one point on the curve, somewhere, and to the extent that an AI could replicate it, it would still be an approximation of such. It might be a very good one, but it wouldn't have the same causal relationship to nature, and therefore would be special in a different way, for the same reason that when you turn over an hourglass, the grain of sand that was special for being the last to fall now becomes the grain special for being the first to fall - it's "just" sand, but it's also positionally different.
So the world could be deterministic, but the interpretive meaning of that might not be "chains of fate," but a perpetual explosion of color and possibility, in which I will never know for certain in what way I might be unique, but it is far more likely that I am than not.
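The "one point on the curve" picture above can be made concrete with a standard Mandelbrot-set sketch (this is generic illustration, not anything from the original comment): one simple deterministic rule generates the whole boundary, yet each individual point has its own distinct behavior under that rule.

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c; return how many steps until |z| exceeds 2,
    or max_iter if it never does (i.e. c appears to be in the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# The same deterministic rule, yet different points behave very differently:
print(escape_time(0j))        # prints 100: 0 never escapes, it is in the set
print(escape_time(1 + 1j))    # prints 1: escapes almost immediately
```

Every point is "just" the output of the same computation, but each is positionally unique, which is roughly the hourglass point made above.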
There are reasons to be skeptical of things, but come on.
What that long philosophical post really shows is that thinking that much just leads to questions about existence, questions the Christian religion has answers for.
Spending too much time with computers can make you think that everything is a computer. It is not. Being human comes with feelings and all sorts of extra stuff. Watching the Matrix movies too many times can make someone imagine we are in the Matrix. We are not.
It is just that the world is going through hype bubbles, where all the news is about one topic at a time: 1. Corona, 2. War, 3. AI.
AI is about pattern matching. A lot of old stuff is being rebranded as AI just to sell it. They are making it up as they go.
AI is just hardware and software: human-written answers, plus recognizing text/images/etc. and matching them to those answers. But it only responds to what it is asked, and it makes up things that may not be true. So humans are needed to judge whether an answer is useful or not. Humans will always be needed to verify that answers are not too dangerous.
If AI feeds AI, that kind of system usually causes the model to hallucinate more and degrades its usefulness. If an AI would "think something", it still needs checking to see whether it is useful at all.
AI cannot take over the world; those who think it can have just watched the Terminator movies too many times. The most important data sits on offline local networks, and every online service is busy adding more security protections while trying to stay online.
AI, as software, is useful for many purposes. It is a new tool, and new tools help with many kinds of work. This has happened many times before: after horses there were cars, and the kinds of work needed changed. There is a big need for more coders, who are needed to debug code and check it for vulnerabilities. AI tools help programmers figure out syntax; for example, Bing AI helped me convert an SQLite SQL query to a MongoDB query. Just do not copy-paste proprietary code and secrets into an AI chat.
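A translation like the one mentioned might look roughly like this sketch (the `users` table, its fields, and the values are all hypothetical; the MongoDB side is expressed as the plain dicts that pymongo's `find()` expects):

```python
# A SQLite query and a pymongo-style equivalent.
# Collection name "users" and its fields are made-up examples.

sql_query = """
SELECT name, age FROM users
WHERE age >= 18 AND city = 'Oslo'
ORDER BY age DESC
LIMIT 10
"""

# The same query expressed as pymongo find() arguments:
mongo_filter = {"age": {"$gte": 18}, "city": "Oslo"}
mongo_projection = {"name": 1, "age": 1, "_id": 0}

# Usage against a live database (requires pymongo and a running MongoDB):
# db.users.find(mongo_filter, mongo_projection).sort("age", -1).limit(10)

print(mongo_filter)
```

The point of asking an AI here is just the operator mapping (`>=` becomes `$gte`, `ORDER BY ... DESC` becomes `.sort(..., -1)`); the result still needs to be checked by a human, as the comment says.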
There is no magic. "Magic" means using technical terms and writing unclearly just to hide something and sell stuff. The same thing can be written in easy-to-understand language.
It's completely magical to me that I'm now aware that you think your awareness is nothing magical.
> The mind is like a calculating device
How does that explain your self-awareness?
It's like the story of Jimi Hendrix dropping acid and playing amazingly well. Or, rather Jimi under the influence thought so — until he heard the recordings.
Not really, though, as I understand it. Simulating from the bottom up would mean simulating neurons and such, or even the molecules that make up the brain and its connected systems. What we've done instead is create a complex system whose emergent behavior produces output similar in specific ways to a human brain's, and significantly more so than previous attempts.
And yes, many aspects of our existence are comparable to a machine. That can be a very good thing, since we can apply much of that knowledge to improve ourselves: we can train, debug, and fix some things. In fact, I consider the realization you're having essential for personal development. You can now leave behind things you previously considered "you", now that they turned out not to be "you" but just something you do (and most others do too), and get closer to what "you" really are. And do better in the things that you think "you" are not!
Even the mistakes it made seem like the kind of mistakes people make, not the kind of mistakes software usually makes.
Maybe you already know this, but "it's all just a train of mechanical compulsions, there's nobody in here" is the very thing meditation is intended to let you realize; it's the "liberating insight". The problem with reaching it on your own is that you don't get the other part: the view of the world that builds the case that this is a good thing, a great thing (believing there is someone in there is the cause of all the dissatisfaction you ever feel). So you could look into that if you want - Ajahn Brahm or suchlike.
I hold the belief that we are just self-replicating biological machines. There is no “magic”, there never was.
If none of this is magical, what do you call it, and what do you call magical?
One thing I would like clarified: I don’t follow the refutation of epiphenomenalism. Is the intent that I can raise my hand only as a result of my conscious experience? Couldn’t one instead imagine an LLM hooked up to a robotic arm, trained to raise its hand in reaction to similar text? I feel like that, or something similar, would refute this. How can you say what is causal in sending the signals to raise the arm? Why can’t the conscious experience just be a theater?
What’s weirder to me about epiphenomenalism is that consciousness seems like it need not exist: if it performs no function, then why bother with it? So why do we experience it? Its very existence sort of seems to imply that it should be more than just a theater.
I’m a total amateur about this, but would appreciate recommendations if anyone resonates with this.
It feels like this debate around LLM consciousness is a tempest in a teapot, caused by yet another scientific advancement leaving even less room for supernatural theories. That makes a whole lot of people worried that consciousness isn't actually so special, and they lash out to defend it as something unique to humans.

I think that in actuality, some scale of computational process that feeds back on itself can create what we call consciousness. I also think there is no way to actually know if this is happening. We still haven't solved this for humans: I don't actually know that anybody conscious will read this comment, and you don't know that the person/process that generated this comment is conscious!

I have no idea if consciousness is possible with current LLM implementations, especially given their lack of long-term feedback; it would seem an LLM's sense of self can only ever be some integral of the sense of self of the authors of the source material [0]. But I know that at a certain point we're going to have to apply the same standard of consciousness we've applied to all humans: believe them when they assert consciousness while demonstrating significant intellect, possibly involving various displays of physical power that lead to mutual wariness/respect.
[0] although it's interesting to think about the larger feedback loop consisting of LLM output going into the training dataset for the next model, and LLM output making it into humans' brains that then write material for the training set, etc. But this would likely still not be consciousness the way we perceive it, because it's too long/slow.
I find his reasoning behind this "disproof of epiphenomenalism" not just unconvincing but entirely ludicrous and I stopped reading right there.
There is nothing inconsistent about the position that the brain's biochemical operation is the sole cause of consciousness (epiphenomenalism), combined with the brain's capacity to observe that consciousness. In technical terms: imagine a program that creates a private file and writes some contents to it. The fact that the same program subsequently reads the file, maybe even branches on its contents, is in no way a refutation of the fact that the file is epiphenomenal to the program.
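The file analogy above is easy to make concrete. A minimal sketch (the filename and contents are arbitrary): the program writes a file, reads it back, and branches on it, yet the file remains a pure by-product of the program's own operation.

```python
import os
import tempfile

# A program that writes a private file, then reads it and branches on its
# contents. The file does nothing on its own; it is purely a by-product
# (an "epiphenomenon") of what the program does, yet the program can
# still observe it.
path = os.path.join(tempfile.mkdtemp(), "state.txt")

with open(path, "w") as f:
    f.write("ready")          # the program produces the epiphenomenon

with open(path) as f:
    contents = f.read()       # ...later inspects it

if contents == "ready":       # ...and even branches on it
    result = "proceed"
else:
    result = "wait"

print(result)                 # prints "proceed"
```

Whether this is a fair model of consciousness is exactly what the surrounding comments dispute; the sketch only shows that "observed by X" and "caused solely by X" are compatible.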
I'm a very receptive audience to a refutation of epiphenomenalism, but this one isn't clicking for me.
Where the author did lose me though, is the subsequent discussion of pain, and the further discussion of value. These are much deeper topics which require a slower and more deliberate treatment, and I felt like they were being handwaved away with some unjustified assumptions. Why is pain irreducible to neural activity? In what way and for what reason are pain and pleasure dependent on consciousness? What does it mean for a thing to 'matter', and why can things not matter irrespective of one's conscious experience? If an unconscious coma patient is caused great pain, does that not 'matter'? What does it mean to 'confer significance' on something? Why is that the exclusive preserve of consciousness? Does an AI bot playing a video game also not 'confer significance' on threats it is programmed to avoid?
Since science can't do that, enter his hunches and intuitive "feelings", which are nothing but wishful thinking for a universe where consciousness matters (to whom?). I feel one could find his "what I had been missing" finale in any number of new-age self-help books.
What if I haven't found that special someone yet, the one who will help me raise the next, better LLM? I think I already have about 40% of the dataset I find valuable; all I need is to find someone with the other 40%, and we could go on a journey together to find the remaining 20%. Sounds like the adventure of a lifetime.
Plants also respond to various stimuli in the same way but on a different time scale. Plants do care about such stimuli, whether you consider that suffering or not. So either sentience is not required for this kind of reaction or plants are also sentient. The same is true for microbes.
If they are sentient we have to rethink a moral vegetarianism that's based on sentience. If they are not, but they respond in the same way, then sentiocentrism should indeed be questioned.
I guess my point is that we still don't have any idea how to determine whether something is sentient. That includes other humans (though for that, we just apply Occam's Razor and call it a day).
I do think that in a few years we'll have LLMs with plugins for memory and perception, executing not in a web-based REPL but in real time, and it's going to get harder and harder to argue that they're not sentient.
Phineas Gage had an accident: an iron tamping rod was driven through his cheek and out the top of his head, destroying a large portion of his brain. And yet the injury was not lethal. Despite losing significant brain matter, Gage got up within minutes, remained conscious, and seemingly lost only the sight in one eye.
But Gage's personality, mental state, and other attributes were permanently changed. The early science of psychology (well... I dunno if you could call it "science" yet...) took Gage's accident and began to map out the physical mechanics of our minds.
---------
Lesion studies, lobotomies, and other such experiments would then map out which parts of the brain did what. Much of this was figured out between the late 1800s and the mid 1900s. Damage this part of the brain and people can't talk anymore. Remove this part, and you cannot transfer short-term memory into long-term memory. Etc. etc.
The fleshy parts of our brain are... fleshy... mundane, and machine-like. It's a machine that remains mysterious in many ways, but it's easily experimented upon and we can discover more and more about it.
-----------
In any case, AI isn't very enlightening here at all. AI isn't how human brains work anyway, any more than an airplane wing is how bird wings work.
If you want to discover and understand the human mind, with the human errors and human fallacies, you should study humans. Medical science, neurology.
That being said: we don't actually think with our brains alone. A lot of our emotional state comes from our blood chemistry, hormones, and more. Overemphasis on the neural network will leave you blind to other physiological mechanisms.
Our fingertips contain a substantial number of neurons, as our nervous system spreads throughout our body. The delineation between "brain", "spinal cord", and "nerves" is quite blurry; it all flows together. Honestly, a big problem is that our Hollywood/story understanding of ourselves and our own brains is complete bollocks.
It would do you well to study psychology, at least the elementary levels, to understand yourself more. If you're worried about existentialism or trying to find an understanding of self, I don't think looking at machines or computers is very helpful. But maybe that's just me.
In any case, please don't look to movies or stories. These are shared experiences, yes, but full of 100+ year old psychology that has largely been debunked by today's science. (Again: the first hundred years of psychology can barely be called a "science" or an "art". It was more pseudo-scientific bullshit than real science. But you sometimes got good records of people like Phineas Gage, and of experiments that would be forbidden by today's ethical standards.)