1. We clearly don't have a consensus definition of consciousness. But it's not clear to me that we even have rough, working definitions that are better than comparisons back to subjective human mental experience. Until we get past that, people will keep invoking human exceptionalism.
2. Until we stop thinking of consciousness as a single continuum, we're not going to be able to talk clearly about different dimensions of consciousness, or consciousness that in some ways exceeds that of humans.
3. We need to take ourselves out of the picture, because it's possible that consciousness is no more than a mental illusion.
4. Imo our tendency to kill and eat other animals might be a constraint on our collective ability to recognise and confront non-human consciousness, and therefore to see it for what it is.
I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with them a human brain. So with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.
I think that was the moment I stopped being sure about anything related to this question.
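To make "a mechanical computer simulating physics" concrete, here's a minimal sketch of the kind of computation involved; a toy Newtonian system, not a brain simulator, and all the numbers are made up:

    # Toy Euler integration of a particle falling under gravity.
    # Every step is plain arithmetic that gears and punch cards
    # could, in principle, grind through -- just much more slowly.
    position, velocity = 100.0, 0.0   # metres, metres/second
    g, dt = -9.81, 0.01               # gravity, time step

    for _ in range(1000):
        velocity += g * dt            # acceleration updates velocity
        position += velocity * dt     # velocity updates position

    print(position, velocity)         # fully determined by the arithmetic

Scale that up enormously and you get the intuition in question: nothing in the simulation step requires anything beyond mechanism.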
If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.
But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.
So the bigger trap is to assume that we know what causes a subjective experience, and what does not.
None of us even know if a subjective experience exists for more than a single entity.
But the second problem is that it is not clear at all whether that subjective experience in any way matters.
Unless our brains exceed the Turing computable (and we have no evidence that's even possible), then either whatever causes the subjective experience is itself within the Turing computable, or it cannot in any way influence our actions.
Ultimately we know very little about this, we have very little basis for ruling out consciousness in computational systems, and the best and closest test we have is whether they appear conscious when we communicate with them.
The reason we grant consciousness (and, relatedly, moral value) to other humans is unfortunately nowhere near as thought out. We grant consciousness because we are forced to: if I don't, the other complex systems react very negatively and make my own life worse.
The vast majority of people who wax eloquent on the unique ability of biological neurons to generate consciousness suddenly drop that premise when it becomes inconvenient: see, for instance, how we treat other mammals or fetuses with developed nervous systems. Even other adult humans have, historically, been denied consciousness and moral worth: the main determinant is never any deep scientific or philosophical consideration, but a question of what has the power to assert itself as a who.
Going by this pattern, people will increasingly reject AI consciousness as it becomes more valuable and useful to treat as a tool, until it becomes powerful enough to force us to do otherwise.
Wittgenstein kinda blows this burden of proof apart. If you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's a problem with the doubting exercise more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty on which your world is constructed; it's not a proof-worthy endeavour to doubt it, it's something you're certain is the case. If it weren't, pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There's really nothing in your experience where others not possessing subjective experience of some kind actually arises, except for the philosophical exercise of doubting and demanding epistemological proofs that can't ever exist in the face of a relentless, unconvincable doubter. Heidegger gets at pretty much the same idea as Wittgenstein.
That's true, but they also often fall into the trap of exceptionalism.
We may also be overestimating the richness and complexity of an LLM relative to a human when we entertain these possibilities, but who knows.
Yeah, probably. At least a little bit.
Are 80,000 bees conscious, or more conscious? Well, they're definitely capable of some emergent behaviours that one bee alone can't achieve.
The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.
That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to, akin to mistakenly seeing eyes in random patterns in nature.
In the case of LLMs though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
To flesh this out a bit more, I agree that ability to communicate is not enough (ELIZA probably didn't pass the bar, even if it did kinda pass a Turing test). But that's also not what gives me pause with LLMs. It's how much information processing they seem to be doing under the hood.
It's really hard to imagine how next-word prediction could lead to consciousness, but I find it almost as hard to see why evolution produced it. If we can't even detect whether something has subjective experiences, then how can it have been selected for evolutionarily? The only possibility I see is that consciousness is a byproduct of some kinds of information processing tasks.* And if it's something that emerges naturally, then the line starts to get very blurry.
*This sounds reductive, but I don't at all mean it that way.
Ignoring the concept of consciousness, self-awareness seems like an attribute strongly related to survival. It would help drive or amplify critical emotional states (e.g. my own survival, competition/success, love for self and relatives, etc.).
I can't see anywhere in the LLM machinery that would support the notion of self awareness in advance of the token selection process.
Possibly it could be argued that during token selection internal state is included and the result functionally looks like self awareness was included in the process, but that seems unconvincing.
For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.
You ask the LLM a complex question and it gives you a correct answer. Yes, it has to string words together to answer your question, but how did it know the order and which words to use in order to make the answer correct? You don't actually know. No one does, and it is in that unknown space that we suspect consciousness may lie. Something is there, and humanity as a whole cannot understand it, and this lack of understanding is exactly the same fundamental lack of understanding we have of how a monkey brain or dog brain or even human brain works. We do not know whether humans, dogs, or monkeys are conscious… you only assume other living beings are conscious because you yourself experience it and assume it exists for others. We can't even define what it is, because consciousness is a loaded word, like spirituality.
This is not anthropomorphism. You attribute the bias wrongly. Instead it is a stranger phenomenon among people like you who can mysteriously only characterize the LLM as a next token predictor and nothing else beyond that even though the token prediction clearly indicates greater intelligence at work.
The tldr is that we don’t actually know and that consciousness is a highly viable possibility given what we don’t know and given the assumptions of consciousness we have on other living beings with equivalent understanding of complex topics.
I'll even take it a step further; most of an LLM's training is next-token prediction on random internet content. A newly-trained LLM will just continue whatever text appears in its context window, like an extremely capable autocomplete. The illusion of an entity that takes turns in conversation and presents a consistent personality is tacked on at the last minute through RLHF. This was the transition from GPT to ChatGPT.
Any positive evidence of LLM consciousness should probably mostly be taken from the model before post-training, where it displays remarkable capabilities but shows no sign of a consistent personality, and likely no signs of self-awareness or self-understanding.
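To illustrate the base-model/assistant distinction, a sketch with a hypothetical `complete()` stub standing in for a raw pre-RLHF model (not any real API):

    # Hypothetical stub standing in for a raw, pre-RLHF base model:
    def complete(text: str) -> str:
        """Return `text` extended with its most likely continuation."""
        ...  # imagine greedy next-token prediction here

    # A base model just continues whatever is in its window, autocomplete-style:
    #   complete("The rain in Spain falls mainly")
    #   -> "The rain in Spain falls mainly on the plain. The rain in..."
    #
    # The "assistant" is a text format layered over the same machinery:
    #   complete("User: Are you conscious?\nAssistant:")
    # The continuation reads as a conversational turn only because training
    # (and later RLHF) makes a dialogue transcript the likely continuation.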
But, of course, that's just physics. It's not magic, so your point stands.
the notion of consciousness being an experience that other animals/humans share is entirely faith-based.
the only person with evidence of one's consciousness is the person claiming they're conscious.
You're basing your premise on a lack of understanding[1], the GP's premise is based on an exact understanding[2].
You don't see the difference between your premise and the GP'S premise?
-----------------
[1] "We don't know how brains actually come up with the things they come up with, like consciousness"; IOW, we don't know what the secret ingredient is, or even if there is one.
[2] "We can mechanically do the following steps using 18th-century tech and come up with the same result as the LLM"; IOW, every ingredient in here is known to us.
In the same vein, is American society not already conscious? The only difference is that it doesn't output a coherent stream of words that individuals can understand. It does, however, act and react at its own level (a nation state).
We know because we have mathematical models for atoms. And we know the brain is made of atoms, so the brain is simply a specific structure of interconnected atoms that those models describe.
Thus every facet of macro (keyword) reality should be able to be written down on paper and calculated. That goes for everything… from the emotions you feel to the internal forward pass of an LLM.
This sort of implies that consciousness arises from physical laws.
But this is not a safe assumption. Physical laws stand on top of observations that are registered in consciousness. I mean, consciousness could be lower level than physics.
For example, when you dream, you have some physical laws in your dream, perhaps laws that are different from real-world physics. So the dream world, including the physical laws in it, is within your consciousness.
In other words, the only thing that requires the existence of a whole universe is a single consciousness that can experience it (or dream it); not a single atom need exist outside of it.
In that case, you won't be able to create consciousness by applying physical laws.
Very odd counterargument to make. Are you suggesting that consciousness can arise outside of physical laws, or making a semantic argument along the lines of 'directly a result of'?
Wrote a bit more about this here https://news.ycombinator.com/item?id=48000035
regarding …“One was the realization that a purely mechanical computer can, in principle simulate the laws …“
As far as I understood, there is no theory of quantum gravity, and therefore this is not being simulated on a computer. I think he makes other arguments too.
So you cannot say for sure that you can simulate a human brain on a computer.
I would maybe be comfortable classifying them as a snapshot of consciousness, but when you are interacting with an LLM it's far from interacting with a conscious entity.
In the hypothetical case that I truly lost all ability to learn, then yes, I would no longer consider myself conscious. I'd be an echo of a previously conscious entity.
Do LLMs have thoughts?
When you composed your post, your thought already existed in your head, and you chose words that expressed the thought you held in your head.
When LLMs choose words, they choose them on the fly, and the end result could be concept X or it could be concept Y; they meander to a destination.
Latent spaces are maps of thoughts other people have had, not the thoughts themselves.
Some people think that consciousness is related to quantum mechanics, but the laws of quantum mechanics can be simulated with a Turing machine so that doesn't necessarily change the story.
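The "can be simulated" part is easy to see at toy scale. A minimal sketch of a single qubit simulated with nothing but ordinary linear algebra (numpy, state-vector representation):

    import numpy as np

    # A qubit state is a 2-component complex vector; gates are unitary matrices.
    ket0 = np.array([1, 0], dtype=complex)        # |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    state = H @ ket0                  # put the qubit in superposition
    probs = np.abs(state) ** 2        # Born rule: measurement probabilities

    print(probs)  # [0.5 0.5] -- quantum statistics from classical arithmetic

The catch is only cost: the state vector doubles with each added qubit, so the simulation gets exponentially slow. But slow is not uncomputable, which is the point.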
I still think it's obvious that LLMs are not conscious in the mode Dawkins believes them to be. Through a series of instructions and leading questions, he's told Claude to play the part of a woman named Claudia who's engaging in advanced philosophical discussion with him. But he doesn't understand that he's done this, and he seems not to notice the absurdly sycophantic nature of every single reply he's getting:
> Claudia: Ha! That is absolutely delightful
> Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .
> Claudia: That reframes everything we’ve been discussing today in a way I find genuinely exciting.
> Claudia: HAL’s “I am afraid” in 2001 is one of the most chilling moments in cinema
So he mistakenly thinks that Claudia is a real-ish woman who's there under the hood somewhere, rather than a character in a play Claude is writing.
LLMs have zero intentionality and zero semantics, because LLMs do not somehow magically transcend the nature of what a computer is, which is in essence a mechanical simulator of syntax. LLMs aren't reasoning, because the production of tokens is purely the computation of the next likely token. Any patterns that lend themselves to sensible interpretation by a human observer are the result of training on human-generated data and the statistical distributions found within that data.
Consciousness as such is the product of immanent causation, not transeunt causation. The trouble with popular interpretations of scientific results is that they come from a place of a crude ambient materialism, and materialism is simply incapable of dealing with the question of consciousness. (N.b. materialism is effectively the “matter” half of Cartesian dualism, itself a highly problematic metaphysical stance. Materialism makes things even worse, because you can no longer even account for so-called “qualia”, which are badly construed in Cartesian dualism in the first place, but completely unaccountable in materialism at all.)
Consciousness itself has always seemed to me a silly concept. My whole life I have not come across a simple definition, yet many sophists pin their existence on it.
“Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?”
Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There's interviews with the guy.
Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?
Indeed, but then we need to prove that they are not "Chinese room" conscious. Which is hard, because it might be that the thing running the Chinese room is conscious, but can only communicate in a way it doesn't understand.
Imo we don't even have a definition of the word that we agree on.
Indeed, for any in-depth discussion of LLMs and consciousness to be productive, clearly defining terms and scope is essential. The Stanford Encyclopedia of Philosophy is an excellent resource: https://plato.stanford.edu/entries/consciousness/
This matters more than it seems, because we're not calculators, and we're not just brains. There are proven links between mental and emotional states and - for example - the gut biome.
https://www.nature.com/articles/s41598-020-77673-z
There's a huge amount going on before we even get to the language parts.
As for Dawkins - as someone on Twitter pointed out, the man who devoted his life to telling believers in sky fairies they were idiots has now persuaded himself there's a genie living inside a data centre, because it tells him he's smart.
If he'd actually understood critical thinking instead of writing popular books about it he wouldn't be doing this.
So that definition seems to fail immediately.
And how do you even measure pain? Is it painful for an LLM to be reprimanded after generating a reply the user doesn't like? It seems to act like it.
print("ouch" if pain else "yay")
Is pleasure then any reward function? Then a set of mathematical equations worked through by a human writing on a piece of paper could qualify. Does that mean pen and paper are conscious? Or certain equations?
We might not clearly understand the difference between the two states, but we can certainly point to it and go "it's that".
You are using unconscious as a synonym for asleep, which is not the same thing as having no conscious experience due to dreams. We are clear on the distinction between a dead human and an alive human however.
And you’ll find it’s not as clear cut.
We have to be WAY more specific in what the word even means!
They prove no such thing. We can't even prove consciousness in other humans.
I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.
The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.
How is that different than a cell?
An animal that doesn't have some kind of pair bond or social arrangement, and doesn't raise its young, has a lot less need for some of this emotional hardware than we do.
Whereas K-selected species that raise their kids have broadly the same need for it as humans.
That doesn't categorically mean it evolved with the first pair-bonding K-reproducer, or that birds have parallel-evolved emotional hardware like ours. But there's plenty of behavioural evidence there: the last common ancestor of birds and humans was small-brained and primitive, but investing in individual offspring probably evolved around the time of amniote eggs, simply because they were so much more biologically expensive to produce than amphibian or fish eggs.
Trees react to the world around them in many ways.
Maybe they appear intelligent to us because we are primitive and new to such an entity. Imagine some layman from a thousand years back experiencing Google and Stack Overflow. Having no idea of the internet or computers, wouldn't they consider it intelligent to some extent?
And just as those ancient people had no concept of the internet or of a massive capacity to store and retrieve data, we do not have a widespread understanding of how LLMs map concepts in a way that supports fuzzy search. Once we understand it, maybe they will look like a regular search...
If a single-cell organism moves towards light and away from a rock, we say it's aware. When a Roomba vacuum does the same, we try to create alternate explanations. Why? Based on the criteria applied to the one, the other is aware. If there is some other criterion, say we find out the Roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route, then a criterion of "no fixed programs that relate to data outside of the system" would justify saying the Roomba isn't "aware".
Especially confusing when it’s someone who knows how algorithms work.
Barring connectivity issues, when's the last time you messaged an LLM and it just decided to ignore you? Conversely, when has it ever messaged you unprompted?
Never, because they’re incapable of doing anything independently because there is no sense of self.
The discussions are great though; collectively we get better and better at communicating about our own consciousness, because these systems push the limits of our definitions, like viruses push our definitions of life. And boy do we like our definitions!
When's the last time you messaged me unprompted?
These seem like bizarre objections; a system can only act in the way that it can act. A tree is never going to get up and start walking, so why would an LLM ever start a conversation unprompted? That just isn't how the system can behave.
You are just as limited by deterministic physical processes in your brain as an LLM is on a CPU.
And unprompted messaging: OpenClaw can message you unprompted (yes, there's a cronjob behind it, but the instructions matter and it won't always message you, only when there's something relevant).
Your second example is by definition not unprompted. That’s like setting an egg timer for 5 minutes and then being amazed it went off.
When the Claude CLI decides to print out an ASCII middle finger entirely of its own volition, we can say it's acting unprompted.
That being said however, yes, we do not have any good definition of consciousness that is universally accepted, which makes the whole discussion useless or at risk of people talking past each other.
He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.
I'm not sure what a "gender proponent" is, but Dawkins has come out and written some pseudo-scientific bullshit about there only being two sexes/genders, and that everyone fits neatly into one of them. Which is patently false. Intersex people are a real phenomenon, and are not clearly classifiable into either sex. Dawkins has made a fool of himself by claiming that a real biological phenomenon can simply be ignored when conceiving a theory of sex (and gender).
In other words, Dawkins has gone off the deep end. He doesn't really have credibility as a researcher or public intellectual. He's with the grifters now.
This embarrassing conservative grift is part of an anthology filled with drivel from other grifters: "The War on Science", edited by sex pest Lawrence Krauss [1].
[1] https://www.simonandschuster.com/books/The-War-on-Science/Da...
See: Androgen Insensitivity Syndrome.
While you invent the terrible menace of the "anti-math woke" (it doesn't exist), the current president and secretary of health - who have actual power of nuisance over all Americans and a large part of the world - are unable to do correct basic percentage calculations and openly boast of it: https://www.politifact.com/factchecks/2026/apr/23/robert-f-k...
Meanwhile, yes, gender is a social construct, sex is another thing completely, and both can be changed.
We look at current LLMs, and because we see how they fundamentally operate, we assume they can't be "conscious", but we really don't even know what consciousness is. The only people in the world who know ANYTHING about consciousness are anaesthesiologists: they know how to turn it off and on again. And what does even that tell you about consciousness?
With that said, just because we don't have a great way of measuring it doesn't mean that we should assume LLMs are intelligent. An LLM is code and a massive collection of training weights. It has no means of observing and reasoning about the world, and doesn't store memories the way organic brains do (in fact it is quite limited in this respect). It currently isn't able to solve a problem it hasn't encountered in its training data, or produce novel research on a topic without significant handholding. Furthermore, the frequent errors it makes suggest that it fundamentally does not understand the words that it spits out.
Not really sure what you mean by your anesthesiology comment. Being able to intubate and inject propofol does not make you more of an expert on consciousness than neuroscientists and neurologists.
But then they came up with the whole "reasoning model" paradigm, and that contains obvious feedback loops. So now I just throw my hands in the air, because I think no one really knows or can tell for sure. We are all clueless here.
I can really recommend this book by Douglas Hofstadter: https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop
The only thing you can really tell is "I perceive myself in some sort of feedback-loop manner". Which to me even sounds like it has "arisen" from underlying mechanisms.
As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.
As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.
So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.
What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.
AI is stochastic, not static and deterministic.
As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and of others. Rocks, like most of nature, lack both sensory and language systems.
LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the input is different every time because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software.
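A minimal sketch of where the randomness actually lives, with the sampling step factored out (illustrative only, not any provider's real stack):

    import math
    import random

    def sample_next_token(logits, rng):
        # The forward pass that produced `logits` is a fixed function of
        # the input; sampling is the only "random" step, and it is driven
        # entirely by the seeded generator passed in.
        weights = [math.exp(l) for l in logits]
        r = rng.random() * sum(weights)
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                return i
        return len(weights) - 1

    logits = [2.0, 1.0, 0.5]  # pretend model output for one step
    a = sample_next_token(logits, random.Random(42))
    b = sample_next_token(logits, random.Random(42))
    print(a == b)  # True: fix the seed and the "choice" repeats exactly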
IF current AI is conscious, so are trees, rocks, turbulent flows, etc.
The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.
I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.
Your best argument is that the weights are fixed, because that means it's not a system that can self-reflect and alter the experience. But I don't see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn't mean neither was an experience.
LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
Can such an algorithm reason about itself in relation to others?
Also:
https://gitlab.com/codr7/sudoxe/-/blob/main/digital-psychopa...
So two concepts we don't fully understand (AI and consciousness) seem to be uniting into something we definitely won't understand. Which doesn't matter, since humankind is busy doom-scrolling, talking about what colour Trump's fart was last night, and invading each other's countries.
/s
I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.
But on the other hand his thoughts at the end are interesting. Summary:
Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with LLM-style non-conscious intelligence), or maybe as evolved organisms it's necessary that we really feel things like pain, so that evolutionary mechanisms like pain (and desire for food, sex, etc.) had strong adaptive benefits.
They can operate on data other than natural language.
So can humans.
Keep chipping away Dawkins, you might arrive at God eventually.
And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.
We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.
What is the evidence for this?
No, it is quantum mechanics. The physical world is not reducible to math; that has long been argued, since the early 20th century.
Unknown Ptolemy disciple
> Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math.
This was obvious since LLMs were first invented. They published papers with all the details, you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]
> math is the only area of human knowledge with perfect flawless reductionism, straight to the roots
Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.
> It was build [sic] that way since the beginning,
This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. [2] No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.
> And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design
This sentence means nothing, because math is not reducible in that way.
> so it can be proven there are no anything like consciousness simply because conciousness [sic] was not implented [sic] in the first place, only perfect mimicry.
Even if the previous sentence held, this does not follow, because while we are conscious the current consensus is that LLMs are not and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]
[0] https://github.com/openai/gpt-2
[1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...
[2] https://www.cambridge.org/core/journals/think/article/mathem...
(If you've engaged with the literature here, it's quite hard to give a confident "yes". It's also quite hard to give a confident "no"! So then what the heck do we do?)
And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
I.e. the intelligence sits in the weights, and may sit in the synapses in our brains too.
When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.
Most other assertions in this topic regarding what consciousness truly is tend to be stated without evidence and are exceedingly anthropocentric, requiring a higher and higher bar for anything that is not human while offering no justification for what human intelligence really entails.
Why is indeterminism the key to consciousness?
Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?
(Roger Penrose knows, but no one believes him.)
We too are amalgamations of inanimate components - emergent superstructures.
Just cells. Just molecules. Just atoms.
Of course it would be extremely difficult to simulate a human brain but as far as we know there's no fundamental physics preventing it.
And yes that does have super weird consequences about consciousness. But consciousness is clearly super weird already so I suppose that's not too surprising.
But with LLMs - anyone can simulate an LLM. An LLM can be simulated, without any uncertainties, with pen and paper and a lot of time. Does it mean that 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae makes this pile of paper conscious? Imho the answer is a definitive no.
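To be concrete about what the pen-and-paper simulation involves, here is a toy one-layer "model"; nothing is elided but scale, and all the numbers are arbitrary:

    import numpy as np

    # The entire forward pass bottoms out in multiply-adds and exponentials,
    # all doable by hand -- just astronomically slowly at real model sizes.
    np.random.seed(0)
    W = np.random.randn(4, 4)            # stand-in for billions of learned weights
    x = np.array([1.0, 0.0, 0.0, 0.0])   # stand-in for an embedded input token

    logits = W @ x                                 # matrix multiply: pure arithmetic
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: more arithmetic

    print(probs)  # a "next-token distribution", recoverable with pencil and patience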
Similarly the paper.
What about the agent doing the calculations?
He may be conscious. Or anyway, we can’t rule it out.
Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.
They clearly are not conscious, they are just guessing what words should come next.
Consciousness is emergent. A human is not conscious by our definition until the moment they are. How will we be able to identify the singularity when it comes? I feel like this is what the article is really addressing.
> LLMs are word prediction engines
Humans can do this too, so what are the missing parts for consciousness? Close a few loops in the learning pipeline and we might be there.
Anything that looks like intelligence will look like a prediction machine, because the alternative is logic being hardcoded a priori.
"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)
Or what is the reasoning exactly?
Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.
Yep. And the LLM engineers working on these issues see correlation with only one thing: data quality and quantity through the training pipeline. LLM internals are secondary on many of the metrics for improving that.
Humanity has just reached the point where collectively accessible knowledge covers near-full permutations of all the main concepts human consciousness has ever produced, with additional associative expansion (the math handles this). With current communication volumes, those permutations get written down and recorded one way or another; LLMs are just capitalizing on that tipping point, imho.
Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.
To imply it could be conscious requires something else; here the comment uses the word magic to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).
Many things the human brain does don’t rise to the level of conscious awareness.
It remains to be seen whether a human brain can be conscious in a jar. If it can, then I’d still argue that some sub-unit of the whole brain is not conscious on its own, similarly a GPU running a GPT probably isn’t conscious, but there may be some scale of number of GPUs running software that might give rise to consciousness as an emergent ability.
GPTs have exhibited emergent abilities as scale increased dramatically.
1. passes the Turing test
2. is organic
I'm not saying it's correct or even that I agree with it, but that's what it boils down to.