Wittgenstein kinda blows this burden of proof apart. If you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that’s an issue with the doubting exercise more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it’s not a proof-worthy endeavour to doubt it, it’s something you’re certain is the case. If it weren’t, then pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There’s really nothing in your experience where the question of others not possessing subjective experiences of some kind actually arises, except in the philosophical exercise of doubting and demanding epistemological proofs which can’t ever exist in the face of a relentless and unconvincable doubter. Heidegger talks about pretty much the same idea as Wittgenstein.
It does, however, have relevance when we consider whether or not other, non-human, entities can have consciousness: if we can't know what consciousness actually means with respect to humans, that is a strong argument against insisting that we can know whether or not other entities are conscious.
If we then choose to treat other humans on the assumption that they do, e.g., feel distress the same way, we ought to consider that we do not know what the prerequisite for reaching a level of awareness sufficient to feel distress is.
But if we accept that, we also need to consider that we do not know, and cannot know, whether other entities are conscious or not either.
We can only tell whether they present as if they are.
And we should consider that when deciding how to treat them.
Furthermore we should be cautious about how high we set the bar.
-- Wittgenstein, probably
The argument you present, like many arguments, breaks down when the topic becomes self-referential. It makes sense for other topics, where analyzing subjectivity becomes pedantic, as when asking questions like why the sky is blue.
But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don’t… but what’s suddenly stopping you from applying the same assumption to an LLM? That is the heart of the problem. People are questioning whether the burden of subjectivity applies to LLMs.
Or another way to frame it: what makes humans rise to the level where we can assume their subjectivity is true? What is the mechanism and reasoning behind that? We can no longer simply assume human subjectivity is true because LLMs are now displaying outward behaviors that are indistinguishable from those of humans.
Also, stop relying on the wonderings of old-school philosophers. We are now in times where you can basically classify their ideas as historically foundational but functionally obsolete and outdated. Think deeper.
At no point in my post did I mention artificial beings or LLMs. I made a counter claim about the need for proof towards the subjectivity of others.
But while I’m here, LLMs do not “display and output the same subjectivity” as human beings. They might produce textual outputs similar to those produced when human beings are forced to use computers to produce text, but that is only a tiny part of our way of being and of potentially expressing subjectivity. For LLMs, though, it is the totality of how they can express theirs.
One of the main failures of the Turing test (and why it is “old school” and invalid), and of Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win. This fails to capture much of our subjectivity, which is intersubjectively attuned to others in ways more fundamental than textual output.
You don’t need to mention this. The context is LLMs; I am saying your claim is pointless in that context. The subjectivity of others is completely relevant because it is the topic of subjectivity itself that is in question. Get it? You didn’t counter my counter, and instead you moved on to side topics.
> But while I’m here, LLMs do not “display and output the same subjectivity” as human beings.
Again… you are sidetracking here and not really responding to me.
The argument is solely within the confines of text. That’s obvious. No need to take it beyond that. You assume I am conscious because of the text you’re reading from me, and I assume the same from you, and it is within that same frame we are evaluating the LLM. Nothing beyond that. You can’t actually know my experience goes beyond text because that information is not open to you. But it is obvious you assume I’m conscious and not a rock, because you are responding to me. So the question is: why are you not engaging in a similar debate with the LLM?
> One of the main failures of the Turing test (and why it is “old school” and invalid), and Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win.
It’s not a failure. It was the point. They want to remove superfluous features and gun for the narrowest definition of AGI.
You like philosophy and you read texts on the topic. That means you obviously find the subjectivity in those texts relevant and produced by a high intelligence. But that’s all through text alone. You evaluate my statements and the statements of your idolized philosophers solely from text, and that is all you’ve ever used. So YOU yourself find validation from text, as do many humans, and that is sufficient evidence for determining whether a thing is conscious. Your own behavior validates this logically, even though your mouth is constantly moving the goalposts whenever AI jumps over a new hurdle.
That is what the Turing test is gunning for. It used to be that intelligence was just the ability to think and understand; now it has expanded to encompass the totality of human sensation because people are refusing to face the reality of impending AGI.
When I called your philosophers obsolete, is that not the same as you calling the Turing test outdated? We both do it when convenient. Fine… the Turing test is outdated, let’s move the threshold… the new test is when AI is used in our daily lives to do actual tasks only humans could previously do. How long will that new “Turing test” last before more idiots decide we need to move the goalposts again? Let’s jump ahead of that and change the threshold too: when AI discovers new proofs in mathematics. Not good enough? I guess now you can see why it will never be good enough.
To dive into this specific question: to me, there's a better reason than the obvious functional utility of not treating other humans like NPCs. It's in three parts. First, I subjectively experience a rich and varied internal mental life (aka qualia). So I have first-hand evidence that N equals (at least) 1 in terms of qualia existing in humans. Second, there are multiple lines of experimental evidence from fMRI, surgical and brain-injury studies indicating that other human brains broadly function in ways similar to my own. Third, there is the consistency of the many self-reports of humans I know and trust, which strongly correlate with consistent reports from humans I've never met who have little apparent motivation to deceive me (unlike those I know, if I were very paranoid).
This all consistently supports a model of reality in which humans experience qualia broadly similar to my own. So when humans show external behaviors similar to my own, I make the reasonable inference that the internal causal mechanism broadly maps to what I internally experience when showing similar external behaviors (in contexts where the human is credible and has no motivation to be deceptive). The alternatives like "I'm a brain-in-a-vat ala The Matrix" or "I'm the sole subject of a constructed reality like the Truman Show" seem far less likely.
But that's all general 'Philosophy of Mind', the slam dunk is that the question isn't just about humans but about humans compared to LLMs; in short, "Do LLMs experience human-like consciousness?" To me the answer is quite clear for three reasons: 1. LLMs are dramatically different than humans, mammals or even biological entities. They only vaguely emulate a few traits of neurons but otherwise work by different algorithms, at different scale, different speeds, connected in different ways on an entirely different physical substrate. 2. There's far less supporting evidence, and 3. There exists substantial negative evidence.
2. There are only two lines of evidence supporting LLM consciousness and the first is largely circumstantial, that a) LLMs possess some abilities previously only seen in humans. Specifically high-level verbal fluidity and linguistic manipulation along with instantly accessing a vast and diverse breadth of pre-trained information using a wide variety of non-linear relationships across many dimensions. While that ability is shockingly impressive, completely novel and can be quite useful, it's still only vaguely circumstantial because replicating some previously human-only abilities isn't evidence for the existence of other human traits like consciousness/qualia. However, LLMs are remarkably misleading for humans to reason about because the nature of LLMs essentially hacks our highly-evolved "judging intelligence/consciousness" heuristics. I'd argue we couldn't have designed a system to be more ideal at playing Turing's 'Imitation Game' and convincing humans they are human-like if we'd intentionally tried to.
b) The second line of supporting evidence for LLMs is that they generate text which can describe internal subjective experiences much like a human would (as seen in the Dawkins / Claude transcript). Unfortunately, this isn't convincing because we know that LLMs were trained on human sample text to be 'imitation machines'. The algorithms were designed, tuned and tested to generate text output statistically optimized to plausibly simulate how a composite human would respond to the same input (including the invisible system prompt instructing: "You are a Large Language Model, not a human"). We even added a tiny degree of random variability to the processing of the statistical weights because we found that makes the simulation seem a bit more plausibly like what a composite human would say. In short, LLM 'self-reports' cannot be taken at face value any more than the performance of an actor we've hired to pretend something and strongly incentivized to never break character. Note: knowing this should elevate our skepticism to maximum. We're assessing an algorithmic system, designed and iteratively optimized across millions of generations to convincingly simulate the output of something different than what it innately is.
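(Aside: that "tiny degree of random variability" is what's usually called temperature sampling over the model's output scores. Here is a minimal, purely illustrative Python sketch of the idea; it is not any particular model's actual implementation, just the standard recipe:)

    import numpy as np

    def sample_next_token(logits, temperature=0.8):
        # Scale the model's raw scores: lower temperature -> more deterministic,
        # higher temperature -> more of that "random variability".
        scaled = np.asarray(logits, dtype=float) / temperature
        # Softmax (shifted for numerical stability) turns scores into probabilities.
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Draw one token index at random according to those probabilities.
        return int(np.random.choice(len(probs), p=probs))

    # Example: three candidate next tokens scored by a (hypothetical) model.
    print(sample_next_token([2.0, 1.0, 0.1]))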
3. But to me the real clincher is the negative evidence against LLM consciousness/qualia. Unlike the philosophical puzzles around trusting human subjectivity, with LLMs we can directly look under the hood at how it works and the entire specialty of Mechanistic Interpretability exists to do exactly that (https://towardsdatascience.com/mechanistic-interpretability-...). So we know with a fair degree of confidence that, despite what they may say, LLMs do not experience qualia in the way that humans and even other mammals do (which we have insight on from 'looking under the biological hood' with fMRI, surgical and brain injury studies).
And that's why the case for human subjectivity is so much stronger than the frankly flimsy case for LLM subjectivity.
Exact same reasoning for me. But none of this invalidates the speculation that LLMs are conscious. The question was more rhetorical: it was to illustrate, via self-examination, how unreliable the evidence you use to validate the consciousness of other people is. You have a sample size of one for yourself, and you use fMRIs (which actually provide extremely little understanding of the human brain) as evidence of similarity. So even though the fMRI provides no evidence of consciousness, the reasoning goes: if the thing it is reading is similar to my brain, then maybe that thing is conscious. That's probably the best evidence available, but it is also extremely weak evidence.
The rest of your argument relies on self-reports from other people who are "similar" to you, which parallels the fMRI argument: the fMRI shows similar patterns, and people describe patterns of experience similar to yours... which is weak.
The overall point is that you come to your conclusion based on weak evidence, so the LLM is no different. It talks like us, it understands us, you don't know anything else about it... how do you know it's not conscious? All the evidence (albeit weak evidence) actually leans towards it being conscious, and that is the same amount of evidence we have for people.
Strong evidence would be determining the formal definition of consciousness and demonstrating logically and categorically that humans fit the definition. But we have none of that for either the human or the LLM.
>Do LLMs experience human-like consciousness?
No, that is not the question. No one actually believes this. The question is: do LLMs experience consciousness that fits the definition, or our own intuition, of what consciousness is? It's fundamentally clear to everyone that the LLM runs on a very different architecture than a human.
>2. There are only two lines of evidence supporting LLM consciousness and the first is largely circumstantial,
Many lines of evidence exist, all circumstantial and all no different from the circumstantial evidence you posted yourself for humans.
>a) LLMs possess some abilities previously only seen in humans. Specifically high-level verbal fluidity and linguistic manipulation along with instantly accessing a vast and diverse breadth of pre-trained information using a wide variety of non-linear relationships across many dimensions. While that ability is shockingly impressive, completely novel and can be quite useful, it's still only vaguely circumstantial because replicating some previously human-only abilities isn't evidence for the existence of other human traits like consciousness/qualia
This is not very good evidence at all. Language follows rules. The rules are complicated and hard to replicate, but replicating those rules does not indicate that something is conscious, and "knowing language" does not fit our intuition of what consciousness is. If you think this is the basis of the reasoning of people who speculate that it is conscious, then you are extremely wrong. The reasoning is much deeper than this. I feel a lot of people like you classify the other side as mere simpletons who have not even considered the basic details.
>I'd argue we couldn't have designed a system to be more ideal at playing Turing's 'Imitation Game' and convincing humans they are human-like if we'd intentionally tried to.
Valid argument. But then I'd argue it is possible that it plays the Imitation Game to the extent that it actually imitates consciousness by actualizing real consciousness. You can't say it doesn't.
>b) The second line of supporting evidence for LLMs is that they generate text which can describe internal subjective experiences much like a human
You seem to be answering a question no one is arguing with you about. Again: no one claims LLMs are human. No one claims they experience consciousness the way humans experience it. The claim is that they experience consciousness in the way our intuition defines it, INDEPENDENT of the human-centric experience.
> In short, LLM 'self-reports' cannot be taken at face value any more than the performance of an actor we've hired to pretend something and strongly incentivized to never break character.
This is not true. We have proof of LLMs telling the truth and being right. Just because an LLM lied in one instance doesn't mean it lies all the time. But humans lie too, so it goes both ways.
>3. But to me the real clincher is the negative evidence against LLM consciousness/qualia. Unlike the philosophical puzzles around trusting human subjectivity, with LLMs we can directly look under the hood at how it works and the entire specialty of Mechanistic Interpretability exists to do exactly that (https://towardsdatascience.com/mechanistic-interpretability-...). So we know with a fair degree of confidence that, despite what they may say, LLMs do not experience qualia in the way that humans and even other mammals do (which we have insight on from 'looking under the biological hood' with fMRI, surgical and brain injury studies).
This is extremely false. Mechanistic interpretability is to the LLM what an fMRI is to the human brain: a blunt tool that provides a very high-level view of what's going on. This is categorically true for humanity right now: we do not understand why an LLM does what it does. Some sources to confirm that:
https://www.reddit.com/r/PiAI/comments/1m3krp1/godfather_of_...
https://www.techrepublic.com/article/news-anthropic-ceo-ai-i...
It's funny how you cited mechanistic interpretability without understanding what exactly was interpreted. You just took their word for it without understanding what's going on yourself. Well, I'm here to tell you that there isn't any actual understanding of the LLM, because if there were... we'd be able to use mechanistic interpretability to categorically determine whether or not LLMs are conscious. Someone would have proved it. The fact that we are having this debate literally means mechanistic interpretability provides nothing definitive.