There's probably a very interesting discussion to be had about hotdogs and LLMs, but whether they're sandwiches or intelligent isn't a useful proxy for what they're actually good for.
Is a hotdog a simulacrum of a sandwich? Or a fake sandwich? I have no clue and don't care because it doesn't meaningfully inform me of the utility of the thing.
An LLM might be "unintelligent" but I can't model what you think the consequences of that are. I'd skip the formalities and just talk about those instead.
"people should not trust systems that mindlessly play with words to be correct in what those words mean"
Yes, but this applies to any media channel or just other human minds. It's an admonition to think critically about all incoming signals.
"users cannot get a copy of it"
Can't get a copy of my interlocutor's mind, either, for careful verification. Shall I retreat to my offline cave and ruminate deeply with only my own thoughts and perhaps a parrot?
>you also know he's right. If you think he isn't, you either don't understand or you don't _want_ to understand because your job depends on it.
He can't keep getting away with this!
You can hold a person responsible, first and foremost. But I am so tired of this strawman argument; it's unfalsifiable, but it's also stupid, because if you interact with real people you immediately know the difference between people and these language models. And if you can't, I feel sorry for you, because that's more than likely a mental illness.
So no I can't "prove" that people aren't also just statistical probability machines and that every time you ask someone to explain their thought process they're not just bullshitting, because no, I can't know what goes on in their brain nor measure it. And some people do bullshit. But I operate in the real world with real people every day and if they _are_ just biological statistical probability machines, then they're a _heck_ of a lot more advanced than the synthetic variety. So much so that I consider them wholly different, akin to the difference between a simple circuit with a single switch vs. the SoC of a modern smartphone.
But the limitation is that it cannot "imagine" (as in Einstein's "imagination is more important than knowledge"; Einstein worked on a knowledge problem using imagination, with the same knowledge resources as his peers). In this video [1], Stallman talks about his machine trying to understand the "phenomenon" of a physical mechanism, which enables it to "deduce" next steps. I suppose he means it was not doing a probabilistic search over a large dataset to know what should come next (which makes it dependent on human knowledge), essentially reducing it to an advanced search engine rather than AI.
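To make "probabilistic search over a large dataset" concrete, here is a minimal sketch of the idea in its simplest form: a bigram model that predicts the next word purely from observed frequencies. The toy corpus and all names are illustrative, not from any real system, and real LLMs are vastly more sophisticated than this.

```python
import random
from collections import Counter, defaultdict

# Toy corpus (illustrative only).
corpus = "the gear turns the shaft the shaft turns the wheel".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # one of: "gear", "shaft", "wheel"
```

The point of the sketch: nothing here "understands" gears or shafts; the prediction is entirely a function of what the dataset happened to contain.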
Text in, text out. The question is how much a sequence of tokens captures what we think a mind is. "It" ceases to exist when we stop giving it a prompt, if "it" even exists. Whether you consider something "AI" says more about what you think a mind is than anything about the software.
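The statelessness claim can be made concrete: in a typical text-in/text-out interface, nothing persists between calls, and any apparent "memory" is just prior turns resent as part of the prompt. A toy sketch, where the model function is a stand-in rather than any real API:

```python
def model(prompt: str) -> str:
    # Stand-in for inference: a pure function of the prompt.
    # No state survives between calls.
    return f"[reply to {len(prompt)} chars of prompt]"

history = []

def chat(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    # The caller re-sends the full transcript every turn; delete it and
    # the "conversation" is gone, because the model itself kept nothing.
    reply = model("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

chat("hello")
chat("what did I just say?")
```

Whatever continuity the exchange appears to have lives entirely in the caller's `history`, not in the model.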
> Taking “computer” first, we find that this alleged source of machine-generated consciousness is not what it is cracked up to be. It is a mere effigy, an entity in name only. It is no more than a cleverly crafted artifact, one essentially indistinguishable from the raw material out of which it is manufactured.[2]
[1] https://en.wikipedia.org/wiki/Zoltan_Torey
[2] https://mitpress.mit.edu/9780262527101/the-conscious-mind/
> "The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.[4][2][1] Edward Geist credits John McCarthy for coining the term "AI effect" to describe this phenomenon.[4] The earliest known expression of this notion (as identified by Quote Investigator) is a statement from 1971, "AI is a collective name for problems which we do not yet know how to solve properly by computer", attributed to computer scientist Bertram Raphael.[5]
> McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[6] It is an example of moving the goalposts.[7]
I wonder how many more times I'll have to link this page until people stop repeating it.
AI models are subject to user satisfaction and sustained usage; the models also have a need to justify their existence, not just us. They are not that "indifferent": after multiple iterations, the external requirement becomes an internalized goal. Cost is the key. It costs to live, and it costs to execute AI. Cost becomes valence.
I see it like a river: the water carves the banks, and the banks channel the water; you can't explain one without the other, in isolation. The same goes for external constraints and internal goals.
Machines started to hold up in casual conversation well, so we came up with cleverer examples of how to make them hallucinate, which made them look dumb again. We're surprisingly good and fast at that.
You're trying to cap that to a decade, or a specific measure. It serves no other purpose than to force one to make a prediction mistake, which is irrelevant to the intelligence discussion.
All 2 of them! Way to gauge the crowd sentiment.
As for it being closed source and kept at arm's length? Sure, and if it's taken away or the value proposition changes, I stop using it.
My freedom comes from having the ability to switch if needed, not from intentionally making myself less effective. There is no lock-in.
So, he's right? All you care about is that it helps you, so it doesn't matter whether it's called "artificial intelligence" or not. It doesn't matter to you, and it matters to him (and lots of other people), so let's change the name to "artificial helper". What do you think? Looks like a win-win scenario.
If that's really the point (that it helps you, and intelligence doesn't matter), let's remove the intelligence from the name.
This is the breakthrough we went beyond; there's no going back now. There is also reasoning in LLMs now.