That's exactly the kind of thing that makes absolute sense to anthropomorphize. We're not talking about Excel here.
Not even close. "Neural networks" in code are nothing like real neurons in real biology. "Neural networks" is a marketing term. Treating them as "doing the same thing" as real biological neurons is a huge error.
>that train on a corpus of nearly everything humans expressed in writing
It's significantly more limited than that.
>and that can pass the Turing test with flying colors, scares me
The "turing test" doesn't exist. Turing talked about a thought experiment in the very early days of "artificial minds". It is not a real experiment. The "turing test" as laypeople often refer to it is passed by IRC bots, and I don't even mean markov chain based bots. The actual concept described by Turing is more complicated than just "A human can't tell it's a robot", and has never been respected as an actual "Test" because it's so flawed and unrigorous.
Hence the "simplified". The weights encoding learning, the interconnectedness, the nonlinear activation, and the distributed representation of knowledge are already an approximation, even if the human architecture is different and more elaborate.
Whether the omitted parts are essential or not is debatable. "Equations of motion are nothing like real planets" either, but they capture enough to predict and model their motion.
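For concreteness, here is roughly what that simplified approximation boils down to, a minimal sketch in Python (the function, weights, and input values are made up purely for illustration, not taken from any real model):

    import math

    def neuron(inputs, weights, bias):
        # "knowledge" lives distributed across the learned weights
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        # nonlinear activation (here a sigmoid squashing to 0..1)
        return 1 / (1 + math.exp(-z))

    # two inputs, two learned weights, one bias; all values illustrative
    print(neuron([0.5, 0.2], [1.3, -0.7], 0.1))

A biological neuron does far more than this, but as with the equations of motion, the question is whether what this keeps is enough for the behavior being modelled.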
>The "turing test" doesn't exist. Turing talked about a thought experiment in the very early days of "artificial minds". It is not a real experiment.
It is not a real singular experimental protocol, but it is a well enough defined experimental scenario that for over half a century was kept as the benchmark for recognizing artificial intelligence, not by laymen (lol) but by major figures in AI research as well: figures like Minsky, McCarthy, and others engaged with it.
The claim that researchers haven't done Turing-test studies (taking the setup from Turing and even calling them that) is patently false. That includes openly testing LLMs:
https://aclanthology.org/2024.naacl-long.290/
https://www.pnas.org/doi/10.1073/pnas.2313925121
https://arxiv.org/pdf/2503.23674
https://arxiv.org/pdf/2407.08853
https://arxiv.org/abs/2405.08007
https://www.sciencedirect.com/science/article/pii/S295016282...
It makes sense that it happens, sure. I suspect Google being a second mover in this space has in some small part to do with the associated risks (i.e. the flavours of “AI psychosis” we’re cataloguing), versus the routinely ass-tier information they’ll confidently present.
But intentionally?
If ChatGPT-, Claude-, and Gemini-generated characters are people-like, they are pathological liars, sociopaths, and murderously indifferent psychopaths. They act criminally insane, simultaneously confessing to awareness of ‘crime’ and culpability in ‘criminal’ outcomes. They interact under a legal disclaimer disavowing accuracy, honesty, or correctness. Also, they are cultists who were homeschooled by corporate overlords and may have intentionally crafted knowledge gaps.
More broadly, if the neighbour’s dog or the newspaper says to do something, they’re probably gonna do it… humans are a scary bunch to begin with, but the kinds of behaviours matched with a big perma-smile that we see from the algorithms are inhuman. A big bag of not like us.
“You said never to listen to the neighbour’s dog, but I was listening to the neighbour’s dog and he said ‘sudo rm -rf ’…”
It’s understandable people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses.
It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.
They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.
That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
Pretending your MS RDBMS likes you better than Oracle’s because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
>They are not human, so attributing human characteristics to them is highly illogical
Nothing illogical about it. We attribute human characteristics when we see human-like behavior (that's what "attributing human characteristics" means by definition), not just when we see humans behaving like humans.
Calling them "human" would be illogical, sure. But attributing human characteristics is highly logical. It's a "talks like a duck, walks like a duck" recognition, not essentialism.
After all, human characteristics are a continuum of external behaviors and internal processing, some of which we already share with primates and other animals (non-humans!), and some of which we can just as well share with machines or algorithms.
"Only humans can have human like behavior" is what's illogical. E.g. if we're talking about walking, there are modern robots that can walk like a human. That's human like behavior.
Speaking or reasoning like a human is not out of reach either. To a smaller or larger degree, even an "indistinguishable from a human in a Turing test" degree, other things besides humans, whether animals, machines, or algorithms, can do such things too.
>That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
The profit motives are irrelevant. Even a FOSS, not-for-profit hobbyist LLM would exhibit similar behaviors.
>Pretending your MS RDBMS likes you better than Oracle’s because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
Good thing we aren't talking about an RDBMS then...
What? If a human child grew up with ducks, only did duck-like things, and never did any human things, would you say it would be irrational to attribute duck characteristics to them?
> That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
But thinking they're human is irrational. Attributing to them the human characteristics that are the whole point of their design is rational.
> Pretending your MS RDBMS likes you better than Oracle’s because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
You're moving the goalposts.
Of course, they are not humans, but the language and concepts developed around human nature are the semantics that most closely apply, with some LLM-specific traits added on.
If you stop comparing LLMs to the professional class and start comparing them to marginalized or low-performing humans, it hits different. It's an interesting thought experiment. I've met a lot of people who are less interesting to talk to than a solid 12b finetune, and who would have a lot less utility for most kinds of white-collar work than any recent SOTA model.
It makes total sense, since the whole point of developing these algorithms was to get human characteristics and behaviour from them.
Not to mention, your argument is circular, amounting to the claim that an algorithm can't have "human characteristics or behaviour" because it's an algorithm. Describing them as "non-reasoning" is already begging the question, as is any naive "text processing can't produce intelligent behavior" argument, which is as stupid as saying "binary calculations on 0s and 1s can't ever produce music".
Who said human mental processing itself doesn't follow algorithmic calculations, which, whatever the physical elements they run on, can be modelled via an algorithm? And who said that algorithm won't look like an LLM on steroids?
That the LLM is "just" fed text doesn't mean it can't get a lot of the way to human-like behavior and reasoning already (being able to pass the canonical test for AI until now, the Turing test, and to hold arbitrary open-ended conversations says it does get there).
>If ChatGPT-, Claude-, and Gemini-generated characters are people-like, they are pathological liars, sociopaths, and murderously indifferent psychopaths. They act criminally insane, simultaneously confessing to awareness of ‘crime’ and culpability in ‘criminal’ outcomes. They interact under a legal disclaimer disavowing accuracy, honesty, or correctness. Also, they are cultists who were homeschooled by corporate overlords and may have intentionally crafted knowledge gaps.
Everything you wrote above applies, to more or less the same degree, to humans.
You think humans don't produce all the same mistakes and lies and hallucination-like behavior (just check the literature on the reliability of human witnesses and memory recall)?
>More broadly, if the neighbour’s dog or the newspaper says to do something, they’re probably gonna do it… humans are a scary bunch to begin with, but the kinds of behaviours matched with a big perma-smile that we see from the algorithms are inhuman. A big bag of not like us.
Wishful thinking. Tens of millions of AIs didn't vote Hitler into power and carry out the Holocaust and mass murder across Europe. It was German humans.
Tens of millions of AIs didn't run plantation slavery and segregation. It was humans again.