The only difference is that imitating a preteen non-native speaker is quite trivial and says very little, which is why you would obviously never select that as the identity to imitate. In other words, your version of the test doesn't involve solving many "X"s at all. In fact it only requires one - the one that simplifies the domain as much as possible. And as you're strongly implying, but not acknowledging, this was not a meaningful achievement at all.
It's not about how much of a person they are, but how much evidence of personhood a particular response provides. If you were the interrogator and gave some challenge you think only humans can solve, and got back one "asdfghjkl" (no real evidence either way) and one correct answer (evidence in favor of personhood), you should adjust your beliefs towards the latter being the human. Always giving bad answers just because humans can also give a bad answer is already a failing strategy with low success rate when the test is carried out as Turing specified, with no requirement to imitate a specific characteristic added in.
As an analogy that may or may not help: You have two boxes, one containing a rabbit and one containing a turtle. One box is perfectly still, offering little evidence either way (rabbits and turtles can both trivially stay still). The other box is bouncing up and down (something you have reason to believe is difficult for turtles). Which box more likely contains the rabbit?
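To put rough numbers on that (the priors and likelihoods below are purely illustrative assumptions; only the direction of the update matters), here is a minimal Bayes sketch in Python:

    # Toy Bayesian update for the two-box example.
    # All probabilities here are made-up illustrative values.
    prior_rabbit = 0.5            # each box equally likely to hold the rabbit a priori
    p_bounce_given_rabbit = 0.3   # rabbits sometimes make a box bounce
    p_bounce_given_turtle = 0.01  # turtles almost never do

    # P(rabbit | bouncing) by Bayes' theorem
    numerator = p_bounce_given_rabbit * prior_rabbit
    evidence = numerator + p_bounce_given_turtle * (1 - prior_rabbit)
    posterior_rabbit = numerator / evidence
    print(f"P(rabbit | box is bouncing) = {posterior_rabbit:.2f}")  # ~0.97

Even a modest likelihood ratio is enough to shift the choice heavily towards the bouncing box.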
> In other words your version of the test doesn't involve solving many "X"s at all. In fact it only requires one - the one which simplifies the domain so much as possible.
I think the key point you are missing is that it is up against a real human. It does not just have to clear the "well, both humans and bots could theoretically respond this way" bar by giving gibberish answers; it has to get chosen as the human while the second player is likely satisfying many of the interrogator's tests.
LLMs are trained on nothing except the corpus of human knowledge. It is literally impossible for them to, e.g., accidentally say something that would be inconceivable for a human to say, if we have absolutely no way of constraining the identity of said human. Another way these tests are made easier is by generally having all participants manually type out their interactions, instead of having them transcribed, so the dozens of exchanges you'd normally have in 5 minutes go down to ~5, making all of this that much more true. It's just silly.
And no, always giving bad answers is not a failing strategy. As I mentioned, the scenario I'm describing is not a hypothetical. The Turing Test (or at least yet another abysmal bastardization of it) was passed in 2014, with the chatbot in question impersonating a 13-year-old Ukrainian boy who was not good at English, at maintaining a coherent dialogue or train of thought, or even at being coherent for the most part. I'm sure this is exactly what Turing had in mind. [1]
It'd be possible to get an idea if there were some box movement that was unique to animals. That's not particularly interesting, because it's fairly uncontroversial that a box-sized robot could very accurately imitate an animal through the medium of box movement; but for a bot to imitate a human through the medium of text (seen as a sufficiently general interface to test "almost any one of the fields of human endeavour that we wish to include") is interesting to many.
But the concept the analogy was demonstrating was really just basic reasoning: if you're given X xor Y and have evidence of Y, you should tend towards Y even lacking direct evidence for/against X. Do you agree that, in my example, you would choose the box giving some evidence of being a rabbit over the one that gives none?
> LLMs are trained on nothing except the corpus of human knowledge. It is literally impossible for them to e.g. accidentally say something that it's inconceivable for a human to say
Depends on what you mean by "inconceivable", but it's certainly possible for it to say things that a human is unlikely to say, due to the bot's limitations (at the extreme, consider a Markov chain). And even if it only says things that a human could just as well say, if those things are also trivial for a bot to say, they are poor evidence of personhood.
> And no, always giving bad answers is not a failing strategy. As I mentioned, the scenario I'm describing is not a hypothetical. The Turing Test (or at least yet another abysmal bastardization of it) [...]
To put relevant emphasis on my claims:
> > Always giving bad answers just because humans can also give a bad answer is already a failing strategy with low success rate when the test is carried out as Turing specified
> > Then the real human B would, on average, offer far more compelling evidence of personhood and the bot would fail the majority of the time. I don't see how this issue affects Turing's proposed version of the experiment.
I agree that there are ways to bastardize the test. If, for instance, there is no second player that you must choose between (no requirement to say that exactly one of A and B is the bot), then just remaining silent/incoherent to give no information either way can be a reasonable strategy. As with all benchmarks, you also need a sufficient number of repeats such that your margin of error is low enough - fooling a handful of judges does not give a good approximation of the bot's actual rate.
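As a rough illustration of the repeats point (the sample sizes and observed rate below are hypothetical), a normal-approximation margin of error for a "fooled the judge" rate:

    # Rough sketch: 95% margin of error on an observed pass rate,
    # using a normal approximation. Numbers are hypothetical.
    import math

    def margin_of_error(successes: int, trials: int, z: float = 1.96) -> float:
        p = successes / trials
        return z * math.sqrt(p * (1 - p) / trials)

    for trials in (10, 30, 300):
        successes = round(0.33 * trials)   # suppose ~1 in 3 judges were fooled
        moe = margin_of_error(successes, trials)
        print(f"{successes}/{trials} fooled -> {successes/trials:.2f} +/- {moe:.2f}")

With only ten judges the interval is roughly +/- 0.28, so fooling a handful of them tells you very little about the bot's actual rate.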
I'd even claim it's a bit of a bastardization to use Turing's 30% prediction (of where we'd be by 2000) to reduce the experiment down to just pass or fail. Ultimately the test gives a metric for which the human benchmark is 50%.
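A trivial way to see why 50% is the benchmark (this is a deliberately simplified model of the three-player game): if the bot imitates perfectly, the interrogator can only guess, and a fair guess picks the bot as the human about half the time.

    # Simplified simulation: an interrogator who cannot tell the players apart
    # must guess, so a perfect imitator is picked as the human ~50% of the time.
    import random

    random.seed(0)
    trials = 100_000
    picked_as_human = sum(random.choice(("bot", "human")) == "bot" for _ in range(trials))
    print(f"Bot picked as human in {picked_as_human / trials:.1%} of runs")  # ~50%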
Of course, one practical issue that in some ways makes this all moot is that if we ever create genuine AI systems capable of actual thought, the entire idea of a "test" would be quite pointless. Rapid recursive self-improvement, perfect memory, perfect calculation, and the ability to think? We'd likely see rapid exponential discovery and advancement in basically every field of human endeavor simultaneously. It'd be akin to carrying out a 'flying test' after we landed on the Moon.