> you could simply chop away everything except the most basic linguistic functions and claim you are a non-native preteen [...] You're 5 years old - and simply respond by randomly pounding various keys on the keyboard on occasion. Boom - didn't see that coming, now did ya Turing?
Then the real human B would, on average, offer far more compelling evidence of personhood, and the bot would fail the majority of the time. I don't see how this issue affects Turing's proposed version of the experiment.
> The issue is doing exactly what you're doing here and creating worthless goalposts to begin with
Claims from skeptics that "machines fundamentally cannot do X without real intelligence" are relatively easy to come by even now, which creates goalposts for intelligence by contrapositive (¬I => ¬X, so X => I).
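That contrapositive step can be checked mechanically; here's a minimal truth-table sketch (I and X are just the placeholder predicates from above):

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q is false only when p is true and q is false
    return (not p) or q

# (not I -> not X) is logically equivalent to (X -> I) for every assignment
for I, X in product([False, True], repeat=2):
    assert implies(not I, not X) == implies(X, I)
print("contrapositive holds in all cases")
```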
For me, Turing's test is interesting because fully solving it implies achieving all (or at least a very large class of) observable "X"s to the degree that current humans are capable of. If playing chess truly required intelligence, an interrogator could feed in chess moves; over a large enough experiment (so the human pool includes people who can and cannot play chess), a machine that cannot play would offer less evidence of personhood than the average person.
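As a toy illustration of that averaging argument (every number here is made up for the sketch, not data):

```python
import random

random.seed(0)

CAN_PLAY_RATE = 0.6   # assumed fraction of humans who can answer a chess probe
TRIALS = 10_000

# Evidence score per trial: 1 if the respondent handles the chess probe, else 0.
human_score = sum(random.random() < CAN_PLAY_RATE for _ in range(TRIALS)) / TRIALS
machine_score = 0.0   # a machine that cannot play chess never passes the probe

# Even though many individual humans also fail the probe, over many trials
# the non-playing machine offers less evidence than the average person.
print(human_score > machine_score)  # True
```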
I believe the overall impact is a push towards either "something can behave exactly like it is intelligent without being intelligent" or "machines can be intelligent". Both strike me as interesting, and increasingly common, viewpoints.
> Because it's too hard? Well obviously - that's why it's a goal, and not next month's scrimmage point!
Because the goal should be meaningful - "find the factors of this absurdly large semiprime" doesn't really say all that much about intelligence, and many other tests would cover only one particular notion of what intelligence is.