But people do have entire belief systems built around humans having a special position in the world.
In a way it's saying the same thing that you and I have said, just a bit more generally and eloquently.
In another way it's wisely asking us, "so what?".
What I disagreed with was not Dijkstra, but applying the quote to AI today. Even if you think there shouldn't be anything interesting about the question with AI either, the social context means that there very clearly is, whether or not you consider people's beliefs around it reasonable.
To rephrase: At the time, computers unambiguously came nowhere near the line, just as a sub comes nowhere near replicating swimming. That made the question ludicrous and the comparison a good illustration.
Today there is ambiguity with computers, but still none with subs, and that ambiguity matters deeply to a lot of people in a way the question of subs swimming never will, even if you close that gap. As such, the comparison has lost its utility.
When I read it, I assume it is a retort to hysterical sentiments very similar to the ones we see today. I don't read it as saying that a sub swimming is as ridiculous as a computer thinking, but rather that the question "is that machine swimming or not?", much like in the robot walking example, isn't particularly valuable.
We don't break a sweat when we say that a robot is walking, because we don't care about walking; we haven't internalised it as the final frontier of humanness. I read the quote as saying that whether a computer can think should be as pointless a question as whether a robot can "walk".
I'm intrigued enough now to try to hunt down the context.