What's the counterargument? What's a less superficial test we could use instead, one that conclusively shows human minds aren't just very sophisticated LLMs? There isn't one -- this is the same Chinese Room argument we've been debating for decades. The topmost poster is simply assuming that language models can't possibly understand the way a human does, without relying on any kind of "test" at all, which I think is the real scientific dead end here.