Thanks for adding the quote; that is a different part of the post than the one I was focusing on.
I still think that's a far cry from deterministically recognizing LLM-generated text. As I would understand it, that would mean an algorithmic test with very low rates of both false positives and false negatives. Instead, I understood the OP to be saying that people have an intuitive sense for LLM-generated text with a relatively low false negative rate.
I am certain that the skill varies widely between individuals, but there is no reason to suspect that, with training, humans could not become quite good at recognizing low-effort (no attempt to alter style) LLM-generated content from the major models. In principle it is no different from the authorship analysis used in digital forensics, a field that shows fairly high accuracy under similar conditions.