Of course there are cases where you can tell that some text is almost certainly LLM output, because it matches what ChatGPT would produce in reply to a basic prompt. You can also tell when a piece of writing has been copied and pasted from Wikipedia, or is just a page of Google results. Would any of that somehow be more worth reading if the author posted a video of themselves carefully typing it up by hand?
1: You're assuming a specific type of output in a specific type of context. If LLM output were never worth reading, ChatGPT would have no users.