The shapes of clouds and the positions of stars aren't completely random; they carry useful information to varying degrees (e.g. some clouds really do look like a rabbit, enough that a majority of observers will agree). The mechanism at play with the LLM is completely different: the connection between the two dog-inputs and the resulting game barely exists, if it exists at all. Maybe the only signal is "some input was entered, therefore the user wants a game".
If you could have gotten the same result with any input, or with /dev/random, then effectively no useful information was encoded in the input. The initial prompt and the scaffolding do encode useful information, however, and are the ones doing the heavy lifting; the article admits as much.
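To make the "same result with any input" point concrete: if the output doesn't vary with the input, the mutual information between them is zero. Here's a minimal sketch (the toy pipelines and data are made up for illustration, not from the article) comparing a pipeline whose output depends on the input against one that produces the same "game" no matter what you type:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (in bits) between the two
    components of a list of (input, output) pairs."""
    n = len(pairs)
    px = Counter(x for x, _ in pairs)   # marginal counts of inputs
    py = Counter(y for _, y in pairs)   # marginal counts of outputs
    pxy = Counter(pairs)                # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), with counts
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

inputs = ["dog", "cat", "dog", "bird", "cat", "dog"]

# Pipeline A: the output actually depends on the input.
echo = [(x, x.upper()) for x in inputs]

# Pipeline B: every input yields the same output -- the stand-in
# for "any input, or /dev/random, gives you the same game".
constant = [(x, "GAME") for x in inputs]

print(mutual_information(echo))      # > 0: input carries information
print(mutual_information(constant))  # exactly 0: input carries none
```

Whatever information shows up in the output of pipeline B has to come from somewhere other than the input, which is exactly the role the prompt and scaffolding play here.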