It's amusing, but when it comes to doing actual work, I just
don't care if my LLM fails things like this.
I'm not trying to trick it, so falling for tricks is harmless for my use cases. Does it write quality, secure code? Does it give me accurate answers about coding/physics/biology? If it gets those wrong, that's a problem. If it fails to solve riddles, well, that'll be a problem iff I decide to build a riddle solver with it.