Actually, yes. If the kid spends their whole life in the box and never invents a new block, that's just combinatorics. We don't call a chess engine "creative" for finding novel moves, because we understand the rules. LLMs have rules too; they're called weights.
I want LLMs to create, but so far every creative output I've seen is just a clever remix of training data. The most advanced models still fail a simple test: restrict the domain. For example, "invent a cookie recipe with no flour, sugar, or eggs," or "name a company without using real words." Suddenly their creativity collapses into either nonsense (violating the constraints) or trivial recombination: "ChocoNutBake" instead of "NutellaCookie."
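The "restrict the domain" test above can be partially automated, at least for the crude constraint-violation half. A minimal sketch, where the banned-ingredient list, the helper name, and the sample outputs are all hypothetical illustrations rather than a real benchmark:

```python
# Sketch of the constrained-generation test described above.
# BANNED and the sample strings are hypothetical, for illustration only.
BANNED = {"flour", "sugar", "egg", "eggs"}

def violates_constraints(recipe_text: str, banned: set[str] = BANNED) -> bool:
    """Return True if the recipe mentions any banned ingredient.

    Naive word-level check; a real evaluation would also have to catch
    synonyms and paraphrases ("sucrose", "all-purpose flour", etc.).
    """
    words = {w.strip(".,;:!?()").lower() for w in recipe_text.split()}
    return bool(words & banned)

# A compliant output passes; a constraint-violating one is flagged.
print(violates_constraints("Blend oats, dates, and cocoa; chill and slice."))   # False
print(violates_constraints("Cream the butter and sugar, then fold in flour."))  # True
```

The hard part, of course, is the other failure mode: a string can satisfy the letter of the constraints and still be nonsense or a trivial recombination, which no keyword filter will detect.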
If LLMs could actually create, we'd see emergent novelty: outputs that couldn't have existed in the training data. Instead, we get constrained interpolation.
Happy to be proven wrong. I'd like to see an example where an LLM's output is impossible to map back to its training data.