It can, but that does not mean that what is generated is not new, unless the rules in question constrain the set of possible outputs to the point where only one outcome is possible.
If I tell you that a novel has a minimum of 40,000 words, it does not mean that no novel is, well, novel (not sorry), just because I've given you rules to stay within. Any novel will in some sense be "derived from" an adherence to those rules, and yet plenty of those novels are still new.
The point was that by describing a new language in a zero-shot manner, you ensure that no program in that language exists either in the training data or in the prompt, so what it generates must at a minimum be new in the sense that it is in a language that has not previously existed.
If you then give further instructions for a program that incorporates constraints unlikely to have been used before (though this is harder), you can further ensure the novelty of the output along other axes.
You can keep adding arbitrary conditions like this, and LLMs will continue to produce output. Human creative endeavour is often similarly constrained by rules: rules for formats, rules for competitions, rules for publications. Yet nobody would suggest this means the output isn't new or creative, or that the work is somehow derivative of the rules.
This notion is setting a bar for LLMs we don't set for humans.
> That's pretty much what LLM training does, extracting patterns from a huge corpus of text. Then it goes on to generate according to the patterns.
But when you describe a new pattern as part of the prompt, the LLM is not being trained on that pattern. It generates by interpreting what it is told in terms of the concepts it has learned, and developing something new from that, just as a human working within a set of rules is not creating merely derivative work simply because we have past knowledge and have been given rules to work to.
> It can not generate a new pattern that is not a combination of the previous one.
The entire point of my comment was that this is demonstrably false, unless you are talking strictly in the sense of a deterministic universe in which everything, including everything humans do, is a combination of what came before. In which case the discussion is meaningless.
Specific models can be better or worse at this, but unless you can show that humans somehow exceed the Turing computable, there isn't even a plausible mechanism by which humans could, even in theory, produce anything so much more novel that it would be impossible for LLMs to produce something equally novel.