Pretraining has given the LLM a huge set of lego blocks that it can assemble in a wide variety of ways (though still limited by the "assembly patterns" it has learned). If the LLM assembles some of these legos into something that wasn't directly in the training set, then we can call that "novel", even though everything needed to build it was present in the training set. A more accurate way to think of this might be that these "novel" lego assemblies are all part of the "generative closure" of the training set.
Generating math proofs is an example of this - the proof itself, as an assembled whole, may not be in the training set, but all the component pieces and thought patterns needed to construct it were there.
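The "closure" idea can be made concrete with a toy sketch. Everything here - the string "blocks", the single concatenation rule, the depth limit - is my own illustrative choice, not a formal definition: treat the training set as a set of blocks and the learned assembly patterns as combination rules, and the generative closure is everything reachable by repeatedly applying those rules.

```python
# Toy illustration of "generative closure" (an analogy, not a formal model):
# items in the closure may never appear verbatim in the training set,
# yet require nothing beyond what the training set provided.

def generative_closure(blocks, rules, max_steps=3):
    """Everything derivable from `blocks` by applying `rules` up to max_steps times."""
    known = set(blocks)
    for _ in range(max_steps):
        # Apply every rule to every pair of known items.
        new = {rule(a, b) for rule in rules for a in known for b in known} - known
        if not new:
            break
        known |= new
    return known

# "Training set" of blocks, with concatenation as the only assembly pattern.
blocks = {"ab", "cd"}
rules = [lambda a, b: a + b]
closure = generative_closure(blocks, rules, max_steps=2)

# "abcd" is "novel" (it isn't a training block) but lies inside the closure;
# "xy" uses symbols the training set never contained, so it can never appear.
```

The point of the sketch is that "novelty" and "inside the closure" are compatible: the proof-like assembly `"abcd"` was never stored anywhere, but nothing outside the training set was needed to produce it.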
I'm not much impressed with Karpathy's LLM autoresearch! I suppose this sort of thing is part of the day-to-day activities of an AI researcher, so might be called "research" in that regard, but all he's done so far is hyperparameter tuning and bug fixing. No doubt this can be extended to things that actually improve model capability, such as designing post-training datasets and training curricula, but the bottleneck there (as any AI researcher will tell you) isn't the ideas - it's the compute needed to run the experiments. This isn't going to lead to the recursive self-improvement singularity that some are fantasizing about!
I would say these types of "autoresearch" model improvements, and pretty much anything current LLMs/agents are capable of, all fall under the category of "generative closure", which includes things like tool use that they have been trained to do.
It may well be possible to retrofit some type of curiosity onto LLMs, to support discovery and go beyond the "generative closure" of what they already know, and I expect that's the sort of thing we may see from Google DeepMind in the next 5 years or so in their first "AGI" systems - hybrids of LLMs and hacks that add functionality but don't yet have the elegance of an animal cognitive architecture.