Ask HN: Is there evidence that LLMs can extrapolate to new ideas?
ML models are known to be good at interpolating between points in the training set, but much worse at extrapolating beyond it. Both can produce innovation when applied to science: for example, a lot of innovation comes from transferring existing ideas from one domain to another.
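To make the distinction concrete, here's a minimal toy sketch (my own illustration, not from the post): a polynomial fit to sin(x) on a fixed interval tracks the curve well inside that interval but diverges badly outside it.

```python
import numpy as np

# Fit y = sin(x) on [0, 2*pi] with a degree-5 polynomial.
x_train = np.linspace(0, 2 * np.pi, 200)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=5)

x_in = np.pi        # inside the training range -> interpolation
x_out = 4 * np.pi   # outside the training range -> extrapolation

err_in = abs(np.polyval(coeffs, x_in) - np.sin(x_in))
err_out = abs(np.polyval(coeffs, x_out) - np.sin(x_out))

# Interpolation error is tiny; extrapolation error blows up.
print(f"interpolation error: {err_in:.4f}")
print(f"extrapolation error: {err_out:.1f}")
```

The analogous question for LLMs is whether their "training range" in idea-space has the same hard boundary.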
We're starting to see examples of LLMs proving novel math theorems: https://news.ycombinator.com/item?id=48071262. But those results could still be "interpolation". Is there any strong evidence for or against LLMs being capable of extrapolation?