Out-of-distribution tests. If the concept holds consistently over a long period, the concept is the stable thing; if not, the model is only memorizing string densities.
For a number of years we've basically been showing the first to be the case, differentially against the second, especially as the model is scaled and the context grows. String-density probabilities can be surprisingly brittle, to be honest. The curse of dimensionality applies to them too, believe it or not: the space of possible substrings explodes with length, so most long substrings are never seen even once in training. That, I believe, is why topic discussion, reasoning, and integration over longer distances of text is the differential test that shows pretty clearly that substring memorization/text-density stuff is not 'just' what the model is learning. Mathematically, statistically, from an information-density perspective, etc., it would otherwise be basically impossible, I think.
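A minimal sketch of the brittleness point (my own toy example, not from anything above): even on a tiny synthetic vocabulary, the fraction of held-out n-grams that were ever seen in training collapses as n grows, which is the curse of dimensionality hitting raw string-density estimates.

```python
import random

def ngram_coverage(train, test, n):
    """Fraction of held-out n-grams that appeared at least once in training."""
    seen = {tuple(train[i:i + n]) for i in range(len(train) - n + 1)}
    held = [tuple(test[i:i + n]) for i in range(len(test) - n + 1)]
    return sum(g in seen for g in held) / len(held)

random.seed(0)
vocab = [f"w{i}" for i in range(50)]            # 50-word toy vocabulary
train = [random.choice(vocab) for _ in range(5000)]
test = [random.choice(vocab) for _ in range(1000)]

for n in (1, 2, 3, 4):
    # Coverage drops sharply with n: ~1.0 for unigrams, near 0 for 4-grams,
    # because the number of possible n-grams grows as 50**n.
    print(f"{n}-gram coverage of held-out text: {ngram_coverage(train, test, n):.3f}")
```

A pure density model has no estimate at all for the unseen n-grams, so anything it gets right at long range can't be coming from memorized substring frequencies alone.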
That's my best understanding, at least.