Well said. The model that an LLM has is very simple: if text X precedes the current conversation, then, according to the model the LLM holds, the most likely continuation of the discussion is Y. Right?
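That conditional-probability view can be made concrete with a toy bigram sketch. This is a deliberately simplified stand-in for an LLM, not how one is actually built; the corpus and the function name are invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM is trained on vastly more text
# and conditions on the whole preceding context, not just one word.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word y follows each word x.
counts = defaultdict(Counter)
for x, y in zip(corpus, corpus[1:]):
    counts[x][y] += 1

def most_likely_continuation(x):
    """Given preceding word x, return the most probable next word y
    and its estimated probability P(y | x)."""
    following = counts[x]
    total = sum(following.values())
    word, n = following.most_common(1)[0]
    return word, n / total

print(most_likely_continuation("the"))  # -> ('cat', 0.666...)
```

The point of the sketch is only that the answer is always "the statistically likely next text given the preceding text": nothing in the mechanism builds a new model of the problem being discussed.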
So the point is that an LLM does not create models. It has only a single model, based on the probabilities of text sequences, created by its programmers. So it can (mostly?) solve only the problem of what would be a good textual response to an earlier text. It does that well, but most difficult problems don't fall into the category of "having a great chat".