When that is combined with the fact that transformers provably can implement proper deterministic sorting algorithms, shouldn't the benefit of the doubt go to the transformer having actually learned a sorting algorithm?
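For concreteness, here is a rough numpy sketch of one well-known construction of that kind: compute each element's rank from pairwise comparisons, then have each output position hard-attend to the element of its rank. The function name and the one-hot "attention" matrix are my own illustration of what attention heads *can* compute, not a claim about what a trained LLM actually does:

```python
import numpy as np

def attention_style_sort(x):
    """Sort via rank-then-select, the way a hard-attention head could."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    idx = np.arange(n)
    # Pairwise comparisons: less[j, k] = 1 iff x[k] sorts before x[j]
    # (ties broken by original position, so ranks are a permutation).
    less = (x[None, :] < x[:, None]) | (
        (x[None, :] == x[:, None]) & (idx[None, :] < idx[:, None])
    )
    rank = less.sum(axis=1)  # rank[j] = how many elements precede x[j]
    # "Hard attention": output position i attends only to the element of rank i.
    attn = (rank[None, :] == idx[:, None]).astype(float)  # one-hot rows
    return attn @ x  # each output position copies its attended value

print(attention_style_sort([3, 1, 2, 1]))  # [1. 1. 2. 3.]
```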
LLMs aren't plastic in the sense that their weights are frozen at inference time: they don't learn anything when they aren't being trained. But they can be trained to execute different programs depending on the contents of the context window, e.g. when it contains "wrong, try again:", so maybe they can learn from their mistakes in that sense.
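A hedged sketch of that "wrong, try again:" loop, to make the distinction concrete: the weights never change, but each failed attempt gets appended to the context, so the next sample is conditioned on the mistake. `generate` and `is_correct` here are hypothetical stand-ins, not any real library's API:

```python
def solve_with_feedback(generate, is_correct, question, max_tries=3):
    prompt = question
    for _ in range(max_tries):
        answer = generate(prompt)  # frozen weights; no gradient update
        if is_correct(answer):
            return answer
        # Any "learning" happens only in the context window: append the
        # failure plus a correction cue, then sample again conditioned on it.
        prompt += f"\n{answer}\nwrong, try again:"
    return None
```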
But if you could teach an LLM to sort by explaining it in the context window, the network would necessarily already have learned and stored a sorting algorithm somewhere; the text "here is how sorting is done: [...]" would just serve as the trigger for that function call.