Think about it from an information theory standpoint:
A compiler's input fully specifies its output: it takes exactly the information it needs to produce a result, and it produces exactly that result every time (unless it's bad at its job or has a bug).
An LLM's prompt always contains far less information than would be needed to fully specify the desired output, and the model extrapolates from it. It fetches context and so on to give itself a glut of presumably relevant information, but the prompt still underdetermines the code it generates. If the prompt really did contain enough information, you'd just have written a far more verbose version of the program in natural language.
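The distinction above can be sketched in a few lines. This is a toy illustration, not a real compiler or model: `toy_compiler` stands in for any deterministic transformation, and `toy_llm` stands in for sampling from a distribution over completions (the completion strings and weights are made up):

```python
import random
from collections import Counter

def toy_compiler(source: str) -> str:
    # Deterministic: the output is a pure function of the input,
    # so the same source always yields the same result.
    return source.upper()

def toy_llm(prompt: str) -> str:
    # Probabilistic: the prompt underdetermines the output, so the
    # model samples from a distribution over plausible completions.
    completions = ["HELLO", "HELO", "HALLO"]   # hypothetical
    weights = [0.7, 0.2, 0.1]                  # hypothetical probabilities
    return random.choices(completions, weights=weights, k=1)[0]

# The compiler gives the same answer every time:
assert all(toy_compiler("hello") == "HELLO" for _ in range(100))

# The sampler usually agrees with the modal answer, but not always:
random.seed(0)
outputs = Counter(toy_llm("hello") for _ in range(1000))
print(outputs)  # mostly "HELLO", with a tail of variants
```

Lowering the sampling temperature (or greedy decoding) narrows the distribution, which is one sense in which the gap between the two behaviors can shrink without ever closing.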
No one said that
If you meant they’re now better at mimicking compilers, sure, but they’re only mimics.
We do not have "perfect" probabilistic transformation, and we probably never will (in part because it's hard to know what exactly that even means), but the gap between the two is shrinking every day.
Ergo:
> they're becoming (effectively) more and more similar every day.