They’re not, and in their current form and architecture they never will be.
Compilers are mechanical and engineered to produce correct output. A compiler emitting incorrect machine code is exceedingly rare, and considered a bug. They do contain heuristics and probabilities, but those only pick between a set of known-good outputs.
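A minimal sketch of that point: a toy code generator where each operation maps to a fixed, known-good instruction template. The ops and templates here are invented for illustration, but the property is the one described above: same input, same output, every time, and anything outside the table is a hard error rather than a guess.

```python
# Toy "code generator": every IR op maps to a fixed, known-good template.
TEMPLATES = {
    "add": "add {dst}, {a}, {b}",
    "mul": "mul {dst}, {a}, {b}",
}

def lower(op, dst, a, b):
    # An unknown op raises KeyError: a compiler fails loudly,
    # it does not emit its "best guess".
    return TEMPLATES[op].format(dst=dst, a=a, b=b)

print(lower("add", "r0", "r1", "r2"))  # → add r0, r1, r2
```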
An AI is a bag of weights outputting a probability distribution over which token plausibly follows [1]. It is inherently probabilistic, and its output is organic (by design: it is built to mimic human speech), as opposed to mechanical like a compiler’s.
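To make the contrast concrete, here is a sketch of that last step of an LLM: logits over a (hypothetical, three-word) vocabulary go through a softmax into a probability distribution, and the emitted token is sampled from it. Even a low-probability token can come out; nothing here guarantees correctness.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution summing to 1.
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
vocab = ["foo", "bar", "baz"]
probs = softmax([2.0, 1.0, 0.1])

# Sampling: the most plausible token is merely the most likely,
# not the only possible, output.
token = random.choices(vocab, weights=probs, k=1)[0]
```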
A compiler follows hard rules. An AI does its best.
And to be fair, AIs are no better than humans in this regard: humans are pretty bad at generating correct code without mechanical tools to keep them in line (compilers, linters, formatters). It’s no wonder we use the same tools to keep LLM output in line as we do for humans. (And LLMs are better than humans at one-shotting valid code.)
[1]: to those who tell me this view of an LLM is outdated: nope. The heavy lifting is done in the probability generation. Debates about understanding are not relevant here; the net output of an LLM is a probability vector over raw tokens. Contrast that with a compiler, whose output stage is a glorified Jinja template.