It seems clear to me, therefore, that further improvements in programming ability will not come from better LLMs (which have not really improved much), but from better integration with more advanced compilers. That is, the more kinds of errors the compiler can catch, the better the chance of the AI fuzzing its way to a good overall solution. Interestingly, I hear anecdotally that current LLMs are not great at writing Rust, which does have an advanced type system able to capture more kinds of errors. That’s where I’d focus if I were working on this. But we should be clear that the improvements are already largely coming via symbolic means, not better LLMs.
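To make the point concrete, here's a minimal sketch (names are my own, purely illustrative) of the kind of error class Rust moves from runtime to compile time. A generated patch that forgets the `None` arm simply fails to compile, giving the iterating model a precise error to react to rather than a silent runtime failure:

```rust
// Hypothetical example: parsing a port number without any runtime null checks.
fn parse_port(s: &str) -> Option<u16> {
    // str::parse returns a Result; .ok() converts it to an Option.
    s.parse::<u16>().ok()
}

fn describe(port: Option<u16>) -> String {
    // The compiler rejects this match unless both variants are handled,
    // so dropping the None arm is a compile error, not a latent bug.
    match port {
        Some(p) => format!("port {}", p),
        None => "invalid port".to_string(),
    }
}

fn main() {
    println!("{}", describe(parse_port("8080")));
    println!("{}", describe(parse_port("not-a-port")));
}
```

In a dynamically typed language the equivalent mistake surfaces only when the bad input actually arrives; here the feedback loop closes at compile time, which is exactly the signal a fuzzing-style generate-and-check loop can exploit.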
About a year ago I wrote some notes on the irony of LLMs being considered a refutation of GOFAI when they are actually now firmly recapitulating that paradigm: https://neilmadden.blog/2024/06/30/machine-learning-and-the-...