At some point, specialized code-gen transformer models should get really good at just spitting out the lowest level code required to perform the job.
The best code is that which explains itself most efficiently and readably to Whoever Reads It Next. That's even more important with LLMs than with humans, because the LLMs probably have far less context than the humans do.
Developers often fall back on standard abstraction patterns that don't have good semantic fit with the real intent. Right now, LLMs are mostly copying those bad habits. But there's so much potential here for future AI to be great at creating and using the right abstractions as part of software that explains itself.
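To make "semantic fit" concrete, here is a small, purely illustrative sketch (all names are hypothetical): the first version reaches for a generic strategy pattern, while the second just says what the business rule is.

```python
# Hypothetical example of an abstraction with poor semantic fit:
# a generic "strategy" wrapper that hides a single trivial rule.
class DiscountStrategy:
    def apply(self, price: float) -> float:
        raise NotImplementedError

class TenPercentOff(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price * 0.9

def checkout(price: float, strategy: DiscountStrategy) -> float:
    # The reader must chase the strategy object to learn the actual rule.
    return strategy.apply(price)

# Versus code whose name states the real intent directly,
# so Whoever Reads It Next (human or LLM) needs no extra context:
def price_with_loyalty_discount(price: float, rate: float = 0.10) -> float:
    """Loyal customers get a flat percentage off."""
    return price * (1 - rate)
```

Both compute the same thing, but the second version carries its intent in its name, which matters more when the reader is a model with a limited context window.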
Fundamentally, computers are a series of high and low voltages, and everything above that is a stack of abstractions and interpretations.
Fundamentally there will always be some level of this: it’s not like an A(G)I will interface directly using electrical signals (though in some distant future it could).
However, I believe this current phase of AI (LLMs + generators + tools) is showing that computers do not need to solve problems the same way humans do, because computers face different constraints.
So the abstractions programmers use to manage complexity won’t be necessary (at some future point).
Future programming language designers are then answering questions like:
"How low-level can this language be, given that generally available models and hardware can only generate so many tokens per second?",
"Do we have the language models generate binary code directly, or is it still more efficient time-wise to generate higher-level code and use a compiler?"
"Do we ship this language with both a compiler and language model?"
"Do we forsake code readability to improve model efficiency?"
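The token-throughput question above can be made concrete with a rough back-of-envelope calculation. Every number here (lines needed, tokens per line, generation speed) is an illustrative assumption, not a measurement:

```python
# Rough, illustrative estimate of how long a model takes to emit a program
# at different abstraction levels. All figures are assumptions for argument's sake.
def generation_seconds(lines: int, tokens_per_line: int, tokens_per_second: int) -> float:
    """Time for a model to stream out a program of the given size."""
    return lines * tokens_per_line / tokens_per_second

# Assume a task needs ~50 lines in a high-level language but ~2000 lines
# of assembly, with the model streaming ~50 tokens/second either way.
high_level = generation_seconds(lines=50, tokens_per_line=10, tokens_per_second=50)
low_level = generation_seconds(lines=2000, tokens_per_line=8, tokens_per_second=50)
print(high_level, low_level)
```

Under these made-up numbers the low-level version takes over 30x longer to generate, which is one reason "generate high-level code and compile it" may stay attractive even for models.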
There are people who enjoy code for the sake of it, but they're a very, very small group.
Do you ever think twice about the Bayer filter applied to your CMOS image sensor?