Right.
It sounds to me like you agree with the comment and are largely restating it, just framed as disagreement.
I'm sure I'm missing something.
I tend to agree with them. What people seem to miss about LLM coding systems, IMO:
a) judging an LLM's coding ability after a brief browser session with 4o/claude is like waking a coder up in the middle of the night and asking them to recite perfect code on the spot. A lot of people interact with LLMs exactly that way, decide it's meh, and write the whole thing off.
b) most people haven't tinkered with systems that incorporate more of the tools human developers use day to day. They'd be surprised by what even small, local models can do.
c) LLMs seem perfectly capable of adding another layer of abstraction on top of whatever "thing" they get good at. Good at summaries? Cool, now abstract that into memory. Good at Q&A? Cool, now abstract that over document parsing for search. Good at coding? Cool, now abstract that over software architecture.
d) Most people haven't seen any RL-based coding systems yet. That's fun.
----
Now, of course the article is perfectly reasonable, and we shouldn't take what any CEO says at face value. But I think the pessimism, especially in coding, is also misplaced, and will ultimately be proven wrong.
However, I worry about the premise underlying your reply: a sense that this is somehow incompatible with the viewpoint being discussed.
i.e. it's perfectly cromulent to think LLMs are and will continue to be awesome at coding, even to believe they'll get much better, and at the same time to believe that you could give me ASI today and there'd still be an incredibly long tail of work and processes to reformulate before most labor could be replaced. It's like having infinite PhDs available by text message. Unintuitively, not that much help. I can't believe I'm writing that lol. But here we are.
Steve Sinofsky had a couple of good long posts on X that discuss this far better than I can.
This is a good question, and I worry you won't get a response. Here's a pattern I've observed in the LLM space far more often than random chance would suggest:
Bob: "Oh, of course it didn't work for you, you just need to use an ANA (amazing new acronym) model"
Alice: "Oh, that's great, where can I see how ANA works? How do I use it?"
** Bob has left the chat **