It is always the eternal tomorrow with AI.
That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.
But I don't think the big companies are lying about how much of their code is being written by AI. I think back-of-the-napkin math will show the economic value of the output is already massive by any reasonable definition. And those companies are 100% taking the credit (and the money).
Also, almost by definition, every incentive is aligned for people in charge to deny this.
I hate to make this analogy, but I think it's absurd to expect "successful" slaveowners to cede the credit to their slaves. You can see where this would fall apart.
Bet you feel silly now!
But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike (the man) did -- and also created more problems, with a significantly worse ratio for sure, but the point still stands. I still think it counts as "impressive".
Am I wrong on this? Or if this "doesn't count", why?
I can understand visceral and ethically important reactions to any suggestions of AI superiority over people, but I don't understand the denialism I see around this.
I honestly think the only reason you don't see this in the news all the time is because when someone uses ChatGPT to help them synthesize code, do engineering, design systems, get insights, or dare I say invent things -- they're not gonna say "don't thank (read: pay) me, thank ChatGPT!".
Anyone that honest/noble/realistic will find that someone else is happy to take the credit (read: money) instead, while the person crediting the AI won't be able to pay their internet/ChatGPT bill. You won't hear from them, and you'll conclude that LLMs don't produce anything as impressive as Rob Pike's work. It's just Darwinian.
> But I think in the aggregate ChatGPT has solved more problems, and created more things, than Rob Pike did
Other people see that kind of statement for what it is and don't buy any of it.
ChatGPT is only 3 years old. Having LLMs create grand novel things and synthesize knowledge autonomously is still very rare.
I would argue that 2025 has been the year in which the entire world has been starting to make that happen. Many devs now have workflows where small novel things are created by LLMs. Google, OpenAI and the other large AI shops have been working on LLM-based AI researchers that synthesize knowledge this year.
Your phrasing seems overly pessimistic and premature.
The sensible ones do.
> nobody says "ugh look at this argument from authority, you should demand that the doctor show you the reasoning from first principles."
I think you're mixing up assertions with arguments. Most people don't care to hear a doctor's arguments, and I know many people who have been burned by accepting assertions at face value without a second opinion (especially for serious medical concerns).
I did code a few internal tools with the aid of LLMs, and they are delivering business value. If you account for all the instances of this kind of LLM application, the value created by AI is at least comparable to (if not greater than) the value created by Rob Pike.
But more broadly, this is like a version of the negligibility problem. If you gave every company one extra second of productivity, the summation would appear significant, yet it would make no real economic difference. I'm not entirely convinced that many low-impact (and often flawed) projects realistically provide business value at scale, or can even be compared to a single high-impact project.
I don't, and the fact you do hints to what's wrong with the world.
And guys don't forget that nobody created one off internal tools before GPT.
i might open source one of those i wrote, sooner or later. it's a simple bridge/connector thingy to make it easier for two different systems to work together and many internal users are loving it. this one in particular might be useful to people outside my current employer.
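a "bridge/connector thingy" like that usually boils down to an adapter: pull records out of one system's schema, translate field names and shapes, and hand them to the other system. here's a minimal hypothetical sketch in Python -- every name and field mapping below is invented for illustration, not taken from the actual tool:

```python
# Hypothetical sketch of a bridge between two systems: read records in the
# source system's schema, rename fields into the target schema, and drop
# records the target can't accept. All identifiers here are made up.

FIELD_MAP = {            # source field -> target field
    "cust_id": "customerId",
    "created": "createdAt",
    "amt_cents": "amountCents",
}

def translate(record: dict) -> dict:
    """Rename the fields of one source record into the target schema."""
    return {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}

def bridge(records: list) -> list:
    """Translate a batch, skipping records missing a required field."""
    out = []
    for rec in records:
        if "cust_id" not in rec:
            continue  # target system requires a customer id
        out.append(translate(rec))
    return out

if __name__ == "__main__":
    src = [
        {"cust_id": 7, "created": "2024-01-01", "amt_cents": 1250},
        {"created": "2024-01-02"},  # no customer id -> skipped
    ]
    print(bridge(src))
```

the real work in these tools is usually the field mapping and the edge cases, which is exactly the tedious-but-shallow territory where LLM assistance speeds things up.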
> And guys don't forget that nobody created one off internal tools before GPT.
moot point. i did this kind of one-off development before chatgpt as well, but it was much slower work. the example above took me a couple of afternoons, from idea to deployment.
not sure how you missed Microsoft introducing a loading screen when right-clicking on the desktop...
ChatGPT?