At this point, it's only people with an ideological opposition still holding this view. It's like trying to convince your gearhead grandpa that manual transmissions aren't relevant anymore.
Secondly, the scale of investment in AI isn't so that people can use it to generate a PowerPoint or a one-off Python script. The scale of investment is to achieve "superintelligence" (whatever that means). That's the only reason you would cover a huge percentage of the country in datacenters.
The proof that significant value has been provided would be value being passed on to the consumer. For example, if AI replaces lawyers, you would expect a drop in the cost of legal fees (notwithstanding the harm to the people losing their jobs). Nothing like that has happened yet.
Did Gemini write a CAD program? Absolutely not. But do I need 100% of the CAD program's feature set? Absolutely not. Just ~2% of it for what we needed.
To me, the arguments sound like “there’s no proof typewriters provide any economic value to the world, as writers are fast enough with a pen to match them and the bottleneck of good writing output for a novel or a newspaper is the research and compilation parts, not the writing parts. Not to mention the best writers swear by writing and editing with a pen and they make amazing work”.
All arguments that are not incorrect, and that sound totally reasonable in the moment — yet ten years later everyone is using typewriters, and the efficiency gains of doing so are well established.
The only justification for that would be "superintelligence," but we don't know if this is even the right way to achieve that.
(Also, I suspect the only reason they are as cheap as they are is the insane amount of money they've been given. They're going to have to raise their prices.)
The cost is just not worth the benefit. If it were just an AI company using profits from AI to improve AI, that would be another thing, but we're in a massive speculative bubble that has ruined not only computer hardware prices (which affect every tech firm) but power prices (which affect everyone). All because the government wants to hide a recession it created itself, because on paper it makes the line go up.
> I used to type out long posts explaining how LLMs have been enormously beneficial (for their price) for myself and my company.
Well then, congratulations on being in the 5%. That doesn't really change the point.
You’re making a lot of confident statements and not backing them up with anything except your feelings on the matter.
So you'd expect to see the results of that: AAA games released faster, at higher quality, and at a lower cost to develop. You'd expect Microsoft (one of the major investors and proponents) to be releasing higher-quality updates. You'd expect new AI-developed competitors for entrenched high-value software products.
If all that were true, it wouldn't matter what people do or don't argue on the internet, it wouldn't matter if people whine, and you wouldn't need to proselytize LLMs online — in that world, other people not using them would just be an advantage to your own relative productivity in the market.
Surely by now the results will be visible anyway.
So where are they?
LLMs are indeed currently an iterative improvement. I've found a few good use-cases for them. They're not nothing.
But at the moment, they are nowhere near the "massive productivity multiplier" they're advertised to be. Just as adding more lanes doesn't make traffic any better, perhaps they never will.
Or perhaps all the promises will come true -- and that, of course, is what is actually meant when the productivity gains are screamed from the rooftops. It was the same with computers, and it was the same with the internet: the proposed massive changes were going to come at some vague point in the future. Plenty of people saw those changes coming even decades in advance; reason from first principles and extrapolate the results of x scale and y investment and you couldn't not see where it was headed, at least generally.
The future potential is being sold in much the same way here. That'd be all fine and good except that the capex required to bring this potential future into being, compared to any conceivable revenue model, is so completely absurd that, even putting aside the disruptive-at-best nature of the technology, making up for the literal trillions of dollars of investment will have to twist our economic model to the point of breaking in order to make the math math. Add in the fact that this technology is tailor-made not just to disrupt or transform our jobs but to replace workers should this future potential arrive, and suddenly it looks nothing like computers in the 70s or networks in the 80s. It's no wonder not everyone is excited about it — the dynamic is, at its very core, adversarial; its very existence states the quiet part of class warfare out loud.
Which brings us to so many people being forced to use it. I really, really hate this. Just as I don't want to be told which editor/IDE to use, I don't want to be told how to program. I deeply care about and understand my workflow quite well, thank you very much — I've been diligently refining it for a good while now. And to state the obvious: if these tools were as good as they say, I'd be using them the way they want me to. I don't, because they just aren't that good (thankfully I have a choice in the matter — for now). I also just don't like using them while programming, as I find them noisy and oddly extraverting, which tires me out. They are antithetical to flow. No one ever got into a flow state while pair programming, or while managing a junior developer, and I doubt anyone ever got into a flow state while chatting with an LLM. It's just the wrong interface. The "better autocomplete" model is a better interface, but in practice I just haven't seen it do better than a good LSP or my own brain. At best it saves me a few keystrokes, which I'd hardly call revolutionary. Again, not nothing, but far from the promise. We're still a very long way off.
To get there, LLM developers need cash, and they need data. Companies are forcing LLMs into every nook and cranny of so many employees' workflows so that they can provide training data and bring that potential future one step closer to reality. The more we use LLMs, the more likely we are to be replaced. Simple as that.
I for one would welcome our new robot overlords if I had any faith that our society could navigate this disruption with grace and humanity. I'd be ecstatic and totally bullish on the tech if I felt it were ushering in a Star Trek-like future. But, ha, nope -- any faith I had in that sort of response died with how so many handled Covid, and especially when Trump was elected for a second time. These two events destroyed my estimation of humanity as a cooperative organism.
No, I now expect humanity at large — or at least the USA — to look at the stupidest, most short-sighted, meanest option possible and enthusiastically say "let's do that!" Which, coincidentally, is another way of describing what is currently happening with LLMs: forcing mediocre tools down our throats while cynically exploiting our "language = intelligence" psychological blind spot, raising utility prices (how is a company's electric bill my problem again?), killing personal computing, and accelerating climate change at the worst possible time, all in the name of destroying both my vocation and my avocation.
You could easily have a separate application that people could enable by choice, yet that's not happening. We have to roll with this new technology, knowing that it's going to make the world a worse place to live in when we can no longer choose how and when we get our information.
It's not just about feeling threatened; it's also about feeling like I'm going to be cut off from the method I want to use to find information. I don't want a chatbot to do it for me — I want to find and discern information for myself.
Workers hate AI not just because the output is middling slop forced on them from the top, but because the message from the top is clear: the goal is mass unemployment and a concentration of wealth among the elite unseen since France in 1789.
None of these are tech jobs, but we both have used AI to avoid paying for expensive bloated software.
I only use the standard "chat" web interface, no agents.
I still glue everything else together myself. LLMs enhance my experience tremendously, and I still know what's going on in the code.
I think the move to agents is where people become disconnected from what they're creating, and that disconnection is the source of all this controversy.