LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment, and we need to come to terms with it as the work continues to develop. We need to adapt, and quickly, to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.
If the future doesn't turn out to be revolutionary, you've done some "unnecessary" work at worst, but you might have acquired some skills or value at least. In the case of most well-off programmers, I suspect buying assets/investments that can afford them at least a reasonable lifestyle is a likely option too.
So the default position of staying stationary, and assuming the world continues the way it has been, is not such a good idea. One should always assume the worst possible outcome and plan for that.
Maybe if you work e-commerce or in the military.
But how do you even translate this line of thought for today?
Are your EMP defenses up to speed?
Are you studying Russian and Chinese while selling kidneys to afford your retirement home on Mars?
My point being: you can never plan for every worst outcome. In reality, you would have a secondary data center, backups, and a working recovery routine.
None of which has anything to do with whether you use autocomplete or not.
Look, we see the forest. We are just not impressed by it.
Having unlimited chaos monkeys at will is not revolutionizing anything.
There's no guarantee a technology will take off, even if it's really, really good. Because we don't decide if that tech takes off - the lawyers do. And they might not care, or they might decide billing more hours is better, actually.
The guiding principle of biglaw.
Attorneys have the bar to protect them from technology they don’t want. They’ve done it many times before, and they’ll do it again. They are starting to entertain LLMs, but not in a way that would affect their billable hours.
History majors everywhere are weeping.
When I graduated high school, I had never been on the internet, nor did I know anyone who had. The internet was this vague "information superhighway" that I didn't really know what to make of.
If you are of a certain age, though, you might think a pointless update to React was all the change that was ever coming.
That time is over and we are back to reality.
Or maybe they just know the nitty-gritty inherent limitations of technology better than you.
(inb4: "LLMs can't have limitations! Wait a few years and they will solve literally every possible problem!")
If you always say that every new fad is just hype, you'll be right 99.9% of the time. But if you want to be more valuable than a rock (https://www.astralcodexten.com/p/heuristics-that-almost-alwa...), you need to dig into the object-level facts and form an opinion.
In my opinion, AI has a much higher likelihood of changing everything very quickly than crypto or similar technologies ever did.
If you want to convince skeptics, talk about examples: vibe code a successful business, show off your success with using AI. Telling people it's the future, and that if you disagree you have your head in the sand, is wholly unconvincing.
This doesn't feel like that. The applications of generative AI have become self-evident to anyone who has followed their rise. Specific applications of AI resemble snake oil, and there are hucksters who pivoted from crypto to AI, but the ratio of legit use cases to scams isn't even close.
If anything, the incentives for embellishment have flipped since crypto. VC-funded AI companies will dreamily fire press releases about AI taking us to Mars, but it doesn't have the pseudo-grassroots quality of cryptocurrency hype. The average worker is incentivized to be an AI skeptic. The rise of generative AI threatens workers in several fields today, and has already negatively impacted copywriters and freelance artists. I absolutely understand why people in those fields would respond by calling AI use unethical and criticize the shortcomings of today's models.
We'll see what the next few years hold. But personally, I foresee AI integration ramping up. Even if the models themselves completely stagnate from this point on, there's a lot of missing glue between the models and the real world.
Don't let people and companies lazily using AI to put out low-quality content blind you to its potential, or to the reality of what it can do right now. Look at Google's VO3: most people in the world right now won't be able to tell that its output is AI-generated and not real.
They can’t write me a safety-critical video player meeting the spec with full test coverage using a proprietary signal that my customer would accept.
I don't need to convince anyone that LLMs are enabling me to do a lot more. This is what makes this hype different. It has bones. Once you've found a way to leverage them, they're undeniably helpful regardless of your prior disposition. Everyone else can say they're not useful, and it rings hollow, because they obviously are to me. And thus probably useful to everyone else too.
Instead, we have a tiny handful of one-off events that were laboriously tuned, tweaked, and massaged over extended periods of time, and a flood of slop in the form of broken patches, bloated and misleading issues, and nonsense bug bounty attempts.
Then the people who congratulate the AI for helping get yelled at by the other category.