>Look inside.
>Written by someone having a stake in LLM business.
Every time.
When's the last time you saw management tell you which compiler or toolchain you need to use to build your code? But now we have CEOs and management dictating how coding should be done.
In the article the author admits: "I started coding again last year. But I hadn't written production code since 2012", and then goes on to say: "While established developers debate whether AI will replace them, these kids are shipping."
Then I ask myself: what are they selling? And lo and behold, it is AI/ML consulting.
In The Sirens of Titan, Vonnegut tells a story in which governments decided to boost the space industry to drive aggregate demand.
This is exactly what is happening. When you realize that the whole thing is predicated on building and selling more $100,000 GPUs (and the solution to every problem therein is to use even more GPUs), everything really comes into focus.
Asking for a friend.
I suspect this hype cycle won't end until a new one forms, or until some technology or catastrophic event (disease, war) changes focus and allows the same delaying tactics.
Look at stock price trajectories before and after COVID.
When this bubble bursts, the ensuing chaos will be used in a similar manner.
As an engineer, development still comes down to requirements gathering, solid engineering principles, and the tools we already have at our disposal - network calls, rendering the UI, orchestrating containers and jobs, etc.
All that is to say that I thought AI was going to be sexy, like Westworld, and not so boring...
Westworld robots are still a long way off, but think about how far we’ve come so quickly.
It’s pretty incredible that natural language computing is now seen as boring when it barely even existed 5 years ago.
It'll never be AGI or superintelligence, it won't create or cause the singularity, and it'll never be a substitute for learning, practicing, and honing skills into mastery. For the fields LLMs do displace in part or in whole, I still expect it'll largely displace the mediocre or the barely-passable, not the competent or experts. Those experts will, once the bubble pops and the hype train derails, find the novel and transformative uses for LLMs outside of building moats for big enterprises or vamping for investor capital.
I especially enjoy the on-prem/locally-run angle, as I think that is where much of the transformation will occur - in places like homes, small offices, or private datacenters where a GPU or two can accelerate novel tasks for the entity using it, without divulging data to corporate entities or outright competitors. Inference is cheap, and a modest gaming GPU or AI accelerator can easily support 99.9% of individual use cases offline, with the right supporting infrastructure (which is improving daily!).
All in all, an excellent post.
I often find people contest this with the non sequitur of "No, it's not a bubble, there is real value there. We are building things with it". The fact that there is real value in the technology does not in any way contradict that we are in a bubble. It may even be supporting evidence for it. Compare with the dot-com bubble: nobody would tell you there was no value in the internet. But it was still a bubble. A massive, hyperinflated bubble. And when it popped, it left large swathes of the industry devastated, even while a residual set of companies were left to carry on and build the "real" eventual internet-based reworking of the entire economy, which took 10-15 years.
People would be well advised to have a look at this point in time at who survived the dot com bubble and why.
E.g. crypto displayed many, many characteristics of a bubble for a number of years, but the crypto bubble seems to have just slowly stopped growing, rather than popping in a fantastical way. (Not to say it still can't, of course)
Then again, this bubble is different in that it has engulfed the entire US economy (including public companies, which is the scary part since the damage potential isn’t limited to private investors). If there’s even a 10% chance of it popping, that’s incredibly frightening.
I personally think a crash is more likely than not, but I think we should not assume that history will follow a particular pattern like the dot com bust. There are a variety of ways this can go and anyone who tells you they know how it’s all going to shake out is either guessing or trying to sell you something.
It is for sure an interesting time to be in the industry. We’ll be able to tell the next generation a lot of stories.
Bitcoin is now worth 2.3 trillion dollars. The price graph looks like a hockey stick. For tokens in a self contained ledger system.
You may be conflating hype and bubble.
It had minor 15-20% corrections, but kept rising for another year or two after that...
Bubbles are driven by irrational beliefs. And they wouldn't be irrational if we could understand them.
AI is surely a bubble, but it can go on for another 3-4 years before something, possibly unrelated to AI, pops it.
The crowd is always wrong on these things. Just like everyone "knew" we were going into a deep recession sometime in late 2022, early 2023. The crowd has an incredibly short memory too.
What it means is that people are really cautious about AI. That is not a self-reinforcing, fear-of-missing-out, explosive bubble process. That is a classic bull market climbing a wall of worry.
I'm reminded of the motto of the Royal Society: Nullius in verba.
That said, the job market is not as crazy as it was during the dot-com era; in fact, right now most technologists are finding it more difficult to find work. Most of this AI hype started when the employment market began to slow down. Usually these bubbles pop after the employment market goes crazy. Employment starts to go nuts when crazy money enters the picture. So if, for example, the Fed really starts to cut rates and/or investment really picks up and we have another boom period, the tail end of that historically seems to be when bubbles pop.
Put another way, there is a good chance that the bubble will continue to inflate for a few years before it pops.
Meta has been offering 7-figure salaries for AI talent. This is a very different bubble from the dot-com bubble. The hiring frenzy is limited to a very small group of people with unique skills/experience that few possess. Meanwhile, thousands of other people are being let go in order to pay those big salaries to a few people (and in order to buy more GPUs). The C-suite has become obsessed with the idea that they're going to need far fewer engineers, and they're hiring/firing like it.
Shipping where? What production? What kids? I've yet to see this. I see the tools everywhere, but not anything built with them. You'd think it would be getting yelled about from the mountaintops, but I'm still waiting.
A whole bunch of folks got into management thinking coding is beneath them, they are now wielding the power - let the code-monkeys do the typing. Then, turns out, coders are continuing to call the shots, and the management folks have coder-envy.
Now, with LLMs, coding is again not only within management's reach, but they think it is trivial, and it can be outsourced to the LLM code-monkeys, and management has regained power from the pesky coder-class.
So, you have management of all stripes "shipping" things, and dictating what coders should do - not realizing that they should stay in their lanes, and let coders decide for themselves what works best in their craft.
It's struck me as odd that managers of software engineers would seek to negate the field of software development almost completely. But maybe you're onto something.
Heck they did it with languages for the longest time. Here's twitter, we built it on Rails, everyone use Rails! Facebook, built on PHP, everyone use PHP! Feels weird that if these AI tools are doing all this work that no one is showing it off.
I'm extremely tired of bespoke solutions when OTS or already-known would work just fine.
Fair enough, but he doesn't give much factual reasoning to support that. If you believe the brain is a biological computer and AI computing keeps advancing, at some point it will be able to do the same stuff or better, which is what most people think of as AGI.
I wrote about that for my uni entrance exam 43 years ago and it's always just seemed obvious common sense to me. I know Turing wrote about it before then but I never read that - it's just seems kind of obvious it'll happen.
Roger Penrose made an argument along the lines that we can know a Gödel sentence is true without being able to prove it, but an AI can't; I think you can figure both are guessing in a similar pattern-recognising kind of way.
As a specialist in one of the original industries Geoffrey Hinton predicted would be gone (Radiology) my job remains safe and even more in demand 9 years later.
Meanwhile, as a hobbyist programmer, I’m suddenly able to build multiple production tools solving real problems, simply because AI agents are doing the scut-work for me and optimising my time into code review and architectural design. For $200/month, it’s paying for itself many times over.
I call bullshit. Let's see some repos.
And this is why Matt Levine calls Sam Altman the greatest business negger of all time
The $560B for those who believe in AGI isn't about ROI using today's money-in/money-out formula; it's about power positioning for a post-capitalist transition.
Every major player knows that whoever controls the infrastructure once the threshold is crossed might control what comes after.
The "bubble" narrative assumes these actors are optimizing for quarterly returns rather than civilizational leverage.
I could also say, if you truly believe nuclear fusion is imminent we will have infinite free energy and all current economic metrics are meaningless. But there is no nuclear fusion bubble. Why not? Because people don't believe nuclear fusion is imminent. But for some reason they do believe AGI is imminent - despite there being no actual evidence of that. There is probably less understanding of what is needed to close the gap to true AGI than there is to close the gap to make nuclear fusion possible.
The only distinction here is what people are willing to "believe" based on pure conjecture - which is why I class it as a true bubble.
It’s a religion. Repent now, the AGI is coming.
That's more or less true for predicting any new financial trend.
If AI is making devs 20-30% more efficient, then you could invest in tech stocks if you think they can ship as much with lower overhead. The financial metrics look better if that's true.
I also don't understand the value of using AI to write stuff in loads of unfamiliar languages. I get why one might choose Rust vs. Golang vs. JavaScript depending on the mission, but I would think that those differences go away entirely when you're depending on an LLM to author something in those languages AND you aren't skilled enough in those languages to understand when something's suboptimal or not. This just feels like an express train to bankruptcy via technical debt.
I'm also having trouble with the notion of AI accelerating the creation of side projects. For me, actually writing the code (or figuring out how the language works) is part of the fun that I get from doing side projects. If I wanted to create something as quickly as possible, I'd just buy a SaaS subscription or physical version for what I want.
It's also insane to me that we're not AT ALL considering how LLMs stunt the growth of our juniors. Spending hours banging my head against the wall on tiny bugs is how I got to where I am today. I'm going to guess that's the case for many of the people on HN as well. That learning process goes away entirely once an LLM enters the mix. You can just ask it to fix whatever's broken, no understanding of the bug required. This is fine for seniors who know why things happen the way they happen, but I can't imagine juniors making up for this skill gap.
It's like learning a new language vs having your phone generate whatever in the target language. The end result is the same, but there's no way you can really learn that language with your phone doing the work, unless one assigns no value to learning that language in the first place.
Finally, I have trouble accepting the idea of giving up the keyboard once you become an "architect." I very much understand that we "architects" have less free time in the day to fire up the IDE (death by meetings, basically), but giving that up entirely feels somewhat career-limiting to me. Then again, this is a moot point if the market moves towards making software development an AI-only activity.
What's crazy to me is that most developers and architects sneered at low/no-code solutions because they created unmaintainable codebases that were too proprietary to make sense of, yet here we are lapping up code generated by "coder" LLMs and accepting that they "might" produce insecure code here and there. Insane.
I never cease to be shocked at how little tech people think of what creative people do and why they do it.
The world is full of creative people and some of them will make movies with AI. Those are indie film makers.