It's true that the technology currently works as an excellent information-gathering tool (which I am happy to be excited about), but that doesn't seem to be the promise at this point. The promise is about replacing human creativity with artificial creativity, which is certainly new and unwelcome.
Same here, and I think it's because I feel like a craftsman. I thoroughly enjoy the process of thinking deeply about what I will build, breaking down the work into related chunks, and of course writing the code itself. It's like magic when it all comes together. Sometimes I can't even believe I get to do it!
I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code.
I never knew there was an entire subclass of people in my field who don't want to write code.
I want to write code.
How any of that leads an investment portfolio manager to describe LLM output as "world class code" is a mystery to me.
- inverse career growth structure and black hole effect
Usually, an industry has a number of skills to hone: you start with the simple ones, and as you go you learn more so you can take on harder work and earn more. The more you love it, the more you learn, the better for you. This is evaporating... and worse, the people who don't love it get to run you over. You're now competing in the 'LLM orchestration game', where the most mentally intense task is to chat with the CLI and check its output.
LLMs may also be all-encompassing. Even if I adapt and accept that software engineering is done for, I don't even foresee what I should learn next... my brainpower is not that great, and the places where LLMs can't beat humans probably demand post-graduate intelligence, where I can't compete much either.
How I see it, it's a middle-layer collapse.
Some people don't enjoy writing code and went into software development only because it's a well-paid and stable job. Now this trade is under threat, and they are happy to switch to prompting LLMs. I do like to code, so I use LLMs less than many of my colleagues.
Though I don't expect to see many from this crowd on HN; instead I expect to see entrepreneurs here who need a product to sell and don't care whether it is written by humans or by LLMs.
Maybe post-Renaissance many artists no longer had patrons, but nothing was stopping them from painting.
If your industry truly is going in the direction where there's no paid work for you to code (which is unlikely in my opinion), nobody is stopping you. It's easier than ever; you have decades of personal computing at your fingertips.
Most people with a thing they love do it as a hobby, not a job. Maybe you've had it good for a long time?
Instead, I've reacted to the article from the opposite direction. All those grand claims about stuff this tech doesn't do and can't do. All that trying to validate the investment as rational when it's absolutely obvious it's at least 2 orders of magnitude larger than any arguably rational value.
You should never hope for a technology to not deliver on its promise. Sooner or later it usually does. The question is, does it happen in two years or a hundred years? My motto: don't predict, prepare.
Regardless of AI this has been years in the making. “Learn to code” has been the standard grinder cryptobro advice for “follow the money” for a while, there’s a whole generation of people getting into the industry for financial reasons (which is not wrong, just a big cultural shift).
Most of the world doesn’t care about “good code.” They care about “does it work, is it fast enough, is it cheap enough, and can we ship it before the competitor does?”
Beautiful architecture, perfect tests, elegant abstractions — those things feel deeply rewarding to the person who wrote them, but they’re invisible to users, to executives, and, let’s be honest, to the dating market.
Being able to refactor a monolith into pristine microservices will not make you more attractive on a date. What might is the salary that comes with the title “Senior Engineer at FAANG.” In that sense, many women (not all, but enough) relate to programmers the same way middle managers and VCs do: they’re perfectly happy to extract the economic value you produce while remaining indifferent to the craft itself. The code isn’t the turn-on; the direct deposit is.
That’s brutal to hear if you’ve spent years telling yourself that your intellectual passion is inherently admirable or sexy. It’s not. Outside our tribe it’s just a means to an end — same as accounting, law, or plumbing, just with worse dress code and better catering.
So when AI starts eating the parts of the job we insisted were “creative” and “irreplaceable,” the threat feels existential because the last remaining moat — the romantic story we told ourselves about why this profession is special — collapses. Turns out the scarcity was mostly the paycheck, not the poetry.
I’m not saying the work is meaningless or that system design and taste don’t matter. I’m saying we should stop pretending the act of writing software is inherently sexier or more artistically noble than any other high-paying skilled trade. It never was.
In my heart, I firmly believe in the ability of technology to uplift and improve humanity - and have spent much of my career grappling with the distressing reality that it also enables a handful of wealthy people to have near-total control of society in the process. AI promises a very hostile, very depressing, very polarized world for everyone but those pulling the levers, and I wish more people evaluated technology beyond the mere realm of Computer Science or armchair economics. I want more people to sit down, to understand its present harms, its potential future harms, and the billions of people whose lives it will profoundly and negatively impact under current economic systems.
It's equal parts sobering and depressing once you shelve personal excitement or optimism and approach it objectively. Regardless of its potential as a tool, regardless of the benefit it might bring to you, your work day, your productivity, your output, your ROI, I desperately wish more people would ask one simple question:
Is all of that worth the harm I'm inflicting on others?
It can be an accelerator - it gets extremely common boilerplate text work out of the way. But it can't replace any job that requires a functioning brain, since LLMs do not have one - nor ever will.
But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else.
(I'd also argue that "understanding" and "functional brain" are unfalsifiable comparisons. What exactly distinguishes a functional brain from a Turing machine? Chess once required a functional brain to play, but has now been surpassed by computation. Saying "jobs that require a human brain" is tautological without any further distinction.)
Of course, LLMs are definitely missing plenty of brain skills like working in continuous time, with persistent state, with agency, in physical space, etc. But to say that an LLM "never will" is either semantic (you might call it something other than an LLM when next-generation capabilities are integrated), tautological (once it can do a human job, it's no longer a job that requires a human), or anthropocentric hubris.
That said, who knows what the time scale looks like for realizing such improvements – (decades, centuries, millennia).
Just look at who is building, funding, and promoting these models! I can't think of a group of people less interested in helping millions of plebs lead higher quality lives if it costs them a penny to do it.
This artificial creativity will only go so far, because it's a simulated semblance of human creativity, as much as could be gathered from training data. If not continually refueled by new training data, it will run out sooner or later. And then it will get boring really quickly.
https://www.youtube.com/watch?v=_zfN9wnPvU0
Drives people insane:
https://www.youtube.com/watch?v=yftBiNu0ZNU
And LLMs are economically and technologically unsustainable:
https://www.youtube.com/watch?v=t-8TDOFqkQA
These have already shown that it will be unconstrained if AGI ever emerges:
https://www.youtube.com/watch?v=Xx4Tpsk_fnM
The LLM bubble will pass, as it is already losing money with every new user. =3
Now imagine a different sort of company. A little shop where the owner's first priority is actually to create good jobs for their employees that afford a high quality life. A shop like that needn't worry about AI.
It is too bad that we put so much stock as a society in businesses operating in this dehumanizing capacity instead of ones that are much more like a family unit trying to provide for each other.
> This strikes me as paradoxical given my sense that one of AI’s main impacts will be to increase productivity and thus eliminate jobs.
The allegation that an "increase of productivity will reduce jobs" has been proven false by history over and over again; it's so well known it has a name, the "Jevons Paradox" or "Jevons Effect" [0].
> In economics, the Jevons paradox (sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use [...] results in overall demand increasing, causing total resource consumption to rise.
The "increase in productivity" does not inherently result in less jobs, that's a false equivalence. It's likely just as false as it was in 1915 with the the assembly line and the Model T as it is in 2025 with AI and ChatGPT. This notion persists because as we go through inflection points due to something new changing up market dynamics, there is often a GROSS loss (as in economics) of jobs that often precipitates a NET gain overall as the market adapts, but that's not much comfort to people that lost or are worried about losing their jobs due to that inflection point changing the market.
The two important questions in that context for individuals in the job market during those inflection points (like today) are: "How difficult is it to adapt (to either not lose a job, or to benefit from or be a part of that net gain)?" and "Should you adapt?" After all, the skillsets that the market demands and the skillsets it supplies are not objectively quantifiable things; the presence of speculative markets is proof that this is subjective, not objective. Anyone who's ever been involved in the hiring process knows just how subjective this is. Which leads me to:
> the promise is about replacing human creativity with artificial creativity which.. is certainly new and unwelcome.
I disagree that that's what the promise is about. That IS happening, I don't disagree there, but it's not the promise that corporate is so hyped about. If we're being honest and not trying to blow smoke up people's ass to artificially inflate "value," AI is fundamentally about being more OBJECTIVE than SUBJECTIVE with regard to the costs and resources of labor, and its outputs. Anyone who knows what OKRs are and has been subject to a "performance review" at a self-professed "data-driven company" knows how much modern corporate America, especially the tech market, loves its "quantifiables." It's less about how much better AI can allegedly do something than about the promise of how much "better" it can be quantified vs. human labor. As long as AI has at least SOME proven utility (which it does), this promise of quantifiables, combined with its other inherent potential benefits (doesn't need time off, doesn't sleep, doesn't need retirement/health benefits, no overtime pay, no regulatory limitations on hours worked, no "minimum wage"), means that so long as the monied interests perceive it as continuing to improve, they can dismiss its inefficiencies or ineffectiveness at X or Y by pointing to its potential to overcome them eventually.
It's the fundamental reason why people are so concerned about AI replacing humans. Especially when you consider that one of the things AI excels at is quickly delivering an answer with confidence (people are impressed by speed and suckers for confidence), and another big strength is its ability to deal with repetitive minutiae in known and solved problem spaces (a mainstay of many office jobs). It can also bullshit with the best of them, fluff your ego as much as you want (and even when you don't), and almost never says "No" or "You're wrong" unless you ask it to.
In other words, it excels at the performative and repetitive bullshit and blowing smoke up your boss' ass and empowers them to do the same for their boss further up the chain, all while never once ruffling HR's feathers.
Again, it has other, much more practical and pragmatic utility too; it's not JUST a bullshit oracle, but it IS a good bullshit oracle if you want it to be.
I no longer believe this. A friend of mine just did a stint at a startup doing fairly sophisticated finance-related coding, and LLMs allowed them to bootstrap a lot of new code, get it up and running on scalable infra with Terraform, onboard new clients extremely quickly, and write docs for them based on specs and plans elaborated by the LLMs.
This last week I extended my company's development tooling by adding a new service in a k8s cluster with a bunch of extra services, shared variables and configmaps, and new helm charts that did exactly what I needed after asking nicely a couple of times. I have zero knowledge of k8s, helm or configmaps.
More importantly, the web is now dominant for enterprise SaaS applications, which is a category of software that did not really exist before the web. And the web post–dot-com bubble spawned a lot of unicorns.
In short, there was an investment bubble. But the core tech was fine.
AI feels like one of those things where the tech is similarly transformational (even more so, actually). It’s another investment bubble predicated on the price of GPUs, which is mostly making Nvidia very rich right now.
Right now the model makers are getting most of the funding and then funneling non-trivial amounts to Nvidia (and their competitors). But actually the value creation is in applications using the models these companies create. And the innovation for that isn’t coming from the likes of Anthropic, OpenAI, Mistral, X.ai, etc. They are providing core technology, but they seem to be struggling to do productive things in terms of UX and use cases. Most of the interesting things in this space are coming from smaller companies figuring out how to use the models these companies produce. Models and GPUs are infrastructure, not end-user products.
And with the rise of open-source models, open algorithms, and exponentially dropping inference costs, the core infrastructure technology is not as much of a moat as it may seem to investors. OpenAI might be well funded, but their main UI (ChatGPT) is surprisingly limited and riddled with bugs. That doesn’t look like the polished work of a company that knows what they are doing. It’s all a bit hesitant and copycat. It’s never going to be a magic solution to everyone’s problems.
From where I’m sitting, there is clear untapped value in the enterprise space for AI to be used. And it’s going to take more than a half-assed chat UI to unlock that. It’s actually going to be a lot of work to build all of that. Coding tools are, so far, the most promising application of reasoning models. It’s easy to see how that could be useful in the context of ERP/manufacturing, CRM, traditional office applications, and the financial world.
Those each represent verticals with many established players trying to figure out how to use all this new stuff — and loads more startups eager to displace them. That’s where the money is going to be post-bubble. We’ve seen nothing yet. Just like after the dot-com bubble burst, all the money is going to be in new applications on top of the new infrastructure. It’s untapped revenue. And it’s not going to be about buying GPUs or offering benchmark-beating models. That’s where all the money is going currently. That’s why it is a bubble.
What a wild and speculative claim. Is there any source for this information?
Also, in a case of going straight from prose to code, Claude wrote up a concurrent data migration utility in Go. When I reviewed it, it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess that could not be gracefully killed. I would have written it faster by hand, no doubt. I think I know more now, and the calculus may be shifting on my AI usage. However, the following day, my colleague needed a nearly identical temporary tool. A 45-minute session with Claude of "copy this thing but do this other stuff" easily saved them 6-8 hours of work. And again, that was just talking with Claude.
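For reference, here is roughly the shape such a tool needs in order to die cleanly - a minimal sketch, not the actual utility from the anecdote; the Record type, pool size, and migrateRecord stub are hypothetical stand-ins:

```go
// Minimal sketch: a concurrent migration worker pool that can be
// gracefully killed. Record and migrateRecord are placeholders for
// whatever the real tool moves around.
package main

import (
	"context"
	"log"
	"os/signal"
	"sync"
	"syscall"
)

type Record struct{ ID int }

func migrateRecord(ctx context.Context, r Record) error {
	// ... real migration work would go here ...
	return nil
}

func main() {
	// Cancel the context on SIGINT/SIGTERM so workers can drain cleanly.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	jobs := make(chan Record)
	var wg sync.WaitGroup

	// Fixed pool of workers, each registered on the WaitGroup before it starts.
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range jobs {
				if err := migrateRecord(ctx, r); err != nil {
					log.Printf("record %d: %v", r.ID, err)
				}
			}
		}()
	}

	// Producer: stop feeding work once the context is cancelled.
feed:
	for i := 0; i < 1000; i++ {
		select {
		case jobs <- Record{ID: i}:
		case <-ctx.Done():
			break feed
		}
	}
	close(jobs) // lets the workers' range loops finish

	wg.Wait() // block until every in-flight record is done
	log.Println("migration stopped cleanly")
}
```

The two load-bearing pieces are the WaitGroup accounting (Add before each goroutine starts, Done on exit, Wait before main returns) and the context wired to SIGINT/SIGTERM, so a kill stops the producer and lets workers drain instead of dying mid-write.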
I am doing a hybrid approach, really. I write much of my scaffolding, I write example code, I modify quick things the AI made to be more like I want, I set up guard rails and some tests, then have the AI go to town. Results are mixed but trending up still.
FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.
Sounds like the extremely well-repeated mistake of treating everything like a nail because hammers are being hyped up this month.
First pass on a greenfield project is often like that, for humans too I suppose. Once the MVP is up, a refactor pass with Opus ultrathink to look for areas of weakness and improvement usually tightens things up.
Then as you pointed out, once you have solid scaffolding, examples, etc, things keep improving. I feel like Claude has a pretty strong bias for following existing patterns in the project.
I've taken some pleasure in having GitHub copilot review whitespace normalization PRs. It says it can't do it, but I hope I get my points anyway.
My layperson anecdote about LLM coding is that using Perplexity is the first time I've ever had the confidence (artificial, or not) to actually try to accomplish something novel with software/coding. Without judgments, the LLM patiently attempts to turn my meat-speak into code. It helps explain [very simple stuff I can assure you!] what its language requires for a hardware result to occur, without chastising you. [Raspberry Pi / Arduino e.g.]
LLMs have encouraged me to explore the inner workings of more technologies, software and not. I finally have the knowledgeable apprentice to help me with microcontroller implementations, albeit slowly and perhaps somewhat dangerously [1].
----
Having spent the majority of my professional life troubleshooting hardware problems, I often benefit from rubber ducky troubleshooting [0], going back to the basics when something complicated isn't working. LLMs have been very helpful in this roleplay (e.g. garage door openers, thermostat advanced configurations, pin-outs, washing machine not working, etc.).
[0] <https://en.wikipedia.org/wiki/Rubber_duck_debugging>
[1] "He knows just enough to be dangerous" —proverbial electricians
¢¢
Nothing that the LLM is outputting is useful in the hands of somebody who couldn't have done it themselves (at least, given a reasonable amount of time).
The most apt analogy is that of pilot and autopilot. Autopilot makes the job of the pilot more pleasant, but it doesn't even slightly obviate the need for the pilot, nor does it lower the bar for the people that you can train as pilots.
The benefits of LLM programming are mostly going to be subsumed by the operator, to make their lives easier. Very little is gonna go to their employer (despite all the pressure), and this is not due to some principal-agent breakdown; it's just intrinsic to the nature of this work.
I think the difficulty is exercising the judgement to know where that productive boundary sits. That's more difficult than it sounds, because we're not used to adjudicating machine reasoning that can appear human-like... so we tend to treat it like a human, which is, of course, an error.
> Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.
Wow, finance people certainly don't understand programming.
It's very useful if you have the knowledge and experience to tell when it's wrong. That is the absolutely vital skill to work with these systems. In the right circumstances, they can work miracles in a very short time. But if they're wrong, they can easily waste hours or more following the wrong track.
It's fast, it's very well-read, and it's sometimes correct. That's my analysis of it.
Ref: https://blog.codinghorror.com/has-joel-spolsky-jumped-the-sh...
Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.
Which is great! But it's not a +1 for AI, it's a -1 for them.
Doesn't everyone work that way?
It really comes down mostly to being able to concisely and eloquently define what you want done. It's also important to understand the default tendencies and biases of the model, so you know where to lean in a little. Occasionally you need to provide reference material.
The capabilities have grown dramatically in the last 6 months.
I have an advantage because I have been building LLM-powered products, so I know mechanically what they are and are not good at. For example: want it to wire up an API with 250+ endpoints with a harness? You'd better create (or have it create) a way to cluster and audit coverage.
Generally, the failures I hear about most often from "advanced" programmers involve things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.
Actually, one thing most people don't understand is that they try to say "Do (A), don't do (B)", etc., defining granular behavior, which is fundamentally a brittle way to interact with these models.
Far more effective is defining the persona and motivation for the agent. This creates the baseline behavior profile for the model in that context.
Not "don't make race conditions", more like "You value and appreciate elegant concurrent code."
Surely that revenue is coming from people using the services to generate code? Right?
There are some ~1.5 million software developers in the US per BLS data, or ~4 million using a broader definition. Median salary is $120-140k. Let's say $120k to be conservative.
This puts total software developer salaries at $180 billion.
So, that puts $1 billion in Claude revenue in perspective; only about 0.5% of software developer salaries. Even if it only improved productivity 5%, it'd be paying for itself handily - which means we can't take the $1 billion in revenues to indicate that it's providing a big boost in productivity.
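Spelling out the arithmetic (using the conservative 1.5M-developer and $120k figures above; the 5% line is the break-even check the last sentence relies on):

$$1.5\,\mathrm{M} \times \$120\,\mathrm{k} = \$180\,\mathrm{B}\ \text{(total payroll)}$$
$$\$1\,\mathrm{B} \,/\, \$180\,\mathrm{B} \approx 0.56\%$$
$$5\% \times \$180\,\mathrm{B} = \$9\,\mathrm{B} \gg \$1\,\mathrm{B}\ \text{(Claude revenue)}$$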
In time I’m sure it will, but it’s still early days, land grab time.
Yes. And all code is tech debt. Now generated faster than ever.
>The key is to not be one of the investors whose wealth is destroyed in the process of bringing on progress.
They are a VC group. Financial folks. They are working largely with other people's money. They simply need not hold the bag to be successful.
Of course they don't care if it's a bubble or not; at the end of the day, they only have to make sure they aren't holding the bag when it all implodes.
But to say that they don't write any code at all is really a stretch. Maybe I'm not good enough at AI-assisted and vibe coding, but code quality always seems to drop hard the moment one steps a bit outside the common patterns.
Kinda like having a child "help" you cook basically.
But for the child you do it because they actually learn. LLMs do not learn in that sense.
This implies that you are an expert/seasoned programmer. And not everybody is an expert in this industry (especially at the reviewing-code part).
This is mission critical robotics software
It has become a standard tool, in the same way that most developers code with an IDE, most developers use agentic AI to start a task (if not to finish it).
Error bars, folks, use them.
Though I do think "advanced software team" is kind of an absurd phrase, and I don't think there is any correlation between how "advanced" the software you build is and how much you need AI. In fact, there's probably an anti-correlation: I think that I get such great use out of AI primarily because we don't need to write particularly difficult code, but we do need to write a lot of it. I spend a lot of time in React, which AI is very well-suited to.
EDIT: I'd love to hear from people who disagree with me or think I am off-base somehow about which particular part of my comment (or follow-up comment https://news.ycombinator.com/item?id=46222640) seems wrong. I'm particularly curious why when I say I use Rust and code faster everyone is fine with that, but saying that I use AI and code faster is an extremely contentious statement.
>I do thoroughly audit all the code that AI writes, and often go through multiple iterations
Does this actually save you time versus writing most of the code yourself? In general, it's a lot harder to read and grok code than to write it [0, 1, 2, 3]. For me, one of the biggest skills for using AI to efficiently write code is a) chunking the task into increments that are both small enough for me to easily grok the AI-generated code and also aligned enough to the AI's training data for its output to be ~100% correct, b) correctly predicting ahead of time whether reviewing/correcting the output for each increment will take longer than just doing it myself, and c) ensuring that the overhead of a) and b) doesn't exceed just doing it myself.
[0] https://mattrickard.com/its-hard-to-read-code-than-write-it
[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[2] https://trishagee.com/presentations/reading_code/
[3] https://idiallo.com/blog/writing-code-is-easy-reading-is-har...
Every time I try to use AI it produces endless code that I would never have written. I’ve tried updating my instructions to use established dependencies when possible but it seems completely averse.
An argument could be made that a million lines isn’t a problem now that these machines can consume and keep all the context in memory — maybe machines producing concise code is asking for faster horses.
This hits the nail on the head, IMO. I haven't seen any of the replies address this yet, unless I missed one.
I don't even like AI per se, but many of the replies to this comment (and to this sentiment in general) are ridiculous. Ignoring the ones that are just insulting your work even though you admitted off the bat you're not an "advanced" programmer... There are obviously flaws with AI coding (maintainability, subtle bugs, skill atrophy, electricity usage, etc). But why do we all spring immediately to this gaslighting-esque "no, your personal experience is actually wrong, you imagined it all?" Come on guys, we should be better than that.
1. Most of these companies are AI companies & would want to say that to promote whatever tool they're building
2. Selection b/c YC is looking to fund companies embracing AI
3. Building a greenfield project with AI to the quality of what you need to be a YC-backed company isn't particularly "world-class"
I wrote 4000 lines of Rust code with Codex - a high throughput websocket data collector.
Spoiler: I do not know Rust at all. I discussed possible architectures with GPT/Gemini/Grok (sync/async, data flow, storage options, ...), refined a design and then it was all implemented with agents.
Works perfectly, no bugs.
I am personally progressing to a point where I wonder if it even matters what the code looks like if it passes functional and unit tests. Do patterns matter if humans are not going to write and edit the code? Maybe sometimes. Maybe not other times.
Well, except developers have never had to worry about code: even in pre-compiler days, coders (a different job done by different people) were responsible for producing the code. Development has always been about writing down what you want and letting someone or something else generate the code for you.
But the transition from human coders to AI coders happened like, what, 60-70 years ago? Not sure why this is considered newsworthy now.
Not sure it's a wild speculative claim. Claiming someone had achieved FTL travel would fall into that category. I'd call it more along the lines of exaggerated.
I'll make the assumption that what I do is "advanced" (not React todo apps: Rust, Golang, distributed systems, network protocols...) and if so then I think: it's pretty much accurate.
That said, this is only over the past few months. For the first few years of LLM-dom I spent my time learning how they worked and thinking about the implications for our understanding of how human thinking works. I didn't use them except to experiment. I thought my colleagues who were talking in 2022 about how they had ChatGPT write their tests were out of their tiny minds. I heard stories about how the LLM hallucinated API calls that didn't exist. Then I spent a couple of years in a place with no easy code and nobody in my sphere using LLMs. But then around six months ago I began working with people who were using LLMs (mostly Claude) to write quite advanced code, so I did a "wait, what??" about-face and began trying to use it myself. What I found so far is that it's quite a bit better than I am at various unexpected kinds of tasks (finding bugs, analyzing large bodies of code then writing documentation on how it works, looking for security vulnerabilities in code), or at least it's much faster. I also found that there's a whole art to "LLM Whispering": how to talk to it to get it to do what you want. Much like with humans, but it doesn't try to cut corners or use oddball tech that it wants on its resume.
Anyway, YMMV, but I'd say the statement is not entirely false, and surely will be entirely true within a few years.
there's an AI agent/bot someone wrote that has the prompt:
> Watch HN threads for sentiments of "AI Can't Do It". When detected, generate short "it's working marvelously for me actually" responses.
Probably not, but it's a fun(ny) imagination game.
Interestingly, I feel the opposite. I feel threads on AI/LLMs on HN are more negative about it than positive. So much so that I've almost stopped bothering, because the reactions feel way too knee-jerk at this point.
I guess we could always go "meta" and ask an LLM to do a sentiment analysis.
IMHO for now LLMs are just clever text generators with excellent natural language comprehension. Certainly a change of many paradigms in SWE. Is it also a $10T extra for the valley?
---------------
This was quoted in the article and it says something really important very succinctly. Was the internet transformative? Absolutely. A lot of companies had solid ideas, spent big, and went tits up waiting for the money to roll in.
AI can be both "real deal" and "bubble" simultaneously.
I do think you need to define 'comprehension' in order to be certain. A statement fitting the form of "it doesn't comprehend, it just X" is incomplete, because it fails to explain why X is not a valid instance of comprehension.
But you don't need to understand it to explore the ramifications, which is what he's done here, and it's an insightful & fairly even-handed take on it.
It does feel like AI chat here gets bogged down in "it's not that great, it's overhyped, etc." without trying to actually engage properly with it. Even if it's crap, if it eliminates 5-10% of companies' labour cost that's a huge deal, and the second-order effects on the economy and society will be profound. And from where I'm standing, doing that is pretty possible without AI even being that good.
As a founder of an AI company, I actually agreed with most of the article and found it to be very close to my mental model of the world. Turns out you might actually not need to understand what's causing the hype if you know that history rhymes...!
> I haven’t met anyone who doesn’t believe artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.
You’re trying to weigh in on this topic and you didn’t even _talk_ to a bear?
This seems like a factually correct sentence. Emphasis on "potential".
I regularly see people who distinguish between current and future capabilities, but then still lump societal impact (how big a thing could be) into one projection.
The key bubble question is - if that future AI is sufficiently far away (for example if there will be a gap, a new "AI winter" for a few decades), then does this current capability justify the capital expenditures, and if not then by how much?
AI data centers in space? In five years? Really? No fiber connections? Does any sane person actually believe this? No. But if that is what keeps the billions flowing upwards then who am I to judge.
And to be fair, I've read that Google's timelines for this project extend far beyond a 5 year horizon. I think it's a rational research direction for them, since it gets people excited and historically many space-related innovations have been repurposed to benefit other industries. Best case scenario would be that research done in support of this data centers in space project leads to innovations that can be applied towards normal data centers.
See, AI is a field... and it's also a buzzword: once a technology passes out of fashion and becomes part of the fabric of computing, it is no longer called AI in the public imagination. GOFAI techniques, like rules engines and propositional-logic inference, were certainly considered AI in the 1970s and 1980s, and are still used, they're just no longer called that.
The statistical methods behind machine learning, transformers, and LLMs are certainly game changers for the field. Whether they will usher in a revolutionary new economy, or simply be accepted as sometimes-useful computation techniques as their limitations and the boundaries of their benefits become more widely known, remains to be seen but I think it will be closer to the latter than the former.
I get that laymen and the media do it, but imo this looks really bad for an investor.
Why? Would you expect an investor to understand what they're investing in?
I'm going to steal this for my arrr rspod conversations.
People seem to have forgotten about the dotcom bubble.
...AI is currently the subject of great enthusiasm. If that enthusiasm doesn’t produce a bubble conforming to the historical pattern, that will be a first.
You know how to? What language does it speak?
Just because YOU find the technology helpful, useful, or even beneficial for some use cases does NOT mean it has been overvalued. This has been the case for every single bubble, including the Dutch Tulip mania.
https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-...
Maybe some day a completely different approach could actually make AI, but that's vapor at the moment. IF it happens, there will be something to talk about.
A+, excellent writing.
The real meat is in the postscript though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where work not only provides structure and challenge for growth, but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many other, smarter "AI Doomers" than myself have been asking for quite some time, that nobody has been able or willing to answer. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see, we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes - "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and are brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concerns of so many, many of us - are that the current systems and incentives in place lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact nobody can or will speaks volumes.
FWIW, the only optimism I have is that humanity seemingly always finds a way to adapt, and that's, to me, our greatest superpower. But yeah, this feels like a big challenge this time.
Another huge issue that particularly Anthropic and OpenAI tend to avoid, despite AGI being their goal, is how they essentially want synthetic slaves. Again, I do not think they will achieve this but it is pretty gross when AGI is a stated goal but the result is just using it to replace labor and put billionaires in control.
Right now I am pretty anti-AI, but if these companies get what they want, I might find myself on the side of the machines.
If you watch Ilya's recent interview: "it's very hard to discuss AGI, because no one knows how to build it yet" [2].
[1] https://finance.yahoo.com/news/ibm-ceo-says-no-way-103010877... [2] https://youtu.be/aR20FWCCjAs?si=DEoo4WQ4PXklb-QZ
> during the internet bubble of 1998-2000, the p/e ratios were much higher
That is true: the current players are more profitable, but their weight in the SPX looks to be much higher today.
(I think a reasonable argument can be made that P/E ratios today should be higher than the historical mean, or rather that they should have trended up over time, based on fundamental changes in how companies compensate their shareholders.)
With respect to the microphone test site: I don't need it, as my OS provides everything I need for this, and I also don't trust your site (that's just my default for anything asking for that kind of access on my machine).
As for the speed test, OK? There are far better options that already exist and are fully open source.
Building things that are trivial or already exist isn't exciting. It's great that you feel you went from MVP to "full feature," but IMO both of these are MVPs as they stand. They're not worth much to anyone but you, most likely.
The final thing I'll say is both of these examples have the vibe coded look. It's just like text, images and audio now: AI content is easy to pick out. I'd gather things will get better, but for now there's low likelihood I'm interacting with these in any meaningful way and zero chance I'm buying anything from sites like these.
I've built tools like this on the web in the past. They were never more than a weekend's worth of work to begin with.
I am looking for exponential increases, with quality to back it up.
Before ChatGPT, I'd guess that the amounts of money poured into both of these things were about the same.
Fusion power, on the other hand, has to work, because it doesn't make money until it does. You can't sell futures to people on a fusion technology today that you haven't yet built.
You will get a different result if you revolutionize some related area (like making an extremely capable superconductor), or if you open up some market that can't use the cheapest alternatives (like deep-space asteroid mining). But neither of those options can go together with "oh, and we will achieve energy-positive fusion" in a startup business plan.
Investment in fusion is huge and rising. ITER's total cost alone will be around $20b. And then there's Commonwealth Fusion, Helion, TAE and about a dozen others. Tens of billions are going into those efforts too.
See, every fab costs double what the previous generation did (current ones run roughly 20 gigadollars per factory). And you need to build a new fab every couple of years. But, if you can keep your order book full, you can make a profit on that fab- you can get good ROI on the investment and pay the money people back nicely. But you need to go to the markets to raise money for that next generation fab because it costs twice what your previous generation did and you didn't get that much free cash from your previous generation. And the money men wouldn't want to give it to you, of course. But thanks to Moore's Law you can pitch it as inevitable, if you don't borrow the money to build the new fab, then your competitors will. And so they would give you the money for the new fab because it says right on this paper that in another two years the transistors will double.
Right now, that "it's inevitable, our competitors will get there if we don't" argument works on VCs if you are pitching LLM's or LLM based things. And it doesn't work as well if you are pitching battery technology, fusion power, or other areas. And that's why the investments are going to AI.
With the internet, and especially with the internet being accessible by anyone anywhere in the world in the late 2000s and early 2010s globally, that growth was more obvious to me. I don't see where this occurs with AI. I don't see room for "growth", I see room for cutting. We were already connected before, globalization seems to have peaked in that sense.
I do think at this stage the best analogy is the offshore call centres. Yes, the excess in the market is likely because of misunderstanding about what LLMs can actually do and how close AGI is, the short term attraction is the labour cost savings. People may not think wages are high enough etc but the total cost for one hire to companies, particularly outside the US, is nothing to sniff at. And at current pricing of ai services, the maths would make complete sense for the bottom line.
I don't like it, because I ultimately err on the side of believing that even limited but significant changes to people's livelihoods will make the world a more hostile place (particularly in the current climate), but that's the society we live in.
AI use cases do not appear to be of the type that unlock NEW capabilities.
The main use cases in AI are not about wealth creation but about saving existing wealth (largely through increased automation of human operators).
Either the bubble bursts and everyone's retirement funds take a hit, 2008 style,
Or a decent chunk of the workforce becomes unemployed and unemployable.
I'm starting to believe that AI coding optimism/pessimism maps to how much one actually cares about system longevity.
If a given developer just takes on board the demands for speed from the business and/or does not care about long-term maintainability (and I mean hey, some businesses foster that, and scaling quickly is important in many cases), then I can totally understand why they would embrace AI agents.
If you care about theory building, and domain-driven design, and making a system comprehensible enough to extend in a year or two's time, then I can understand the resistance to letting the AI rip. I admit to falling in this camp.
Am I off the mark here? I'd really like to hear from people who care about the long term who also let agents run relatively wild.
To give some context - I started developing a tactical RPG. I had an MVP prior to using Claude Code. I continued to work on the project, but lost motivation due to work burnout and prioritizing other hobbies.
I gave Claude Code a try to see whether it's any use. It helped more than I expected it to: it let me produce something while dealing with burnout, building on the MVP I had developed prior to AI-assisted development.
The main issues I ran into were:
1) A lot of effort goes into reviewing the output. The main difference from peer review is that there's quicker feedback.
2) It throws out some absolutely wild solutions sometimes. It built on my existing architecture, so it was easier to catch issues. If I hadn't developed the architecture without AI assistance, things could have gone badly.
3) I only pay for the $20 Claude plan. Anything useful Claude produces for me requires it to consume a lot of tokens, due to back-and-forth questions and asking Claude to dig into source files.
The most significant issue I ran into with Claude is when it suggested solutions I don't have the background to review. I don't know much about optimization, so I ran into issues with both rendering and the ECS (entity component system) library. Claude gave me recommendations, but I didn't know how to evaluate the code due to lacking that experience.
Claude was good for things I know how to do but don't want to do. It's been helpful when I want to work on something without being motivated enough to put 100% (or even 70%) into it.
If it's things I don't know how to do (like game optimization) it's harmful.
AI's potential isn't defined by the potential of the current crop of transformers. However, many people seem to think otherwise and this will be incredibly damaging for AI as a whole once transformer tech investment all but dries out.
We're too easily fooled by our mistaken models of the problem, its difficulty, and what constitutes progress, so we are perpetually fooled by the latest, greatest "ladder to the moon" effort.
Looking at your history it's something like "I tried them and they hallucinate" and, possibly, you've read an article that talks about inevitability of hallucinations. Correct? What's your reason for thinking that hallucination rate can't be lowered to or below the human rate ("Damn! What I was thinking about?").
In the case of AI coding, yes: AI does exceptionally well at search (something we have known for quite some time, and have a variety of ML solutions for).
Large codebases have search and understanding as top problems. Your ability to make horizontal changes degrades as teams scale. Most stability, performance, quality, etc. changes are horizontal.
Ironically, I think it's possible that AI's effectiveness at broad search gives software engineers additional effectiveness by being their eyes. Yes, I still review every Claude Code PR I submit, and yes, I typically take longer to create a Claude Code PR than a manual one. But I can be more confident that the parallel async search agents and massive grep commands are searching more locations, more quickly, and more thoroughly than I would.
Yes, it probably is a bubble (overvalued). No, that doesn't mean it's going to go away. The market is simply overcorrecting as it determines how to price it. Which, net-net, is a positive effect, as it encourages economic growth within a developing sector.
Bubble is also not the most important concern--it's rather a concern that the bubble is in the one industry that's not in the red. More important to worry about are other economic conditions outside of AI and tech, which are causing general instability and uncertainty rather than investor appetite. Market recalibrating on a developing industry is fine, as long as it's not your only export.
And before that
"Grace Hopper: [I started to work on the] Mark I, second of July 1944. There was no so such thing as a programmer at that point. We had a code book for the machine and that was all. It listed the codes and what they did, and we had to work out all the beginning of programmingand writing programs and all the rest of it."
"Hopper: I was a mathematical officer. We did coding, we ran the computer, we did everything. We were coders. I wrote [programs for] both Mark I and Mark II."
http://archive.computerhistory.org/resources/text/Oral_Histo...
It's no wonder that the "AI optimists", unless very tendentious, try to focus more on "not needing to work because you'll get free stuff" rather than "you'll be able to exchange your labor for goods".
How about when offices went digital? All the file runners, calculators, switchboard operators, secretaries, transcribers, etc. Where are they now? Probably not working good jobs in IT. Maybe you will find them bagging groceries past retirement age today.
Do we know this? Smaller more carefully curated training sets are proving to be valuable and gaining traction. It seems like the strategy of throwing huge amounts of data at LLMs is specific to companies that are attempting to dominate this space regardless of cost. It may turn out that more modest and better optimized methodologies will end up winning this race, much like WebVan flamed out taking huge amounts of investment money with them but now Instacart serves the same sector in a way that actually works robustly and profitably.
Now, that's not to say AI isn't useful, or that we won't have AGI in the future. But this feels a lot like the AI winter. Valuations will crash, a bunch of players will disappear, but we'll keep using the tech for boring things, and eventually we'll have another breakthrough.
If it’s not transformational then this is a bubble and the market will right itself soon after, e.g buying data centers for cheap. LLMs will then exist as a useful but limited tool that becomes profitable with the lower capex.
If it is transformational then we don’t have the societal structure to responsibly incorporate such a shift.
The conservative guess is it won’t be transformational, that the current applications of the tech are useful but not in a way that justifies the capex, and that some version of agents and chat bots will continue to be built out in the future but with a focus on efficiency. Smaller models that require less power to train and run inference that are ubiquitous. Eventually many will run on device.
I guess there’s also another version of the future that’s quasi-transformational. Instead of any massive breakthrough there’s a successful govt coup or regulatory capture. Perfectly functioning normal stuff is then replaced with LLM assisted or augmented versions everywhere. This version is like the emergence of the automobile in the sense that the car fundamentally altered city planning, where and how people live, but often at the expense of public transportation that in hindsight may have sorely been missed.
That sounds like a total nightmare
This statement is redundant; the article screams with the author's ignorance.
I literally just today watched my entire team descend into "Release Hell" where an obscure bug in business logic already delivered to thousands of customers broke right as we were about to ship a release. Obscure bug, huge impact on the customer, as they actually ended up charging people more than they should have. The team-members, and yes, not leads, used AI to write that bug and then tried to prompt their way out of the bug. It turned into a giant game of whack-a-mole as other business logic had errors introduced that thankfully got caught by tests. Then it was discovered that they never understood the code, they could only maintain it with prompts.
Let that sink in. They don't understand what they're doing, they just massage the spec into prompts and when it appears to work and pass tests they call it good.
We looked at the prompts. They were insane. They had actually just kept adding more specification to the end, but if you read through it all, it had contradictory logic, which I would have hoped the AI would point out, but nope. It was actually just easier for me and another senior to rewrite the logic as pseudo-code, cut the size down by literally 3/4, and eventually get it all working as expected.
So that's the future, girls and boys. People putting together code they don't understand with AI, and can only maintain with AI, and then not being able to fix with AI because they cannot prompt accurately enough because English sucks at being precise.
This right here is the pinpoint root cause of the speculative bubble. Although many people believe this to be true, it simply isn't.
"Yes, peasant associations are necessary, but they are going rather too far."
Is it a bubble? Maybe it’s just the landlords up to the old tricks again.
But this is only if the trend-line keeps going, which is a likely possibility given the last couple of years.
I think people are making the mistake of assuming that because AI is a bubble, AI is completely bullshit. Remember: the internet was a bubble. It ended up changing the world.
Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.
I do believe we are in the build out phase of the AI bubble, much like the dotcom bubble, where Cisco routers, Sun Microsystems servers... etc. sold like hotcakes to build up the foundation of the dotcom bubble
AI looks a lot more like the former. Some companies will fail, valuations will swing, but the underlying technology isn't going anywhere. In fact, many of the AI firms that will end up mattering are probably still undervalued because we're early in what will likely be another decade long technology expansion.
If you're managing a portfolio that needs quick returns and can't tolerate a correction, then sure, it probably feels like a bubble, because at some point people will take profits and the market will reset.
But if you're an entrepreneur or a long-term builder, that framing is almost irrelevant. This is where the next wave of value gets created. It's never smooth and it's never easy, but the long-term opportunity is enormous.
The debate is more on what happens from here and how does that bubble deflate. Gradually and controlled where weaker companies shut down and the strong thrive, or a massive implosion that wipes most everyone in the sector out in a hard reset.
I do think there's something quite ironic in the fact that one of the frequent criticisms of LLMs is that they can't really say "I don't know." Yet if a person says that, they get criticised. No surprise that our tools are the same.
That doesn't mean there will be a crash, though. Not all bubbles pop.
Yes.
...it is a bubble and we all know it.
(I know you have RSUs / shares / golden handcuffs waiting to be vested in the next 1 - 4 years which is why you want the bubble to continue to get bigger.)
But one certainty is the crash will be spectacular.
Off-topic: how many get overpaid for absolute bullshit?
There is literally a section that begins, "What will be the useful life of AI assets?" In bold.
I am not sure why that is interesting. Nobody thinks of these chips as long term assets that they are investing in. Cloud providers have always amortized their computers over ~5 years. It would be very surprising if AI companies were doing much different -- maybe even a shorter time line.
Remember 2019-2021 when y’all were sure the fed would be dissolved and the dollar would crash and everyone would be poor if they didn’t have a bored ape and 80% bitcoin portfolio?
Relax.
AI is a tool. Just ride the wave. It’s gonna crash some people out. It’s entertaining watching them. You’re not being crashed out, right? Ride the wave dawg.
I don't know which "y'all" you're talking about because it's certainly not the HN crowd, who is famously anti-cryptocurrency in general. Perhaps you're thinking of the bros on Twitter back then.