just look at this:
https://fred.stlouisfed.org/graph/?g=1JmOr
In terms of magnitude the effect of this is just enormous and still being felt; software job postings never recovered to pre-2020 levels, and may never. (With pre-pandemic job postings indexed to 100, software is at 61.)
Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, with pre-pandemic indexed to 100; banking jobs still hover around 100).
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.
In 2000 I moved cities with a job lined up at a company that was run by my friends. I had about 15 good friends working at the company, including the CEO, and I was guaranteed a software development job there. The interview was supposed to be just a formality. So I moved, went in to see the CEO, and he told me he could not hire me: the funding was cut and there was a hiring freeze. I was devastated. Now what? Well, I had to freelance and live on whatever I could scrape together, which was a few hundred bucks a month, if I was lucky. Fortunately the place I moved into was a big house with my friends who worked at said company, and since my rent was so low at the time, they covered me for a couple of years. I did eventually get some freelance work from the company, but things did not really recover until about 2004, when I finally got a full-time programming job, after 4 very difficult years.
So many tech companies over-hired during covid, there was a gigantic bubble happening with FAANG and every other tech company at the time. The crash in tech jobs was inevitable.
I feel bad for people who got left out in the cold this time, I know what they are going through.
AI may give us more efficiency, but it will be filled with more bullshit jobs and consumption, not more leisure.
That said, the vibe has definitely shifted. I started working in software in uni ~2009 and every job I've had, I'd applied for <10 positions and got a couple offers. Now, I barely get responses despite 10x the skills and experience I had back then.
Though I don't think AI has anything to do with it, probably more the explosion of cheap software labor on the global market, and you have to compete with the whole world for a job in your own city.
Kinda feels like some major part of the gravy train is up.
Because he suddenly had to pay interest on that gigantic loan he (and his business associates) took to buy Twitter.
It may not be the only reason for everything that happened, but it sure is simple and has some very good explanatory powers.
That part is so overblown. Twitter was still trying to hit moonshots; X is basically in "keep the lights on" mode because Musk doesn't need more. Yeah, if Google decides it doesn't want to grow anymore, it can probably cut its workforce by 90%. And it will be as irrelevant as IBM within 10 years at most.
"When interest rates return to normal levels, the ZIRP jobs will disappear." -- Wall Street analyst
Almost no one has seen a world where the price of money wasn't centrally planned, with a committee of experts deciding it based on gut feel, as in command economies like the Soviet Union.
And then thousands of people's lives are disrupted as the interest rate swings wildly due purely to government action (corona lockdowns and fed zirp response), and it all somehow just ends up people talking about AI instead.
The true wrongdoers get absolutely no consequences, and we all just carry on like there's no problem. Often because our taxes go to paying hordes of academics and economists to produce layers and layers of sophisticated propaganda that of course this system is the best one.
Absurd and shitty world.
p.s.: I'm a big fan of yours on Twitter.
I wonder what happened in January 2025...
I'd also highlight that beyond over-hiring being responsible for the downturn in tech employment, I think offshoring is way more responsible for the reduction in tech than AI when it comes to US jobs. Video conferencing tech didn't get really good and ubiquitous (especially for folks working from home) until the late teens, and since then I've seen an explosion of offshore contractors. With so many folks working remotely anyway, what does it matter if your coworker is in the same city or a different continent, as long as there is at least some daily time overlap (which is also why I've seen a ton of offshoring to Latin America and Europe over places like India).
Software was truly, truly insane for a bit there. Straight out of college, no-name CS degree, making $120k, $150k (back when $120k really meant $120k)? The music had to stop on that one.
Some managers read Dilbert and think it's intended as advice.
Which is the sole reason automation will not make most people obsolete until the VP level themselves are automated.
I’m worried about the shrinking number of opportunities for juniors.
And AI cannot provide that kind of value. Will a VP in charge of 100 AI agents be respected as much as a VP in charge of 100 employees?
At the end of the day, we're all just monkeys throwing bones in the air in front of a monolith we constructed. But we're not going to stop throwing bones in the air!
And there's multiple confounding factors at play.
Yes, lots of jobs are bullshit, so maybe AI is a plausible excuse to downsize and gain efficiency.
But also, the dynamic that causes the existence of bullshit jobs hasn't gone away. In fact, assuming AI does actually provide meaningful automation or productivity improvement, it might well be the case that the ratio of bullshit jobs increases.
I’ve seen those guys; it is painful to watch.
This had me thinking, how are they going to get "clout", by comparing AI spending?
First, is AI really a better scapegoat? "Reducing headcount due to end of ZIRP" maybe doesn't sound great, but "replacing employees with AI" sounds a whole lot worse from a PR perspective (to me anyway).
Second, are companies actually using AI as the scapegoat? I haven't followed it too closely, but I could imagine that layoffs don't say anything about AI at all, and it's mostly media and FUD inventing the correlation.
Because they don't have to do that. They could just operate at max efficiency all the time.
Instead, they spread the wealth a bit by having bullshit jobs, even if the existence of these jobs is dependent on the market cycle.
I think quotes around "real value" would be appropriate as well. Consider all the great engineering it took to create Netflix, valued at $500b - which achieves what SFTP does for free.
If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.
For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.
For example, I founded a SaaS company late last year which has been growing very quickly. We are on track to pass $1M ARR before the company's first birthday. We are fully bootstrapped, 100% founder owned. There are 2 of us. And we feel confident we could keep up this pace of growth for quite a while without hiring or taking capital. (Of course, there's an argument that we could accelerate our growth rate with more cash/human resources.)
Early in my career, at different companies, we often solved capacity problems by hiring. But my cofounder and I have been able to turn to AI to help with this, and we keep finding double digit percentage productivity improvements without investing much upfront time. I don't think this would have been remotely possible when I started my career, or even just a few years ago when AI hadn't really started to take off.
So my theory as to why it doesn't appear to be "painfully obvious": you've never heard of most of the businesses getting the most value out of this technology, because they're all too small. On average, the companies we know about are large. It's very difficult for them to reinvent themselves on a dime to adapt to new technology - it takes a long time to steer a ship - so it will take a while. But small businesses like mine can change how we work today and realize the results tomorrow.
No.
The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things; regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.
One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.
https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...
> The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.
> The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.
> “The last 5 per cent now matters because the rest is now a commodity,” he said.
In my eyes, that is major. Junior ibankers are not cheap -- they make about 150K USD per year minimum (total comp).

Ok, so by 2027 we should be having fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI, of course). I'm so excited for a future where IT projects stop going over time and over budget and deliver more value than expected. Can you blame us for thinking this is too good to be true?
In complex systems, you can't necessarily perceive the result of large internal changes, especially not with the tiny amount of vibes sampling you're basing this on.
You really don't have the pulse on how fast the average company is shipping new code changes, and I don't see why you think you would know that. Shipping new public end-use features isn't even a good signal, it's a downstream product and a small fraction of software written.
It's like thinking you are picking up a vibe related to changes in how many immigrants are coming into the country month to month when you walk around the mall.
Doesn't really matter if AI actually works or not.
E.g. look at the indie games count on steam by year: https://steamdb.info/stats/releases/?tagid=492
I don't get it either. You hire someone in the hope of ROI. Some things work, some kinda don't. Now people will be n times more productive, therefore you should hire fewer people??
That would mean you have no ideas. It says nothing about the potential.
Shipping features faster != innovation or improvements to existing products
AI also helps immensely in creating those other inefficiencies.
> shipping features and fixes faster than ever before
Meanwhile Apple duplicated my gf's contact, creating duplicate birthdays on my calendar. It couldn't find duplicates despite matching name, nickname, phone number, birthdays, and that both contacts were associated with her Apple account. I manually merged and ended up with 3 copies of her birthday in my calendar... Seriously, this shit can be solved with a regex...
The number of issues like these I see is growing exponentially, not decreasing. I don't think it's AI though, because it started before that. I think these companies are just overfitting whatever silly metrics they have decided are best.
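To the point above that exact-field duplicates are mechanically detectable: here is a minimal sketch of normalize-and-key matching. The field names and normalization rules are hypothetical illustrations, not Apple's actual contact model.

```python
import re

def normalize_phone(phone):
    # Strip everything but digits so "+1 (555) 010-1234" and "5550101234" compare equal.
    digits = re.sub(r"\D", "", phone)
    # Compare on the last 10 digits to ignore country-code prefixes.
    return digits[-10:]

def dedupe_contacts(contacts):
    """Collapse contacts that share a normalized (name, phone) key, keeping the first."""
    seen = {}
    for c in contacts:
        key = (c["name"].strip().lower(), normalize_phone(c["phone"]))
        if key not in seen:
            seen[key] = dict(c)
    return list(seen.values())

contacts = [
    {"name": "Jane Doe", "phone": "+1 (555) 010-1234", "birthday": "1990-05-01"},
    {"name": "jane doe", "phone": "5550101234", "birthday": "1990-05-01"},
]
print(dedupe_contacts(contacts))  # the two records collapse into one
```

Real contact merging is messier (nicknames, multiple numbers per person), but the grandparent's case, where every field already matches, is exactly this easy.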
Luckily software companies are not ball bearings factories.
In 1987 the economist Robert Solow said "You can see the computer age everywhere but in the productivity statistics".
We should remark he said this long before the internet, web and mobile, so probably the remark needs an update.
However, I think it cuts through the salesmen hype. Anytime we see these kinds of claims we should reply "show me the numbers". I'll wait until economists make these big claims, will not trust CEOs and salesmen.
Content producers are blocking scrapers of their sites to prevent AI companies from using their content. I would not assume that AI is either inevitable or on an easy path to adoption. AI certainly isn't very useful if what it "knows" is out of date.
This eventually changed. Companies do figure out how to use tech, it just takes a while.
It may help you build a real product feature quicker, but AI is not necessarily doing the research and product design which is probably the bottleneck for seeing real impact.
I think the reality is less like a switch and more like there are just certain jobs that get easier and you just need fewer people overall.
And you DO see companies laying off people in large numbers fairly regularly.
That doesn't mean it isn't a real productivity gain, but it might be spread across enough domains (bugs, features, internal tools, experiments) to not be immediately or "painfully obvious".
It'll probably get more obvious if we start to see uniquely productive small teams seeing success. A sort of "vibe-code wonder".
Firstly, the capex is currently too high for all but the few.
This is a rather obvious statement, sure. But the impact is a lot of companies "have tried language models and they didn't work", and the capex is laughable.
Secondly, there's a corporate paralysis over AI.
I received a panicky policy statement, written in legalese, forbidding employees from using LLMs in any form. It was written out of panic about intellectual property leaking, but also panic about how to manage and control staff going forward.
I think a lot of corporates still clutch at this view that AI will push workforce costs down, and are secretly wasting a lot of money failing at this.
The waste is extraordinary, but it's other people's money (it's actually the shareholders' money), and it's seen as being all for a good cause and not something to discuss after it's gone. I can never get it discussed.
Meanwhile, at a grass-roots level, I see AI being embraced and improving productivity; every second IT worker is using it. It's just that, because of this corporate panicking and mismanagement, its value is not yet measured.
Worker productivity is secondary to business destruction, which is the primary event we're really waiting for.
The more likely scenario is that if those tools made developers so much more productive, we would see a large surge in new companies, with 1 to 3 developers creating things that were previously deemed too hard for them to do.
But it's still possible that we didn't give people enough time yet.
Note: I’m talking about your run-of-the-mill SE wagie work, not startups where your food is based on your output.
The only reason this existed in the first place is because measuring performance is extremely difficult, and becomes more difficult the more complex a person's job is.
AI won't fix that. So even if you eliminate 50% of your employees, you won't be eliminating the bottom 50%. At worst, and probably what happens on average, your choices are about as good as random choice. So you end up with the same proportion of shitty workers as you had before. At worst worst, you actively select the poorest workers because you have some shitty metrics, which happens more often than we'd all like to think.
So, what you're describing is a mythical situation for me. But - US corporations are fabulously rich, or perhaps I should say highly-valued, and there are lots of investors to throw money at things I guess, so maybe that actually happens.
Note that AI wipes out the jobs, but not the tasks themselves. So if that's true, as a consumer, expect more sleepwalked, half-assed products, just created by AI.
I just wish that instead of getting more efficient at generating bullshit, we could just eliminate the bullshit.
It's the people who are constantly working, too busy to be seen, producing output and keeping the lights on, who don't have time for the "games", that AI is going for. Their jobs are easier to define, since they are productive and do "something", so it's easy to market AI products for these use cases. After all, these people are usually not the ones in charge of the purse strings in most organisations, for better or worse.
Management will be thrilled.
Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.
Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.
Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.
The whole idea of interns is that they are training positions. They are supposed to be a net negative.
The idea is that they will either remain at the company after their internship, or move to another company, taking the priorities of their trainers with them.
But nowadays, with corporate HR actively doing everything it can to screw over employees, and employees so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.
At my old company, we trained Japanese interns. They would often relocate to the US on 2-year visas, and became very good engineers upon returning to Japan. It was well worth it.
Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.
Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.
The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.
But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.
So... why exactly is the entry level screwed?
Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.
You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.
Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.
Many reasons why AI can't fill your shoes:
1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.
2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.
Junior and Interns can also use AI tools.
This feels like the ultimate pulling up the ladder after you type of move.
This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather run-of-the-mill cost cutting. Suppose a tech bubble pops and there are mass layoffs (like the dotcom bubble): obviously people will lose their jobs, and AI hype merchants will almost certainly push the narrative that those losses are from AI advancements, in an effort to retain funding.
I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.
At the same time, there are some amazing recent grads and even interns who can't get jobs.
We've been hiring the younger group, and contracting for a few days a week with the more experienced people.
Combine that with AI, and you've got a powerful combination. That's our theory anyway.
It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.
1. Because, generally, they don't.
2. Because an LLM is not a person, it's a chatbot.
3. "Hire an intern" is that US thing when people work without getting real wages, right?
Grrr :-(
You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.
Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.
There have never been that many businesses able to hire novices for this reason.
The AI will definitely require handholding. And that hand-holder will be an intern or a recent college-grad.
A company that I know of also has an L3 hiring freeze, and some people are being downgraded from L4 to L3 or L5 to L4. Getting more work for less cost.
AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...
Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.
If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?
> I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.
Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.
I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.
If LLMs continue to become more powerful, hiring more juniors who can use them will be a no-brainer.
The same thing will happen to Gen Z because of AI.
In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only of entry-level jobs, but of every job. The tech sector is going to spend the next decade clawing back the high cost of tech people from the last 15-20 years.
The hubris here is that we've had an unprecedented boom, such that many in the workforce have never experienced a recession, what I'd call "children of summer" (to borrow a George R.R. Martin'ism). People have fallen into the trap of the myth of meritocracy. Too many people think that those who are living paycheck to paycheck (or are outright unhoused) are somehow at fault, when spiralling housing costs, limited opportunities and stagnant real wages are pretty much responsible for everything.
All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.
I spend a lot of time encouraging people not to fight the tide, and instead to spend that time intentionally experimenting and seeing what they can do. LLMs are already useful, and it's interesting to me that anybody argues they're just good for toy applications. That is a poisonous mindset, and for an individual it results in a potentially far worse outcome than over-hyping AI does.
I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.
I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.
The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)
The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.
In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.
Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.
My message to the entry-level folks that are in this weird time period. It's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism brought upon by years of industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.
The amount of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people that have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating it and experimenting with it. LLM capabilities are still changing every 6 months-1 year. Anybody that has decided concretely that there is nothing to see here is misleading you.
Even in the current state of LLMs, if the critics don't see the value and how powerful they are, it's mostly a lack of imagination at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM, then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important, and I'd agree, but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.
This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.
In the absence of reliable prediction, quick reaction is what wins.
Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.
And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as a shorthand to create the impression of massive capability. It’s a form of marketing, as the CNN piece says.
Those people who were able to get work were now subject to a much more dangerous workplace and forced into a more rigid legalized employer/employee structure, which was a relatively new "corporate innovation" in the grand scheme of things. This, of course, allowed/required the state to be on the hook for enforcement of the workplace contract, and you can bet that both public and private police forces were used to enforce that contract with violence.
Certainly something to think about for all the users on this message board who are undoubtedly more highly skilled craftspeople than most, and would never be caught up in a mass economic displacement driven by the introduction of a new technological innovation.
At the very least, it's worth a skim through the Wikipedia article: https://en.wikipedia.org/wiki/Luddite
I think this situation is very similar in terms of the underestimation of scope of application, however differs in the availability of new job categories - but then that may be me underestimating new categories which are as yet as unforeseen as stokers and train conductors once were.
For instance, upper-middle-class and middle-class individuals in countries like India and Thailand often have access to better services in restaurants, hotels, and households compared to their counterparts in rich nations.
Elderly care and health services are two particularly important sectors where society could benefit from allocating a larger workforce.
Many others will have roles to play building, maintaining, and supervising robots. Despite rapid advances, they will not be as dexterous, reliable, and generally capable as adult humans for many years to come. (See: Moravec's paradox).
Sure it is painful but a ZIRP economy doesn't listen to the end consumers. No reason to innovate and create crazy ideas if you have plenty of income.
Even if you think all the naysayers are “luddites”, do you really think it’s a great idea to have no backup plan beyond “whupps we all die or just go back to the Stone Age”?
Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and giving an alternative, already circulated theory that AI companies CEOs are creating a false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ ".
Is this really the level of analysis CNN has to offer on this topic?
They could have sketched the growth in foundation model capabilities vs. finite resources such as data, compute and hardware. They could have written about the current VC market and the need for companies to show results, not promises. They could have even written about the giant biotech industry and its struggle to incorporate novel, exciting drug-discovery tools with slow-moving FDA approvals. None of this was done here.
Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."
You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.
We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.
When this bubble bursts, the IT industry will collapse for some years like in 2000.
This isn't very informative. Indeed, engaging in this argument-by-analogy betrays a lack of actual analysis, credible evidence, and justification for a position. Arguing "by analogy" in this way, picking and choosing the analogy, just restates your position; it doesn't give anyone reasons to believe it.
But that didn’t happen. All of the people like pg who drew these accelerating graphs were wrong.
In fact, I think just about every commenter on COVID was wrong about what would happen in the early months regardless of political angle.
Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.
If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.
Besides the labor economist bit, it also makes the correct point that tech people regularly exaggerate and lie. A great example of this is biotech, a field I work in.
This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.
It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.
It's not CNN-exclusive. News media that did not evolve toward clicks, riling people up, hate-watching, and paid propaganda for the highest bidder went extinct a decade ago. This is what did evolve.
Not just this topic.
We will wake up in 5 years to find we traded people for a dependence on a handful of companies that serve LLMs and make inference chips. It's beyond dystopian.
So far, for any given automation, each actor gets to cut their own costs to their benefit — and if they do this smarter than anyone else, they win the market for a bit.
Every day the turkey lives, they get a bit more evidence the farmer is an endless source of free food that only wants the best for them.
It's easy to fool oneself that the economics are eternal with reference to e.g. Jevons paradox.
And if it could think, it would probably be very proud of the quarter (hour) figures that it could present. The Number has gone up, time for a reward.
I guess funding for processing power and physical machinery to run the AI backing a product would be the biggest barrier to entry?
All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?
It's easy to imagine a world in which there are way less white collar workers and everything else is pretty much the same.
It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has fewer widgets.
It's also easy to imagine a world in which you're able to cut more workers than everyone else, and on aggregate, barely anyone is impacted, but your margins go up.
There's tons of other scenarios, including the most cited one - that technology thus far has always led to more jobs, not less.
They're probably believing any combination of these concepts.
It's not guaranteed that if there's 5% less white-collar workers per year for a few decades that we're all going to starve to death.
In the future, if trends continue, there are going to be far fewer workers, since a huge portion of the population will be old and retired.
You can lose x% of the work force every year and keep unemployment stable...
A large portion of the population wants a lot more people to be able to not work and get entitlements...
It's pretty easy to see how a lot of people can think this could lead to something good, even if you think all those things are bad.
Two people can see the same painting in a museum, one finds it beautiful, and the other finds it completely uninteresting.
It's almost like asking - how can someone want the Red team to win when I want the Blue team to win?
Your UBI will be controlled by the government, you will have even less agency than you currently have and a hyper elite will control the thinking machines. But don't worry, the elite and the government are looking out for your best interest!
Sure there can be rich people who are radical enough to push for another phase of capitalism.
That’s a kind of a capitalism which is worse for workers and consumers. With even more power in the hands of capitalists.
It just happens that up to this point there have been things that couldn't be done by capital. Now we're entering a world where there isn't such a thing and it is unclear what that implies for the job market. But people not having jobs is hardly a bad thing as long as it isn't forced by stupid policy, ideally nobody has to work.
They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.
On the first point, unemployment during the Great Depression was “only” 30%. And those people were eventually able to find other jobs. Here, we are talking about permanent unemployment for even larger numbers of people.
The Luddites were right. Machines did take their jobs. Those individuals who invested significantly in their craft were permanently disadvantaged. And those who fought against it were executed.
And on point 2, to be precise, a lack of jobs doesn’t mean a lack of problems. There are a ton of things society needs to have accomplished, and in a perfect world the guy who was automated out of packing Amazon boxes could open a daycare for low income parents. We just don’t have economic models to enable most of those things, and that’s only going to get worse.
One of which was the occupation of being a computer!
Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.
But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% of human work will be consolidated into the full-time jobs of 20% of the original headcount, while the other 80% of the people get fired.
AI does not need to do 100% of any job (as that job is defined today) to still result in large-scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get done.
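The arithmetic behind this claim fits in a few lines (the 80% figure is the comment's illustrative assumption, not data):

```python
# Toy model of the "AI does 80% of your job" scenario described above.
# ai_share is the fraction of each role's work the AI absorbs; the
# residual human work is assumed to consolidate into fewer full roles.
def remaining_headcount(headcount: int, ai_share: float) -> int:
    human_share = 1.0 - ai_share
    return round(headcount * human_share)

print(remaining_headcount(100, 0.80))  # a 100-person team shrinks to 20
```

The point is that even a modest-looking per-role automation share translates directly into a large headcount reduction once the residual work is pooled.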
As an employee, any efficiency gains you get from AI belong to the company, not you.
If you don’t snatch up the smartest engineers before your competition does: you lose.
Therefore, at a certain level of company, hiring is entirely dictated by what the competition is doing. If everyone is suddenly hiring, you'd better start too. If no one is, you can relax. You could also pull ahead by hiring rapidly, but that will tip off competitors and they too will begin hiring.
Whether or not you have any use for those engineers is irrelevant. So AI will have little impact on hiring trends in this market. The downturn we’ve seen in the past few years is mostly driven by the interest rate environment, not because AI is suddenly replacing engineers. An engineer using AI gives more advantage than removing an engineer, and hiring an engineer who will use AI is more advantageous than not hiring one at all.
AI is just the new excuse for firing or not hiring people, previously it was RTO but that hype cycle has been squeezed for all it can be.
> ...I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero.
The end of free money probably has to do with why C-level types are salivating at AI tools as a cheaper potential replacement for some employees, but describing the interest rates returning to nonzero percentages as going insane is really kind of a... wild take?
The period of interest rates at or near zero was a historical anomaly [1]. And that policy clearly resulted in massive, systemic misallocation of investment at global scale.
You're describing it as if that was the "normal?"
[1]: https://www.macrotrends.net/2015/fed-funds-rate-historical-c...
1a. most seed/A stage investing is acyclical because it is not really about timing for exits, people just always need dry powder
1b. tech advancement is definitely acyclical - alexnet, transformers, and gpt were all just done by very small teams without a lot of funding. gpt2->3 was funded by microsoft, not vc
2a. (i have advance knowledge of this bc i've previewed the keynote slides for ai.engineer) free vc money slowed in 2022-2023 but has not at all dried up and in fact reaccelerated in a very dramatic way. up 70% this yr
2b. "vc" is a tenuous term when all big labs are >>10b valuation and raising from softbank or sovereign wealth. it's no longer vc, it's about reallocating capital from publics to privates because the only good ai co's are private
"Starting" is doing a hell of lot of work in that sentence. I'm starting to become a billionaire and Nobel Prize winner.
Anyway, I agree with Mark Cuban's statement in the article. The most likely scenario is that we become more productive as AI complements humans. Yesterday I made this comment on another HN story:
"Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.
But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises."
Suggests you are accumulating money, not losing it. That I think is the point of the original comment: AI is getting better, not worse. (Or humans are getting worse? Ha ha, not ha ha.)
It wasn’t just Elon. The hype train on self driving cars was extreme only a few years ago, pre-LLM. Self driving cars exist sort of, in a few cities. Quibble all you want but it appears to me that “uber driver” is still a popular widespread job, let alone truck driver, bus driver, and “car owner” itself.
I really wish the AI ceos would actually make my life useful. For example, why am I still doing the dishes, laundry, cleaning my house, paying for landscaping, painters, and on and on? In terms of white collar work I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem
TBH, I do think that AI can deliver on the hype of making tools with genuinely novel functionality. I can think of a dozen ideas off the top of my head just for the most-used apps on my phone (photos, music, messages, email, browsing). It's just going to take a few years to identify how to best integrate them into products without just chucking a text prompt at people and generating stuff.
Like in Europe where you're forced to pay a notary to start a business: it's not really even necessary, never mind something that couldn't be automated; it's just part of the establishment propping up bureaucrats.
Whereas LLMs and generative models in art and coding for example, help to avoid loads of bureaucracy in having to sort out contracts, or even hire someone full-time with payroll, etc.
Same as a washing machine / dryer. Chuck the clothes in, press a button, done.
There are Roomba style lawnmowers for your grass cutting.
I'll grant you painting a house and plumbing a toilet aren't there yet!
If you want to waste my time with an automated nonsense we should at least even the playing field.
This is feasible with today’s technology.
Rule 0 is that you never put your angel investors out of work if you want to keep riding on the gravy train
I truly believe this type of paper doesn't deserve to be valued so much.
This is not a matter of whether AI will replace humans wholesale. There are three more predominant effects:
1. You'll need fewer humans to do the same task. In other forms of automation, this has led to a decrease in employment.
2. The supply of capable humans increases dramatically.
3. Expertise is no longer a perfect moat.
I’ve seen 2. My sister nearly flunked a coding class in college, but now she’s writing small apps for her IT company.
And for all of you who pooh-pooh that as unsustainable: I became proficient in Rust in a week, and I picked up Svelte in a day. I've written a few shaders too! The code I've written is pristine. All those conversations about "should I learn X to be employed" are totally moot. Yes, APL would be harder, but it's definitely doable. This is an example of 3.
Overall, this will surely cause wage growth to slow and maybe decrease. In turn, job opportunities will dry up and unemployment might ensue.
For those who still don’t believe, air traffic controllers are a great thought experiment—they’re paid quite nicely. What happens if you build tools so that you can train and employ 30% of the population instead of just 10%?
Cynically, I'm happy we have this AI generated code. It's gonna create so much garbage and they'll have to pay good senior engineers more money to clean it all up.
fucking lmao
Productivity doesn’t increase on its own; economists struggle to separate it from improved processes or more efficient machinery (the “multi factor productivity fudge”). Increased efficiency in production means both more efficient energy use AND being able to use a lot more of it for the same input of labour.
(ftr i’m not even taking a side re: is AI going to take all the jobs. regardless of what happens the fact remains that the reporting has been absolute sh*t on this. i guess “the singularity is here” gets more clicks than “sales person makes sales pitch”)
Exactly. These people are growth-seekers first, domain experts second.
Yet I saw progressive[1] outlets treating this as neutral reporting. So apparently it takes a "legacy media" outlet to wake people out of their AI stupor.
[1] American news outlets that lean social-democratic
AI / GP robotic labor will not penetrate the market so much in existing companies, which will have huge inertial buffers, but more in new companies that arise in specific segments where the technology proves most useful.
The layoffs will come not as companies replace workers with AI, but as AI companies displace non-AI companies in the market, followed by panicked restructuring and layoffs in those companies as they try to react, probably mostly unsuccessfully.
Existing companies don’t have the luxury of buying market share with investor money, they have to make a profit. A tech darling AI startup powered by unicorn farts and inference can burn through billions of SoftBank money buying market share.
The fallacy is in the statement “AI will replace jobs.” This shirks responsibility, which immediately diminishes credibility. If jobs are replaced or removed, that’s a choice we as humans have made, for better or worse.
Supposing that you are trying to increase AI adoption among white-collar workers, why try to scare the shit out of them in the process? Or is he instead trying to sell to the C-suite?
Of course, in the medium term, those companies may find out that they needed those people, and have to hire, and then have to re-train the new people, and suffer all the disruption that causes, and the companies that didn't do that will be ahead of the game. (Or, they find out that they really didn't need all those people, even if AI is useless, and the companies that didn't get rid of them are stuck with a higher expense structure. We'll see.)
This reminds me of the "Walter White" meme: "I am the documentation." When the CEO of a company that makes LLMs says something like that, "I perk up and listen" (to quote the article).
When a doctor says "the water in my village is bad quality; it gives diarrhea to 30% of the villagers," I don't need a fancy study from some university. The doctor "is the documentation." So when Anthropic/ChatGPT/LLaMA/etc. (mixing companies and products, but it's OK) say so-and-so, they see the integrations, enhancements, compliments, companies ordering _more_ subscriptions, etc.
In my current company (high volume, low profit margin) they told us "go all in on AI". They see that (e.g. with Notion-like-tools) if you enable the "AI", that thing can save _a lot_ of time on "Confluence-like" tasks. So, paying $20-$30-$40 per person, per month, and that thing improving the productivity/output of an FTE by 20%-30% is a massive win.
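The cost-benefit math the comment gestures at is easy to check (the salary figure below is a hypothetical assumption; the seat price and productivity uplift are the comment's):

```python
# Rough per-employee ROI of an AI seat: value of the extra output minus
# the subscription cost. All inputs are monthly figures.
def seat_roi(monthly_salary: float, productivity_uplift: float, seat_cost: float) -> float:
    return monthly_salary * productivity_uplift - seat_cost

# Hypothetical $5,000/month FTE, 20% uplift (the comment's low end), $40 seat:
print(seat_roi(5000.0, 0.20, 40.0))  # 960.0 of extra value per month
```

Even at the low end of the claimed uplift, the seat cost is noise next to the salary, which is why management finds this an easy sell.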
So yes, we keep the ones we got (because mass firings, ministry of 'labour', unions, bad marketing, etc.). Headcount will organically be reduced (retirements, getting a new job, etc.) combined with minimizing new hires, and boom! savings!!
I won't paste in the result here, since everyone here is capable of running this experiment themselves, but trust me when I say ChatGPT produced (in mere seconds, of course) an article every bit as substantive and well-written as the cited article. FWIW.
"Move fast and break things" - Zuckerberg
"A good plan violently executed now is better than a perfect plan executed next week." - George S. Patton
Yet when tech CEOs do the same thing, people tend to perk up.
Silicon Valley and Redmond make desperate attempts to argue for their own continued relevance.
For Silicon Valley VC, software running on computers cannot be just a tool. It has to cause "disruption". It has to be "eating the world". It has to be a source of "intelligence" that can replace people.
If software and computers are just boring appliances, like yesterday's typewriters, calculators, radios, TVs, etc., then Silicon Valley VC may need to find a new line of work. Expect the endless media hype to continue.
No doubt soda technology is very interesting. But people working at soda companies are not as self-absorbed, detached from reality and overfunded as people working for so-called "tech" companies.
The funny part is, most of those juniors were hired in 2022-2024, and they were better hires because of the harsher market. There were a bunch of "senior engineers" who were borderline useless and joined some time between 2018-2021
I just think it's kind of funny to fire the useful people and keep the more expensive ones around who try to do more "managerial" work and have more family obligations. Smart companies do the opposite
I’d love a journalist using Claude to debunk Dario: “but don’t believe me, I’m just a journalist - we asked Dario’s own product if he’s lying through his teeth, and here’s what it said:”
The demand for these products probably wasn't where it was expected at the time. Perhaps the answer to its biggest effect lies in how it frees up human potential and time.
If AI can do that — and that is a big if — then how and what would you do with that time? Well ofc, more activity, different ways to spend time, implying new kinds of jobs.
I've been a heavy user of AI ever since ChatGPT was released for free, and I've been tracking its progress relative to the work done by humans at large. I've concluded that its improvements over the last few years are not across-the-board changes; they benefit specific areas more than others. And unfortunately for AI hype believers, those happen to be areas such as art, which provide a big flashy "look at this!" demonstration of AI's power. But... try letting AI come up with a nuanced character for a novel, or design an amplifier circuit, or pick stocks, or do your taxes.
I'm a bit worried about YCombinator. I like Hacker News. I'm a bit worried that YC has so much riding on AI startups. After machine learning, crypto, the post-Covid 19 healthcare bubble, fintech, NFTs, can they take another blow when the music stops?
Think of it as an IQ test of how new technology is used
Let me give you an easier example of such a test
Let's say they suddenly develop nearly-free unlimited power, ie. fusion next year
Do you think the world would become more peaceful, or would there be much more war?
If you think peaceful, you fail. Of course there would be more war; it's all about oppression.
It's always about the few controlling the many
The "freedom" you think you feel on a daily basis is an illusion quickly faded
It flickers for a moment, then it either says
"In 2025, mankind vastly underestimated the amount of jobs AI can do in 2035"
or
"In 2025, mankind vastly overestimated the amount of jobs AI can do in 2035"
How would you use that information to invest in the stock market?
Money is just rationing. If you devalue the economy implicitly you accept that, and the consequences for society at large.
Lenin's dictum comes to mind: a capitalist will sell you the rope you hang him with.
1. cure cancer
2. fix the economy
3. keep everybody happily employed.
And he's saying we can only pick two, or pick one. Except for the last one, that's not really an option.
I am not saying this is a nothing burger, the tech can be applied to many domains and improve productivity, but it does not think, not even a little, and scaling won’t make that magically happen.
Anyone paying attention should understand this fact by now.
There is no intelligence explosion in sight, what we’ll see during the next few years is a gradual and limited increase in automation, not a paradigm change, but the continuation of a process that started with the industrial revolution.
Even older people prefer to hire younger people.
But the last few paragraphs of the piece kind of give away the game — the author is an AI skeptic judging only the current products rather than taking in the scope of how far they’ve come in such a short time frame. I don’t have much use for this short sighted analysis. It’s just not very intelligent and shows a stubborn lack of imagination.
It reminds me of that quote “it is difficult to get a man to understand something, when his salary depends on his not understanding it.”
People like this have banked their futures on AI not working out.
(ftr i’m not even taking a side re: will AI take all the jobs. even if they do, the reporting on this subject by MSM has been abysmal)
however there seems to be a big disconnect on this site and others
If you believe AGI is possible and that AI can be smarter than humans in all tasks, naturally you can imagine many outcomes far more substantial than job loss.
However, many people don't believe AGI is possible, and thus will never consider those possibilities.
I fear many will deny the probability that AGI could be achieved in the near future, thus leaving themselves and others unprepared for the consequences. There are so many potential bad outcomes that could be avoided merely if more smart people realized the possibility of AGI and ASI, and would thus rationally devote their cognitive abilities to ensuring that the potential emergence of smarter than human intelligences goes well.
As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.
Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale that they're projecting. But these are incredibly useful tools that can enhance the efficiency of almost any job that involves sitting behind a computer. Even just Copilot autocomplete, or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" you can get from a model that can actually run on your laptop is also getting much better very quickly.
The way I see it, either the AI hype ends up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I made a lot of money in the meantime. Or the AI hype lives up to its promises, but likely over a much longer period of time, and we'll have to test whether we can live with that. Personally I'm all for a fully automated luxury communism model of government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.
It is confusing because many of the dismissals come from programmers, who are unequivocally the prime beneficiaries of genAI capability as it stands.
I work as a marketing engineer at a ~1B company and the amount of gains I have been able to provide as an individual are absolutely multiplied by genAI.
One theory I have is that maybe it is a failing of prompting ability that causes the doubt. Prompting, fundamentally, is querying a vector space for a result, and there is a skill to it. There is a gross lack of tooling to assist with this, which I attribute to a lack of awareness of the fact. The vast majority of genAI users don't have any sort of prompt library or methodology to speak of, beyond a set of usual habits that work well for them.
Regardless, the common notion that AI has only marginally improved since GPT-4 is criminally naive. The notion that we have hit a wall has merit, of course, but you cannot ignore the fact that we just got accurate 1M-token context in a SOTA model with Gemini 2.5 Pro. For free. Mere months ago. This is a leap. If you have not experienced it as a leap, then you are using LLMs incorrectly.
You cannot sleep on context. Context (and proper utilization of it) is literally what shores up 90% of the deficiencies I see complained about.
AI forgets libraries and syntax? Load in the current syntax. Deep research it. AI keeps making mistakes? Inform it of those mistakes and keep those stored in your project for use in every prompt.
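The workflow described here can be sketched as a small prompt-assembly helper (file names and layout are hypothetical, and this only builds the prompt string; sending it to a model is up to you):

```python
from pathlib import Path

# Prepend stored project context (current library docs, a running log of
# the model's past mistakes) to every prompt, as described above.
def build_prompt(task: str, context_files: list[str]) -> str:
    parts = []
    for name in context_files:
        path = Path(name)
        if path.exists():  # skip context files that are not present
            parts.append(f"--- {name} ---\n{path.read_text()}")
    parts.append(f"--- task ---\n{task}")
    return "\n\n".join(parts)

# Hypothetical project layout:
prompt = build_prompt(
    "Refactor the parser module.",
    ["docs/current_syntax.md", "notes/known_mistakes.md"],
)
```

The design point is that the context is persistent and versioned alongside the project, so every query starts from the accumulated corrections instead of from scratch.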
I consistently make 200k+ token queries of code and context and receive highly accurate results.
I build 10-20k loc tools in hours for fun. Are they production ready? No. Do they accomplish highly complex tasks for niche use cases? Yes.
The empowerment of the single developer who is good at manipulating AI AND an experienced dev/engineer is absolutely incredible.
Deep research alone has netted my company tens of millions in pipeline, and I just pretend it's me. Because that's the other part that maybe many aren't realizing: it's right under your nose, constantly.
The efficiency gains in marketing are hilariously large. There are countless ways to avoid 'AI slop', and it involves, again, leveraging context and good research, and a good eye to steer things.
I post this mostly because I'm sad for all of the developers who have not experienced this. I see it as a failure of effort (based on some variant of emotional bias or arrogance), not a lack of skill or intellect. The writing on the wall is so crystal clear.
History is always strikingly similar; the AI revolution is the fifth industrial revolution, and it is wise to embrace AI and collaborate with it as soon as possible.