just look at this:
https://fred.stlouisfed.org/graph/?g=1JmOr
In terms of magnitude, the effect of this is just enormous and still being felt; software job postings never recovered to pre-2020 levels and may never. (With pre-pandemic job postings indexed to 100, it's at 61 for software.)
Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, with pre-pandemic indexed to 100). Banking jobs still hover around 100.
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.
In 2000 I moved cities with a job lined up at a company run by my friends. I had about 15 good friends working there, including the CEO, and I was guaranteed a job in software development; the interview was supposed to be just a formality. So I moved, went in to see the CEO, and he told me he could not hire me: the funding was cut and there was a hiring freeze. I was devastated. Now what? I had to freelance and live on whatever I could scrape together, which was a few hundred bucks a month if I was lucky. Fortunately the place I moved into was a big house with my friends who worked at said company, and since my rent was so low at the time, they covered me for a couple of years. I did eventually get some freelance work from the company, but things did not really recover until about 2004, when I finally got a full-time programming job after 4 very difficult years.
So many tech companies over-hired during covid; there was a gigantic bubble with FAANG and every other tech company at the time. The crash in tech jobs was inevitable.
I feel bad for people who got left out in the cold this time, I know what they are going through.
AI may give us more efficiency, but it will be filled with more bullshit jobs and consumption, not more leisure.
That said, the vibe has definitely shifted. I started working in software in uni ~2009 and every job I've had, I'd applied for <10 positions and got a couple offers. Now, I barely get responses despite 10x the skills and experience I had back then.
Though I don't think AI has anything to do with it; it's probably more the explosion of cheap software labor on the global market, so you have to compete with the whole world for a job in your own city.
Kinda feels like some major part of the gravy train is over.
Because he suddenly had to pay interest on that gigantic loan he (and his business associates) took to buy Twitter.
It may not be the only reason for everything that happened, but it sure is simple and has some very good explanatory powers.
That part is so overblown. Twitter was still trying to hit moonshots; X is basically in "keep the lights on" mode, as Musk doesn't need more. Yeah, if Google decides it doesn't want to grow anymore, it can probably cut its workforce by 90%. And it will be as irrelevant as IBM within 10 years at most.
"When interest rates return to normal levels, the ZIRP jobs will disappear." -- Wall Street analyst
Almost no one has seen a world where the price of money wasn't centrally planned: a committee of experts deciding it based on gut feel, like they did in command economies like the Soviet Union.
And then thousands of people's lives are disrupted as the interest rate swings wildly due purely to government action (corona lockdowns and fed zirp response), and it all somehow just ends up people talking about AI instead.
The true wrongdoers get absolutely no consequences, and we all just carry on like there's no problem. Often because our taxes go to paying hordes of academics and economists to produce layers and layers of sophisticated propaganda that of course this system is the best one.
Absurd and shitty world.
p.s.: I'm a big fan of yours on Twitter.
I wonder what happened in January 2025...
I'd also highlight that beyond over-hiring being responsible for the downturn in tech employment, I think offshoring is way more responsible for the reduction in tech than AI when it comes to US jobs. Video conferencing tech didn't get really good and ubiquitous (especially for folks working from home) until the late teens, and since then I've seen an explosion of offshore contractors. With so many folks working remotely anyway, what does it matter if your coworker is in the same city or a different continent, as long as there is at least some daily time overlap (which is also why I've seen a ton of offshoring to Latin America and Europe over places like India).
Both sides of the aisle retreated from domestic labor protection for their own different reasons so the US labor force got clobbered.
I have never once worked with a product manager who I could describe as “worth their weight in gold”.
Not saying they don’t exist, but they’re probably even rarer than you think.
These types all go to the same schools and do really well, interview the same, and value the prestige of working in big tech. So it's pretty easy to identify them and offer them a great career path and take them off the market.
Technical founders are way trickier to identify as they can be dropouts, interview poorly, not value the prestige etc.
Software was truly, truly insane for a bit there. Straight out of college, no-name CS degree, making $120k, $150k (back when $120k really meant $120k)? The music had to stop on that one.
Of course, that growth in wages in this sector was a contributing factor to home/rental price increases as the "market" could bear higher prices.
Then they had some disappointing results due to their bad decision-making elsewhere in the company, and they turned to my friend and said "Let's lay off some of your guys."
Some managers read Dilbert and think it's intended as advice.
Which is the sole reason automation will not make most people obsolete until the VP level themselves are automated.
I’m worried about the shrinking number of opportunities for juniors.
I have definitely seen real world examples where adding junior hires at ~$100k+ is being completely forgone when you can get equivalent output from someone making $40k offshore.
And AI cannot provide that kind of value. Will a VP in charge of 100 AI agents be respected as much as a VP in charge of 100 employees?
At the end of the day, we're all just monkeys throwing bones in the air in front of a monolith we constructed. But we're not going to stop throwing bones in the air!
And there's multiple confounding factors at play.
Yes, lots of jobs are bullshit, so maybe AI is a plausible excuse to downsize and gain efficiency.
But also the dynamic that causes the existence of bullshit jobs hasn't gone away. In fact, assuming AI does actually provide meaningful automation or productivity improvement, it might well be the case that the ratio of bullshit jobs increases.
- Value creators (i.e. the ones historically carrying companies per the 80/20 rule) are generally the ones cautious and/or fearful of AI. Their output is measurable and definable, so it's able to be automated.
- Conversely, the people in the jobs you mention in your post are usually the ones most excited about AI: the ones in meetings all day, in the corporate machine. By definition their job is already not well defined anyway; IMV this is harder to automate. They are often there for reasons other than "productive output", e.g. compliance, nepotism, stakeholder management, etc.
I’ve seen those guys; it is painful to watch.
This had me thinking, how are they going to get "clout", by comparing AI spending?
First, is AI really a better scapegoat? "Reducing headcount due to end of ZIRP" maybe doesn't sound great, but "replacing employees with AI" sounds a whole lot worse from a PR perspective (to me anyway).
Second, are companies actually using AI as the scapegoat? I haven't followed it too closely, but I could imagine that layoffs don't say anything about AI at all, and it's mostly media and FUD inventing the correlation.
whereas "AI" is intuitively an external force; it's much harder to assign blame to company leadership.
Because they don't have to do that. They could just operate at max efficiency all the time.
Instead, they spread the wealth a bit by having bullshit jobs, even if the existence of these jobs is dependent on the market cycle.
I do.
It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.
I'm a person that would be hypothetically supportive of something like DOGE cuts, but I'd rather have people earning a living even with Soviet-style make work jobs than unemployed. I don't desire to live in a cutthroat "competitive" society where only "talent" can live a dignified life. I don't know if that's "wealth distribution" or socialism or whatever; I don't really care, nor make claim it's some airtight political philosophy.
I think quotes around "real value" would be appropriate as well. Consider all the great engineering it took to create Netflix, valued at $500b - which achieves what SFTP does for free.
If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.
For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.
For example, I founded a SaaS company late last year which has been growing very quickly. We are on track to pass $1M ARR before the company's first birthday. We are fully bootstrapped, 100% founder owned. There are 2 of us. And we feel confident we could keep up this pace of growth for quite a while without hiring or taking capital. (Of course, there's an argument that we could accelerate our growth rate with more cash/human resources.)
Early in my career, at different companies, we often solved capacity problems by hiring. But my cofounder and I have been able to turn to AI to help with this, and we keep finding double digit percentage productivity improvements without investing much upfront time. I don't think this would have been remotely possible when I started my career, or even just a few years ago when AI hadn't really started to take off.
So my theory as to why it doesn't appear to be "painfully obvious": you've never heard of most of the businesses getting the most value out of this technology, because they're all too small. On average, the companies we know about are large. It's very difficult for them to reinvent themselves on a dime to adapt to new technology - it takes a long time to steer a ship - so it will take a while. But small businesses like mine can change how we work today and realize the results tomorrow.
No.
The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things; regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.
One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.
https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...
> The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.
> The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.
> “The last 5 per cent now matters because the rest is now a commodity,” he said.
In my eyes, that is major. Junior ibankers are not cheap -- they make about 150K USD per year minimum (total comp).

Ok, so by 2027 we should have fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI, of course). I'm so excited for a future where IT projects stop going over time and over budget and deliver more value than expected. Can you blame us for thinking this is too good to be true?
In complex systems, you can't necessarily perceive the result of large internal changes, especially not with the tiny amount of vibes sampling you're basing this on.
You really don't have the pulse on how fast the average company is shipping new code changes, and I don't see why you think you would know that. Shipping new public end-use features isn't even a good signal, it's a downstream product and a small fraction of software written.
It's like thinking you are picking up a vibe related to changes in how many immigrants are coming into the country month to month when you walk around the mall.
Doesn't really matter if AI actually works or not.
E.g. look at the indie games count on steam by year: https://steamdb.info/stats/releases/?tagid=492
I don't get it either. You hire someone in the hope of ROI. Some things work, some kinda don't. Now people will be n times more productive, therefore you should hire fewer people??
That would mean you have no ideas. It says nothing about the potential.
Shipping features faster != innovation or improvements to existing products
AI also helps immensely in creating those other inefficiencies.
> shipping features and fixes faster than ever before
Meanwhile Apple duplicated my gf's contact, creating duplicate birthdays on my calendar. It couldn't find the duplicates despite matching name, nickname, phone number, birthday, and the fact that both contacts were associated with her Apple account. I manually merged and ended up with 3 copies of her birthday in my calendar... Seriously, this shit can be solved with a regex...
The number of issues like these I see is growing exponentially, not decreasing. I don't think it's AI, though, because it started before that. I think these companies are just overfitting whatever silly metrics they have decided are best.
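The commenter's point that this is trivially detectable holds up: a few lines of normalization and exact matching catch this case. A minimal sketch, assuming a hypothetical `Contact` record (not Apple's actual data model):

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Contact:
    name: str
    phone: str
    birthday: str  # e.g. "1990-05-01"

def normalize_phone(phone: str) -> str:
    # Strip everything that isn't a digit, so "+1 (555) 010-2030"
    # and "1 555 010 2030" compare equal.
    return re.sub(r"\D", "", phone)

def dedupe(contacts: list[Contact]) -> list[Contact]:
    # Treat two entries as duplicates when normalized name, phone,
    # and birthday all match; keep the first occurrence.
    seen: set[tuple[str, str, str]] = set()
    unique = []
    for c in contacts:
        key = (c.name.strip().lower(), normalize_phone(c.phone), c.birthday)
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique
```

Real contact merging has fuzzier cases (nicknames, partial fields), but for entries that agree on every field, as described above, exact matching after normalization is enough.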
Luckily software companies are not ball bearings factories.
In 1987 the economist Robert Solow said "You can see the computer age everywhere but in the productivity statistics".
We should remark he said this long before the internet, web and mobile, so probably the remark needs an update.
However, I think it cuts through the salesman hype. Anytime we see these kinds of claims, we should reply "show me the numbers". I'll wait until economists make these big claims; I will not trust CEOs and salesmen.
Content producers are blocking scrapers of their sites to prevent AI companies from using their content. I would not assume that AI is either inevitable or on an easy path to adoption. AI certainly isn't very useful if what it "knows" is out of date.
This eventually changed. Companies do figure out how to use tech, it just takes a while.
It may help you build a real product feature quicker, but AI is not necessarily doing the research and product design which is probably the bottleneck for seeing real impact.
I think the reality is less like a switch and more like there are just certain jobs that get easier and you just need fewer people overall.
And you DO see companies laying off people in large numbers fairly regularly.
That doesn't mean it isn't a real productivity gain, but it might be spread across enough domains (bugs, features, internal tools, experiments) to not be immediately or "painfully obvious".
It'll probably get more obvious if we start to see uniquely productive small teams seeing success. A sort of "vibe-code wonder".
Firstly, the capex is currently too high for all but the few.
This is a rather obvious statement, sure. But the impact is a lot of companies "have tried language models and they didn't work", and the capex is laughable.
Secondly, there's a corporate paralysis over AI.
I received a panicky policy statement written in legalese forbidding employees from using LLMs in any form, written out of panic both about intellectual property leaking and about how to manage and control staff moving forward.
I think a lot of corporates still clutch at the view that AI will push workforce costs down, and are secretly wasting a lot of money failing at this.
The waste is extraordinary, but it's other people's money (it's actually the shareholders' money), and it's seen as being all for a good cause and not something to discuss after it's gone. I can never get it discussed.
Meanwhile, at a grassroots level, I see AI being embraced and improving productivity; every second IT worker is using it. It's just that, because of this corporate panicking and mismanagement, its value is not yet measured.
Worker productivity is secondary to business destruction, which is the primary event we're really waiting for.
The more likely scenario is that if those tools make developers so much more productive, we would see a large surge in new companies, with 1 to 3 developers creating things that were previously deemed too hard for them to do.
But it's still possible that we didn't give people enough time yet.
Note: I’m talking about your run-of-the-mill SE wagie work, not startups where your food is based on your output.
The only reason this existed in the first place is because measuring performance is extremely difficult, and becomes more difficult the more complex a person's job is.
AI won't fix that. So even if you eliminate 50% of your employees, you won't be eliminating the bottom 50%. At worst, and probably what happens on average, your choices are about as good as random choice. So you end up with the same proportion of shitty workers as you had before. At worst worst, you actively select the poorest workers because you have some shitty metrics, which happens more often than we'd all like to think.
So, what you're describing is a mythical situation for me. But - US corporations are fabulously rich, or perhaps I should say highly-valued, and there are lots of investors to throw money at things I guess, so maybe that actually happens.
Note that AI wipes out the jobs, but not the tasks themselves. So if that's true, as a consumer, expect more sleepwalked, half-assed products, just created by AI.
I just wish that instead of getting more efficient at generating bullshit, we could just eliminate the bullshit.
That covers majority of sales, advertising and marketing work. Unfortunately, replacing people with AI there will only make things worse for everyone.
It's the people who are constantly working, too busy to be seen, producing output and keeping the lights on, who don't have time for the "games", that AI is going for. Their jobs are easier to define since they are productive and do "something", so it's easy to market AI products for these use cases. After all, these people are usually not the ones in charge of the purse strings in most organisations, for better or worse.
Management will be thrilled.
Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.
Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.
Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.
The whole idea of interns is that they are training positions. They are supposed to be a net negative.
The idea is that they will either remain at the company after their internship, or move to another company, taking the priorities of their trainers with them.
But nowadays, with corporate HR actively doing everything they can to screw over their employees, and employees being so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.
At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers, upon returning to Japan. It was well worth it.
Startups are less enlightened than that about "interns".
Literally today, in a startup job posting to a top CS department, they're looking for "interns" to bring (not learn) hot experience developing AI agents to this startup, for... $20/hour, and get called an intern.
It's also normal for these startup job posts to be looking for experienced professional-grade skills in things like React, Python, PG, Redis, etc., and still calling the person an intern, with a locally unlivable part-time wage.
Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" startup leadership to do things they can't do, and cut the "intern" in as a founding engineer with meaningful equity. Or, if they can't afford to pay a livable and plausibly competitive startup wage, maybe those are technical cofounders.
Damn, I wish that were me. Having someone mentor you at the beginning of your career, instead of having to self-learn and fumble your way around never knowing if you're on the right track, is a massive force multiplier that pays dividends over your whole career. It's like entering the stock market with $1 million in capital vs $100. You're also less likely to build bad habits if somebody with experience teaches you early on.
There’s no such thing as loyalty in employer-employee relationships. There’s money, there’s work, and there’s [collective] leverage. We need to learn a thing or two from blue-collar workers.
Employees are lucky when incentives align and employers treat them well. This cannot be expected or assumed.
A lot of people want a different kind of world. If we want it, we’re gonna have to build it. Think about what you can do. Have you considered running for office?
I don’t think it is helpful for people to play into the victim narrative. It is better to support each other and organize.
This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.
An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago, or some important water-cooler conversation, and incorporate that into their work. AI cannot do that.
AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.
At $70-80k the math for a new grad works out, but not at almost double that.
Also, going remote-first during COVID for extended periods proved that operations can work in a remote-first manner. At that point the argument was made that you can hire top talent abroad at American new-grad salaries, and around early-to-mid 2020 plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country, or get fired and try to find a job within 60 days.
The skills aspect also played a role to a certain extent. By the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe, where universities still treat CS as engineering instead of an applied math discipline. I don't care if you can prove Dixon's factorization method using induction if you can't tell me how threading works or what the rings in the Linux kernel are.
The Japan example mentioned above only works because Japanese salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead like Battery Chemistry).
Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.
Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.
The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.
But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.
So... why exactly is the entry level screwed?
Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that will be kids who used AI throughout elementary school and high school and know nothing at all, deferring to AI on every question, and have zero ability to tell right from wrong among the AI responses.
I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.
Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.
But, eh, what am I even talking about? The _senior_ developers in many companies need a lot of hand-holding that they aren't getting, write bad code with poor practices, and teach the newbies to get used to doing the same. So that's why the entry-level people are screwed, AI or no.
Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.
You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.
Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.
Many reasons why AI can't fill your shoes:
1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.
2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.
Juniors and interns can also use AI tools.
Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.
It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.
It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.
This feels like the ultimate pulling up the ladder after you type of move.
This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather run-of-the-mill cost cutting. Suppose a tech bubble pops and there are mass layoffs (like the dotcom bubble): obviously people will lose their jobs, and AI hype merchants will almost certainly push the narrative that these losses are from AI advancements in an effort to retain funding.
I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.
At the same time, there are some amazing recent grads and even interns who can't get jobs.
We've been hiring the younger group, and contracting for a few days a week with the more experienced people.
Combine that with AI, and you've got a powerful combination. That's our theory anyway.
It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.
1. Because, generally, they don't.
2. Because an LLM is not a person, it's a chatbot.
3. "Hire an intern" is that US thing when people work without getting real wages, right?
Grrr :-(
You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.
Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.
There have never been that many businesses able to hire novices for this reason.
The AI will definitely require handholding. And that hand-holder will be an intern or a recent college-grad.
A company that I know of also has an L3 hiring freeze, and some people are being downgraded from L4 to L3 or L5 to L4. Getting more work for less cost.
AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...
Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.
If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?
> I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.
Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.
I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.
If LLMs continue to become more powerful, hiring more juniors who can use them will be a no-brainer.
The same thing will happen to Gen Z because of AI.
In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only of entry-level jobs but of every job. The tech sector is going to spend the next decade clawing back the high costs of tech people from the last 15-20 years.
The hubris here is that we've had an unprecedented boom such that many in the workforce have never experienced a recession, what I'd call "children of summer" (to borrow a George R.R. Martinism). People have fallen into the trap of the myth of meritocracy. Too many people think that those who are living paycheck to paycheck (or are outright unhoused) are somehow at fault when spiralling housing costs, limited opportunities and stagnant real wages are pretty much responsible for everything.
All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.
I spend a lot of time encouraging people not to fight the tide, and instead to spend that time intentionally experimenting and seeing what you can do. LLMs are already useful, and it's interesting to me that anybody is arguing they're just good for toy applications. This is a poisonous mindset, and for an individual it results in a potentially far worse outcome than over-hyping AI would.
I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.
I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.
The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)
The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.
In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.
Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.
My message to the entry-level folks that are in this weird time period. It's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism brought upon by years of industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.
The number of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people who have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating and experimenting with them. LLM capabilities are still changing every 6 to 12 months. Anybody who has decided concretely that there is nothing to see here is misleading you.
Even in the current state of LLMs, if the critics don't see the value and how powerful they are, it's mostly a lack of imagination at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM, then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important, and I'd agree - but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.
This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.
In the absence of reliable prediction, quick reaction is what wins.
Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.
And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as a shorthand to create the impression of massive capability. It's a form of marketing, as the CNN piece says.
Those people who were able to get work were now subject to a much more dangerous workplace and forced into a more rigid legalized employer/employee structure, which was a relatively new "corporate innovation" in the grand scheme of things. This, of course, allowed/required the state to be on the hook for enforcement of the workplace contract, and you can bet that both public and private police forces were used to enforce that contract with violence.
Certainly something to think about for all the users on this message board who are undoubtedly more highly skilled craftspeople than most, and would never be caught up in a mass economic displacement driven by the introduction of a new technological innovation.
At the very least, it's worth a skim through the Wikipedia article: https://en.wikipedia.org/wiki/Luddite
I think this situation is very similar in terms of the underestimation of the scope of application, but differs in the availability of new job categories - though that may be me underestimating new categories which are as yet unforeseen, as stokers and train conductors once were.
For instance, upper-middle-class and middle-class individuals in countries like India and Thailand often have access to better services in restaurants, hotels, and households compared to their counterparts in rich nations.
Elderly care and health services are two particularly important sectors where society could benefit from allocating a larger workforce.
Many others will have roles to play building, maintaining, and supervising robots. Despite rapid advances, they will not be as dexterous, reliable, and generally capable as adult humans for many years to come. (See: Moravec's paradox).
You have to always keep on moving just to stay in the same place.
Sure it is painful but a ZIRP economy doesn't listen to the end consumers. No reason to innovate and create crazy ideas if you have plenty of income.
Even if you think all the naysayers are “luddites”, do you really think it’s a great idea to have no backup plan beyond “whupps we all die or just go back to the Stone Age”?
People don’t want society to collapse. So if you think it’s something that people can prevent, feel comforted that everyone is trying to prevent it.
Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and giving an alternative, already-circulated theory that AI company CEOs are creating false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic's whole ~thing.~".
Is this really the level of analysis CNN has to offer on this topic?
They could have sketched the growth in foundation model capabilities vs. finite resources such as data, compute and hardware. They could have written about the current VC market and the need for companies to show results, not promises. They could even have written about the giant biotech industry and its struggle to incorporate novel, exciting drug discovery tools under slow-moving FDA approvals. None of this was done here.
Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."
You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.
We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.
When this bubble bursts, the IT industry will collapse for some years like in 2000.
This isn't very informative. Indeed, engaging in this argument-by-analogy betrays a lack of actual analysis, credible evidence and justification for a position. Arguing "by analogy" in this way, which picks and chooses an analogy, just restates your position -- it doesn't give anyone reasons to believe it.
It's an apt comparison. The criticisms in the CNN article are already out of date in many instances.
Yeah. Imagine if COVID had actually killed 10% of the world population. Killing millions sucks, but mosquitos regularly do that too, and so does tuberculosis, and we don't shut down everything. Could've been close to a billion. Or more. Could've been so much worse.
But that didn’t happen. All of the people like pg who drew these accelerating graphs were wrong.
In fact, I think just about every commenter on COVID was wrong about what would happen in the early months regardless of political angle.
Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.
If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.
Besides the labor economist bit, it also makes the correct point that tech people regularly exaggerate and lie. A great example of this is biotech, a field I work in.
This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.
It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.
It's not CNN-exclusive. News media that did not evolve towards clicks, riling people up, hate-watching and paid propaganda for the highest bidder went extinct a decade ago. This is what did evolve.
Not just this topic.
We will wake up in 5 years to find we traded people for a dependence on a handful of companies that serve LLMs and make inference chips. It's beyond dystopian.
So far, for any given automation, each actor gets to cut their own costs to their benefit — and if they do this smarter than anyone else, they win the market for a bit.
Every day the turkey lives, they get a bit more evidence the farmer is an endless source of free food that only wants the best for them.
It's easy to fool oneself that the economics are eternal with reference to e.g. Jevons paradox.
Had to look that up: https://en.wikipedia.org/wiki/Turkey_illusion
And if it could think, it would probably be very proud of the quarter (hour) figures that it could present. The Number has gone up, time for a reward.
I guess funding for processing power and physical machinery to run the AI backing a product would be the biggest barrier to entry?
All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?
It's easy to imagine a world in which there are way less white collar workers and everything else is pretty much the same.
It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has less widgets.
It's also easy to imagine a world in which you're able to cut more workers than everyone else, and on aggregate, barely anyone is impacted, but your margins go up.
There's tons of other scenarios, including the most cited one - that technology thus far has always led to more jobs, not less.
They're probably believing any combination of these concepts.
It's not guaranteed that if there's 5% less white-collar workers per year for a few decades that we're all going to starve to death.
In the future, if trends continue, there's going to be way less workers - since there's going to be a huge portion of the population that's old and retired.
You can lose x% of the work force every year and keep unemployment stable...
A large portion of the population wants a lot more people to be able to not work and get entitlements...
It's pretty easy to see how a lot of people can think this could lead to something good, even if you think all those things are bad.
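The demographic argument a few comments up ("you can lose x% of the workforce every year and keep unemployment stable") is easy to sanity-check with a toy calculation. All rates here are illustrative assumptions, not forecasts:

```python
# Back-of-the-envelope sketch: if roles are eliminated more slowly than
# workers retire out of the labor force, job losses need not show up as
# unemployment. Rates below are purely illustrative.

def simulate(jobs, workers, job_cut_rate, retire_rate, years):
    """Compound both series annually; returns (jobs, workers) as index values."""
    for _ in range(years):
        jobs *= 1 - job_cut_rate     # roles eliminated (e.g. automated away)
        workers *= 1 - retire_rate   # workers leaving the labor force
    return jobs, workers

# 3%/yr of roles cut vs. 5%/yr of workers retiring, over a decade:
jobs, workers = simulate(100.0, 100.0, job_cut_rate=0.03, retire_rate=0.05, years=10)
# jobs shrink to roughly 74, workers to roughly 60: fewer jobs,
# yet no surplus of workers chasing them.
```

Flip the two rates and the same arithmetic produces a growing surplus of workers, which is the pessimistic scenario other commenters describe.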
Two people can see the same painting in a museum, one finds it beautiful, and the other finds it completely uninteresting.
It's almost like asking - how can someone want the Red team to win when I want the Blue team to win?
If people don’t have jobs, government doesn’t have taxes to employ other people. If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.
I can tell you for many of those professions their customers are the same white collar workers. The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...
History seems to show this doesn't happen. The trend is not linear, but the trend is that we live better lives each century than the previous century, as our technology increases.
Maybe it will be different this time though.
If you, a CEO, eliminate a bunch of white-collar workers, presumably you drive your former employees into all these jobs they weren't willing to do before, and hey, you make more profits, your kids and aging parents are better-taken-care-of.
Seems like winning in the fundamental game of society - maneuvering everyone else into being your domestic servants.
You forgot the born-wealthy.
I feel increasingly like a rube for having not made my little entrepreneurial side-gigs focused strictly on the ultra-wealthy. I used to sell tube amplifier kits, for example, so you and I could have a really high-end audio experience with a very modest outlay of cash (maybe $300). Instead I should have sold the same amps but completed for $10K. (There is no upper bound for audio equipment though — I guess we all know.)
I'm sure we are, but it doesn't look like an improvement for most people.
Your UBI will be controlled by the government, you will have even less agency than you currently have and a hyper elite will control the thinking machines. But don't worry, the elite and the government are looking out for your best interest!
In 2010, I put together a list of alternatives here to address the rise of AI and Robotics and its effect on jobs: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
Sure there can be rich people who are radical enough to push for another phase of capitalism.
That’s a kind of a capitalism which is worse for workers and consumers. With even more power in the hands of capitalists.
It just happens that up to this point there have been things that couldn't be done by capital. Now we're entering a world where there isn't such a thing and it is unclear what that implies for the job market. But people not having jobs is hardly a bad thing as long as it isn't forced by stupid policy, ideally nobody has to work.
They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.
On the first point, unemployment during the Great Depression was “only” 30%. And those people were eventually able to find other jobs. Here, we are talking about permanent unemployment for even larger numbers of people.
The Luddites were right. Machines did take their jobs. Those individuals who invested significantly in their craft were permanently disadvantaged. And those who fought against it were executed.
And on point 2, to be precise, a lack of jobs doesn’t mean a lack of problems. There are a ton of things society needs to have accomplished, and in a perfect world the guy who was automated out of packing Amazon boxes could open a daycare for low income parents. We just don’t have economic models to enable most of those things, and that’s only going to get worse.
And there are some laws of nature that are relevant such as supply-demand economics. Technology often makes things cheaper which unlocks more demand. For example, I’m sure many small businesses would love to build custom software to help them operate but it’s too expensive.
It'll be a slow burn, though. The projection of rapid, sustained large-scale unemployment assumes that the technology rapidly ascends to replace a large portion of the population at once. AI is not currently on a path to replacing a generalized workforce. Call center agents, maybe.
Second, simply "being better at $THING" doesn't mean a technology will be adopted, let alone quickly. If that were the case, we'd all have Dvorak keyboards and commuter rail would be ubiquitous.
Third, the mass unemployment situation requires economic conditions where not leveraging a presumably exploitable underclass of unemployed persons is somehow the most profitable choice for the captains of industry. They are exploitable because this is not a welfare state, and our economic safety net is tissue-paper thin. We can, therefore, assume their labor can be had at far less than its real worth, and thus someone will find a way to turn a profit off it. Possibly the Silicon Valley douchebags who caused the problem in the first place.
One of which was the occupation of being a computer!
Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.
But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% human-work part will be consolidated and become the full time job of 20% of the original headcount while the remaining 80% of the people get fired.
AI does not need to do 100% of any job (as that job is defined today ) to still result in large scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get it done.
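The consolidation arithmetic above can be sketched directly. The 80% figure is the commenter's hypothetical, not a measurement:

```python
# If AI absorbs a fraction `ai_share` of the work in a role, and the leftover
# human work is pooled into full-time positions, headcount scales by the
# remaining share. Purely illustrative.

def remaining_headcount(original_headcount, ai_share):
    return original_headcount * (1 - ai_share)

survivors = remaining_headcount(100, 0.8)   # roughly 20 of the original 100 roles
laid_off = 100 - survivors                  # roughly 80 people displaced
```

Note the key assumption baked into this sketch: that the residual 20% of human work can actually be pooled across people, which is exactly what the comment argues jobs will be redefined to allow.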
As an employee, any efficiency gains you get from AI belong to the company, not you.
If you don’t snatch up the smartest engineers before your competition does: you lose.
Therefore at a certain level of company, hiring is entirely dictated by what the competition is doing. If everyone is suddenly hiring, you better start doing it too. If no one is, you can relax, but you could also pull ahead if you decide to hire rapidly, but this will tip off competitors and they too will begin hiring.
Whether or not you have any use for those engineers is irrelevant. So AI will have little impact on hiring trends in this market. The downturn we’ve seen in the past few years is mostly driven by the interest rate environment, not because AI is suddenly replacing engineers. An engineer using AI gives more advantage than removing an engineer, and hiring an engineer who will use AI is more advantageous than not hiring one at all.
AI is just the new excuse for firing or not hiring people, previously it was RTO but that hype cycle has been squeezed for all it can be.
> ...I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero.
The end of free money probably has to do with why C-level types are salivating at AI tools as a cheaper potential replacement for some employees, but describing the interest rates returning to nonzero percentages as going insane is really kind of a... wild take?
The period of interest rates at or near zero was a historical anomaly [1]. And that policy clearly resulted in massive, systemic misallocation of investment at global scale.
You're describing it as if that was the "normal?"
[1]: https://www.macrotrends.net/2015/fed-funds-rate-historical-c...
1a. most seed/A stage investing is acyclical because it is not really about timing for exits, people just always need dry powder
1b. tech advancement is definitely acyclical - alexnet, transformers, and gpt were all just done by very small teams without a lot of funding. gpt2->3 was funded by microsoft, not vc
2a. (i have advance knowledge of this bc i've previewed the keynote slides for ai.engineer) free vc money slowed in 2022-2023 but has not at all dried up and in fact reaccelerated in a very dramatic way. up 70% this yr
2b. "vc" is a tenuous term when all biglabs are >>10b valuation and raising from softbank or sovereign wealth. its no longer vc, its about reallocating capital from publics to privates because the only good ai co's are private
The point is that there's a correlation between macroeconomic dynamics (ie., the price of credit increasing) and the "rise of AI". In ordinary times, absent AI, the macroeconomic dynamics would fully explain the economic shifts we're seeing.
So the question is: why do we even need to mention AI in our explanation of recent economic shifts?
What phenomena, exactly, require positing AI disruption?
"Starting" is doing a hell of lot of work in that sentence. I'm starting to become a billionaire and Nobel Prize winner.
Anyway, I agree with Mark Cuban's statement in the article. The most likely scenario is that we become more productive as AI complements humans. Yesterday I made this comment on another HN story:
"Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.
But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises."
Suggests you are accumulating money, not losing it. That I think is the point of the original comment: AI is getting better, not worse. (Or humans are getting worse? Ha ha, not ha ha.)
Well, in order to meet the standard of the quote "wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years" we need more than just getting better. We need considerably better technology with a better cost structure to wipe out that many jobs. Saying we're starting on that task when the odds are no better than me becoming a billionaire within two years is what we used to call BS.
It wasn’t just Elon. The hype train on self driving cars was extreme only a few years ago, pre-LLM. Self driving cars exist sort of, in a few cities. Quibble all you want but it appears to me that “uber driver” is still a popular widespread job, let alone truck driver, bus driver, and “car owner” itself.
I really wish the AI ceos would actually make my life useful. For example, why am I still doing the dishes, laundry, cleaning my house, paying for landscaping, painters, and on and on? In terms of white collar work I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem
TBH, I do think that AI can deliver on the hype of making tools with genuinely novel functionality. I can think of a dozen ideas off the top of my head just for the most-used apps on my phone (photos, music, messages, email, browsing). It's just going to take a few years to identify how to best integrate them into products without just chucking a text prompt at people and generating stuff.
Like in Europe, where you're forced to pay a notary to start a business - it's not really even necessary, never mind something that couldn't be automated; it's just part of the establishment propping up bureaucrats.
Whereas LLMs and generative models in art and coding for example, help to avoid loads of bureaucracy in having to sort out contracts, or even hire someone full-time with payroll, etc.
Do you have a specific country in mind, as the statement is not true for quite a lot of EU member states... and likely untrue for most of the European countries.
Sure you'll have destroyed the company, but at least you'll have avoided bureaucracy.
Like in the US you have a choice of which jurisdiction you want to start your company. Not all require a notary
Same as a washing machine / drier. Chuck the clothes in, press a button, done.
There are Roomba style lawnmowers for your grass cutting.
I'll grant you painting a house and plumbing a toilet aren't there yet!
It’s less work than it used to be, but remove the human who does all that and the dirty dishes and clothes will still pile up. It’s not like we have Rosie, from The Jetsons, handling all those things (yet). How long before the average person has robot servants at home? Until that day, we are effectively project managers for all the machines in our homes.
If you want to waste my time with automated nonsense, we should at least even the playing field.
This is feasible with today’s technology.
But on my Pixel now, on some phone trees it shows a UI with numbers and choices, and even predicts ahead for the other choices so you aren't forced to wait. Very handy!
Rule 0 is that you never put your angel investors out of work if you want to keep riding on the gravy train
I truly believe these types of papers don't deserve to be valued so much.
This is not a matter of whether AI will replace humans wholesale. There are three more predominant effects:
1. You'll need fewer humans to do the same task. In other forms of automation, this has led to a decrease in employment.
2. The supply of capable humans increases dramatically.
3. Expertise is no longer a perfect moat.
I’ve seen 2. My sister nearly flunked a coding class in college, but now she’s writing small apps for her IT company.
And for all of you who poo poo that as unsustainable. I became proficient in Rust in a week, and I picked up Svelte in a day. I’ve written a few shaders too! The code I’ve written is pristine. All those conversations about “should I learn X to be employed” are totally moot. Yes APL would be harder, but it’s definitely doable. This is an example of 3.
Overall, this will surely cause wage growth to slow and maybe decrease. In turn, job opportunities will dry up and unemployment might ensue.
For those who still don’t believe, air traffic controllers are a great thought experiment—they’re paid quite nicely. What happens if you build tools so that you can train and employ 30% of the population instead of just 10%?
Cynically, I'm happy we have this AI generated code. It's gonna create so much garbage and they'll have to pay good senior engineers more money to clean it all up.
Which is my point, this is not about replacement, it's about reducing the need and increasing supply.
I fully understand your point and even agree with it to an extent. LLMs are just another layer of abstraction, like C is an abstraction for asm is an abstraction for binary is an abstraction for transistors... we all stand on the shoulders of giants. We write code to accomplish a task, not the other way around.
fucking lmao
It would have taken me a month to write the GPU code I needed in Blender, and I had everything working in a week.
And none of this was "vibed": I understand exactly what each line does.
Please for the love of god tell me this is a joke.
Productivity doesn’t increase on its own; economists struggle to separate it from improved processes or more efficient machinery (the “multi factor productivity fudge”). Increased efficiency in production means both more efficient energy use AND being able to use a lot more of it for the same input of labour.
(ftr i’m not even taking a side re: is AI going to take all the jobs. regardless of what happens the fact remains that the reporting has been absolute sh*t on this. i guess “the singularity is here” gets more clicks than “sales person makes sales pitch”)
Exactly. These people are growth-seekers first, domain experts second.
Yet I saw progressive[1] outlets reacting to this as a neutral reporting. So it apparently takes a “legacy media” outlet to wake people out of their AI stupor.
[1] American news outlets that lean social-democratic
AI / GP robotic labor will not penetrate the market so much in existing companies, which will have huge inertial buffers, but more in new companies that arise in specific segments where the technology proves most useful.
The layoffs will come not as companies replace workers with AI, but as AI companies displace non-AI companies in the market, followed by panicked restructuring and layoffs in those companies as they try to react, probably mostly unsuccessfully.
Existing companies don’t have the luxury of buying market share with investor money, they have to make a profit. A tech darling AI startup powered by unicorn farts and inference can burn through billions of SoftBank money buying market share.
The fallacy is in the statement “AI will replace jobs.” This shirks responsibility, which immediately diminishes credibility. If jobs are replaced or removed, that’s a choice we as humans have made, for better or worse.
Supposing that you are trying to increase AI adoption among white-collar workers, why try to scare the shit out of them in the process? Or is he more so trying to sell to the C-suite?
Of course, in the medium term, those companies may find out that they needed those people, and have to hire, and then have to re-train the new people, and suffer all the disruption that causes, and the companies that didn't do that will be ahead of the game. (Or, they find out that they really didn't need all those people, even if AI is useless, and the companies that didn't get rid of them are stuck with a higher expense structure. We'll see.)
This reminds me of the "Walter White" meme: "I am the documentation". When the CEO of a company that makes LLMs says something like that, "I perk up and listen" (to quote the article).
When a doctor says "the water in my village is bad quality, it gives 30% of the villagers diarrhea", I don't need a fancy study from some university. The doctor "is the documentation". So when Anthropic/ChatGPT/LLaMa/etc. (mixing companies and products, it's ok though) say so-and-so, it's because they see the integrations, enhancements, compliments, companies ordering _more_ subscriptions, etc.
In my current company (high volume, low profit margin) they told us "go all in on AI". They see that (e.g. with Notion-like-tools) if you enable the "AI", that thing can save _a lot_ of time on "Confluence-like" tasks. So, paying $20-$30-$40 per person, per month, and that thing improving the productivity/output of an FTE by 20%-30% is a massive win.
So yes, we keep the ones we've got (because of mass firings, the ministry of 'labour', unions, bad marketing, etc.). Headcount will be reduced organically (retirements, people leaving for new jobs, etc.) combined with minimizing new hires, and boom! Savings!!
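Back-of-envelope, the claim above is easy to sanity-check. A minimal sketch with illustrative numbers only (the FTE cost is an assumption, not a figure from the comment):

```python
# Net monthly value per employee from an AI tool, under assumed figures.
def monthly_roi(fte_cost: float, productivity_gain: float, tool_cost: float) -> float:
    """Value gained per employee per month, net of the tool's seat cost."""
    return fte_cost * productivity_gain - tool_cost

# Assume a fully loaded FTE costs $8,000/month, the tool costs $40/seat,
# and delivers the claimed 20% productivity improvement.
print(monthly_roi(8_000, 0.20, 40))  # 1560.0
```

Even at the low end of the claimed 20%-30% gain, the seat price is a rounding error next to the assumed value, which is why the pitch lands so easily with management.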
If only it worked like this in reality. I used actual Notion AI features literally this week and watched them fail so hard it was hilarious. It repeatedly told me there was no documentation on X despite an entire page of documentation existing on it, and had to be told this; at which point it apologised and regurgitated it.
Wow! What a time saver! I feel more productive already!
I won't paste in the result here, since everyone here is capable of running this experiment themselves, but trust me when I say ChatGPT produced (in mere seconds, of course) an article every bit as substantive and well-written as the cited article. FWIW.
"Move fast and break things" - Zuckerberg
"A good plan violently executed now is better than a perfect plan executed next week." - George S. Patton
You're not going to sell me your SaaS when I can rent AIs to build faster, cheaper IP to my exact specifications, IP that I actually own.
If you can’t extrapolate on your own thesis you can’t be knowledgeable in the field.
A good example was a guy on here who was convinced every company would be run by one person because of AI. You’d wake up in the morning and decide which of the products your AI came up with while you slept would be profitable. The obvious next question is “then why are you even involved?”
Robot run iron mine that sells iron ore to a robot run steel mill that sells steel plate to a robot run heavy truck manufacturer that sells heavy trucks to robot run iron mines, etc etc.
The material handling of heavy industry is already heavily automated, almost by definition. You just need to take out the last few people.
Yet when tech CEOs do the same thing, people tend to perk up.
Silicon Valley and Redmond make desperate attempts to argue for their own continued relevance.
For Silicon Valley VC, software running on computers cannot be just a tool. It has to cause "disruption". It has to be "eating the world". It has to be a source of "intelligence" that can replace people.
If software and computers are just boring appliances, like yesterday's typewriters, calculators, radios, TVs, etc., then Silicon Valley VC may need to find a new line of work. Expect the endless media hype to continue.
No doubt soda technology is very interesting. But people working at soda companies are not as self-absorbed, detached from reality and overfunded as people working for so-called "tech" companies.
The funny part is, most of those juniors were hired in 2022-2024, and they were better hires because of the harsher market. There were a bunch of "senior engineers" who were borderline useless and joined some time between 2018 and 2021.
I just think it's kind of funny to fire the useful people and keep the more expensive ones around who try to do more "managerial" work and have more family obligations. Smart companies do the opposite
I’d love a journalist using Claude to debunk Dario: “but don’t believe me, I’m just a journalist - we asked Dario’s own product if he’s lying through his teeth, and here’s what it said:”
The demand for these products was probably not where it was intended to be at the time. Perhaps the answer to its biggest effect lies in how it will free up human potential and time.
If AI can do that — and that is a big if — then how and what would you do with that time? Well, of course: more activity, different ways to spend time, implying new kinds of jobs.
Where AI will be different (when we get there - LLMs are not AGI) is that it is a general human-replacement technology meaning there will be no place to run ... They may change the job landscape, but the new jobs (e.g. supervising AIs) will ALSO be done by AI.
I don't buy this "AGI by 2027" timeline though - LLMs and LLM-based agents are just missing so many basic capabilities compared to a human (e.g. the ability to learn continually and incrementally). It seems that RL, test-time compute (cf tree search) and agentic application have given a temporary second wind to LLMs, which were otherwise topping out in terms of capability, but IMO we are already seeing the limits of this too - superhuman math and coding ability (on smaller-scope tasks) do not translate into GENERAL intelligence since they are not based on a general mechanism - they are based on vertical pre-training in these areas (atypical in terms of general use cases) where there is a clean reward signal for RL to work well.
It seems that this crazy "we're responsibly warning you that we're going to destroy the job market!" spiel is perhaps because these CEOs realize there is a limited window of opportunity here to try to get widespread AI adoption (and/or more investment) before the limitations become more obvious. Maybe they are just looking for an exit, or perhaps they are hoping that AI adoption will be sticky even if it proves to be a lot less capable than what they are promising it will be.
I've been a heavy user of AI ever since ChatGPT was released for free. I've been tracking its progress relative to the work done by humans at large. I've concluded that its improvements over the last few years are not across-the-board changes, but benefit specific areas more than others. And unfortunately for AI hype believers, those happen to be areas such as art, which provide a big flashy "look at this!" demonstration of AI's power to people. But... try letting AI come up with a nuanced character for a novel, or design an amplifier circuit, or pick stocks, or do your taxes.
I'm a bit worried about YCombinator. I like Hacker News. I'm a bit worried that YC has so much riding on AI startups. After machine learning, crypto, the post-Covid 19 healthcare bubble, fintech, NFTs, can they take another blow when the music stops?
Why is that the counter-narrative? Doesn't it seem more likely that it will continue to gradually improve, perhaps asymptotically, maybe be trained more specifically in the niches where it works well, and just become another tool that humans use?
Maybe that's a flop compared to the hype?
For any bet that involves purchasing bits of profits, you could be right and still lose money, because the government generally won't allow the entire economy to implode.
By the time a bubble pops, literally everyone knows they're in a bubble; knowing something is a bubble doesn't make it irrational to jump on the bandwagon.
The answer (as always) lies somewhere in the middle. Expert software developers who embrace the tech wholeheartedly while understanding its limitations are now in an absolute golden era of being able to do things they never could have dreamed of before. I have no doubt we will see the first unicorns made of "single pizza" size teams here shortly.
Sometimes my boss has asked me to do something that in the long run will cost the company dearly. Luckily for him, I am happy to push back, because I can understand what we're trying to achieve and help figure the best option for the company based on my experience, intuition and the data I have available.
There's so much more to working with a team than: "Here is a very specific task, please execute it exactly as the spec says". We want ideas, we want opinions, we want bursts of creative inspiration, we want pushback, we want people to share their experiences, their intuition, the vibe they get, etc.
We don't want AI agents that do exactly what we say; we want teams of people with different skill sets who understand the problem and can interpret the task through the lens of their skill set and experience, because a single person doesn't have all the answers.
I think your ex-boss Mike will very soon find himself trapped in a local minimum of innovation, with only his own understanding of the world and a sycophantic yes-man AI employee that will always do exactly as he says. The fact that AI mostly doesn't work is only part of the problem.
Think of it as an IQ test of how new technology is used
Let me give you an easier example of such a test
Let's say they suddenly develop nearly-free unlimited power, ie. fusion next year
Do you think the world will become more peaceful, or see much more war?
If you think peaceful, you fail; of course more war, it's all about oppression
It's always about the few controlling the many
The "freedom" you think you feel on a daily basis is an illusion quickly faded
It flickers for a moment, then it either says
"In 2025, mankind vastly underestimated the amount of jobs AI can do in 2035"
or
"In 2025, mankind vastly overestimated the amount of jobs AI can do in 2035"
How would you use that information to invest in the stock market?
So it's index funds (as always) with me anyway.
The first part of this statement is clearly false. People on the phone at a tech support company are very much necessary to generate revenue, people tending the fields were very much necessary to extract the value of the fields, draftsmen before CAD were absolutely necessary, etc.
Yet technology replaced them, or is in the process of doing so.
So then, your statement simplifies to “if you want to be safe from replacement, have a job that’s hard to replace”, which isn’t very useful anymore.
Money is just rationing. If you devalue the economy implicitly you accept that, and the consequences for society at large.
Lenin's dictum comes to mind: "A capitalist will sell you the rope you hang him with."
People charging on their credit cards. Consumers are adding $2 billion in new debt every day.
"Total household debt increased by $167 billion to reach $18.20 trillion in the first quarter"
Rich people buying even fancier goods and services. You already see this in the auto industry. Why build a great $20,000 car for the masses when you can make the same revenue selling $80,000 cars to rich people (and at higher margins)? This doesn't work of course when you have a reasonably egalitarian society with reasonable wealth inequality. But the capitalists have figured out how to make 75% of us into willing slaves for the rest. A bonus of this is that a good portion of that 75% can be convinced to go into lifelong debt to "afford" those things they wish they could actually buy, further entrenching the servitude.
1. cure cancer
2. fix the economy
3. keep everybody happily employed.
And he's saying we can only pick two, or pick one. Except for the last one, that's not really an option.
This could be because most work is actually frivolous (very possible), but it's also easy for them to sell those, since ostensibly (1) and (2) actually require a lot of out-of-distribution reasoning, thinking, and real agentic research (which current models probably aren't capable of).
(3) just makes the most money now with the current technology. Curing cancer with LLMs, though altruistic, is more unrealistic and has no clear path to immediate profitability because of that.
These "AGI" companies aren't doing this out of the goodness of their hearts with humanity in mind; it's pretty clearly meant to be a "final company standing" type race where everyone at the {winning AI Company} is super rich and powerful in whatever new world paradigm shows up afterwards.
I remember the pre-Web days of Usenet and BBS and no one thought those were trendy.
AI is far more akin to crypto.
I am not saying this is a nothing burger, the tech can be applied to many domains and improve productivity, but it does not think, not even a little, and scaling won’t make that magically happen.
Anyone paying attention should understand this fact by now.
There is no intelligence explosion in sight, what we’ll see during the next few years is a gradual and limited increase in automation, not a paradigm change, but the continuation of a process that started with the industrial revolution.
“Final Thought (as a CEO):
I wouldn’t force a full return unless data showed a clear business case. Culture, performance, and employee sentiment would all guide the decision. I’d rather lead with transparency, flexibility, and trust than mandates that could backfire.
Would you like a sample policy memo I’d send to employees in this scenario?”
A better, more reasonable CEO than the one I have. So I’m looking forward to AI taking that white collar job especially.
Even older people prefer to hire younger people.
But the last few paragraphs of the piece kind of give away the game — the author is an AI skeptic judging only the current products rather than taking in the scope of how far they’ve come in such a short time frame. I don’t have much use for this short sighted analysis. It’s just not very intelligent and shows a stubborn lack of imagination.
It reminds me of that quote “it is difficult to get a man to understand something, when his salary depends on his not understanding it.”
People like this have banked their futures on AI not working out.
It's the AI hype squad that are banking their future on AI magically turning into AGI; because, you know, it surprised us once.
Or these guys pivot and go back to building CRUD apps. They’re either at the front of something revolutionary… or not… and they’ll go back to other lucrative big tech jobs.
It was never used in the sense of denigrating potential competitors in order to stay employed.
> People like this have banked their futures on AI not working out.
If "AI" succeeds, which is unlikely, what is your recommendation to journalists? Should they learn how to code? Should they become prostitutes for the 1%?
Perhaps the only option would be to make arrangements with the Mafia like dock workers to protect their jobs. At least it works: Dock workers have self confidence and do not constantly talk about replacing themselves. /s
As to my recommendation to what they do — I dunno man. I’m a software engineer. I don’t know what I am going to do yet. But I’m sure as shit not burying my head in the sand.
(ftr i’m not even taking a side re: will AI take all the jobs. even if they do, the reporting on this subject by MSM has been abysmal)
But we're going to get to a point where "the quality goes up" means the quality exceeds what I can do in a reasonable time frame, and then what I can do in any time frame...
however there seems to be a big disconnect on this site and others
If you believe AGI is possible and that AI can be smarter than humans in all tasks, naturally you can imagine many outcomes far more substantial than job loss.
However, many people don’t believe AGI is possible, and thus will never consider those possibilities.
I fear many will deny the probability that AGI could be achieved in the near future, thus leaving themselves and others unprepared for the consequences. There are so many potential bad outcomes that could be avoided merely if more smart people realized the possibility of AGI and ASI, and would thus rationally devote their cognitive abilities to ensuring that the potential emergence of smarter than human intelligences goes well.
A lot of the BS jobs are being killed off. Do some non-BS jobs get burned up in the fire along the way? Yes. But it's only the beginning.
As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.
Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale they're projecting. But they are incredibly useful tools that can enhance the efficiency of almost any job that involves sitting behind a computer. Even just something like Copilot autocomplete, or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" you can get from a model that can actually run on your laptop is also getting much better very quickly.
The way I see it, either the AI hype ends up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I made a lot of money in the meantime. Or the AI hype lives up to its promises, but likely over a much longer period of time, and we'll have to test whether we can live with that. Personally I'm all for a fully automated luxury communism model of government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.
It ain't done yet.
As a user, I haven’t seen a huge impact yet on the tech I use. I’m curious what the coming years will bring, though.
Enough to cause the next financial crash, achieving at worst a steady rise of 10% in global unemployment over the next decade.
That is the true definition of AGI.
LLMs are good productivity tools. I've been using them for coding, and they're massively helpful; they really speed things up. There are a few asterisks there, though:
1) It does generate bullshit, and this is an unavoidable part of what LLMs are. The ratio of bullshit seems to come down with reasoning layers above it, but it will always be there.
2) LLMs, for obvious reasons, tend to be more useful the more mainstream the languages and libraries I am working with. The more obscure something is, the less useful they get. This may have a chilling effect on technological advancement - new, improved things are used less because LLMs are bad at them due to a lack of available material, and the new things shrivel and die on the vine without having a chance of organic growth.
3) The economics of it are super unclear. With the massive hype there's a lot of money sloshing around AI, but those models seem obscenely expensive to create and even to run. It is very unclear how things will be when the appetite for losing money at this wanes.
All that said, AI is multiple breakthroughs away from replacing humans, which does not mean LLMs are not useful assistants. And an increase in productivity can lead to lower demand for labor, which leads to higher unemployment. Even modest unemployment rates can have grim societal effects.
The world is always ending anyway.
It is confusing because many of the dismissals come from programmers, who are unequivocally the prime beneficiaries of genAI capability as it stands.
I work as a marketing engineer at a ~1B company and the amount of gains I have been able to provide as an individual are absolutely multiplied by genAI.
One theory I have is that maybe it is a failing of prompt ability that is causing the doubt. Prompting, fundamentally, is querying vector space for a result - and there is a skill to it. There is a gross lack of tooling to assist in this, which I attribute to a lack of awareness of this fact. The vast majority of genAI users don't have any sort of prompt library or methodology to speak of beyond a set of usual habits that work well for them.
Regardless, the common notion that AI has only marginally improved since GPT-4 is criminally naive. The notion that we have hit a wall has merit, of course, but you cannot ignore the fact that we just got accurate 1M context in a SOTA model with Gemini 2.5 Pro. For free. Mere months ago. This is a leap. If you have not experienced it as a leap, then you are using LLMs incorrectly.
You cannot sleep on context. Context (and proper utilization of it) is literally what shores up 90% of the deficiencies I see complained about.
AI forgets libraries and syntax? Load in the current syntax. Deep research it. AI keeps making mistakes? Inform it of those mistakes and keep those stored in your project for use in every prompt.
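The "keep the model's past mistakes stored in your project and feed them back on every prompt" habit described above can be as simple as a text file plus a tiny wrapper. A hypothetical sketch (the file name and helper functions are my own illustration, not a real tool's API):

```python
from pathlib import Path

# Running log of corrections, kept alongside the project.
MISTAKES_FILE = Path("known_mistakes.md")

def record_mistake(description: str) -> None:
    """Append a correction so future prompts carry it as context."""
    with MISTAKES_FILE.open("a") as f:
        f.write(f"- {description}\n")

def build_prompt(task: str) -> str:
    """Prepend all accumulated corrections to the actual request."""
    corrections = MISTAKES_FILE.read_text() if MISTAKES_FILE.exists() else ""
    return (
        "Avoid these previously observed mistakes:\n"
        f"{corrections}\n"
        f"Task:\n{task}"
    )

record_mistake("Uses the pre-1.0 API for the foo library; pin examples to 1.x.")
print(build_prompt("Refactor the importer module."))
```

The point is only that the "methodology" the comment calls for needs no special tooling; a file of corrections prepended to every query already closes most of the feedback loop.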
I consistently make 200k+ token queries of code and context and receive highly accurate results.
I build 10-20k loc tools in hours for fun. Are they production ready? No. Do they accomplish highly complex tasks for niche use cases? Yes.
The empowerment of the single developer who is good at manipulating AI AND an experienced dev/engineer is absolutely incredible.
Deep research alone has netted my company tens of millions in pipeline, and I just pretend it's me. Because that's the other part that maybe many aren't realizing - its right under your nose - constantly.
The efficiency gains in marketing are hilariously large. There are countless ways to avoid 'AI slop', and it involves, again, leveraging context and good research, and a good eye to steer things.
I post this mostly because I'm sad for all of the developers who have not experienced this. I see it as a failure of effort (based on some variant of emotional bias or arrogance), not a lack of skill or intellect. The writing on the wall is so crystal clear.
History is always strikingly similar, the AI revolution is the fifth industrial revolution, and it is wise to embrace AI and collaborate with AI as soon as possible.
One can argue about the timeline and technology (maybe not LLM-based), but it does seem that human-level AGI will be here relatively soon - the next 10 or 20 years, perhaps, if not 2. When this does happen, history is unlikely to be a good predictor of what to expect... AGI may create new jobs as well as destroy old ones, but what's different is that AGI will also be doing those new jobs! AGI isn't automating one industry, or creating a technology like computers that can help automate any industry - AGI is a technology that will replace the need for human workers in any capacity, starting with all jobs that can be conducted without a physical presence.