And yes, I only talked about automation, but the same high-level issues apply to LLMs, just with different downsides: you need to check the LLM output, which becomes a bigger task in itself, and then potentially your own skills stagnate as you rely on LLMs more and more.
Yes, it led to more work. What would take half a day could now be done in an hour. So we now had to produce 4x more.
I spent 4 years there automating left and right. Everyone silently hated me. One of the problems with my automation was that it allowed for more and more QA. And the more you check for quality issues, the more issues you'll find. Suddenly we needed to achieve 4x more, and that meant finding 4x more problems. The thing about automation is that it doesn't speed up debugging. This leads to more stress.
One senior guy took me aside and said management would not reward me for my efforts, but would get the benefit of all my work.
He was right.
Eventually, I left because I automate things to make my life easier. If it's not making my life easier (or getting me more money), why should I do it?
Since then, whenever I get a new job, I test the waters. If the outcome is like that first job, I stop working on process improvements, and look for another job.
When it was done, there were no bugs. Not a single issue. They asked the embedded guys how they had accomplished it. They said "we didn't know bugs were allowed".
Many people have never authored or even been involved with a high quality piece of software, so they just don't know what it looks like, or why you'd want it.
You'd think that someone in the exec team would have some personal pride and ownership in the code and would want to flush out bugs and improve quality. But nah.
The reward for good work is more work. If the company wanted to pay you more, it would've already done so. If the company wanted automation, it would put that in the job description and pay accordingly (or, more likely, outsource it and get a shitty result for 10x the price, despite never being willing to pay you anywhere close to that even if you handed them the fully working solution).
AI actually has some ability to improve things, at least when I think about manufacturing and farming. When you produce at such massive scale, you could never individually inspect every potato or widget, or target every weed. You could produce WAAAY more, but more bad products went out the door. Now you can inspect every individual thing. It may not extend to every industry though.
Business is stupid. They value busy-ness over productivity.
I think individuals who get comfortable in their jobs don’t like automation arriving at their station because it upends the order of things just as they were feeling comfortable and stable. Being adaptable now is more important than ever.
> Products don't get better either, but that's more of a "shareholder value" problem than it is a specific technology problem.
This is broadly false. Your laptop is unquestionably better because it was constructed with the help of automated CNC machines and PCB assembly as opposed to workers manually populating PCBs.
Some companies can try to use automation to stay in place with lower headcount, but they’ll be left behind by competition that uses automation to move forward. Once that leap happens it becomes accepted as the new normal, so it never feels like automation is making changes.
This is a fundamentally flawed analogy, because the problems are inverted.
CNC and automated PCB assembly work well because creating a process to accurately create the items is hard, but validation that the work is correct is easy. Due to the mechanics of CNC, we can't manufacture something more precise than we can measure.
LLMs are inverted; it's incredibly easy to get them to output something, and hard to validate that the output is correct.
The analogy falls apart if you apply that same constraint to CNC and PCB machines. If they each had a 10% chance of creating a faulty product in a way that can only be detected by the purchaser of the final product, we would probably go back to hand-assembling them.
> Some companies can try to use automation to stay in place with lower headcount, but they’ll be left behind by competition that uses automation to move forward.
I suspect there will be a spectrum, as there historically has been. Some companies will use AI heavily and get crazy velocity, but have poor stability as usage uncovers bugs in a poorly understood codebase because AI wrote most of it. Others will use AI less heavily and ship fewer features, but have fewer severe bugs and be more able to fix them because of deep familiarity with the codebase.
I suspect stability wins for many use cases, but there are definitely spaces where being down for a full day every month isn't the end of the world.
I do actually plan on getting old, and as much as I would love to retire before I'm no longer adaptable, I'm not so sure my finances or my brain will comply.
> At home I save time because my dishwasher automates washing my dishes.
I don't think this fits my analogy, because you personally can go watch TV or read a book or exercise given the time that is saved by the dishwasher. At work, you must be at work doing something else, and the "something else" is seldom a real improvement. If I could automate my job and then go on a hike I'd be a lot more excited about it.
Look at all the other threads with people’s experiences. They aren’t unhappy with automation because they were comfortable. They are unhappy with automation because the reward for being more productive is higher expectations and no compensation.
People think the Luddite movement was smashing looms because they inherently hated technology. They smashed the looms because the factories were producing more and the result of that productivity was the workers becoming destitute.
If the machines and progress only bring about a worse life for individuals, those individuals are going to be against the machines
For instance, I had to rename a collection of files almost following a pattern. I know that there are apps that do this and normally I’d reach for the Perl-based rename script. But I do it so irregularly that I have to install it every time, figure out how I can do a dry run first, etc. Meanwhile, with the Raycast AI integration that also supports Finder, I did it in the 10-15 seconds that it took to type the prompt.
There are a lot of tasks that you do not do often enough to commit them fully to memory, but every time you do them it takes a lot of time. LLM-based automation really speeds up these tasks. Similar for refactors that an IDE or language server cannot do, some kinds of scripts etc.
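Not to bikeshed the rename example, but the dry-run-first workflow is only a few lines of Python if you'd rather own the script than re-derive it each time. This is just a sketch; the function name and the regex pattern below are made up for illustration:

```python
import re
from pathlib import Path

def batch_rename(directory, pattern, replacement, dry_run=True):
    """Rename files whose names match a regex.

    With dry_run=True (the default), only print the planned renames
    so you can eyeball them before committing.
    """
    # sorted(...) materializes the listing first, so renaming
    # files doesn't disturb the iteration.
    for path in sorted(Path(directory).iterdir()):
        new_name = re.sub(pattern, replacement, path.name)
        if new_name == path.name:
            continue  # no match, leave the file alone
        if dry_run:
            print(f"{path.name} -> {new_name}")
        else:
            path.rename(path.with_name(new_name))
```

Run it once with the default `dry_run=True`, inspect the output, then call it again with `dry_run=False`.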
On the other hand LLMs constantly mess up some algorithms and data structures, so I simply do not let LLMs touch certain code.
It’s all about getting a feeling for the right opportunities. As with any tool.
> On the other hand LLMs constantly mess up some algorithms and data structures, so I simply do not let LLMs touch certain code.
See, these two things seem at odds to me. I suppose it is, to a degree, knowledge that you can learn over time: that an LLM is suitable for renaming files but not for certain other tasks. But for me, I'd be really cautious about letting an AI rename a collection of files, to the point that the same restrictions apply as would apply to a script: I'd need to create the prompt, verify the output via a dry run or test run, modify as necessary, and ultimately let the AI loose and hope for the best.
Meanwhile, I probably have a script kicking around somewhere that will rename a batch of files, and I can modify it pretty quickly to match a new pattern, test it out, and be confident that it will do exactly what I expect it to do.
Is one of these paths faster than the other? I'm not sure; it's probably a wash. The AI would definitely be faster if I was confident I could trust it. But I'm not sure how I can cross that threshold in my mind and be confident that I can trust it.
There are definitely many things which, when automated, lose out on some edge cases. But most folks don't need artisanal soap.
I hear this so often these days and I just do not understand this part. If I trust an LLM to do "X", that means I have made a determination that the LLM is top-notch at "X" (if I had not made this determination, letting the LLM do X would be lunacy), and henceforth I do not give a flying hoot about knowing "X". If my "X" skills deteriorate, it is the same as when we got equipment to tend our corn fields and my corn-picking skills deteriorated. Of course I am being facetious here, but you get the point.
Without automation we would all be living in poverty.
> potentially your own skills stagnate as you rely on LLMs more and more.
There were some papers from Microsoft that highlighted this point: https://www.microsoft.com/en-us/research/wp-content/uploads/...
Now, if what you actually want is to be relatively more prosperous and have more status that's a game you can keep playing forever. But you really don't have to, to simply be better off than all people in the past with far less work.
All of my grandparents retired in their 50s with fat pensions and then lived into their late 80s without having ever stepped foot on a college campus.
Everyone I grew up with or met via work that is my age or younger has 1-3 more degrees than their parents and grandparents and are significantly worse off when it comes to standard life milestones like buying a home or ever having children.
We are not becoming relatively more prosperous as a people. We have more bread and circuses and fewer roofs over our heads on average.
Too many people are trying to jump to the end when they don't even have their day-to-day managed or efficient. Being efficient today tends to carry forward into a number of business workflows.
Checking the LLM output is required when it's not consistent; in many cases, maintaining the benefit requires the human to know more about the subject than the LLM does.
The folks at the top know how susceptible we are to being nerd-sniped and how readily we will build these things for them.
First things were made by hand, slowly - they were expensive and you could make a living making things.
Now those things are made in factories.
And they are 99% automated - like where software is going.
And what's left is to be a mindless factory worker doing repetitive things all day for a living wage.
But hey, you are so productive - now you make 100k items in a day. Must feel nice.
I'm seeing the number of changes needed to produce new features constantly increasing when coding with these AI tools, due to the absence of a proper foundation, and due to people's willingness to accept that, with the idea that "we can change it quickly."
It has become acceptable to push those changes in a PR and present them as your own, filled with filler comments that are instant tech debt, because they just repeat the code.
And while I actually don't care who writes the code, I do expect the PR author to properly understand the code and most importantly, the impact on the codebase.
In my role as a mentor I now spend a lot of time looking at things written and wonder: Did the author write this, or did the AI? Because if the code is wrong, this question changes how the conversation goes.
It also impacts the kind of energy I'm willing to put into educating the other person on how things can be improved.
It forces a change in coding practice.
Which is a great idea until your superior asks why you're holding back the vibe coders and crippling their 100x productivity by rejecting their PRs instead of just going with the flow.
I am now able to single-handedly create webapp MVPs, one of which is getting traction. If anything actually takes-off, there will certainly be need for a real dev to take over. Also, my commits are not "vibe coded." I have read every single loc, and found so many issues that I am stunned that "vibe coding" is actually a thing. I do let the models run wild on prototypes though.
I think that I happen to be in some magical sweet spot as a person who knows the words, kept up with tech, but not the syntax of framework xyz.
I thought this sweet spot was very transient, and I am very happy that the tools appear to be reaching a plateau for now, so I still have at least another year of being useful.
Since agentic dev tools arrived, I am having the time of my life while gladly working 60hrs per week.
I realize that I am an outlier, but is anyone else in this same boat? If you have product ideas, is this not the best time ever to build? All of our ideas are being indirectly subsidized by billions of VC & FAANG dollars. That is pretty freaking cool.
Yep. I have a computer science background but have always been "the most technical product management/marketing guy in the room". Now I'm having lots of fun building a SaaS and a mobile app to my standards, plus turning out micro-projects like pwascore.com in a day or two.
It turns out that I love designing/architecting products, just not the grind-y coding bits. Because I create lots of tests, use code analysis tools, etc., I'm confident that I'm creating higher quality code than (for example) what most outsourced coders are creating without LLMs.
I am getting paid. I was able to resurrect a startup that failed 8 years ago. Back then, we tried to bootstrap with a very nice off-shore dev for the MVP, that's all we could afford. The iteration period was ~24hrs. That period is now minutes. You know who that helps? Every startup who didn't nail the idea from go, and requires iteration.
I can now meet with a user on Monday, show them the little feature they wanted by Tuesday... like a real full stack dev would have done 4 years ago. Is it ready for b2c scale on Tuesday? No, but that's not my goal.
I understand all the LLM-dev derision to some extent. But if you are not using the billions of non-gate-kept subsidies given to all of us right now, then either you are working on real computer science problems, or you are wasting what seems like the biggest opportunity of my lifetime.
This is the greatest time to build a startup ever. However, if you are stuck making that money for the boss, then yeah... that's probably annoying. And yes, it is scary as hell that we are all going to be replaced, in possibly very short order. This is the time for every dev to learn to be a... eeek... product dev, and not just a software dev. I think Product Dev will become a thing.
[0] https://medium.com/@xcf.seetan/adventures-on-the-ai-coding-s...
It seems relatively obvious to me that if a society has work as its cultural core then no amount of productivity increase will get rid of work - it would destabilize the entire society before it could do so.
I just wrote this comment in another thread, but it fits here too:
The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.
You, as an employee, are just means to an end. "The company" doesn't care about you and you will not reap the benefits of whatever efficiency improvements the future brings.
The dream of automation was always to fix that. We did that, and more. We have long had the technology to provide for people. But we invent tons of meaningless unnecessary jobs and still cling to the "jobs" model because that's all we know. It's the same reason vacuum cleaners didn't reduce the amount of cleaning work to be done. We never say "great, I can do less now because I have a thing to do it for me." That thing just enables me to fixate on the next thing "to be done." The next dollar to be gained.
A McDonalds robot should free the people of doing that kind of work. But instead those people become "unemployed" and one individual gets another yacht and creates a couple "marketing" jobs that don't actually provide any value in a holistic humanitarian sense.
While it's true that some of the capacity created by technology was consumed by rising standards, the data do show a significant reduction in time spent on chores in spite of this.
From 1965 to 2011, hours spent on housework decreased 40%, while male housework doubled and female housework halved. The proportion of mothers working went up 90%, but somehow time spent with children went up 70% for men and women, again with improvements in gender equality.
Technology dramatically improved the efficiency of household chores. People invest some of that efficiency into further quality of living improvements or work, and still got to spend more time with their family.
https://www.pewresearch.org/social-trends/2013/03/14/chapter...
If you go further back in time the differences would be even more stark.
Yes, we can do better. Expectations on parents have gotten ridiculous, and much of this additional time is spent ferrying their children between 10 different extracurriculars. We spend a lot of time chasing more (thanks, dopamine) which could be spent enjoying what we have.
But the lack of understanding that technology and science have led to dramatic improvements in quality of life has led us to start turning our backs on it as a species, and we will pay a huge price for that.
The dramatic improvements to quality of life brought by science and tech are undoubtable, it was not my intent to question that. More just that we as people have a hard time with the concept of a goal state. It is about balance. Let's keep creating new and great things to improve our lives, but let's also acknowledge the futility and desperation of an infinite treadmill.
Since 1965? Except for paying bills via the internet instead of in some bank office, what chore has become more efficient since 1965?
Regarding robot vacuum cleaners my take is that picking up stuff from the floor is what takes the most time anyway.
Those are cold comfort if compensation isn't enough or the job ruins your health or drives you into burnout, but I think their absence becomes important if you talk about popular UBI or "end of work" scenarios.
That's why I think even if we had some friendly tech company that did All The Jobs for free using automation and allowed everyone to live a comfortable life without even the need for an income, and even if we changed the culture such that this was totally fine, it would still be a dystopia, or at least risk very quickly drifting into one: Because while everyone could live a happy, fully consumption-oriented life, they'd have zero influence how to live that life: If the company does everything for you that is to be done, it also has all the knowledge and power to set the rules.
People don't have to need these things though. For a lot of people it's all just the means to the end of being able to live comfortably.
> Because while everyone could live a happy, fully consumption-oriented life, they'd have zero influence how to live that life
I don't think most people care much about that. But either way, they have the option to. I don't think humanity will slip into a vegetative dystopia because the default spirit of life is grow, expand, go, go, go, don't stop to think about the bigger picture. There is always curiosity and ambition in the gene pool. But society is jammed up with this model that is low-efficiency for everyone except the people that are financially in a position where they don't have to care (and I include myself at the lower end of that tier).
Which is great, and has unblocked so much productivity, but I do miss some of the grunt work. I feel like it helped spawn new ideas and gave you some time to think through implementation.
Yes, and this is a meme I have in my mind of LLM engineers talking to each other, with a speech balloon: "If we could just get the right regex done in a few seconds, we'd win the entire global programming community."
But a response to the title: "_buzzword tech_ is making us work more" - it's rarely the tech making us work more, it's normally the behaviour and attitude of businesses trying to profit from the tech that makes life hard for everyone.
But such is that state of the UK that I had simply assumed the government had censored it. Remarkable how quickly expectations have shifted.
Personally, in the times I've had the most time off, I find that I am more productive, but that doesn't matter to any employer.
I guess you missed the part where people worked 7 days a week and 10 hours per day, and we didn't have ~20% of the population retired.
Unless we break social contracts, in 30 years ~40% of the population will be retired in large parts of The West and China.
If you're still working 40 hours a week, doing basically nothing but posting on HN, going to the gym, having lunch for hour+ breaks, for most of the work day - you might think nothing has changed.
But for 10-20% more of the population to not be working, there's a huge number of hours that aren't being worked.
It's just that most of the gains are going to one group of people.
Most of us will be in that group by that time...
Founders have been doing stupid signalling for ages to seem like they are more worthy of VC funding. A single anecdote in a podcast about a badly written Wired article based on a few anecdotes from hustle culture founders does not make something true.
Working 80 hour weeks for low pay and high expected upside has ALWAYS been SV software culture.
The individual leverage of an experienced software developer has never been higher.
Cheap raw materials do not affect the cost of everything else that must be done to prepare and deliver the product to the end user.
But that's all a bunch of hype that may never come to pass. Some people don't want to hear any hype. Hoping is too much to take if you've been let down too hard before, or if hope is for the rich and you're poor. It's there though, if you want to dream. Dream big.
so far ime it's just 1000x more slop everywhere. fake emails at work, fake images on every app, fake text in every comment. and we are sooo productive because we can produce this crap faster than anyone can wade through it
"If", of course, but it's all but certain. The energy situation, today, is that we as a species depend on fossil fuels, and they are depleting. And we don't have a solution to replace them (renewables are not remotely replacing oil today, those who believe it say something like "we've done 1% of the job, it proves that we will reach 100%!").
AI is making us use more energy, and our use of energy is what is killing the world. Again, the consequences of abundant energy are global warming, mass extinction and political instability as the fossil fuels get less and less available.
AI is currently making energy more expensive. Shelter and these other commodities aren't made cheaper if population expands along with any capacity increases (like lanes on a highway). Lots of "ifs" in this statement that don't seem to match with observation of reality. The point of the discussion here is that AI in many ways is making workers less efficient.
Tell me again how this isn't pure hell and the cuck chair?
Is this really how professionals work on such a problem today?
The times I've had to tune the responses, we'd gather bad/good examples, chuck them into a .csv/directory, then create an automated pipeline that gives us a success-rate percentage against what we expect, then start tuning the prompt, inference parameters, and other things in an automated manner. As we discover more bad cases, we add them to the testing pipeline.
Only if it was something that was very wrong would you reach for model re-training or fine-tuning, or when you know up front the model wouldn't be up for the exact task you have in mind.
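The core of such a pipeline is small. A minimal sketch in Python, assuming a two-column CSV of `input,expected` cases and a `generate` callable wrapping whatever model is being tuned (both names are mine, not from any particular framework):

```python
import csv

def success_rate(cases_path, generate):
    """Run `generate` (input -> output) over labelled cases and
    return the fraction whose output matches the expected answer."""
    passed = total = 0
    with open(cases_path, newline="") as f:
        for row in csv.DictReader(f):  # columns: input, expected
            total += 1
            if generate(row["input"]).strip() == row["expected"].strip():
                passed += 1
    return passed / total if total else 0.0
```

Re-run it after each prompt or parameter tweak, and append newly discovered bad cases to the CSV so regressions get caught automatically.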
We've kept the LLM constrained to just extracting values with context, and we show the values to end-users in a review UI that shows the source doc and allows them to navigate to exactly the place in the doc where a given value was extracted. These are mostly numbers, but occasionally the LLM needs to do a bit of reasoning to determine a value (e.g., is this X, Y or Z type of transaction, where the exact words X, Y or Z will not necessarily appear). Any calculations that can be performed deterministically are done in a later step using a very detailed, domain-specific financial model.
This is not a chatbot or other crap shoehorned into the app. Users are very excited about this - it automates painful data entry and allows them to check the source - which they actually do, because they understand the cost of getting the numbers wrong.
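For anyone building something similar, here is a rough sketch of what the "value plus source location" shape might look like, with a cheap guardrail that drops any value whose cited snippet isn't actually present in the doc. All names here are hypothetical, not the parent's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ExtractedValue:
    field: str     # e.g. "net_amount"
    value: str     # raw value as extracted by the LLM
    page: int      # page of the source doc it came from
    snippet: str   # surrounding text, used by the review UI to navigate

def verify(extracted, doc_pages):
    """Keep only values whose snippet really appears on the cited page.

    `doc_pages` maps page number -> page text. This catches the common
    failure mode where the model cites a location it hallucinated.
    """
    return [e for e in extracted if e.snippet in doc_pages.get(e.page, "")]
```

Anything that survives this check still gets human review, but the check filters out the most obvious hallucinated citations before a user ever sees them.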
#andRepeat
So, fraud? If you put a fixed price based on 3 hours, that's of course fine. If you lie about how long work takes you, that's fraud.
Unless your bids are what you bill not what it takes, and you would bill the same 3 hours if it took you 4. In which case it's a fixed price under a different name.
https://www.kalzumeus.com/2006/08/14/you-can-probably-stand-...
One could simply work less. Even if you're full in the office, you can just use that extra time to learn something new.
- Initial build
- database integration
- accessibility
- speed
Every moment I don't spend prompting, I'm falling behind.
The system insidiously guilts you for not leveraging it constantly. Allowing AI to sit there, just waiting, feels like a waste. It's subtle, but corrosive. Any downtime becomes a missed opportunity, and rest turns into inefficiency. Within this framework, leisure becomes a moral failure.
I don't feel that way.
What I feel is that in 2025 we're still the bottleneck because we make decisions. In 2026 we'll automate the QA part, and then we'll be able to fan out and fan in a lot of solutions at scale. Those who remove the bottlenecks in business beat everyone else. It's why FAMGA and the tech companies are at the top of Wall Street. Biggest bang for the buck.
Americans are much more "productive" than we were 50 years ago but we're working as many hours or more. So obviously we're not the ones capturing the benefits from that added productivity.
Note also, compilers automated the process of machine instruction generation - quite a bit more reproducibly than 'prompt engineers' are able to control the output of their LLMs. If you really want the LLMs to generate high-quality programming code to feed to the compiler, the overnight build might have to come back.
Also, in many fields the processes can't be shut down just because the human needs to sleep, so you become a process caretaker, living your life around the process - generally this involves coordinating with other human beings, but nobody likes the night shift, and automating that makes a lot of sense. Eg, a rare earth refinery needs to run 24/7, etc.
Finally, I've known many grad students who excelled at gaming the 996 schedule - hour long breaks for lunch, extended afternoon discussions, tracking the boss's schedule so when they show up, everyone looks busy. It's a learned skill, even if overall this is kind of a ridiculous thing to do.
As jaded as that may be, I believe LLMs for many will become our bigger shovels.
I thought it was threat of being fired and left without means to pay for rent and food?
The ruling class will work you as much as possible – to the point of death – unless stopped via labor co-ordination, mass strikes, and force.
The 40-hour week w/ fair overtime, fair breaks, etc. could be enforced and expanded to include software engineering.
The Wired article seems to be mostly focused on situations where employees are compensated for working this lifestyle. Aside from that - they discuss how AI _founders_ are doing this to keep up with things. The former surprises me (a little - it wouldn't surprise me to see companies doing this _somewhere_ in the US pre-ChatGPT). The latter doesn't really surprise me at all. Typical founder hustle culture.
A better title would be "AI is making startup founders hustle harder (and they are trying to normalize this workload across their small but growing companies)", NOT "AI Is Making [All Of] Us Work More".
https://chatbotkit.com/reflections/why-ai-coding-agents-crea...
The tl;dr is that AI works like a multiplier on both sides of the equation. Not only will we work more, but we will get even more stressed because things will be moving at ever-increasing speed, perpetually, until we hit some limit of course.
I'm not doubting you, btw... I've seen others here on HN also saying that they burn through money with AI, I guess I'm just missing something.
In fact, the geek in me absolutely wants to know what's going on, because you have probably found something that I would love to know about! :)
On the other hand, LLMs in hands of a misinformed team member who doesn't actually give any fucks whatsoever, is like a time bomb waiting to torpedo a project.
No one is 100% correct all the time. Leaning on an AI model because it glazes you 24/7 and tells you that you are correct 100% of the time doesn't mean it's right about you; it's just a seductive trap to fall into, and the models are very good at telling you the next thing you want to hear.
I have to believe this is satire.