The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.
Short-term cost cutting leads to less junior hiring, and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred.
What remains is documentation and automation.
But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.
AI is following the same pattern.
What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.
The West has seen this before, especially in the case of General Electric.
GE pursued aggressive short-term financial optimization, cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains.
The same mindset is visible today.
The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.
Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.
You are spot on w.r.t every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.
I've commented about the lack of slack several times here on HN, because when I notice a broken system nowadays, 90% of the time it's due to a lack of slack in the system to absorb short-term shocks.
Bell Labs' greatest work came out when AT&T was a monopoly. Once AT&T was broken up (1984), they started feeling the pain.
When the Lucent spinoff took place, the new entities had no monopoly money to fund unconstrained research, while management's behaviour never changed.
I don't know how BL fared under Alcatel and now Nokia, but haven't heard of anything interesting for years.
This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)
The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing is outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.
And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.
Which, inevitably, is the stuff sold directly by Chinese manufacturers. And so the beancounters are hoisted by their own petard.
We've seen it happen to small electronics and general goods.
We're seeing it happen right now to cars: manufacturers clinging to combustion engines and cutting corners. Why spend twice the money on a Western brand when its quality is rapidly declining to meet that of BYD models at half the price?
---
And we're seeing it happen to software. It was already kind of happening before AI; So much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem)
E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there. Windows is just going down in flames. People are jumping ship now.
SaaS is quickly going that way as well. If it's all garbage, why pay for it? Either stop using it, or just slop something together yourself.
---
And in the background of this, something ominous: companies can't just pivot back to higher quality after they've destroyed all their in-house knowledge. So much manufacturing knowledge is just gone; starting a new manufacturing firm in the West is a staffing nightmare. Same story with cars: China has the EV knowledge. And software's going the same way. These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.
Even when the knowledge still lives on (when the people with the required skills have simply moved to other industries and jobs), who's going to come back? Why leave your established job for your former field, when all it takes is the manager or executive in charge being replaced by another dipshit beancounter for everyone to be laid off again?
This is a blindspot to many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.
Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.
It's unintuitive because they all have big wishlists and to-do lists and A/B testing systems for pouring software into, but...
If there were known "make more software, make more money" opportunities available, they would have already done them.
Actual growth and new demand need to come from arenas outside of this. E.g. companies that suck at software (either making or acquiring it) might now be able to get the job done.
The problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.
A danger in AI coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.
Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.
Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.
Whether the reason is strategic (like your example), internal politics, insufficient knowledge... the point is that there is a local equilibrium, and most mature firms are at this equilibrium.
More resources via AI, at first order, go after that diminishing-returns part of the curve... which is a cliff, especially for the highly resourced firms topping the S&P 500.
A lot of AI optimists' mental models of the economy do not account for this stuff at all.
"Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does free up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.
The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.
It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?
The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.
I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.
Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.
Something about this feels really broken, when a channel full of domain experts is willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines, which are well known to hallucinate. They just don't think it will hallucinate for them.
In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.
A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses and spamming that they need help, instead of chit-chatting and asking questions. We fix their problems; they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.
They tell us, in one way or another, that we don't need to exist anymore. They show off terrible code, we try to offer real suggestions to improve it, and they don't care. Then they leave the community once their vibe/agentic coding moves past that part of their code base. A complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-has, just wasted time and grimy interactions.
Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion and misunderstanding, and which would therefore benefit most from having their boundaries simplified.
At the end of the day, ChatGPT won't be there to hold our hands in the hospital, have a laugh with us over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.
It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.
Absolutely agree with this. Most MBAs are taught to optimize and reduce the slack.
It works fine with machinery and materials, but not with humans.
When machinery is optimized and run thin, and one of the machines breaks, you can get the exact same one in a couple of days (you usually prepare for it beforehand). But with humans, each person trains their own brain, and the next person is different from the first.
Humans also break in different ways:
* They stop caring - you won't notice it immediately; they will still close tickets, but give them only the bare minimum of thought
* The communal brain won't be trained when there isn't enough room for experiments and learning - which eventually reduces innovation
This is exactly why it is difficult for US companies to compete with Chinese companies in manufacturing: their communal brain has already been trained and has produced very good talent.
Next is knowledge: the more you outsource, the more of it you lose.
Also, when companies grow big enough, "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating, etc. When your CEO is an MBA and your real market is Wall Street, any actual product R&D and support is an annoying real cost that just cuts into the profits, and thus into the exec compensation.
Vesting schedules, conditional grants, contractual equity ownership requirements
Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."
It is a very difficult mindset to argue against.
It’s called The Beer Game[1].
One of the funny things about it is even people that have played and discussed it before _still_ make the same fundamental mistakes next time.
Short-termism is the death of companies.
The point of the beer game is that buffering in the supply chain makes the bullwhip effect worse.
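The amplification is easy to reproduce in a toy simulation. This is a sketch only, not the actual Beer Game rules: it assumes a 4-stage chain, a 2-week shipping delay, and a naive "re-order the demand you saw, plus half of your inventory shortfall" policy; all numbers are made up for illustration.

```python
# Toy bullwhip-effect sketch. Assumptions (not the real Beer Game):
# 4 stages (retailer -> factory), 2-week shipping delay, naive ordering.
def simulate(weeks=30, stages=4, delay=2, target=12):
    inventory = [target] * stages
    # pipeline[s] holds goods already ordered by stage s, arriving soonest-first
    pipeline = [[0] * delay for _ in range(stages)]
    peak_order = [0] * stages
    for week in range(weeks):
        demand = 4 if week < 5 else 8  # one permanent step in customer demand
        for s in range(stages):
            inventory[s] += pipeline[s].pop(0)   # this week's shipment arrives
            inventory[s] -= demand               # serve downstream (negative = backlog)
            # order what was demanded, plus half the inventory shortfall
            order = max(0, demand + (target - inventory[s]) // 2)
            pipeline[s].append(order)            # arrives after `delay` weeks
            peak_order[s] = max(peak_order[s], order)
            demand = order                       # upstream sees our order as its demand
    return peak_order

peaks = simulate()
```

With these made-up numbers, end-customer demand never exceeds 8 units/week, yet the peak order at the factory end of the chain is more than twice that: each stage adds its own inventory correction on top of an already-inflated signal, which is the bullwhip.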
No idea how this should take form, though, and if it’s even realistic. But it seems like due to AI, formal specs and all kinds of “old school” techniques are having a renaissance while we figure out how to distribute load between people and AI.
There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and sophisticated knowledge of how things break.
This is the same with compilers. Most of the time a programmer needs to know only the high-level language used for writing the program. Nevertheless, when there is a subtle bug, or the desired performance just cannot be reached, a programmer who also understands the machine language of the processor has a great advantage: they can solve the bug or the performance problem, which without such knowledge would take far longer to solve, or never be solved at all.
This tracks with my experience throughout my career, in all sorts of companies: from established body-shop consulting, to minor early-stage startups, to FAANG, and everything in between.
Essentially everywhere I worked, you would benefit from switching jobs. Companies would at times go to quite some effort to hire you, but wouldn't try anything to keep you around.
This always sounded bonkers to me, but as I directly benefited with a rapidly increasing salary when I job-hopped, my response was a vague shrug. "Those who care don't know and those who know don't care".
The thing is, in every place, you are typically at your least useful when you've just joined. It takes months, sometimes years, to learn the intricacies of the business: the knowledge that informs your skills so you can make better decisions, better designs, better implementations, better initiatives.
This is, of course, just one facet of a larger trend of how things are typically mismanaged. The article touches on it when it talks about how governments in the US and Europe had to scramble to get 50-year-old manufacturing going again.
This is why I laugh whenever I hear someone talking about "governments should be administered like a business". Bitch, businesses are typically mismanaged due to terrible incentive loops, institutional blindness and corporate rot. That anything seemingly works is more a result of inertia and conformity than a sign that things are well managed.
It's always seemed to me that the problem is corporate profit and personal profit above all. 'Management' is a subset of this, and so is pretty much everything else, including the current drive for AI.
It's the Western, perhaps American, approach to business, emphasized by MBAs and the media: lowering costs, driving share price, dividends, and corporate profit.
This race over the past few decades has hollowed out most Western companies.
Listen to any entrepreneur podcast, or read any website, and it's all about 'how quickly can I get to exit', i.e. personal profit.
Capitalism is the worst form of economic system, apart from all the rest.
I think the striking thing is how US companies tend to have no idea how to be wealthy. Record profits, so the CEOs use all of their tricks to get rich quick? They are already rich! Don't fix what isn't broken. Not every company needs to expand into 10 new markets, have 5% layoffs, or double its revenue. Some of this is investor pressure, but often it's not. Some guy who made it to the top is bored, doesn't feel like he is obviously doing enough, so he keeps making decisions to justify his position.
This isn't to incite flames, but the European companies I worked for knew how to be wealthy! The market took a downturn from COVID? They ate the cost to keep their people. Some flashy new vertical is trending? They decided it's not for them; they have a brand and customers they should focus on while everyone else works out the kinks. The company decides: why go public at all, we are successful and don't need anyone else's influence over us.
People say "you cannot project beyond one quarter". This is true in terms of catastrophe or gambler's success. But it's not true otherwise: if you act in Q1 like there will be a Q2, or even five years from now, or heaven forbid a second or third generation, you make different moves. You value different things.
In shootings, technically the guns are not the issue, since they don't fire on their own... they do enable the shooting, though.
I think that's still a symptom. The real problem is ideology: the monomaniacal focus on profit-making business, which infects our political leaders, down to capitalists and business leaders, down to the indoctrinated rank-and-file. Towards the end of the Cold War, the last constraints on it were abolished, and the victory over the Soviet Union made it unquestioned.
The Chinese don't have that ideological problem. Their government appears not to give a shit about how much profit individual businesses make; they care about building out supply chains and capabilities. They will bury the West, so long as the West remains in the thrall of libertarian business ideology.
In general productive economic activity generates a surplus and that surplus allows for slack. Human beings intuitively understand this. Hobbies are frequently de facto training for things that aren't currently happening but might later. Family-owned and operated businesses are much less likely to try to outsource their core competency for the sake of quarterly profits.
But regulatory capture and market consolidation causes the surplus to go to the corporate bureaucracies capturing the regulators instead of human beings with self-determination and goals other than number go up, and then the system optimizes for capturing the government rather than satisfying the people. "When you legislate buying and selling the first things to be bought and sold are the legislators." You throw away the competitive market and subject yourselves to the unaccountable bureaucracy, and then try to pretend it's not the same thing because this time the central planners are wearing business suits.
You just described Lucent.
This is only an illusion created by the fact that the communists were careful to rename all important things, to fool the weaker minds that the renamed things are something else than what they really are.
In reality, the "socialist" economies were more capitalist than the capitalist economies of USA and Western Europe. They behaved exactly like the final stage of capitalism, where monopolies control every market and there is no longer any competition.
Unfortunately, after a huge sequence of mergers and acquisitions started in the late nineties of the last century, the economies of USA and of the EU states resemble more and more every year the former socialist economies, instead of resembling the US and W. European economies of a few decades ago.
Vision for the future is limited to grandiose fantasies straight out of 1950s pulps and the "heroic" creation of narcissistic corporations that are cynically extractive and treat employees and customers with equal contempt.
The differences which used to provide a convincing cover story - no single Great Leader, a functional consumer economy, votes that appear to make a difference - are being dismantled now.
What's left are the same mechanisms of total monitoring (updated with modern tech) and reality-denying totalitarian oppression, run for the exclusive benefit of a tiny oligarchy which self-selects the very worst people in the system.
China: We need to build this useful thing and then later let’s try to make profits, too.
And workforce reduction is a noble goal. In fact, I think it's one of the most important things humanity should focus on. We should strive for a workforce of zero. Humans currently waste an enormous amount of their lives working instead of pursuing more worthwhile things.
I despise the rhetoric around this, we didn't "lose jobs" over AI, we saved ourselves a lot of work. What it does do is highlight a problem in our current society: the link between labour and the access to resources (e.g. money).
I don't think that AI is the ultimate answer to the problem of work, but it can contribute to it.
My main point against using AI is that I do not want to depend on basically anything when I'm in front of the screen (obviously not including documentation, books, SO and the like).
I see, up close, people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary, because it means that brain effort drops dramatically to a minimum level. Having your mental effort taken from you is not a minor thing.
Giving that away, at least for me, means becoming a dependent zombie. Knowledge comes basically from manual trial and error, almost daily.
Technology, being technology, has if anything shown us that we can be pushed and manipulated in every single conceivable way. And in my opinion, depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of the human being: to think and wonder about things.
Helps me keep sane tbh. And keeps the edge sharp.
I find myself thinking more and my thinking is of higher quality. Now I have 30 years of fucked up projects experience, so I know all the rakes I could step into.
As in: every little thing that used to be too much effort before, I can now easily get the info and the data for with a prompt. Data analysis that might otherwise have taken hours to figure out? I can just have AI write scripts for everything, which lets me see more data about everything that was previously out of reach. Now you will probably ask, of course, "how do I know the data is accurate?" I can still cross-reference things, and it is still far faster, because even if I had spent hours before trying to access that data, there wouldn't have been any similar guarantee that it was accurate.
I am thinking so much more about the things now that I couldn't have possibly time to think about before because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations, data about everything, being able to study/learn everything quickly etc.
It doesn't seem to me a thing that I could suddenly forget?
Without AI I will feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.
My brain effort is also going into other things now, such as how to orchestrate guardrails, how to build pipelines that enable multiple agents to work on the same thing at the same time, how to understand their weaknesses and strengths, and how to automate all of that. So there's definitely a lot of mental effort going into those things.
You could maybe forget how a certain lib or framework worked, or things like that; or, more likely, you just wouldn't be up to date with all the new ones. But ultimately code can be represented as just functions with inputs and outputs, and that's all there is to it.
As in how could I possibly forget what loops, conditionals or functions are?
I haven't written code myself for 1+ years (because AI does it), but I feel like I have forgotten absolutely nothing; in fact, I feel like I have learned more about coding, because I see what patterns the AI uses versus what I or other people did, and I am able to witness different patterns either work out or not work out much faster, right in front of my eyes.
But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions, didn't have any breaking changes, I would argue I'd be much better at it now.
The internet, search, etc. are needed to understand how to set up libs/frameworks/APIs, but logic itself isn't something that I could possibly forget. AI will help to get those setups done quicker without me having to search, but arguably it's all useless information that will go out of date and that I really don't even need to know. I don't need to know off the top of my head what the perfect modern tsconfig setup should look like, or what the best monorepo framework is and how to set it up so it would scalably support all the different coding languages for different purposes.
The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.
Even when AI is understandably used due to limited language fluency, I'd prefer to read an AI translation over a generated article.
If you don’t care enough to write it, why should I care enough to read it?
Note: My comment is not specific to this comment. I just wanted to express myself at somewhere and this is where I think it may be suitable.
The only purpose of the written word is to be read.
That’s the problem.
What you read here are bots and those invested in AI and an occasional retired person who uses AI as a crutch.
It wasn’t one bottleneck. It was all of them.
Not the nuclear material. The pattern.
Money was never the constraint. Knowledge was.
...
#2 rule of slop: even posts critical of pervasive AI usage and how it's ruining the world can be AI-generated
The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour.
Technically you don't have to be an employed developer to become a senior developer. It boils down to your personal willingness to learn and invest time building.
What companies seek these days are people with experience of (dysfunctional) organizational structures and of working around the shortcomings of the organization's communication and funding patterns, nothing more.
Does that really make you senior or just politically versed?
The pattern shows up the most whenever failing software pokes holes in perception.
There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.
And then there's the rest.
Within the first few years of someone's career, you can quickly tell which kind they are. It's almost impossible to turn someone from the latter group into the former.
Yes, everything else is a façade. You can be a "senior" developer with 30 years of experience and still be in the latter group. And you can be fresh out of college and be in the former.
Now some people are extremely good at other skills (politics, interpersonal communication, bullshit, whatever you want to call it) and will be able to seem to be in the first group to the people who matter (managers, execs, etc) while actually being in the second group. But then we're not talking about actual software-making skills anymore.
You can also totally be in the first group and be underpaid, never promoted, etc. There's little correlation with actually career success.
This is depressing and seems right. And yet this is something I desperately want to be ignorant of. I don’t want to peel apart my brain for anyone. Working within these kinds of problems is pure pain.
That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon??
I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything; no book will give you the sort of understanding you need. That's unfortunately human nature: we are not capable of learning and internalizing things simply by reading or watching others do them; we absolutely need to do them ourselves to truly learn. Didactic books always have exercises for this reason.
You can learn facts and techniques from books, obviously. But having read a book about Michelin restaurants doesn't mean you can now be a Michelin chef.
That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.
There’s plenty of people in this world who are expert programmers without following any traditional path.
“Oh yeah, like who”, you say.
Con Kolivas, an anaesthetist, worked on Linux kernel schedulers, including the Rotating Staircase Deadline (RSDL) scheduler, a precursor to the Completely Fair Scheduler, as well as the Brain Fuck Scheduler and the ck patchset.
AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors.
This is not fun. It has no flow.
I can do that too. Most programmers can.
That's because it requires less skill! Critiquing something is always easier than doing it.
I can literally keep an LLM fixing things forever by just saying things like "this is not scalable", or "this is not maintainable", or "this is not flexible", or "this is not robust"... etc, ad nauseam.
That doesn't take skill at the level to actually write the software. For the market which is hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation and not just, as many think, productivity gains.
They have no reason to double output, but they'd sure love to first halve the number of people employed, then halve the salaries of those who remain (supply/demand plus a glut of programmers on the market), and then halve salaries again because almost no skill is necessary...
No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.
Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.
I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"
btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what code does. Which is of course helpful for onboarding but concerning for complex issues or long term maintenance.
I find the real way to review other people's code is to program with it; then I start seeing where the problems are much more clearly. I would do a review and spot nothing important, then start working on my own follow-on change and immediately run into issues.
This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.
Functional testing is a must now that writing tests is also automated away by LLMs, as it gives you a better sense of whether the thing does what it says on the box; but there will still be a lot of hidden gotchas if you're not even looking at the code.
Plenty of LLM-written code runs excellently until it doesn't, though we see this with human-written code too, so it's more about investing more time in the hope of spotting problems before they become problems.
I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.
I would not be surprised if many open source projects will outright stop taking PRs. I have had the same feeling several times - if I'm communicating with an LLM through the GitHub PR interface, I'd rather just directly talk to an LLM myself.
But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love to have a system where someone has to open an issue with a plan first, and by approving it you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others created features that are essentially there to rein in Copilot etc.
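The gating logic itself is tiny; here is a hypothetical sketch of it. None of this corresponds to any real GitHub feature, and all names (`ContributionGate`, `approve_plan`, `may_open_pr`) are invented for illustration:

```python
# Hypothetical "plan-approval ticket" gate: a maintainer approving an
# issue's plan grants the author exactly one PR for that issue.
class ContributionGate:
    def __init__(self):
        # author -> set of issue ids whose plan a maintainer has approved
        self._tickets = {}

    def approve_plan(self, author, issue_id):
        """Maintainer approves the plan on an issue, granting one PR ticket."""
        self._tickets.setdefault(author, set()).add(issue_id)

    def may_open_pr(self, author, issue_id):
        """Allow a PR only by consuming a matching, unused ticket."""
        tickets = self._tickets.get(author, set())
        if issue_id in tickets:
            tickets.remove(issue_id)  # single use: one PR per approved plan
            return True
        return False
```

So `may_open_pr("alice", 42)` fails until a maintainer has called `approve_plan("alice", 42)`, and fails again on a second attempt: the expensive review bandwidth is only spent on contributions whose plan was agreed up front.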
My current pet peeve is using periods instead of commas, as in:
> My people lived the other side of this equation. Not the factory floor. The receiving end.
Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action movie trailer.
Quite paradoxical: when it's a person's native language we can spot it a mile away, but there's no shortage of engineers who claim how good the code output is.
Whatever the reason for the default tone of AI in English, it's still there when generating code. It makes me think that the senior engineers who claim that it produces awesome output just don't understand the specific programming language as someone who thinks in it almost natively would.
The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.
The most obvious patterns here are: antithesis constructions, word choices and distribution, attempts at profundity in every paragraph that instead come out as runs of text that don't say anything, and even the perfect use of compound hyphenation. I can see and appreciate that there is definitely an attempt at personalization and guidance to make it less LLM-y and not just a default prompt, but it's still kind of obvious. You could use a detector tool too, of course.
Hemingway writes simple sentences with a kind of detachment to make the emotional flow of his stories as transparent as possible.
LLM slop reads more like slide bullet points extrapolated to prose-length text.
Find some pre-2020 examples that are, and you'd have a point.
Yeah. Companies didn't want to train new employees any more as that costs money (both for paying the trainees and the teachers) so they shifted to requiring academic degrees. That in turn shifted the cost to students (via student loans) and governments.
People call it a red flag for scams if you are supposed to pay your employer for training or whatever as a condition of getting employed... but the degree mill system is conveniently ignored.
>In defense, the substitute was the peace dividend. In software, it’s AI.
Before it was AI, the cheaper alternative was remote contract dev teams in Eastern Europe, right?
Also over here, east of 15°E we were fired all the same.
I believe the plan is to quite simply "do less overall unless it's about AI", but everyone was waiting for others to start layoffs first.
I spent six months working part time and the decision makers made it clear that this is preferable for them long term. Beats getting fired, but I couldn't sustain this lifestyle - I'm frugal but not that frugal.
They really, really do not want to spend money. Especially not on Americans and their health insurance.
It's really strange how we're just letting them get away with this. They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.
Choosing to pay less is what almost all people do, and it is consistent with almost all of human history.
> They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.
When push comes to shove, i.e. paying lower prices to consume more goods and services or paying higher prices to ensure your countrymen can buy more goods and services, almost everyone will choose to pay lower prices. See political unpopularity of sufficient tariffs to stop imports.
“American” is a nebulous term, and Americans have been choosing lower prices for many decades before the current crop of employees at the global big tech companies chose lower prices. It is no different than when someone picks up lower-priced workers waiting outside Home Depot, who are there because they do not have legal work authorization in the US.
They did not properly prepare and as a result lost 20% of their territory in days.
Days after that I was back in Austria and could not stop thinking about some of the people I spoke with being dead.
Since then I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer. "What are you going to do when drones are used against your infrastructure?" If you followed the Russian war and the first Iranian strike, it was obvious that drones were going to be used against them. "Not going to happen," again.
They have lost tens of billions for lack of proper preparation. They could have protected themselves by spending just hundreds of millions of dollars over the years.
It is about humans, not AI.
Ukraine has been preparing since 2014. Without preparation there would be a Russian talking head right now in Kyiv.
Take millions playing the lottery. To each of them, I can confidently say "you won't win, not gonna happen". For almost all of them I'll be right. There will be one who wins, where I was wrong, and they will say "see, told you so". That doesn't mean my prediction was wrong. It means you have a reporting bias.
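The arithmetic behind that point, as a quick sketch (a hypothetical lottery with exactly one guaranteed winner; the numbers are illustrative):

```python
# Predicting "you won't win" for every player in a one-winner lottery:
# the prediction is wrong exactly once, so its accuracy is (N - 1) / N.
players = 1_000_000                 # hypothetical lottery size
wrong = 1                           # the single winner falsifies one prediction
accuracy = (players - wrong) / players
print(f"prediction accuracy: {accuracy:.4%}")
```

The winner's "see, told you so" is the one visible counterexample out of a million correct calls, which is the reporting bias.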
They did though. While nobody actually believed Putin would be dumb enough, the Ukrainian army was still, just in case, extremely busy preparing defences, organising stockpiles, and working out defensive tactics.
Why would we listen to anything related to right or wrong from you then if you don't care?
With LLMs this is no longer true - the thing can vibe a great deal before anyone notices that they have 100,000 lines of code doing what a focused, human-reviewed and tested 10,000 lines could do. And as this goes on, it becomes increasingly difficult for anyone to actually dig into and fix things in the 100,000 without the help of LLMs (thus adding even more slop onto the pile).
I'm going to steal that one and add it to Stross': "Efficiency is the reciprocal of resilience."
The other that really resonated was something that I read before along the lines of… we think that once humanity learns something, that knowledge stays and we build on it. But it’s not true, knowledge is lost all the time. We need to actively work to keep knowledge alive
That’s why libraries and the internet archive are so important. Wikipedia, too
And the premise makes no sense anyway. The only risk of forgetting how to make shells is when other countries are making shells more efficiently. Non-western countries are not going to reject AI-coding, nor are they going to make software more efficiently by hand.
They may keep taking the longer and harder route of a mixture of AI and hand coding.
If you REALLY need something long-forgotten, then you have to lazy-load it back into being at significant cost. That's the price of constant progress.
COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly.... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack / heap concept. Likewise with high-level SSGs and design frameworks when you don't know HTML/CSS fundamentals.
As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":
> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
At least with Opus 4.6, a human cannot give up "the old ways" and embrace agentic development. The bill comes due. https://www.anthropic.com/engineering/building-c-compiler
Even in the Before Times, it was much cognitively cheaper to write code than it is to read someone else's code closely, or manage lots of independent code across a team, or to make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMs on software is. But my hunch is that it'll do a lot of damage for incremental benefit.
With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problems I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, then I read a sample of those test cases and 98% of them are either redundant or useless. Or the studies which suggest software engineers consistently overestimate the productivity benefits of AI, and psychologically are increasingly unable to handle manual programming. Or the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite when it was in reality a lot slower than the 6.0, and he's still digging through regression bugs. It feels like dozens of alarms are going off.
It feels a lot like someone has a cursory understanding of American politics, and thinks the US is somehow representative. It's not, it is an outlier by every statistical measure. If you want to understand the world, you need to start by forgetting everything you know about the US.
Well then train them, instead of selecting 0.18% of applicants and calling it a day.
It's not some innate, immutable property - people can be taught even in adulthood.
Also it's not like they'll work for a year and switch jobs - not in the current market.
"Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t."
Of course, we are all wondering if AI will be good enough in 5 to 10 years such that you don't have to look at the code (at all). If so, then very few programmers will be needed it seems. If not, its possible that roughly the same number will be needed.
It seems oddly binary to me since as soon as you need to understand anything about the code, you have to effectively onboard yourself to a foreign codebase and develop the needed context.
Automation is the exact opposite of tying knowledge to people. It's extracting knowledge from people and transferring it to a machine that can continue to produce the goods.
Yes, AI can lead to problems and some of these problems will be related to gaps in knowledge that was thought to be obsolete when it really wasn't. But that's a totally different problem on a totally different scale from what happened with defense production after the end of the cold war.
Nobody is shutting down or reducing software production. On the contrary, we're going to be making a lot more of it.
And just like offshoring dev work, we may see the rebound effect when there's all kinds of poorly written LLM outputs in production and companies are running around trying to re-hire high quality devs to fix all these fires that they themselves started.
- Knowledge of how to make Fogbank etc. was lost when the people retired and or died. AI will make things worse, especially for code.
In reality if they'd used AI, the knowledge in it would still be there as it doesn't retire or die or need paying a salary. I guess you have to keep a copy of the model file.
The article seems AI written with punchy sentences and mixed up logic.
What’s really happening is that we are all forgetting how to think
This kind of forgetting is normal. It's how things work when time and resources are finite. The only problem here is the belief that you can keep capacity to do something without actively exercising it, and thus the expectation that you can "just" resume doing things after a long break, without paying up a cold-start cost.
But you can't, and there's no reason to be surprised. I bet the Pentagon and the EU weren't. They didn't need those Stingers and shells for decades and didn't expect to need them soon - they knew they could get them if they really needed them, but that it was going to be costly.
I don't get why people think this is unusual or surprising, or somehow outrageous and proves something about society or "mindsets of elites" - other than positive aspects like adaptability and resilience.
This is true at all scales. Your body and brain optimizes aggressively, too. An individual saying "I need to warm up" or "I need to hit the gym a few times and then I'll be able", or "yes, I can, but I haven't done it for years so I need an hour with a book/documentation..." - all that is exactly the same as EU going "yes we can make artillery shells... though we haven't in a while so we need some time and some millions of EUR to get our supply chain sorted out first".
Just as shift in power and the rise and fall of nations is normal.
Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:
1. "This specific case should not have taken certain people by surprise."
2. "This is a manifestation of a broader phenomenon."
3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]
Another reason is that LLMs train on the existing code we already have, so don't expect new programming languages or frameworks. This means that the software engineering skills that exist today will be relevant for a long time.
I think engineering skills will still remain relevant due to taste and proper judgement. A model trained on everything and the kitchen sink probably doesn't have the right bias for the specific problems in my project. Accepting too much AI-generated code without steering the ship will result in some drift of taste and ultimately produce a mediocre project, like one done by people without good domain knowledge and without good taste. It might even work as a business in the short term, but it lacks the long-term excellence that sets projects with good judgement apart from the common rabble.
But they will still rely on assembly, C, Rust, Linux, HTML, TCP/IP... Doesn't matter how up to date they are, they rely on existing code they have been trained on, they can't just create new languages without the training data.
"Civilization advances by extending the number of important operations which we can perform without thinking about them"
It remains to be seen whether this implies some kind of constraint on human progress. I doubt it.
You mean the world?
DeepSeek was being glazed here; I'm sure Chinese programmers use it like CC.
Even "First/Third world" has been fraying at the edges for decades since it was originally about political alignment.
I see this as a sign of increased productivity. The important software will still have a human-centered development team, but we don't need another dev team on, say, Tinder for dogs.
To the extent that AI is analogous to automation in manufacturing, and "writing code" to working on an assembly line, it's hard to argue the West is anything other than a global leader in "software tool & die", so to speak.
The junior hiring collapse compounds this. Senior engineers develop judgment partly by watching juniors make mistakes and correcting them. Remove that loop and you don't just lose future seniors — you quietly degrade the current ones.
The 0.18% recruiting conversion rate mentioned here tracks with what I see in compliance and security engineering too. "Can you tell when the AI is confidently wrong?" is now the most important interview question, and almost nobody can answer it well.
I thought I'd go back for a Masters/PhD but then Trump mercurially defunded lots of STEM grad programs. Ngl, I found myself stuck. Zero job openings, zero PhD program openings. It's all so frustrating.
I'm far removed from the conflict in Ukraine, but from the reporting it seems like they are making extremely good use of well understood, inexpensive technologies like drones with mundane munitions.
I'm sure Stinger missiles have their place in the battlefield there, but a $120K stinger doesn't seem like a very good countermeasure against a few thousand dollar drone.
So, counterpoint: We also need to understand how to embrace the changing face of software.
Probably we are going to be fine with AI abstraction too. People will use it, get stuck on problems, dig deeper, learn, and improve, same as we had with frameworks and their source code.
This article passes blame to AI for developers not learning because they are not being actively hired. You do not need to be hired to learn something. You need to learn something in order to be hired.
I also remember that EEs for a while stopped using the term "jellybean parts". Turns out that most jellybeans are produced in Asia.
Software developers have been learning what they needed to know to do the job the whole time. That’s pretty much the job description.
What you need to know has changed a lot recently. Like always.
> The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
That’s certainly not true. I’d take a hard look at my hiring process if it was performing this inefficiently.
> In defense, the substitute was the peace dividend. In software, it’s AI.
The rise of coding bootcamps destroyed the historic knowledge and expertise of professional software developers. Waves and waves of people joined the tech workforce without the years of experience required to learn how programming, and professional software development, should work. The result was a lot of really bad code, and a lot of reeeeeally bad product decisions.
Since 2018 I haven't met anyone who has read an entire technical manual on a framework, library, or tool that they use every day. By 2020 I was meeting engineering managers who said they wouldn't let engineers use a technology if they couldn't find StackOverflow snippets for it. I still meet "Senior" engineers who don't understand the most basic professional methods, like how Scrum, Agile, or Kanban actually work, and why you shouldn't just make things up as you go. Hell, the entire industry developed a collective psychosis preventing them from understanding the word "DevOps", because everyone switched entirely to learning by reading false blog posts written by clueless amateurs and upvoted in an echo chamber. If you never learn properly, and repeat misconceptions, you won't do good work.
We need a professional software development license, the same way the trades have licensed plumbers and electricians and framers. We need people to apprentice under a master engineer, so they are guided by people who know what to do and what not to do. And we need formal tests to ensure businesses don't hire clueless people who passed a two-week course to write critical software. Of course nobody wants to do this, and that's why it's so necessary.
We’ll see, but right now I see developers hooked onto their agents 24/7, and in the future we will experience a de-skilling problem in which clean code, best practices, security, and avoiding NIH syndrome will all be flushed down the toilet.
I love these articles that all the coders read but none of the management.
If possible, be a mercenary and put a high number on your expertise, so we can solve this management blind spot faster.
If you can't, let your life/work's passion be "not starving to death", and try to change it politics-side.
The history of technology is the replacement of manual processes with automated ones.
Consider a very basic process: checkout of a restaurant.
Writing the price of each item on a sheet of paper, manually adding them and writing the total was replaced with typing in the prices and eventually with just pushing the button for the item. Paper still exists for jotting down your order but within seconds of leaving the table it’s transitioned to computer.
This has enabled lots of desirable advances- speed, accuracy, new payment rails, and increasingly, elimination of the server in checkout- you tap a credit card on a tabletop device.
Did we “forget” how to do checkout? No. We purposely changed it.
But if the internet connection goes down, or the backend server powering the cash register app goes down, there is an atrophied and not-regularly-exercised skill set (maybe not even trained, IDK) that has to be improvised on the fly, and it's slow and frustrating for everyone.
Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.
Military procurement of weapons systems is hardly the place to point to as a technological tradition. There are lots of cases where no one pays the money to keep a production process in place; the reasons are all related to shortsighted “cost savings” or failing to anticipate changing needs.
With coding today, we are seeing the same kind of shift in priorities as my restaurant example. Having humans write code in the 2020 (pre-GPT) tradition was extremely inefficient in terms of time-from-idea-to-implementation.
We’ve found a new way to do the mundane part of that task (the mechanics of translating spec to implementation).
We are figuring out how to do that while preserving quality (and a lot of it is learning how to specify appropriately).
Will we “forget” how to “build” code?
No, but the skills to generate source code by hand will atrophy just as the skills to draw blueprints by hand atrophied with the advent of CAD.
Will we find examples where someone prematurely optimized away knowledge of a skill or process, incorrectly thinking it was no longer needed? Of course.
But the productivity gains we get will be so great on average that no one will go back to doing things the old way.
There will be old-timers and hobbyists who will preserve some of that knowledge; for most it will just be a curiosity.
I agree, as with everything in 2026, the reality lands somewhere in the middle of the discourse online. But pretending this is in practice anything like the check out example is wrong.
CAD still requires you know what to do, and without CAD you can still draw blueprints by hand because you know what the result should be. Checkout is basic arithmetic you can do on a paper or even your personal phone. In both cases it is clear what the process is and what the output should be, and it doesn’t replace knowledge and training and certification.
With coding, none of that is true. By and large, there is a trend of people who don’t know what they’re doing shitting out software, or people who should know better not verifying the very flawed output they get. That is already having negative consequences in people’s lives.
> Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.
Until a crisis hits. Covid and supply chain failures. The Iran war and the Strait of Hormuz. A prolonged war in Europe with no production pipeline available. Banks collapsing after unsustainable overleveraging in supposedly "safe" mortgages.
For every optimization and cost-saving measure that is deployed, there should be a backup plan in place. MBA types and "technologists" keep missing this. What is the backup plan for the case where most of the economy activity is built on software produced by business who overleveraged on LLM for code generation?
LLMs are a magnificent tool if you use them correctly. They enable deep work like nothing before.
The problem is the education system focused on passivity (obedience), memorization, and standardized testing. And worst of all, aiming for the lowest common denominator. So most people are mentally lazy and go for the easy win, almost cheating. You get school and interview cheating, and vibecoders.
But it's not the only way to use LLMs.
Similarly, in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.
Even if you are the absolute unicorn who gets paid to "code much harder problems" and "learning", the rest of the industry exists to deliver actual products and services.
So unless you nurture some type of https://xkcd.com/208/ fantasy, this is not just about you. The industry as a whole needs to find a way to work with LLMs without automating programming away entirely, and the industry as a whole needs to find a way to ensure that newcomers are able to be productive even if code-generation tools are taken away from them.
I'm not saying you're personally doing anything wrong, but there's a parallel here, when smart and curious people read articles about history and literature and art and science, rather than engaging directly with the real thing.
Or then the next level down, where creating amazing work in all of those domains depends on enough "slack" in the system for people to pursue deep work that will not be immediately profitable.
Do you see where I'm going with that? We (and I'm very much including myself: here I am on HN, instead of reading something more substantial) skim the (Wikipedia) surface, instead of diving truly deep. AIs (right now) are the ultimate surface-skimmers, and our fascination with and growing reliance on them reflects something in our current surface-skimming cultural mindset.
> Salesforce said it won’t hire more software engineers in 2025.
Some headline somewhere reported this, but Salesforce hired plenty of engineers (in the US at least) in 2025. One of them is a junior engineer on one of my scrum teams.
The West is not merely forgetting to code. It is creating systems that can code. They aren't standing still. They are progressing to a higher level of production.
Shells stop being needed once the fighting stops. Code does not: customer need is always there.
Before forgetting how to code, the West will first get rounded up by its own Monsanto, voluntarily.
It's minor but this is just wrong. If you're going to hire 4 candidates, there could be 2,253 perfectly qualified candidates even if only 0.18% get hired. The conversion rate is meaningless; it just tells us how many jobs were on offer. There is no way that the skills this fellow wanted were so rare and difficult that only 1 in 500 candidates could possibly handle the job. Humans even at the 1/20 mark are pretty competent if you're willing to train them, and legitimate geniuses crop up at around 1/200.
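The parent's point can be checked directly: the conversion rate is pinned by openings over applicants and says nothing about qualification. A quick sketch (the openings and applicant counts come from the comment; the qualified/unqualified split is the hypothetical being argued):

```python
# Conversion rate = openings / applicants. It is fixed by hiring volume,
# not candidate quality: with 4 openings and 2,253 applicants you get
# ~0.18% no matter whether 4 or 2,000 of them were perfectly qualified.
openings = 4
applicants = 2253
conversion = openings / applicants
print(f"conversion rate: {conversion:.2%}")
```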
It doesn’t seem much like defense industry problems.
I see a talent pipeline collapse in the next 5 years. "Software engineering is over, coding is a solved problem", as chanted by semi-literate media and the AI grifters' marketing departments, will further scare away the allocation of human capital to software engineering, which will then easily command a 3x rise in salaries due to the resource shortage.
So there was this: "I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now."
The filter flashed the warning on the telltale signs and I stopped reading. Now I've got the puzzle I don't want to do. Did someone trying to argue against "AI assisted" coding use an LLM to author that argument?
But this is HN, I can also just move on to the next story.
If the author sincerely believes the thesis that AI makes you vulnerable / dumb, they are incredibly hypocritical. But more likely, they're just cynical and trying to get traffic to their website. And you're not getting back the time you spent reading this and arguing with it.
It’s an 85/15 rule. These big companies hire hundreds, possibly thousands, of developers, but most of them cannot code. Some of them struggle to write emails. About 15% of those people provide 85% of the value.
Here is where it all went wrong. The goal of software, the only goal, is automation. That means eliminating human labor. The goal of these big companies is hiring, which is mostly the opposite of eliminating labor. That conflict results in people who cannot do the jobs they are hired to perform and whose goals are to retain employment in preference to automating anything.
Worse still, you can't talk about it when 85% of the people doing that work find this very subject completely hostile.
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." (Upton Sinclair)
Good, knowledgable employees are not fungible. The in-house culture that built the engineering takes an entire generation to build.
The winner-take-all MBA class of the 1980s to the 2000s and the congressional leadership developed during this era are squarely at fault and their policies need to be replaced.
Did I forget everyone's phone numbers when cell phones came out? Yes.
But this is different. Coding is my passion. I was doing it before I got paid to do it and I'll be doing it after they no longer pay people to do it anymore.
Now? Seems like code quality is outdated and uninteresting all of a sudden. Everything is about agentic coding, harnesses, paying hundreds of dollars to Anthropic to let their LLM do the coding for you, or perhaps using a 128 GB Mac to run a local model. Do you know your code base? Doesn't matter, if there are any bugs in the future Claude will fix them! Tokenmaxxing is the new paradigm; who cares about the end result as long as it runs for now and passes all the (AI-written) tests!
But don't suggest these people shouldn't get $100k+ salaries; after all, they're still "software engineers" in their minds, they're running the agent orchestration harness in the terminal after all, and not everyone anywhere in the world could do that! They're special and deserve to be well compensated for their hard vibe coding work!
This industry is rotting from the inside.
I got my highest-paying numerical programming contract (in the US) just because I knew (from high school math table experience) how to use LUTs to calculate a lot of useful stuff, e.g. quarter squares.
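For the curious, the quarter-square trick mentioned above rests on the identity a*b = floor((a+b)^2/4) - floor((a-b)^2/4), which turns multiplication into two table lookups, an add, and a subtract. A minimal sketch (table size and function names are mine):

```python
# Quarter-square multiplication: a*b == (a+b)**2 // 4 - (a-b)**2 // 4.
# The identity is exact for integers because (a+b) and (a-b) always have
# the same parity, so the floor errors cancel.

TABLE_SIZE = 512
QUARTER_SQUARES = [n * n // 4 for n in range(TABLE_SIZE)]

def lut_multiply(a: int, b: int) -> int:
    """Multiply two small non-negative ints via the quarter-square table."""
    return QUARTER_SQUARES[a + b] - QUARTER_SQUARES[abs(a - b)]

# Sanity check against ordinary multiplication.
assert all(lut_multiply(a, b) == a * b for a in range(200) for b in range(200))
```

This is why a single table of squares (or quarter squares) was enough for hand calculation, and why the same trick shows up in LUT-based numerical code.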
Modernization is great and all. However, it's disappointing to know lots of new programmers are oblivious of the fundamentals.
It's kind of insane how much knowledge a human being needs to have to build certain technologies and it's taken for granted.
AI might make the knowledge easier to acquire but it's still a lot of knowledge that people have to internalize.
As it was said, the future is here, it's just distributed non-uniformly, so somebody is still, and will be for some time, sailing, manufacturing things, and writing code.
Coding is different though: coding doesn't have a cost barrier, it has an ability barrier. I think we will lose a lot of people who were never passionate about programming and perhaps go back to a happy equilibrium. AI is only production-ready if you have someone who understands software development. AI will improve speed to market if you have the right team; it doesn't remove the need for someone to learn to code. You will of course end up with startups using exclusively AI, but they will be the ones who end up with major security breaches or simply cannot scale as the AI goes in the wrong direction. Tbh that's probably a positive, as it weeds out the startups that are focused on buzzwords for funding and not product.
Why is speed-to-market such an important metric? I do not understand the need to mimic the largest players in the industry, nor do I see any particularly profound long term benefits to first mover advantage.
Anecdotally, what I’m seeing right now is the opposite. People who don’t care about programming are joining, while those who do care are getting tired of the bullshit and leaving. The good programmers are the ones leaving; the hacks are extremely happy to use LLMs.
When shit hits the fan, there won’t be many people left to clean it up.
Because it seems to me like there are a lot of coding-adjacent things they still need to be able to do even if they never look at a line of code.
AI has been an effective coding tool for, what, 2 years at most? We've collectively forgotten all of our skills in those 2 years? Really?
https://en.wikipedia.org/wiki/Manufacturing_in_the_United_St...
But civilisations have always forgotten things and then had to re-engineer them. We only recently recreated Roman-equivalent concrete; knowledge required to create the Saturn V rockets had to be re-engineered; we can't recreate medieval stained glass exactly, or Viking Ulfberht Swords; we would struggle to create Betamax tape today.
Many of the examples I found (as expected) relate to military or commercially sensitive technology that did not get written down (for obvious reasons).
It also reminded me of reading Thomas Thwaites' "The Toaster Project: Or a Heroic Attempt to Build a Simple Electric Appliance from Scratch", where, to make a smelter from scratch, he relied on a 450-year-old book ("De re metallica" by Georgius Agricola) as well as a friendly metallurgist.
We already lost the widespread ability to write assembler in an artisanal way. Now that we have AI, we will also be lazy about how we write individual bits of artisanal code. So what? Yes, it will cost more (in time and money) when we need to re-engineer, but how much would it cost to keep alive all the knowledge and skills we might possibly need in the future?
We had better make sure we write down and preserve the recorded data though :)
There will always be specialists who can really debug stuff. Mechanics, etc. Time moves on, and we need to move with it.
I’m amazed at this “end-of-world” crap. People use AI to write this shit, to make it even crazier.
[...]
Money was never the constraint. Knowledge was.
[...]
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
Well said!
But now that the time has come for us to automate and change, we’re all up in arms and using ridiculous arguments like this post to fight it.
The hypocrisy is mind blowing
How quickly America developed shale oil into a viable industry is one example.
I had an idea about what the difference between the groups might be. I think for the latter group the code is an important part of the goal. They see software as ends rather than means. Not entirely, of course.
And the first group considers the artifacts that the software produces to be the goal. So as long as AI-written software is capable of producing a valuable artifact, they are willing and eager to go with it. And AI does that.
If the result of my code is the finetuning of a neural network, I don't really care how it happened. I can benchmark it afterwards and know whether the code that AI made for this purpose was good or not. I can inspect the code, investigate it, pinpoint ideas I don't like, suggest ideas to try that I believe could give better results. I can restart, or try doing the same thing a few times in parallel with different harnesses and models. All in service of the result, which is not the code.
If you have a program that needs to do something and are willing to try AI to write it, think foremost about how you can rephrase the problem so that the output of the AI-written program becomes an artifact that can be independently verified: how to turn the desired behavior into an artifact to evaluate.
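One way to make this concrete: judge the implementation purely by a check on its output, rather than by reading its source. A toy sketch in Python, where `candidate_sort` stands in for AI-generated code (all names here are illustrative, not from the comment):

```python
import random

def candidate_sort(xs):
    """Stand-in for AI-generated code whose internals we never read."""
    return sorted(xs)

def verified_by_artifact(fn, trials=500, seed=42):
    """Accept or reject an implementation purely by its observable output."""
    rng = random.Random(seed)  # fixed seed so the verdict is reproducible
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 40))]
        # The artifact -- a sorted copy of the input -- is checkable
        # against an independent oracle without understanding the code
        # that produced it.
        if fn(list(xs)) != sorted(xs):
            return False
    return True
```

The same pattern scales up: a benchmark score for a finetuned model, a conformance suite for a parser, a golden dataset for a pipeline. The harder design work is choosing an artifact whose verification doesn't itself require trusting the generated code.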
This is weird, but does seem a common result.
-> AI generates a ton of code fast, but then the human takes a long time to review, every time the prompt changes. The AI takes a few minutes to generate code that the human will take an hour to review.
The reviewing is taking longer than if the human had just written the code. So why is it so difficult to go back to coding instead of prompting?
The outsourcing was shedding more of the trivial jobs while trying to keep key positions at home, but increasingly it also started to lose the key positions. It's possible that AI will make the key positions harder to justify outsourcing... but who knows... maybe not.
Not really, since they are always pushing for more wars.
For the actual problem, I fear this can't be solved by warning people; the pain will need to be felt. The system we live in, basically free-market capitalism, cannot do anything except local optimization. Maybe it's for the best, I don't know. The alternative of top-down planning wouldn't have this problem, but it would have other problems. I work for a mid-size, somewhat luxury brand, and the major goal right now is cost cutting and AI for efficiency everywhere, instead of using it to create better products or better ways to reach our customers. When I think about who will buy our luxury products if all jobs were optimized out of existence, I don't have an answer, but again, I think the pain will need to be felt to change course.
Same thing that happened to the unfortunate Dr. Jekyll!
I mean beyond the obvious hacker news bias.
If you like it, nobody will take it away from you as a hobby. But the artisanal aspect of coding as a production mechanism is dying, and it was about time.
At the end of the day, Russia burnt through its entire Soviet stocks in roughly 2-2.5 years, while the US spent a very small proportion of theirs and Europe maybe about half. And now consumption on both sides is similar, with the expense on the Western side to feed that machine being almost invisibly small. Nothing bad happened.
Can we stop repeating this nonsense headline please? We did not stop manufacturing things.
Manufacturing is a huge industry in the West. https://en.wikipedia.org/wiki/Manufacturing_in_the_United_St...
The US manufacturing sector is the biggest it has ever been. Exports are at all time record highs. The only thing that declined about manufacturing is the jobs. We build way more than we ever did but with far fewer people.
What we did do is decide that basic items aren't worth it. Our capacity is limited, our labor pool is limited, expenses are high, it doesn't make sense to make trinkets when we can make complex high precision parts and devices.
But no, we did not forget how to make things. We chose to use our capacity in a smarter way.
What a bright future!
But the rest is a big no from my side.
"In hindsight" - Southpark, please take over.
What if we had kept producing unused weapons for the last 20 years? "Waste of money", "old tech", "useless": that's the dilemma.
Also, the generalization "the West" is awfully misleading.
Let's say all are suffering from military dementia in the same way. Who do you think has an easier time recovering, the USA or Europe? Europe relied, and still relies or freeloads, on the USA, especially in military affairs.
As you wrote: some veterans teach building and handling cruise missiles to young guns, like having an exciting time with the Boy Scouts.
Germany? "Never again! Demilitarize Germany." Decades-long hatred towards the USA was pretty much summed up by the slur "Ami go home!", a phrase used to protest US military bases in Germany. And then, when most of them finally left, it was all just fun and games (losers).
So the USA has some infrastructure and intellectual property to recover, and it never stopped treasuring that as part of the country's history: Veterans Day, the Unknown Soldier, Arlington. Hegseth did a great job stopping the decline here.
Meanwhile, Europe: you couldn't maintain a holdout in secrecy. Some inquiry commission would investigate, and addresses would be leaked and people doxxed.
Have a look at the representatives of the German Army: overweight nice guys. Sorry to say, but I think there is something wrong with this picture.
Europe has nothing to restart. They never had it in the first place. Many tend to forget that the US provided massive supplies to all allies during WW2. Russia would have been wiped out if it weren't for US logistics and money. After the war there was a joke told by survivors of the Eastern Front: the first Sherman got shot on the Eastern Front, not in the West.
Europe was always on life support. France's military forces outnumbered Germany's at the start of WW2. But they were tired, and instead of fighting they built a wall, so to speak. The Netherlands and Denmark were taken without any resistance.
And it is the same for programming. How many European companies dominate globally like FAANG? Exactly. None. 30 years of Internet and it is getting lonely at the top for the US.
"The West"? Nope.
During the '80s, while Chuck E. Cheese was all the rage, in Germany you got massively socially ostracized for showing interest in computers. Playing electronic handhelds put you on notice with teachers, who demanded correction by the school administration. True stories.
Another one: what do all the FAANG-like companies have in common? The founders and top managers have a background in CS. What do European managers have in common? They haven't heard of CS so far.
Europe is a mess. US is maybe having a cold start but gets its shit done.
Germany killed off its industrial sector. Energy producers as well. Germany is doing what Morgenthau had in mind but which was taken off the table: no more wars and weapons, just farmers and horses.
The USA is safe in every regard. It's not that something essential has been lost there. Loss does happen in general, or why would we know so little about Rome?
You have to distinguish recovering from losing. Once you have been at the top, at least you know how to get there, while others in most cases will never get there.
These are different abilities: conserving knowledge and rebuilding it. The USA needs to reactivate, while Europe needs to build from the ground up without any starting point: without money, energy, moral support, nothing.
The USA is already the winner here. And this pattern keeps repeating. 250 years, and what we have is an epoch in which the USA saw kingdoms rise and fall; the USA is the only constant there is.
Treasure it. You are in a safe spot despite all the dire circumstances. A blessing in disguise.
?
Putin's propagandist, or just useful idiot.
Right now, silicon dominance is what's keeping Silicon Valley afloat. That, and the power of the American consumer base. The world is having to adapt to not relying on the US for consumption, due to tariffs among other things. Not only that, attempts to curb competition from China by restricting chip exports and imports of their tech (I don't disagree in principle with either) have led them to be more self-reliant and invest more in domestic R&D.
All this to say, there is no way around winning, and the fact of the competition is also real. You can't deny the competitive advantage proper use of LLMs brings. It's also hard to deny the destructive power of LLMs to societies.
In China, companies are heavily regulated by the state. This means being competitive against the west is a state matter, it also means harming citizens is somewhat tolerated if the economic benefit to the whole country is good, but companies chasing their own profit at the expense of the public good isn't tolerated. I don't agree with their way of doing things, but the only thing limiting their victory over the west is their hesitation and intolerance to all things outside of the SE-Asian sphere of influence. But then again, the anti-migration trend of the US also removes that slight technical advantage the US always held.
There are many problems that can't be solved by LLMs, and expecting developers' value to be the number of lines they type is silly. It doesn't matter so much whether you use LLMs or don't; what matters is results. Westerners' attitude in general is to resist LLMs. This is partly a result of (in my opinion) not realizing that there is non-Western competition. It is absolutely possible to use LLMs to ship high-quality, performant, and secure code; you just don't take the dumb approach of letting LLMs do everything while a human "reviews it". How exactly depends on each development team and company.
Keep in mind that for decades, outsourcing development offshore -- where sub-par code is usually tolerated because of the lower cost to ship -- has been a prevalent trend. If companies can't get Western devs to learn to use LLMs, then they can just ship the work offshore to companies that do use them. Outsourcing didn't lead to the West forgetting how to code, and LLMs won't either.
What will hopefully happen is that you'll get fewer developers learning to code, which means the developers who do the work will get paid better (pay has been on a downturn), so long as they learn to use LLMs.
What people are having a hard time coping with is that the expectation of needing armies of developers to get things done is becoming an antiquated concept. Computers, and then the internet, have done this to many industries. You used to have lots of travel agents in the past; you still do, but very few.
The bigger issue is refusal to learn from history. Concepts like capitalism, communism, market economy, centrally planned economy, etc.. are like half a century out of date. There was no "capitalism" 200 years ago (not in so many words at least). Economists and politicians aren't catching up to changes in technology. Historically, adapting to these changes has been brutal.
I won't claim to predict what will happen, but one way or the other, LLMs won't go away in response to resistance from Western workers, similar to how other changes in tech didn't go away like that. Economies will have to adapt or get decimated until they do. In the meantime, there is ample opportunity for the dominance of the West to fade within our lifetime, should that opportunity be taken advantage of by the competition. If China becomes less dependent on local companies and starts importing a lot more, it can displace US and EU consumption needs, and perhaps even force the West to be producers for its domestic demand. Unregulated Western companies (from Coca-Cola to Disney!) have been trying to achieve just that, because of the large earning potential in China. But again, China could take advantage of all that; they could have more influence over the West, but they're too inward-thinking. They're so afraid of relying on a hostile West that they're preventing the West from becoming completely reliant on them. But this new image of an ineffective and declining US/West, perhaps some success over Taiwan, and establishing a solid non-Western global trade economy could give them that extra confidence?
There will always be room for good developers.
With all due respect, many European taxpayers help pay for Ukraine. I am not disagreeing with the premise of the West killing itself via systematic recessions (Trump invading Iran leading to inflation, as an example); a lot of things are going on that show a ton of incompetence in both the USA and the EU. But at the same time, I also get question marks in my eyes when this criticism comes from a country that receives money from others. That money could instead go to making EU countries more competitive, for instance. I am not saying this should necessarily be the case, mind you; I fully understand the nature of Putin's imperialism. But we need to really consider all factors when it comes to strategic mistakes with regard to production, and that includes taking on debt all the time. There are always a few who benefit in war, just as they benefit from taxpayer subsidies (inside and outside as well).
Yes. https://www.eeas.europa.eu/delegations/united-states-america...
You are, of course, free to disagree and make your point, but ignoring the argument does not advance the discussion.
Factually correct.
> We are benefactors of the Ukrainians' bravery and sacrifices.
Who's we?
> How much money could we have not spent if Hitler had been stopped in Czechoslovakia?
Very different situation, in all aspects.
People come and go at rates that would not be sustainable in any manufacturing business.
No, every time people switch knowledge gets lost and code quality degrades.
In part I blame accounting rules: justifying investments is easier than justifying maintenance.
Even just classified ads or e-commerce platforms such as Gumroad and Shopify are complex enough that a single person cannot master them end to end. The domain is huge and takes a lot of time to master.
The opening paragraph is ridiculous. The FIM-92 Stinger is obsolete. It was replaced by the FGM-148 Javelin. DACH (Germany, Austria, Switzerland) didn't forget how to make things. They are still world class in manufacturing. (Northern Italy is also economically part of that manufacturing mega-hub.)
There are plenty of NLAWs (much cheaper than the Javelin, and only slightly less capable) in EU/NATO stocks to satisfy Ukraine's needs against Russia's heavily armored main battle tanks. For everything else, you can use one or two suicide drones to kill anything with a motor.
And now to give credit where credit is due:
Looking at his (assumed) LinkedIn profile: https://www.linkedin.com/in/denjkestetskov/
It looks like he was educated in Ukraine, so he is likely a Ukrainian national. If I were Ukrainian, then I too would be publishing rage bait like this in an attempt to pressure allies to provide more funding, weapons, and gear.
As a final suggestion, the writer could visually spice up his blog post with one of my all-time favourite military photos from Wikimedia: https://commons.wikimedia.org/wiki/File%3AFIM-92_Stinger_USM...