It doesn't help that the West has a clear bias wherein moving "up" means moving away from the work. Many executives don't know what good looks like at the detail level, so they can't evaluate AI output quality.
I think another part of it is that AI tools demo really well, which easily hides how imperfect and limited they are when people see a contrived or cherry-picked example. Not a lot of people have good intuition for this yet. Many people understand "a functional prototype is not a production app," but far fewer understand "an AI that can be demonstrated to write functional code is not a software engineer," because this reality is rapidly evolving. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially when you consider that much of that information is motivated (e.g., "AI is bad because it's bad to fire engineers," which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find out one careful step at a time. A lot of lessons are going to be learned the hard way.
Yes, and they work really well for small side projects, which is probably the kind of thing an exec used to try out the LLM.
But writing code in one clean discrete repo is (esp. at a large org) only a part of shipping something.
Over time, I think tooling will get better at the pieces surrounding writing the code though. But the human coordination / dependency pieces are still tricky to automate.
I'm (mildly) excited by LLMs because I love a new shiny tool that does appear to have quite some utility.
My analogy these days is a screwdriver. Let's ignore screw development for now.
The first screwdrivers, which we still use, are slotted and have a habit of slipping sideways and jumping (camming out). That's err before LLMs ... something ... something.
Fast forward and we have Phillips and Pozidriv and electric drivers. Yes, there were ratchet jobs, and I still have one, but the cordless electric drill driver is nearly as magical as the Dr Who sonic effort! That's your modern LLM, that is.
Now a modern drill driver can wrench your wrist if you are not careful and brace properly. A modern LLM will hallucinate like a nineties raver on ecstasy, but if you listen carefully, phrase your prompts carefully, ignore the chomping teeth and keep them hydrated, you may get something remarkable out of the creature 8)
Now I only use Chat at the totally free level but I do run several on-prem models using ollama and llama.cpp (all compiled from source ... obviously).
I love a chat with the snappily named "Qwen3.5-35B-A3B-UD-Q4_K_XL", but I'm well aware that it is like an old-school Black & Decker off of the noughties and not like my modern DeWalt wrist-knackerers. I've still managed to get it to assist me in getting PowerDNS running with DNSSEC and Lua, and in configuring LACP and port channel/trunking and that on several switch brands.
You?
I really think a lot of folks were conned by a smooth operator and a polished demo, so now everyone has to suffer through having this nebulous thing rammed down our throats regardless of its real utility, because people at higher pay grades believe it has utility.
It feels like a lot of “AI is inevitable; you are failing to make this abundant future inevitable by your skepticism.”
Like what - the world's most advanced blowjob?
For the record, all your prompts are tracked and easily viewable by whoever oversees it at your company. Don't prompt more than you have to, and certainly don't give it your best ideas. This is value at scale.
Yes, we have craftsmanship, but at the end of the day everything is ephemeral and impermanent and the world continues on without remembering us.
I think both the IC and executive are correct in superposition.
I think the simple explanation for why executives are so hyped about AI is that they're not familiar with its severe current limitations. For example, Garry Tan seems to really believe he's generating 10 KLOC of working code per day; if he'd been a working developer, he would have known he isn't.
I was explaining this to my wife, who asked why the CEO doesn't understand the limitations and the drawbacks the programmers are experiencing. And I said: he doesn't care, because he's looking at what other businesses are doing, what they're writing about in Bloomberg and the WSJ, what "industry best practice" is, and where the money is going. Trillions of dollars are going into revolutionizing every industry with AI. If you're a CEO and you're not angling to capture a piece of that, then the board is going to have some serious questions about your capability to lead the company. Executives are often ignorant of the problems faced by line workers in a way perhaps best explained by a particular scene from Swordfish (2001): "He lives in a world beyond your world..." https://www.youtube.com/watch?v=jOV6YelKJ-A The complaints of a few programmers just don't matter when you have millions or billions of capital at your command, and business experts are saying you can tenfold your output with half the engineering workforce.
Right now there are only two choices for programmers. Either embrace generative AI fully and become proficient at it (instead of surfacing problems with it, offer solutions: how can we use AI to make this better?), or have a very, very hard time working in the field.
I understand that developers feel their code is an art form and are pissed off that their life’s work is now a commodity; but, it’s time to either accept it and move on with what has happened, specialize as an actual artist, or potentially find yourself in a very rough spot.
E.g. when Jensen Huang said that you need to pair your $250k engineer with $250k of tokens.
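Purely as a back-of-the-envelope illustration of what that claim implies (the per-token price and working-day count below are my assumptions, not figures from Huang), $250k of tokens is an enormous volume:

```python
# Illustrative arithmetic only: the $10-per-million-token blended price
# and the 250 working days per year are assumptions, not quoted figures.
annual_token_budget_usd = 250_000
price_per_million_tokens_usd = 10.0

tokens_per_year = annual_token_budget_usd / price_per_million_tokens_usd * 1_000_000
tokens_per_workday = tokens_per_year / 250

print(f"{tokens_per_year:,.0f} tokens/year")       # 25,000,000,000
print(f"{tokens_per_workday:,.0f} tokens/workday")  # 100,000,000
```

On those assumed prices that's roughly 100M tokens per engineer per working day, which says something about how literally the pairing was meant.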
It massively boosts my efficiency as just reading the code myself would take days.
And with LLMs also more context and token usage and cost.
The manager class sees worker units as fungible.
ICs dislike this because it raises expectations and puts the spotlight on delivery velocity. In a manufacturing analogy, it’s the same as adding robots that enable workers to pack twice as many pallets per day. You work the same hours, but you’re more tired, and the company pockets the profits.
Software Engineers are experiencing, many for the first time in their careers, what happens when they lose individual bargaining power. Their jobs are being redefined, and they have no say in the matter - especially in the US where “Union” is a forbidden word.
The more appropriate tools for ICs are torches and pitchforks.
No, they are captured disproportionately by the haut bourgeois capitalists. The two groups overlap to an extent (when major capitalists are nominally employed by a firm they invest in, it is usually as an executive), but executives qua executives (that is, in their role as top-level managerial employees) are not the main beneficiaries of increased productivity.
After that, programmers fell into the situation you are describing: relatively high bargaining power and salaries. Hopefully now, with the push for AI, we will finally see another pro-labor organisation effort!
It isn't this. This is the executive's misinterpretation.
Why is this supposed to be a good thing?
I _could_ do my job without AI, but it would take twice the time and I would feel miserable having to type out every single character like a caveman.
Just AI autocomplete alone is a massive life changer. It reduces my typing by at least half, and is highly accurate to what I want to write.
Me (and my friends similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture etc.
They just create even more slop currently, which will be the case until someone realizes they aren't needed to produce slop at all.
And plenty of prolific programmers are writing publicly about their AI use.
I find people tend to omit that on HN and folks dealing with different roles end up yelling at each other because those details are missing. Being an embedded sw engineer writing straight C/ASM is, for instance, quite different from being a frontend engineer. AI will perform quite differently in each case.
ICs worry about doing their job (either doing it well because they care about their craft, or doing it good enough because they need to pay bills). AI doesn't really promise them anything. Maybe they automate some of their tasks away, but that just means they will take on more tasks. For practically any IC, there is no increase in wealth nor reduction in labor time. There is only a new quiet lingering threat that they might be laid off if an executive determines they're not needed anymore.
That's the difference in enthusiasm about AI.
Thoughts and ideas as in "I will implement this in this structure, with these tradeoffs, and it will work with these 4 APIs and have no extra features, and here's how I'm (or the LLM with tools is) going to run it and test it".
Thoughts and ideas not as in "build facebook". A lot of people think AI can do that; it won't (but might pretend to), and it will just lead to failure.
My competitive edge did not diminish, it expanded.
Reality check: LLMs are available to everyone, dev or otherwise, so your 'competitive edge' is indeed diminished if you believe LLMs are all that.
I believe that it's pretty close to the article's thesis, just more prosaic.
And yes, the AI works great for some programming tasks, just not for everything or completely unsupervised.
(Yeah, I know, there's lots of instances of execs who got paid huge amounts of money and delivered abysmal results...)
That said, the central point of the TFA is spot-on, though it could be made more generally, as it applies to engineering as well as management: uncertainty rises sharply the higher you climb the corporate and/or seniority ladder. In fact, the most important responsibility at higher levels is to take increasing ambiguity and transform it into much more deterministic roles and tasks that can be farmed out to many more people lower on the ladder.
The biggest impact of AI is that most deterministic tasks (and even some surprisingly ambiguous ones) are now spoken for. This happens to be the bread and butter of the junior levels, and is where most of the job displacement will happen.
I would say the most essential skill now is critical thinking, and the most essential personality trait is being comfortable with uncertainty (or as the LinkedInfluencers call it, "having a growth mindset.") Unfortunately, most of our current educational and training processes fail to adequately prepare us for this (see: "grade inflation") so at a minimum the fix needs to start there.
Developers use it for grokking a codebase, for implementing boilerplate, for debugging. They don't need juniors to do the grunt work anymore, they can build and throw away, and the language and technology moats get smaller.
The value of low level managers, whose power came from having warm bodies to do the grunt work, diminishes.
The bean counters will be like when does it pay for itself. Will it? IDK, IDC.
I know there's an attempt to shift the development part from developers to other laypeople, but I think that's just going to frustrate everyone involved and probably settle back down into technical roles again. Well paid? Unclear.
Look, I know that we like poking fun at some people but generally I haven't seen execs saying this.
Executives do not need actively functional systems from AI to help with their own daily work. Nothing falls over if their report is not quite right. So they are seeing AI output that is more complete for their own purposes.
But also, AI is good enough to accelerate software engineering. To the degree that there are problems with the output, well, that's why they haven't fired all the engineers yet. And executives never really cared about code quality -- that is the engineers' problem.
What I'm trying to build for my small business client right now is not engineering, but it still requires some remaining employees. He's already automated a lot of it. But I'm trying to make a full version of his little call center that can run on one box like an H200, which we can rent for like $3.59/hr, which, if I remember correctly, is approximately the cost of one of his Filipino employees.
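For what it's worth, that hourly figure only stays comparable to a wage if the box is busy around the clock; here's a rough sketch (the $3.59/hr H200 rental rate is from the comment above, the continuous-utilization assumption is mine):

```python
# Rough cost sketch: the H200 rental rate comes from the parent comment;
# continuous 24/7 utilization over a 30-day month is an assumption.
gpu_rate_usd_per_hour = 3.59
hours_per_month = 24 * 30  # assume the box runs continuously

gpu_monthly_cost_usd = gpu_rate_usd_per_hour * hours_per_month
print(f"~${gpu_monthly_cost_usd:,.2f}/month for one rented H200")  # ~$2,584.80/month
```

If the box sits idle half the time, the effective per-useful-hour cost doubles, so the wage comparison is sensitive to utilization.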
Where we are headed is that the executives are themselves pretty quickly going to be targeted for replacement. Especially those that do not have firm upper class social status that puts them in the same social group as ownership.
But I will insist that executives are more driven by FOMO than a teenager.
If you are not, you either have a boring job or do not have any ideas that are worth prototyping asynchronously. Or haven't tried AI in the last ~3 months.
But I suppose it depends on what you consider fun. I genuinely know people who love to meticulously write many many unit tests. I think that's great as a craft, but you probably can not expect to get paid for it, similar to how you likely can not be profitable by selling handmade shirts now unless you are already independently wealthy and well known.
For non-technical, the current meteoric rise of AI is due to the fact that AI is generally synonymous to "it can talk". It has never _really_ spoken to the wider audience that the image recognition, or various filters, or whatever classifiers they could have stumbled upon are AI as well. What we have, now, is AI in the truest sense. And executives are primarily non-technical.
As for the technical people, we know how it works, we know how it doesn't work, and we're not particularly amused.
For executives, that's writing code. For ICs, it's other stuff.
It’s like Marc Andreessen bloviating about how AI will replace everyone except him.
To be fair, some of this is understandable. At some level, you’re just going to see some things as a bullet point in a daily/monthly/quarterly report and possibly a 10 minute presentation. You’re implicitly assuming that the folks under you have condensed this information into something meaningful.
It's honestly insane that they think this.
They really don't understand that they're building something they cannot possibly control, if it turns out to be what they're envisioning.
On top of that, places like Amazon extol the virtues of only working on projects that can be completed with entirely fungible staffing and Google tries ever so hard to electroplate this steaming turd of an ideology with iron pyrite calling fungibles "generalists."
So along comes AI coding agents, which I love as an IC because it excels at tedious work I'd rather not have to do in the first place, yet I get why others see it as a threat. But I really think it's no more of a threat than any other empty promise to cut costs with the silver bullet of the month and we just have to let the loudmouths insist otherwise until the industry figures out this isn't a magic black box. They never learn, do they? Maybe their jobs depend on never learning.
Meanwhile executives see the money related numbers go up.
In my systems programming job ICs have mostly avoided it because we don't have time to learn a new thing with questionable benefits. A lot of my team are really, really good programmers and like that aspect of the job. They don't want to turn any part of it over to a machine. Now if a machine could save us from ever dealing with Jira...
That said, I have begun using AI for some things and it is starting to be useful. It's still 50/50 though, with many hallucinations that waste time but some cases where it caught very simple bugs (syntax or copy/paste errors). I think the experience of, say, systems programmers is very different vs. python/web folks though. AI does a great job for my helper scripts in Python.
Management needs to take their own medicine though. They continue to refuse to leverage AI to do things it could actually be good at. I give a duplicate status to management 3x/week now. Why? AI could handle tracking and summarizing it just fine. It could also produce my monthly status for me.
Ha! Apparently the author hasn't been asked "how long will it take to code this?" yet... And isn't a common developer complaint that management does not know how to evaluate them, and substitutes things like how quickly a task gets completed, with the result that some guy looks amazing while his coworkers get stuck with all his technical debt?
- You ask someone to do it
- You check their work and they made some mistakes, but it's good enough to use
- You ultimately don't know if they're doing the best at their job but you have regular performance check-ins to be safe
As ICs we can complain all we want about the quality of AI, but as far as your manager goes - you using AI is not that much different to them having an employee.
e: typo
It makes me think of an executive I once reported to who “increased velocity” by changing the utilization rate on a spreadsheet from 75% to 80%.
embedded/cloud/IoT --> AI --> quantum…
When the company originally known as C3 Energy changes their name to C3.quantum, you'll know we're on to the next buzzword.
Curious how you verify this behavior would be unique to the West?
I seriously doubt Satya Nadella is sitting down for hours a day to use Copilot to draft detailed documents. He's being fed fantastical stories by his lackeys telling him what he wants to hear.
I’m neither a developer nor an executive, but from my vantage point the software crisis has to do with the fact that software development presents an existential risk to any organization that engages in it. It seems to be utterly resilient to estimation, and projects can run late by months or even years with no good explanation except “it’s management’s fault.” This has been discussed at length. If I had a good answer, “I wouldn’t still be working here” as the saying goes. But half a century after The Mythical Man Month, it still reads like it was written yesterday, and “no silver bullets” seems to ring true.
In my view, the software crisis will be resilient. Throwing more code, or more code per day, at a late project will make it later. There will be a grace period while the pace of coding seems exciting, but then the reality will set in: “We haven’t shipped a product.” And it will be management’s fault.
Not even sure if determinism is a good axis to analyze this problem. Also smells extremely like concept creep - do you mean "moving up the abstraction stack" as "non determinism" too?
When you analyze this as "Management loves AI" and "workers hate it" goes completely back to 'who owns the means of production?', and can be clearly seen within Marx's critique.
IC can refer to people leading, without direct reports, making $500k+ in comp.
How? Marx's critique doesn't land here at all.
Executives see this as way to replace labor.
The labor sees themselves being replaced.
This is a story as old as the hills.
Narrator: there is not
But because time is money, I think all the benefits go to the dev. The exec still needs the dev regardless
It accomplished this not simply by eliminating my overpaid bullshit job as parasite attractor; but by putting an end to its pathetic semblance of a premise: building software to be used by, uh, someone? for, uh, something?
The various entities requesting the work (or, in later years, the layers of barely-sentient intermediaries between me and said entities) were hardly if ever clear on how exactly this was supposed to produce value; but now they're free, too! Free from having to even try to understand how answering that question is relevant, so in the end it worked out for them as well!
I am finally at liberty to do something worthwhile with my life, and while at this point I realize it'll take me some time to even remember what "worthwhile" even was (or whether such a thing still exists in your imaginary world of personalized sensory bubbles), I do sleep a rich REM sleep knowing society is now capable of digging its own grave without my assistance. Seriously, I was looking at my bank account and getting a little worried.
I am told that mine is a minority position: if you happen to be the kind of person who believes that more is better, no matter more of what, rest assured you and your eventual progeny will be quite safe - for a while, anyway - in your new role as AI trainer (or is it AI fodder, let's let the market decide!)
Well, turns out when we are all busy looking the part, it becomes impossible for anyone to actually play the part; but also nobody notices, so this is fine too!
Just one request on my part: if possible, do shut up while figuring out how to better turn yourself and our world into paperclips, alright? Besides the ones that you recognize as people, a whole bunch of other people do live on this here planetation - and I hear they find all the AI blather to be mighty annoying.