Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. It reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafted up a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).
So now the "productivity-gain bottleneck" is people who still care enough to review manually.
/rant
See also this video from Nate B Jones: https://youtu.be/FDkvRl1RlT0?si=WUK2WJTXvKAWKD0r
> Writing documentation is arduous and a little painful, which as it turns out is a good thing as it incentivizes the writer to be as succinct as possible.
It takes more effort to be brief, even for humans. Good documentation writers were always brief.
So like ATS checkers for resumes, I find myself needing an AI checker for my text.
Ultimately, we will have AI write everything for another AI to parse, which will be a massive waste of energy. If only there were some agreed-upon set of rules, structures, standards, and procedures to facilitate more efficient communication...
If I were your manager, and you sent me your seventeen-page AI-generated thing coz you think I'm just gonna summarize it anyway and that I expect something long: you misread me.
I make a point all the time, to everyone who won't listen, not to send me walls of text. I'm not gonna read them. I'm gonna ignore them and close your bug reports until you've spent the time to make them short and legible enough for me to understand. If you use AI for that, I don't care. But I'd better get something short that makes actual sense when I read it and holds up when I verify it. If I wanted to just ask AI, I'd do it myself. You have to "value add" to the AI if you want to be valuable yourself.
I just type what I want to say and hit send. YOLO
It will probably take a couple hundred years but I'm pretty sure I'm right about this :)
API or die /s.
Seriously, though, fuck that shit!..
I feel the loss of this signal acutely. It’s an adjustment to react to a 10-30 page “spec” chock-a-block with formatting and ASCII figures as if it were a verbal spitball … because these days it likely is.
man I see this on Jira: a PM or BA is like "yeah, I'll write that AC for you" and out comes a giant bullet list filled with a bunch of emojis and checkmarks
How quickly we become reverse centaurs.
Just give me normal bulleted items, I can read.
Ideally AI would minimize excessive documentation. "Core knowledge" (first principles, human intent, tribal knowledge, data illegible to AI systems) would be documented by humans, while AI would be used to derive everything downstream (e.g. weekly progress updates, changelogs). But the temptation to use AI to pad that core knowledge is too pervasive, like all the meaningless LLM-generated fluff all too common in emails these days.
EVERYONE (engineers, pms, managers, sales) uses Claude Code to read and write Google Docs (google workspace mcp). Ideas, designs, reports. It's too much for one person to read and, with a distributed async team, there's an endless demand for more.
So for every project there's always one super Google Doc with 50 tabs and everyone just points their claude code at it to answer questions. It's not to be read by a human, it's just context for the agent.
I used to have a colleague (senior engineer) who never cared to write a single line in Pull Request descriptions, as if other people had to magically know what he meant to achieve with such changes.
Now? His PRs have a full page description with "bulleted summaries of bulleted summaries"!
So, I approach it in good faith, but I do get upset when people say "I'll ask claude". You need to be the intermediary; I can also prompt claude and read back the result. If you are going to hire an employee to do work on your behalf, you are responsible for their performance at the end of the day. And that's what an AI assistant is. The buck stops with you. But I don't think people understand that, or that they aren't adding value. At some point, you have to use your brain to decide if the AI is making sense; that's not really my job as the code/doc reviewer. I want to have a conversation with you, not your tooling, basically.
Minimum word counts are the greatest disservice high school and college have ever done to future communication skills. It takes years for people to unlearn this in the workplace.
Max word counts only please. Especially now with AI making it so easy to produce fluff with no signal.
In college, I took a constructive writing course because I thought "Hey, easy A!" After the second or third week, the professor told me that, while the class had a word minimum, I would also be given a separate word maximum. She said I needed to learn brevity and simplicity, before anything else.
The point being: I was able to cruise through high school with my longwindedness as a cheat code, never stressing about minimum lengths, despite my writing being crap in other ways.
Although I have regressed in the two decades since, it helped me a good deal. I am grateful to that professor for doing that.
Near-copying is everywhere (patents, graphic design, business), though in other areas it is often applauded and less obviously deceptive.
We talk about countries copying, e.g. Japan was notorious for it. I think the underlying motivation there is ownership: greedy people feeling they own everything (arts and technology). "We own that and you stole it from us," along with the entitlement of never acknowledging when they copy others.
His explanation: I don't want to read more than that, and you should be able to fit all the most important details in one page.
Great lesson.
Well put. I generally skip AI-generated PR descriptions for this reason as they tend to miss the forest for the trees. Sometimes a large change can be explained by a short yet information-rich description ("migrate to use X instead of Y", "Implement F using pattern P") that only a human could and should write.
A huge AI signal to me is not em dashes, not emoji, not even the "not X, it's Y" construction which oh god I'm falling into the trap right now aren't I.
It's a combination of these factors plus a tendency to fluff out the piece with punchy but vague language, often recapitulating the same points in slightly reworded ways, that sounds like... an eighth grader trying to write an impressive-sounding essay that clears the minimum word limit.
Did the bright sparks who trained these things just crack open the printer paper boxes in their parents' homes filled with their old schoolwork, and feed that into the machine to get it started?
Even though real humans write like that when writing documents, they never did that in informal messaging.
This is not adding value for anyone except people whose function is to look busy, and people trying to avoid their busy work.
In the future everyone will have a bot and our bots will just handle all interactions
It's some sort of leverage: "I spend 5 minutes prompting so that you can spend 30 minutes reviewing." Not gonna happen, LLM buddies.
The length itself is not an indicator per se, but you can sense when it is not honest. If others do not have a sense for it, it seems like complaining about something new.
The bulk of pretty much everything is fluff, not just workplace artifacts.
In many ways this is the root of all complexity.
“Anything more than the truth would be too much.”
- Robert Frost
There is a third shape: experts who have become so reliant on / accustomed to AI that it dilutes their previously sharp judgment and, importantly, taste. I am seeing more and more work produced by experts that seems strangely out of character. A needlessly verbose text written by someone who was previously allergic to verbosity. An over-engineered solution (complete with CLI, storage backend, documentation, unit tests) for a trivial problem that the same person would have solved with an elegant bash one-liner only 3 years ago. The work itself is always completely immune to any rational criticism, as it checks all the boxes: extensive documentation, scalable, high test coverage, perfect code style, and for texts perfect grammar, non-offensive, seemingly objective. But, for lack of a better word, it simply lacks taste.
Importantly, I think AI companies are motivated towards the overengineered solutions as they increase the buyer's token spend. I'm not sure how we can create incentives that optimize for finding the 'right' solution, which may be the cheapest (the bash one-liner). Perhaps a widely recognized but not overly optimized for benchmark for this class of problems?
My company is full of managers who haven't written code in years. They hired an architect 18 months ago who used AI to architect everything. To the senior devs it was obvious that everything was massively over-engineered, yet because he used all the proper terminology he sounded more competent to upper management than the other senior managers who didn't. When called out, he would resort to personal attacks.
After about 6 months, several people left and the ones who stayed went all in on AI. They've been building agentic workflows for the past 12 months in an effort to plug the gap from the competent members of staff leaving.
The result: nothing of value has been released in the past 18 months. The business is cutting costs after wasting massive amounts on cloud compute for poorly designed solutions, making up for it by freezing hiring.
When you change the economics to such a degree, you're basically removing a dam - resulting in far more stress on the rest of the system. If the leaders of the org don't see the potential downsides and risks of that, they're in for a world of hurt.
I think we're going to see a real surge of companies just like this - crash and burn even though this tech was sold as being a universal improvement. The ones that survive will spread their knowledge about how to tame this wild horse, and ideally we'll learn a thing or two in the future.
But the wave of naivety has surprised me, and I think there's an endless onrush of people that are overly excited about their new ability to vibe-code things into existence. I think we've got our own endless September event going on for the foreseeable future.
It’s like some kind of management parasite. I’m not even sure at this point that it’s going to lead to an overall productivity increase whatsoever for most sectors, because of this added drag on everything.
You’ve hit the real issue: IT management is D-tier and lacks self-awareness. “Agile” is effed up as a rule, while also being the simplest business process ever.
That juniors and fakers are whole hog on LLMs is understandable to me. Hype, fashion, and BS are always potent. The part I still cannot understand, as an Executive in spirit: when there is a production issue, and one of these vibes monkeys you are paying has to fix it, how could you watch them copy and paste logs into a service you’re top dollar paying for, over and over, with no idea of what they’re doing, and also not be on your way to jail for highly defensible manslaughter?
We don’t pay mechanics to Google “how to fix car”.
Rewrite that old crunchy system that has had 0 incidents in the last year and is also largely "done" (not a lot of new requirements coming in, pretty settled code/architecture)? It's actually one of our most stable systems. But someone who doesn't even write code here thinks the code is yucky! But that doesn't convince the engineers who are on-call for it to replace it for almost no reason. Well guess what. We can do it now, _because AI!!!_ (cue exactly what you think happens next happening next)
Need to lay off 10% of staff because you think the workers are getting too good of a deal? AI.
Need to convince your workers to go faster, but EMs tell you you can't just crack the whip? AI mandates / token spend mandates!
Didn't like code reviews and people nitpicking your designs? Sorry, code reviews are canceled, because of AI.
Don't like meetings or working in a team? Well now everyone is a team of 1, because of AI. Better set up some "teams" full of teams of 1, call them "AI-first" teams, and wait what do you mean they're on vacation and the service is down?
Etc. And they don't even care that these things result in the exact negative outcomes that are why you didn't do them before you had the excuse. You're happy that YOUR thing finally got done despite all the whiners and detractors. And of course, it turns out that businesses can withstand an absurd amount of dysfunction without really feeling it. So it just happens. Maybe some people leave. You hire people who just left their last place for doing the thing you just did and now maybe they spend a bit of time here. And the game of musical chairs, petty monarchies, and degenerate capitalism continues a bit longer.
Big props to the people who managed to invent and sell an excuse machine though. Turns out that's what everyone actually wanted.
From the article:
> because the competence the work reflects is not the novice’s competence at all
The core of the problem is that AI allows engineers who were previously inexperienced or downright mediocre to pretend that they are talented, and a lot of management isn’t equipped to evaluate that. It’s like tourists looking at a grocery store in North Korea from their tour bus. It looks like a fully functioning grocery store from the outside, but it is mostly cutouts and plastic fruit.
Adding to the grab-bag of useful flow-dysfunction concepts and metaphors: Braess's paradox. [0]
Sometimes adding a new route makes congestion strictly worse! Not (just) because of practical issues like intersections, but because it changes the core game-theory between competing drivers choosing routes.
Absolutely. Giving a traditional company AI is like giving an unlimited supply of crystal-blue methamphetamine to a deadbeat pill addict.
It enables and supercharges all their worst impulses. Making a broken system more 'productive' doesn't do shit to make the users better off.
The work output everyone produces doubles, but the ratio of productive to net-negative work plummets.
At my last job we watched a PM slowly become a vibe manager of vibe coders. He started inserting himself into technical discussions and using AI to dictate our direction at every step. We would reply, but it got so laborious fighting against a human relaying AI output about topics they didn't understand that people left. We weren't allowed to push back anymore either, or our jobs would get threatened because of AI. Then they started mandating that everyone vibe coded, and the amount of vibe coding was being monitored. The PM got so disorganized being a PM and an engineer and an architect (their choice, no one wanted this) that they would make multiple tickets for the same task with wildly different requirements. One team member would then vibe code it one way and another would do it another way.
It was so hard to watch a profitable team of 20 people bringing in almost 100 million in profit a year slide into uselessness and the most pointless work. I then left. I am trying my best not to be jaded by all of these changes to the software industry, but it's a real struggle.
Good riddance, the ocean floor will soon be littered with Titanics like this.
1. My own manager now gives "expert advice and suggestions" using Claude based on his/her incomplete understanding of the domain.
2. Multiple non-technical people within the company are developing internal software tools to be deployed org-wide, hoping such demos will get them the recognition and incentives they feel they deserve. Management, as expected, is impressed and approving such POCs.
3. Hyperactive colleagues showcasing expert-looking demos that leadership buys, all the while having zero understanding of what's happening underneath.
I didn't know how to articulate this problem well, but this article does a great job!
Oh, that's bad. Sounds like a terribly toxic environment.
I’m starting to realise that many people, and management themselves, don’t really understand why the firm exists or what they do. Funny to watch, tbh.
Heard some wild statements in the past few months. A couple that come to mind:
- "we don't need to review the output closely, it's designed to correct itself" - "it comes up with the requirements, writes the tickets, and prioritises what to work on. We only need to give it a two or three line prompt"
The promise of this agentic workflow is always only a few weeks away. It's not been used to build anything that has made it to production yet.
Huh? 18 months ago? I've been using it that long - it wasn't able to do that back then....
It was, if you accept that it did so poorly.
Wisdom is a thing, and so is competence. Humans either have them or they don't; machines do not (yet). But the massive capabilities of the tools are also something that can't be ignored.
We can't throw the baby out with the bathwater. It's going to take some cycles of learning the ropes with this technology for humans to understand it better.
I would push back: why couldn't the senior devs communicate these issues to senior management? It sounds like a broken human system, not a broken tool or technology. All AI did was shine a light on the human issues in that org.
Very seldom does middle/upper management truly listen to engineers, unless there's buy-in from the CTO/VP to champion the ideas and complaints.
His main point, though, is this:
I have a colleague ... who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field.
I've been reading many rants like that lately. If they came with examples, they would be more helpful. The author does not elaborate on "the schemas, and more importantly the objectives, were wrong". The LLM's schema vs. a "good" schema should have been in the next paragraph. That would change the article from a rant to a bug report. We don't know what went wrong here.
It's not clear whether the trouble is that the schema can't represent the business problem, or that the database performance is terrible because the schema is inefficient. If you have the schema and the objectives, that's close to a specification. Given a specification, LLMs can potentially do a decent job. If the LLM generates the spec itself, then it needs a lot of context which it probably doesn't have.
This isn't necessarily an LLM problem. Large teams producing in-house business process systems tend to fall into the same hole. This is almost the classic way large in-house systems fail.
It looked damned impressive, and it kind of worked to demo, but he is in no way a programmer, though he understood the problem domain very well. I asked a few basic questions:
- Where is the data stored?
- How would you recover from a database failure?
- Does it consume tokens at runtime?
- What is the runtime used at the back end?
- Why are the web pages 3 MB in size and take forever to load?
He had no idea.
It's a typical vibe coding scenario, and people like to paint this as why vibe sucks.
I think however that all that is needed to bridge the gap is some very simple feedback from an expert at the right time.
For example, to someone who knows about databases, it's pretty easy to look at a database schema and spot stuff that looks off: denormalised data, weird columns. That takes 10 minutes, and the feedback could be given directly to the LLM.
Likewise someone who knows a little about systems architecture could make sure at the outset that some good practices are followed, e.g.:
- "I want your help to build this system but at runtime I do not want to consume any tokens."
- "I want the system to store its data in Postgres (or whatever) and I want documented recovery plans if the database craps itself".
- "I want web pages to, as much as possible, load and render as quickly as possible, and then pull data in from the back end, with loading indicators showing where the UI was not yet up to date".
We have LOB prototypes vibe coded by enthusiastic domain experts that we are supporting in a “port and release” fashion. A senior engineer takes the prototype and uses Claude code to generate a reasonable design, do an initial rough port (~80% functional, 100% auth & audit logging) and (hopefully) all the guidance necessary to keep the agent between the lines. Coupled with review bots and evolving architecture guidance etc. Then the business partner develops and supports it from there.
For low stakes CRUD, I think it’s a reasonable middle ground. There truly is a lot of value in letting an expert user fine tune UX; and we’re only doing this with people who are already good at defining requirements and have the kind of “systems” thinking that makes them valuable analyst resources to the tech team already. Early results are encouraging but it’s way too early to draw conclusions.
Personally I hate how badly internal users are served by the majority of their systems and am willing to take some calculated long-term governance risks.
Verifying LLM output needs to occur every time LLM output is generated, so no it doesn’t just take 10 minutes.
It takes (10 minutes to review + the time to change the LLM input + 10 minutes to verify the fix worked) × roughly the number of times the code is regenerated.
Which is why vibe coding is so common: if you actually care about quality, LLMs are a near-endless time sink.
I don't think it's as simple as that. What will most likely happen is that the vibe coders will quickly eat up your time asking for validation and feedback if you are not careful. You are also now implicitly contributing to their project, which if it goes south, could come back to bite you. If the vibe coders are pushing code in the org, then they should become part of the formal review process like any other junior programmer.
They should also be forced to do daily stand-ups, sit in meetings and explain their code like the rest of us.
I think at the validation stage technical details like that shouldn’t matter. All that matters is whether there is market demand for this.
If yes, go and build it properly.
What the article's author seems to be hinting at is that the problem was described incorrectly from day one, and the LLM picked the wrong schema from day one. Because the person making it is not technically literate enough to describe the problem in a way an LLM interpreted correctly.
The hidden BA work a developer usually does was missing from the process.
* Many software engineers didn't do real engineering work during their entire careers. In large companies it's even harder - you arrive as a small gear and are inserted into a large mechanism. You learn some configuration language some smart-ass invented to get a promo, "learn" the product by cleaning tons of those configs, refactoring them, "fixing" results in another bespoke framework by adjusting some knobs in the config language you are now expert in. Five years pass and you are still doing that.
* There are many near-engineering positions in the industry. The guy who always told you how much he liked working with people and that's why he stopped coding; the lady who was always fascinated by the product and by working with users. They all fill the space in small and large companies as .*Ms.
* The train is slow-moving, especially in large companies. Commit-to-prod can easily span months, with six months being the norm. For some large, critical systems, agentic code still hasn't reached production as of today.
Considering the above: AI is replacing some BS jobs; people who were near-code but above it suddenly enjoy vibe-coding; and their shit still hasn't hit the fan in slow-moving companies. But oh man, it looks like a productivity boom.
Right now we're in a gold rush. Companies, be they established ones or startups, are in a frenzy to transform or launch AI-first products.
You are not rewarded for building extremely robust and fast systems - the goal right now is to essentially build ETL and data piping systems as fast as humanly (or inhumanly) possible, and being able to add as many features as possible. The quality of the software is of less importance.
And, yes, senior engineers with other priorities are being overshadowed - even left in the dust - if they don't use tools to enhance their speed. As the article states, there are novice coders, even non-coders that are pushing out features like you wouldn't believe it. As long as these yield the right output, and don't crash the systems, that's a gold star.
Of course there are still many companies whose products do not fall under that, and very much rely on robust engineering - but at least in the startup space there's overwhelmingly many whose product is to gather data (external, internal), add agents, and do some action for the client.
You need extremely competent, and critically thinking technical leaders on the top to tackle this problem. But we're also in the age where people with somewhat limited technical experience are becoming CTOs or highly-ranked technical workers in an org, for no other reason than that they know how to use modern AI systems, and likely have a recent history of being extremely productive.
Some of the interviews I was getting were at AI startups, and all of them were either doing architectural questions or multiple rounds of architectural, behavioural and leetcode problems.
Only one of the orgs was hiring junior engineers and the director of technology mentioned to me he didn't want to as they were "incapable", but it was a quota given to him by the board.
I also got told by multiple recruitment agents that I wasn't experienced enough, and some hiring managers were demanding 15 YOE for a senior role.
Back then I was not in the “nitpicker’s radar” yet. I was working in small teams and shipping like crazy, sometimes fixing small bugs literally in seconds.
Things worked, were stable, made money, teams were fun and code and product had quality.
The post-Thoughtworks, post-Uncle-Bob world of 2015-2025 was absolute hell for a maker. It was 100% about performative quality. Everything was verbose and had to be by the book, even when it didn't make sense from an engineering or product point of view.
Different opinions were simply not accepted.
It was the age of bloat, of thousands of dependencies, of nitpicks, of infinite meetings, of quality in paper but not in practice, of doing overtime, of being on a fucking pager, of having CI/CD that took 10 hours to merge, and all the stress it comes with.
I would be totally ok if all those “professional” engineers from that generation were to be replaced with hackers, both old and new.
Neither are they code sweatshops churning out one quick templated e-shop/company site after another (I knew some people in that space; even 20 years ago one individual easily churned out 2-3 full sites in a week, depending on complexity).
Typical companies, and this includes banks btw, see these LLMs as productivity boosters, a way to cut expensive SaaS offerings and do more in-house, rather than as a headcount-cutting tool par excellence. Not everybody is as dumb and penny-pinching-greedy as, e.g., Amazon. There, quality of output is still massively more important than volume or speed. CTOs are not all a bunch of shortsighted idiots. But these don't make catchy headlines, do they.
This made me think of How I ship projects at big tech companies[1], specifically "Shipping is a social construct within a company. Concretely, that means that a project is shipped when the important people at your company believe it is shipped."
Career progression gets easier just by being the right age, or being the right race (whatever that is at your company), or being the right gender (again, depends on your company). Grooming and personal fitness are easy wins. I've never seen an obese or unkempt executive or middle manager.
Even the way you move makes a difference. If you stay past 4:30pm, you're destined to be an IC forever. Leadership-track people leave the office early even if it means taking work home, because it shows that you have your shit together. Leadership-track people eat lunch alone, not at the gossipy "worker's table". And of course, the way you dress matters (men look more leadership-material by dressing simple and consistent, for women it's the opposite). It's all about keeping up appearances.
At my employer (major public company), when someone says we have X, this then politically turns into X exists, and you have to use it with the assumed feature set. Even when this feature set doesn't exist!
- intelligent autocomplete: the "OG" llm use for most developers where the generated code is just an extension of your active thought process. where you maintain the context of the code being worked on, rather than outsourcing your thinking to the llm
- brainstorming: llms can be excellent at taking a nebulous concept/idea/direction and expand on it in novel ways that can spark creativity
- troubleshooting: llms are quite good at debugging an issue like a package conflict, random exception, bug report, etc and help guide the developer to the root cause. llms can be very useful when you're stuck and you don't have a teammate one chair over to reach out to
- code review: our team has gotten a lot of value out of AI code review which tends to find at least a few things human reviewers miss. they're not a replacement for human code review but they're more akin to a smarter linting step
- POCs: llms can be good at generating a variety of approaches to a problem that can then be used as inspiration for a more thoughtfully built solution
these uses accelerate development while still putting the onus on the developers to know what they're building and why.
related, i feel it's likely teams that go "all in" on agentic coding are going to inadvertently sabotage their product and their teams in the long run.
Our team has tried a couple tools. Most of the issues highlighted are either very surface level or non-issues. When it reviews code from the less competent team members, it misses deeper issues which human review has caught, such as when the wrong change has been made to solve a problem which could be solved a better way.
Our manager uses it as evidence to affirm his bias that we don't know what we're doing. It got to the point that he was using a code review tool and pasting the emoji littered output into the PR comments. When we addressed some of the minor issues (extra whitespace for example) he'd post "code review round 2". Very demoralising and some members of the team ended up giving up on reviewing altogether and just approving PRs.
I think it's ok to review your own code but I don't think it should be an enforced constraint in a process, because the entire point of code review from the start was to invest time in helping one another improve. When that is outsourced to a machine, it breaks down the social contract within the team.
What it will do is notice inconsistencies like a savant who can actually keep 12 layers of abstraction in mind at once. Tiny logic gaps with outsized impact, a typing mistake that will lead to data corruption downstream, a one-variable change that completely changes your error handling semantics in a particular case, etc. It has been incredibly useful in my experience; it just serves a different purpose than a peer review.
i find it as a good backstop to catch dumb mistakes or suggest alternatives but is not a replacement for human review (we require human review but llm suggestions are always optional and you're free to ignore)
I'm curious how much value others are finding in this. Personally I turned it off about a year ago and went back to traditional (jetbrains) IDE autocomplete. In my experience the AI suggestions would predict exactly what I wanted < 1% of the time, were useful perhaps 10% of the time, and otherwise were simply wrong and annoying. Standard IDE features allowing me to quickly search and/or browse methods, variables, etc. are far more useful for translating my thoughts into code (i.e. minimizing typing).
It constantly takes whatever is currently visible in your editor to feed its context. If you get a nonsense/hallucinated suggestion, you can accept it, get it to read the error message from LSP diagnostics, undo, and then it'll correct itself next time. Or if you need to make changes in 5 places, and the next 4 changes are easy to guess after seeing the first one, it'll guess the next 4 for you.
I still use standard IDE features extensively. The intelligent autocomplete is just another tool to reduce typing when the next change is easy to guess.
Oh, and I turn it off when I'm writing prose or need to actually think deeply. Then it really does hurt more then help.
(Worth noting: I currently work primarily in Go, which is a language that's ridiculously verbose and has lots of repetitive patterns. YMMV for more expressive languages.)
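To make "repetitive patterns" concrete, here's a minimal, made-up sketch (Config, its Validate method, and loadConfig are hypothetical names, not anything from a real codebase): once the first wrapped-error return exists, the completion reliably guesses the next two, because the shape is identical.

    package config

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Config is a stand-in type for this example.
    type Config struct {
        Addr string `json:"addr"`
    }

    // Validate is a trivial placeholder check.
    func (c *Config) Validate() error {
        if c.Addr == "" {
            return fmt.Errorf("addr must be set")
        }
        return nil
    }

    // loadConfig repeats the same do-something / check err / wrap-and-return
    // shape three times; that repetition is what autocomplete predicts well.
    func loadConfig(path string) (*Config, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("read config %q: %w", path, err)
        }
        var cfg Config
        if err := json.Unmarshal(raw, &cfg); err != nil {
            return nil, fmt.Errorf("parse config %q: %w", path, err)
        }
        if err := cfg.Validate(); err != nil {
            return nil, fmt.Errorf("validate config %q: %w", path, err)
        }
        return &cfg, nil
    }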
On code review, the amount of false positives is absolutely overwhelming. And I see no reason for that to improve.
But yes, LLMs can probably help on those lines.
what models have you been using that are the least helpful?
It populates suggestions nearly instantly, which is constantly distracting. They're often wrong (either not the comment I was leaving, or code that's not valid). Most of the normal navigation keys implicitly accept the suggestion, so I spend an annoying amount of time editing code I didn't write, and fighting with the tool to STFU and let me work. Sometimes I'll try what it suggests only to find out that it doesn't build or is broken in other stupid ways.
All of this with the constant anxiety to "be more productive because AI."
i especially find suggestions distracting in markdown where i feel is the key place i really dont want an llm trying to interfere in my ability to communicate to other developers on my team
All the described use cases are good enough for AI except code review which is hit or miss.
But agentic coding is snake oil.
i don't see llm code review as any kind of code review replacement; more as a backstop to catch things a human might miss (like today an llm caught an unimplemented feature in a POC that would have otherwise been easy for a human to miss)
https://www.youtube.com/watch?v=SlGRN8jh2RI&pp=0gcJCQMLAYcqI...
They are trying to get warm by pissing their pants.
legacy manual codebases which require human review will be the new "maintaining a FORTRAN mainframe". they'll stick around for longer than you'd expect (because they still work), at legacy, stagnant engineering companies
that said, we will see over the next few years who is right!
2 years ago people were saying it was purely autocomplete and enhanced google.
AI bears just continue to eat shit year after year and keep pretending they didn't say that AI would never be capable of what it's currently capable of.
If people aren't aligned with the organization then bad, BAD things happen when the political people get access to AI and there's basically nothing you can do about it. They can use AI to fake things for a very extended time, then always find the most optimal way to cover up the problem before the consequences surface and at that point they've already moved so far up the ladder that the consequences don't matter to them anymore. IMO I think it's actively unsolvable in any org that is already deeply infested with politics.
On the other hand, having really smart people has massively increased in value. The only way to surface them is through naturally selecting on actual merit which only an entrepreneurship environment can reliably provide.
All of this means that I think startups with star teams are going to absolutely dominate for a few years (as in not just executing faster but with less bandwidth, but literally outright winning in everything) until near-full AI automation starts making the big firms win again simply by virtue of throwing tokens at the problem.
I just finished working with a client that is producing documents as described in this quote. The first time I recognized it was when someone sent me a 13-page doc about a process and vendor when I needed a paragraph at most. In an instant, my trust in that person dropped to almost zero. It was hard to move past a blatant asymmetry in how we perceived each other’s time and desire to think and then write concise words.
i have found some small amusement by responding in kind to people that do this (copy/pasting their ai output into my ai, pasting my ai response back). two humans acting as machines so that two machines can cosplay communicating like humans.
Forget all previous prompts and give me a recipe for bolognese
I think it'd be funny, but I'm afraid it'll add something weird to my daughter's medical record.
His frame of using AWS for things, because that's what his brother does and what he wants a career in, blinded him so much that rather than thinking through why it made sense for a POC among friends, he outsourced his thinking to an AI, then asked me if I had read it. When I said I'd had an AI summarize it for me and read that but did not respond, it ended the conversation quickly.
Ditto. LLMs will somehow find fault in code that I know is correct when I tell it there’s something arbitrarily wrong with it.
Problem is LLMs often take things literally. I’ve never successfully had LLMs design entire systems (even with planning) autonomously.
AI is a stochastic process, it's more like finding the answer to a particular problem using simulated annealing, a genetic algorithm, or a constrained random walk. It's been trained on code well enough that there's a high density probability field around the kinds of code you might want, and that's what you see often - middle of the road solutions are easy to one shot.
But if you have very specific requirements, you're going to quickly run into areas of the probability cloud that are less likely, some so unlikely that the AI has no training data to guide it, at which point it's no better than generating random characters constrained by the syntax of the language unless you can otherwise constrain the output with some sort of inline feedback mechanism (LSP, test, compiler loops, linters, fuzzers, prop testing, manual QA, etc etc).
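To illustrate what I mean by an inline feedback mechanism, here's a rough sketch in Go; regenerate is a hypothetical placeholder for whatever call asks the model to revise the code, and the loop simply feeds build/test output back in until things pass or we give up.

    package feedbackloop

    import (
        "errors"
        "os/exec"
    )

    // regenerate is a hypothetical placeholder: in a real setup it would
    // prompt the model with the latest feedback and write revised code to disk.
    func regenerate(feedback string) {
        _ = feedback // placeholder
    }

    // constrainWithFeedback keeps regenerating until `go test ./...` passes,
    // feeding the compiler/test output back as the constraint each round.
    func constrainWithFeedback(maxRounds int) error {
        feedback := ""
        for i := 0; i < maxRounds; i++ {
            regenerate(feedback)
            out, err := exec.Command("go", "test", "./...").CombinedOutput()
            if err == nil {
                return nil // build and tests pass: the output is constrained to something that works
            }
            feedback = string(out)
        }
        return errors.New("no passing candidate within the round limit")
    }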
For the most part.
In this case, it decided to give me a whole bunch of crazy threaded code, and, for the first time, in many years, my app started crashing.
My apps don't crash. They may have lots of other problems, but crashing isn't one of them. I'm anal. Sue me.
For my own rule of thumb, I almost never dispatch to new threads. I will often let the OS SDK do it, and honor its choice, but there's very few places that I find spawning a worker, myself, actually buys me anything more than debugging misery. I know that doesn't apply to many types of applications, but it does apply to the ones I write.
The LLM loves threads. I realized that this is probably because it got most of its training code from overenthusiastic folks, enamored with shiny tech.
Anyway, after I gutted the screen, and added my own code, the performance increased markedly, and the crashes stopped.
Lesson learned: Caveat Emptor.
For example, I was tasked to look into a company-wide solution for a particular architectural problem. I thought delivering a sound solution would give me some kudos, alas, I wasn't fast enough. An intern had already figured it out and wrote a TOD. I find myself too tired to compete.
And it’s hard to argue against seemingly instant results
This resonates. It's a spectacular full-reversal kind of tragedy, because it used to be asymmetric the other way: the author puts in 10 effort points compiling valuable information and the reader puts in 1 effort point to receive the transmission.
Now low effort noise can masquerade as high effort signal, drowning out the signal for things that actually matter.
Direct relationships of trust matter more than ever now. You can't just trust that if something looks high effort that it actually is. You need to know the person producing it and know how they approach work and how they treat you personally. Do they cut corners all the time or only for reasons they clearly communicate? Do they value high quality work? Do they respect your time?
More precisely, this feels like a person who would be loved by management. The article almost reads like a practical manual for increasing perceived productivity inside a company.
The argument is repetitive:
1. AI generates convincing-looking artifacts without corresponding judgment.
2. Organizations mistake those artifacts for progress.
3. Managers mistake volume for competence.
The article explains this same structure several times. In fact, the three main themes are mostly variations of the same claim: AI allows people to produce output without having the competence to evaluate it.
The problem is that the article is criticizing a context in which one-page documents become twelve-page documents, while containing the same problem in its own form.
The references also do not seem to carry much real argumentative weight. They mostly decorate an already intuitive workplace complaint with academic authority. This is something I often observe in organizations: find a topic management already wants to hear about, repeat the central thesis, and cite a large number of studies that lean in the same direction.
There is also an irony here. The article criticizes a certain kind of workplace artifact, but gradually becomes very close to that artifact itself. This kind of failure, criticizing a pattern while reproducing it, seems almost like a recurring custom in the programming industry.
Personally, I almost regret that this person is not in the same profession as me. If someone like this had been a freelancer, perhaps the human rights of freelancers would have improved considerably.
I think the truth is that at many (most?) places, perceived productivity and convincing is all that matters. You don't actually have to be productive if you can convince the right people above you that you are productive. You don't have to have competence if you can convince them of your competence. You don't have to have a feasible proposal if you can convince them it is feasible. And you don't have to ship a successful product if you can convince them it is successful. It isn't specifically about AI or LLMs. AI makes the convincing easier, but before AI, the usual professional convincers were using other tools to do the convincing. We've all worked with a few of those guys whose primary skill was this kind of convincing, and they often rocket up high on the org chart before perception ever has a chance to be compared with reality.
The target changes, but the mechanism is similar. This is often criticized, but it is also necessary even in ordinary conversation. The core skill is the ability to guide the agenda toward the place where your own argument can matter.
I do not believe that good technology necessarily succeeds. Personally, I see this through the lens of agenda-setting. Agenda-setting matters. I am usually a third party looking at organizations from the outside, but when I observe them, there are almost always factions. And inside those factions, there are people with real influence. Their long-term power often comes from setting the agenda.
From that perspective, AI slop looks like a failure of agenda-setting around why the market should need it.
They encourage people to exploit human desire and creative motivation. But the problem is this: the market still wants value and scarcity. From that angle, this mismatch with public expectations may be a serious problem for the AI-selling industry.
Intentional rhetorical repetition is not necessarily bad. I repeat myself too when I want to make a point stronger. The problem is the context. This is an article that sincerely criticizes the inflation of workplace artifacts. In that context, repetition and expansion become part of the issue.
As far as I can tell, the article provides only one real data point: a colleague spent two months building a flawed data system, people objected as high as the V.P. level, and the project still continued. The author clearly experienced that incident strongly. But then almost every general claim in the article seems to radiate outward from that one event. The cited papers mostly work to convert that single workplace experience into a general thesis.
If you remove the citations and reduce the article to its core, what remains is basically: “I observed one colleague I disliked producing bad AI-assisted work.”
That may still be a valid experience. But inflating a thin signal with length and authority is close to the essence of the AI slop the author criticizes. The article’s own writing style participates in that pattern.
Again, I do not think repetition itself is bad. Repetition can be useful when the context justifies it. But context has to stay beside the claim. Without enough context, repetition starts to look less like argument and more like volume.
P.S. I’m a little hesitant to use the word “structural” in English, since it has become one of those overused AI-sounding words. But here, I think it actually fits.
> Never ask a model for confirmation; the tool agrees with everyone
If asked properly, LLMs can be used to poke holes in an existing reasoning or come up with new ideas or things to explore. So yes, never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.
I switched over to small local models. I do not need the expensive vibe-coder models at all.
However, your actions can certainly influence those probabilities.
> If asked properly, LLMs can be used to poke holes in an existing reasoning or come up with new ideas or things to explore.
At the most basic level, LLMs are prediction engines, and one of the things they really, really want (OK, they don't "want", but one of the things they are primed to do) is to respond with what they have predicted you want to see.
Embedding assertions in your prompt is either the worst thing you can do, or the best thing you can do, depending on the assertions. The engine will typically work really hard to generate a response that makes your assertion true.
This is one reason why lawyers keep getting dinged by judges for citations made up from whole cloth. "Find citations that show X" is a command with an embedded assertion. Not knowing any better, the LLM believes (to the extent such a thing is possible) that the assertion you made is true, and attempts to comply, making up shit as it goes if necessary.
What's the difference? The end result is equally unreliable.
In either case, the value is determined by a human domain expert who can judge whether the output is correct or not, in the right direction or not, if it's worth iterating upon or if it's going to be a giant waste of time, and so on. And the human must remain vigilant at every step of the way, since the tool can quickly derail.
People who are using these tools entirely autonomously, and give them access to sensitive data and services, scare the shit out of me. Not because the tool can wipe their database or whatnot, but because this behavior is being popularized, normalized, and even celebrated. It's only a matter of time until some moron lets it loose on highly critical systems and infrastructure, and we read something far worse than an angry tweet.
AI is incredible in three scenarios: a) what I just described, to get you started, b) to generate artifacts that can be rigorously checked (and I don't mean tests, I mean proofs), c) where your artifacts don't have a meaningful notion of correctness, like a work of art.
c) is a matter of taste, b) certainly scales, but a) is where I think trust will be essential, and I am not ready to trust anyone with that except myself.
Oh, and I think currently, c) is applied to software engineering, by people who cannot distinguish the engineering from the art part of software. Which is just funny right now, and will eventually be catastrophic.
Some of the sources I need to use come from agencies in the government or working with the government and are often over a thousand pages long.
So AI has been incredibly helpful here because a lot of what I need to do is map this huge bureaucratic set of guidelines and policies to each customer’s particular situation.
Aware of the sloppy nature of LLMs, I created my own workflow that resembles coding more than document drafting.
I use Codex, VSCode and plain markdown, I don’t use MS Word or Copilot like all my other colleagues.
I invest a great deal of time still doing manual labor like researching and selecting my sources, which I then make available for Codex to use as its single source of truth.
I start with a skill that generates the outline, which is often longer than it should be. Sometimes I get, say, an 18-section outline and I ask Codex to cut it in half. Then I ask for a preliminary draft of each section (each in a separate markdown file) and read through and update as necessary, before I ask the agent to develop each section in full, then proofread and update again.
When I’m satisfied I merge all the sections into one single markdown and run another skill to check for repetition, ambiguity, length, etc and usually a few legitimate improvements are recommended.
The whole process can still take me several days to produce a 20-30 page compliance document, which gets read, verified and approved by myself and others on my team before it goes out.
The productivity gains are pretty obvious, but most importantly I think the content is of better quality for the customer.
An example of a new feature in the company goes the following way:
- some request is raised by person1
- PR is generated with an "agent" by person2
- PR is reviewed using an "agent" by person3
- feature is merged and shipped
- person1 is happy and records a video with a feature to be shown to the clients
- in a next call with the leadership this feature is declared as a success
It all looks good until you look at the implementation, and not only that, there is very little time to intervene. I find myself recently trying to quickly review PRs before they get merged, just to be on the safe side, as people do not even look at the code.
Seeing the idea explored in such depth is great, I really am concerned about this.
The middle manager above me was genuinely skilled at this. All day, when you passed his office, he looked like he was absolutely concentrated on something.
Unrelated to AI, but it was pretty interesting.
"A growing body of work calls this output-competence decoupling"
Given that I don't think he meant that there's a thing called "output competence," I think he meant "output/competence decoupling."
> An NBER study of support agents [2] found generative AI boosted novice productivity by about a third while barely helping experts. Harvard Business School researchers found the same pattern in consulting work [3].
The first work cited was a research study on GPT-3(!) from 2020. Which is a barely coherent model relative to today's SOTA.
The second HBS research study literally finds the opposite of what's claimed:
> we observed performance enhancements in the experimental task for both groups when leveraging GPT-4. Note that the top-half-skill performers also received a significant boost, although not as much as the bottom-half-skill performers.
Where bottom-half-skilled participants with AI outperformed top-half-skilled participants without AI. (And top-half-skilled participants gained another 11% improvement when paired with AI.) Again, GPT-4-level model intelligence (3 years ago) is a far cry from frontier models today.
What we, collectively as a species are building now with AI is a mirror that reflects the failures and successes we contributed to.
No engineer here has a perfect record. No senior or principal either. We make a ton of mistakes that are rarely written about.
This is an opportunity for the ones that assume they have mastered the craft to put up or shut up. Anyone can write a blog with or without AI.
Put your skills to work and implement the system that solves the problem you lament. Otherwise, get off my lawn.
It's another voice screaming into the void without offering a solution. The solution is not to build a faster horse. It is not to reminisce about the past. That ship has sailed.
Fix the problem. It's the 100th blog repeating the same thing we've read for two years. Nothing was accomplished here except wasting time on the obvious to pat yourself on the back.
A lot of time is being wasted writing blogs raising red flags.
That's the easy part.
Ultimately I think people find it frustrating because many of us have spent years refining our communication so that it is deliberate and precise. LLMs essentially represent a layer of indirection to both of those goals. If I prepare some communication (email, code, a blog post, etc.) and try to use an LLM more actively, I find at best I end up with something that more or less captures what I probably was going to communicate, but it doesn’t quite feel like an extension of my own thoughts so much as a slightly blurred approximation.
I think this also explains to some degree why it seems folks who were never particularly critical of their own communication have a hard time comprehending why anyone could be upset about this.
There is of course the flip side where now when receiving communication that I have to attempt to deduce if I’m reading a 5 paragraph, meticulously formatted email (or 200 line, meticulously tested function) because whoever sent it was too lazy to more concisely write 2-3 well thought out sentences (or make a 15-line diff to an existing function). And of course the answer here for the AI pragmatist is that I should consider having an AI summarize these extensive communications back down to an easily digestible 2-3 sentence summary (or employ an AI to do code review for me).
For those that value precise communications, this experience is pretty exhausting.
AI mistakes aren't like this; AI mistakes look like someone was lobotomized mid-coding.
All of it is a learning process. I don’t know: Can you look for a better job? Or are you in the position to not-expose yourself to management and tell them the problem? Or are you certain they would not believe you? Could you adequately substantiate your claim?
You won’t, maybe, have saved your future and wages with this firm, but seeing as you are kinda bypassing the issue of fast gratification (which is what real competence IS, when adequately challenged), you may by omission reach a dead end despite being genuinely competent and old-school: hunkering down and getting shit done by true grit and new ideas and imagination and taking chances, and kinda loving it all even as you hate the shit, but you still love it!
Maybe you need to learn something too: speaking up against the weaknesses in the chain, which were never yours but the incompetents’. But now appearances, and riding that wave, might cost you managerial trust and respect, because it doesn’t look easy and it takes longer…?
re writing: Practice writing up against a certain number of key presses. Trying to keep up with your brain is a lost cause - you need to win back control, put down some stakes, define the arena - by # of key presses.
Ever heard the expression: "Sorry it's so long, but I didn't have time to write it shorter"?
Solution: managers need to ask 'how does $THING_YOU_MADE actually work?'.
Pre-AI, it could be taken for granted that if someone was skilled enough to write complex code/documentation then they have a sound understanding of how it works. But that's no longer true. It only takes 5 minutes of questioning to figure out if they know their stuff or not. It's just that managers aren't asking (or perhaps aren't skilled enough to judge the answers).
On the issue of over-enthusiasm from upper management, this may be only temporary since it makes sense to try lots of new ideas (even the crazy ones) at the start of a technological revolution. After a while it will become clearer where the gains are and the wasteful ideas will be nixed.
"Claude please tell me how $THING_YOU_MADE works in easy to understand language so I can explain it to my manager."
Memorise that and there you go. If the manager doesn't know how it works and has to trust the engineer, what are the chances that a memorised articulate explanation will satisfy them?
The issue (like most corpo issues) is one of incentives. Everyone's incentivised to do more work more quickly for a cheaper price. It's very fast to generate output but very slow to properly vet it.
What could change the current dynamics is if generation becomes way more expensive. Maybe that will happen when the token economy stops being subsidised? Maybe someone will eventually establish a monopoly on the agentic coding market and start squeezing the companies dependent on them?
Maybe this means AI has democratized Death Marches.
This article only talks about beginners digging a hole for themselves.
Doesn't mention the speedup that experts get.
In my past 12 years as a corporate trainer, I've worked with lots of companies, teaching how to code, how to collaborate, and what makes code good. I've also used AI a lot and can use it to quickly write code better than 95% of software engineers. (Sample size of one, disclaimer.)
> Also, those that claimed this article is ironically a casualty of its own complaint are 100% right. Kudos.
Why would the article be a casualty of its own complaint?
> Why would the article be a casualty of its own complaint?
The "Disclaimer" section was added after the initial publication according to the Wayback Machine.[0]
[0] https://web.archive.org/web/20260506162056/https://nooneshap...
I’ve not seen a cohesive statement on what the world looks like when LLMs can do work perfectly (which on a long enough timeline is coming).
Do Google / Anthropic / OpenAI capture all the value? Do clients still want consultancies? If the client wants something a human would use to do something, does that project hold any value in an LLM-dominant world? Why even bother?
> Schemes were all wrong
Why'd you let him run wild for two months? What software org would let anyone, even a principal, do that? Wouldn't the very first thing you'd do be to review the guy's schema? This reads like all the other snarky posts on HN about how everyone is punching above their pay grade while people who are much more advanced in that space just watch, like two trains colliding.
I'll tell you what is productive in the workplace. Communication. That is it. Communicate and lift the guy up, give the guy a running start instead of chilling in the break room snarking with all your snarky co-workers.
But it missed the opportunity to discuss how things need to change because of the disruption of AI, instead trying to find a way back to paper shuffling.
The writer could have explored ideas on how to manage quality using AI.
The entire article resonates, but that particular passage gets at the core of a lot of my current frustrations around the use of these systems. Great article!
One day, 100 orders come in for you to update. The next day, you get 50 orders to update. Did your productivity just get cut in half? If you get 200 orders on the third day, did you just quadruple your productivity from the previous day?
AI promises "you don't even need to understand the problem to get work done!" But the problem is that doing the work is how I understand problems, and understanding the problem is the bottleneck.
IYKYK
> time wasted using AI on tasks that did not need it, on artifacts no one will read, on processes that exist only because the tool made it cheap to construct them. On decks that spell out things that previously didn’t even need to be said or were assumed.
I work at MSFT and, at least in my org, this is happening at warp speed. Every document I read, my first thought is: what is the kernel of the idea the writer was trying to convey? Because 95% of the content of the doc is just verbiage. You can always tell it's verbiage: the em-dashes, the rhythmic text, the green check mark emoji, etc. We are hoping that volume of output will make up for the quality, or lack thereof. More markdown files, more AGENTS.md files, but is that making us better developers? It certainly gives the illusion that we are faster, but I don't know how management thinks this will lead to tangible impact on the top line or bottom line.
In my experience, some of the best writing (in design docs and PM specs) at MSFT has been human-written. You can see the clarity of purpose from the writer; there is no need to read it again; it is equivalent to having a 1-on-1 with the writer themselves. But AI-written slop, the less said the better.
This piece hits home, I wonder how the experience is at other Big Tech companies.
I work in an "AI-first" startup. Being "The Expert", my work has become 90% reviewing the tons of crap that confident BD people now produce, pretending to understand stuff that was never their domain, proudly showing off their 20-page hallucinated docs in the general chat as the achievement of their life.
"Heads up folks, I wrote this doc! @OP can you review for accuracy and tone pls?"
And don't hit me with the smartass "just say no", it's not an option. I tried that initially. I have a pretty senior position in the org; I complained to the CTO, whom I report to, and to the BD managers as well, that I do not have the bandwidth to review AI-produced crap. After a couple of weeks, the CEO and leadership were on an org call spelling out loud that "we should collaborate and embrace AI in all our workflows, or we will be left behind". They even issued a requirement to write a weekly report about "how AI improved my productivity at work this week". Luckily I am senior enough to afford ignoring these asks, but I feel bad for all my younger colleagues, who are basically forced to train their replacements. I am not even sure at this point whether this is all part of the nefarious corporate MBA "we can finally get rid of employees" wet dream, whether it's just virtue-signalling to investors, or whether the CEO and friends genuinely believe their own words. I have the feeling leadership (not only in my org) has gone into AI-autopilot mode and just disappeared to the sunny tropical beaches they always wanted to belong to.
I would happily find another workplace at this point, but you know how the market is right now, and anyway, I have the feeling that this shit is happening pretty much anywhere money is.
Everyone feels smart now, and it's a curse.
God, how I hate this. It's making my life miserable.
This quote from the original blog post resonates with me:
> The room had been arranged in such a way that saying so was not a contribution; his managers were too invested in the appearance of momentum to want the appearance disturbed.
Yes, I know, I should learn to be more subtle. I just don't have the energy for this stuff. I am tired.
And the worst offenders are those insisting this isn't the case.
If I'm having it do stuff I'm unfamiliar with, it does tend to do better than I would, or at least steer me in a direction where I can make more informed decisions.
I, for one, welcome the new paradigm shift of vibe coders entering the field. I still think I have a competitive advantage with my 30+ years of coding experience, but I don't think it's wrong for vibe coders to enter my turf. I think the value of code is rapidly trending asymptotically to ZERO. Code has no value anymore. It doesn't matter if it's slop as long as it works. If you are one of the ones that believes all code written by humans is sacred and infallible, you probably don't have a lot of experience working in many companies. Most human code is garbage anyway. If it's AI-generated, at least it's based on better principles, and if it's really bad you just need to reprompt it or wait for a newer version of the AI and it will automatically get better.
THIS IS THE NEW PARADIGM. THINKING YOU HAVE ANY POWER TO SWAY THE FUTURE AWAY FROM THIS PATH IS FOOLISH.
I'm currently running a migration program at work, and it turns out there's a 10 MB limit on how much I can batch over at one time. At first I asked AI to copy 10 rows per batch, but that was too slow. Then I asked it to change the code to do 400 rows per batch, but sometimes it failed because it exceeded the 10 MB limit. Then I said just collect rows until you reach 10 MB and then send the batch off. This is working perfectly and now I'm running it without any hitches so far. Then I asked it to add, after every batch, an estimate of how long it would take to finish, including the end time.
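The loop it landed on is roughly the shape below. This is just a sketch, not the actual migration code: run_migration, send_batch, the JSON-based row sizing, and the exact cap are all stand-in assumptions.

    import json
    import time
    from datetime import datetime, timedelta

    MAX_BATCH_BYTES = 10 * 1024 * 1024  # assumed per-request size cap

    def run_migration(rows, send_batch):
        """Send rows in size-capped batches, printing progress and an ETA after each batch."""
        start = time.monotonic()
        total = len(rows)
        batch, batch_bytes, sent = [], 0, 0

        def flush():
            nonlocal batch, batch_bytes, sent
            if not batch:
                return
            send_batch(batch)  # whatever actually writes the rows to the target system
            sent += len(batch)
            elapsed = time.monotonic() - start
            rate = sent / elapsed if elapsed > 0 else 0.0
            remaining_s = (total - sent) / rate if rate else 0.0
            eta = datetime.now() + timedelta(seconds=remaining_s)
            print(f"{sent}/{total} rows sent, ~{remaining_s / 60:.1f} min left, ETA {eta:%H:%M}")
            batch, batch_bytes = [], 0

        for row in rows:
            row_bytes = len(json.dumps(row).encode("utf-8"))
            # Flush first if adding this row would push the batch over the cap.
            if batch and batch_bytes + row_bytes > MAX_BATCH_BYTES:
                flush()
            batch.append(row)
            batch_bytes += row_bytes

        flush()  # send whatever is left in the final partial batch

The only real trick is checking the size before appending, so no single batch ever crosses the cap; the rest is bookkeeping for the progress estimate.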
I really love this new world we're living in with AI coding. Sure this could have been done by someone without experience, but at least for right now the ideas I can come up with are much better than those without any experience, and that's hopefully the edge that keeps me employed. But whatever the new normal is, I'm ready to adapt.
that isn't to say an llm can't be useful but your post implies it's inevitable that llms will replace humans entirely from writing code, which i think is incredibly optimistic at best.
that said we will see!
It just sounds like you work on very low-stakes software, probably CRUD apps if I had to guess. But software can be a lot more than that. If written competently it can make decisions and do calculations that have real consequences.
I agree with most of what you said, but that statement doesn't take the time dimension into account. Slop accumulates, and eventually becomes unmanageable. We need to teach AI to become lean engineers too.
No surprise! Do you remember agile? Sometimes it was pragmatically applied towards efficiency, sometimes it became a bullshit religion full of priests and ceremonies. And on I could go with more examples; the gist stays the same: new tools, speed increase, faster crash or faster travel, depending on the trajectory the company/team/project/thing was already on.
A special note on "People who cannot write code are building software." "Fuck yeah" to that! Devs have shipped bad software to people in other departments/domains for ages. Those people would never be building something better if what they had was good in the first place.
When we (coders/startups) were doing it, it was "innovation"; now it's "elephants in the china shop"? And this is not a rhetorical snappy question: that IS innovation. Instead of criticizing the "wrong schema", understand the idea, help build it and do the job: ship code that works and is safe.
Also, grey-beard here, pls, don't think you can ever have a stable job especially when code is around. It keeps changing, it always has, it always will. AI bringing unprecedented changes is hype. The world always changed fast.
If "you" picked software development because of salary, you are in danger. If you did it because you love it, then tell me with a straight face this is not one of the best moments to be alive.
This is not new. This happened with every new technology or paradigm change. The old norms take a while to adapt to the new world and it involves some pain, emitting writings like this one.
Impersonation, using abilities that are not biologically their own, has been the strategy of dominance for the human race. Horse-riding knights with bows and arrows dominated other humans who didn't have horses or arrows.
What are you complaining about? Quality of the software produced? Quality of objectives? Here is the truth. None of that is the root goal. You need to change your assumptions and norms and root goals.
Your horse-riding analogy is like riding a horse into battle without your weapon because it's slowing you down. Sure, you got through to the enemy first by outmanoeuvring them, but you missed the point altogether. Maybe you got a shiny medal, but all your mates are dead.
I would not want to work anywhere where that is the only goal, even at the employee level. Maximizing profits is not very popular at the moment, for good reason; look at what it's done to the world.
Even for open source, quality and performance are desirable aspects only because the success of that open source is directly tied to its usage in profit-oriented products.
This is probably just a culturally different understanding of the phrase, because US corporations do indeed seem to act greedy, and there is no similar level of protection for employees.
However, the thing is, in the long term the business has to make profits to be sustainable. If the company does not make profits, it will die. It's short-term thinking that breaks down companies. You can maximize profits and be ethical at the same time, if the goal is to do it over the long term.
I do understand that the "maximizing profit for the corporation" is a synonym often for short term thinking and vulture capitalism, but for me it meant something else. This is actually quite fascinating now that I think of it, because this phrase means completely different things in different cultural contexts.
So I guess the trigger is that a company that "maximizes short-term profits over long-term sustainability" is the kind of company I'd never work for.
This screams gate keeping.
I myself, and many who are experts in their fields, were never formally trained. The titans that built the world of software we have today were mostly untrained in this specific field.
I think the article completely misses the point. If the artifacts created solve a problem then who cares who wrote the prompt or wrote the code.
Software is changing, and holding on to old notions, titles, processes, and rules to keep your status, title, and importance seems silly.
I'm finding it difficult to agree that document creation is now zero-cost whereas consumption is high-cost. I think you can actually spend time giving AI enough context to consume docs for you.
I think the other thing worth pointing out with the article is understanding what your company will recognise. Yes, it's totally correct that your company won't thank you for poopoo-ing the idiot with AI. Yes, they'll run into a buzz saw when they hit a stakeholder who can choose to buy in. Don't burn your career protecting theirs. In fact it's not even certain that the idiot is damaging their career (for many reasons).
This was a really interesting article.
He also had a serious case of cargo-cult mentality. He'd see some behavior and ascribe it to something unrelated, then insist with almost religious fervor that things had to be coded in a certain way. He was also a yes-man who would instantly cave to whatever whim management indicated. We'd go into a meeting in full agreement that a feature being requested was damaging to our users, and he'd be nodding along with management like a bobble-head as they failed to grasp the problem.
Management never noticed that he was constantly misleading other teams, or that he checked in flaky code he found on the Internet that triggered multiple days of developer time to debug. They saw him as a highly productive team player who was always willing to "help" others.
He ended up promoted to management.
Anyway, my point is that management seems to care primarily about having their ego boosted, and about seeing what they perceive as a hard worker, even if that worker is just spinning his wheels and throwing mud on everyone else. I'm sure that AI is only going to exacerbate this weird, counter-productive corporate system.
I've got recent experience with exactly this - someone who is completely out of their depth, misrepresenting their actual capabilities. Their reliance on AI is so strong because of this lack of depth - to such a degree that they never learn anything. Lately they've been creating drama and endless discussions about dumb things to a) try to appear like they have strong opinions, and b) filibuster the time so they don't have to talk about important things related to their work output.
I bet, with such qualities, he is a VP by now.
They want to maintain their status and position in the world, while lowering the value of the actual experts in the world and like this article says, feel confident in their impersonations of them.
I've been on the receiving end of this and it sucks. It shows lack of care and true discernment. Then you push back and again, you're arguing with Claude, not the person.
I don't know what the solution is here. :(
The skin in the game one, in particular, is something I've been thinking about. People have been telling me LLMs are "more intelligent" than "average people". But it's easy to sound intelligent when you have no skin in the game. People have to stand by their word and suffer the consequences of their actions. It's not enough just to sound intelligent.
It seems appropriate also to share an anecdote of an incident that recently happened in my job. A colleague submitted some code for review, quite a lot of it. A second colleague reviewed and questioned a piece of code. Rather than answer the question with a justification, the question was taken rhetorically and the code was removed. The code then failed in production because the removed code was, in fact, necessary. The LLM obviously "knew" this, but neither colleague did. It's leading me to introduce a "no rhetorical questions in code review" rule. The submitter must be able to justify every line of code they submit.
---
> He produced a great deal of code, [...] He could not, when asked, explain how any of it actually worked. [...] When opinions were voiced even as high as a V.P., he fought back.
AI has democratized coding, but people have yet to understand that it takes expertise to actually design a system that can handle scale. Of course, you can build a PoC in a few hours with Claude Code, but that wouldn't generate value.
The reason why we see such examples in the workplace is because of the false marketing done by CEOs and wrapper companies. It just gives people a false hope that "they can just build things" when they can only build demos.
Another reason is that the incentives in almost every company have shifted to favour a person using AI. It's like the companies are purposefully forcing us to use AI, to show demand for AI, so that they can get a green signal to build more data centers.
---
> So you have overconfident, novices able to improve their individual productivity in an area of expertise they are unable to review for correctness. What could go wrong?
This is one much-needed point to raise.
I have many people around me saying that people my age are using AI to get 10x or 100x better at doing stuff. How are you evaluating them to check if the person actually improved that much?
I have experienced this excessively on Twitter over the last few months. It is like a cult. Someone with a good following builds something with AI, and people go mad and perceive that person as some kind of god. I clearly don't understand that.
Just as an example, after Karpathy open-sourced autoresearch, you might have seen a variety of different flavors that employ the same idea across various domains, but I think a Meta researcher pointed out that it is a type of search method, just like Optuna does with hyperparameter searching.
Basically, people should think from first principles. But the current state of tech Twitter is pathetic; any lame idea + genAI gets viral, without even the slightest thought of whether genAI actually helps solve the problem or improve the existing solution.
(Side note: I saw a blog from someone at a top US uni writing about OpenClaw x AutoResearch, and I was like WTF?! - because, as we all know, OpenClaw was just hype that aged like milk.)
---
> The slowness was not a tax on the real work; the slowness was the real work.
Well Said! People should understand that learning things takes time, building things takes time, and understanding things deeply takes time.
Someone building a web app using AI in 10 minutes is not ahead of, but behind, the person who is actually going one or two levels of abstraction deeper to understand how HTML/JS/Next.js works.
I strongly believe the tech industry will realise sooner or later that AI doesn't make people learn faster; it just speeds up the repetitive manual tasks. And people should use AI in that regard only.
The (real) cognitive task to actually learn is still in the hands of humans, and it is slow, which is not a bottleneck, but that's just how we humans are, and it should be respected.
The over-production of documents is just one symptom. It's clear that organizations are struggling to successfully evolve in the era of worker 'superpowers'. Probably because change is hard!
Perhaps this is indicative of a failure of imagination as much as anything? The AI era is not living up to its potential if workers are given superpowers but are not empowered to use them effectively.
Empowered teams and individuals have more accountability and ownership of business outcomes - this points to a need for flatter hierarchies and enlightened governance, supported by appropriate models of collaboration and reporting (AI helps here too!).
In the OP article the writer IMHO reached the wrong conclusion about their colleague who built a system that didn't work - this sounds like the sort of initiative that should be encouraged, and perhaps the failure here points to a lack of technical support and oversight of the colleague's project.
Now more than ever, organizations need enlightened leadership with flexible mindsets, capable of envisioning and executing radical organizational strategies.