It's kind of a mirror image of the global AI marketing hype-factory: Always pump/promote the ways it works well, and ignore/downplay when it works poorly.
Like I use AI tools, I even like using them, but saying "this tool is so good it will cut our dev time by 30%" should be coming from the developers themselves or their direct manager. Otherwise they are just making figures up and forcing them onto their teams.
And, crickets. In practice I haven't seen any efficiencies despite my teams using AI in their work. I am not seeing delivery coming in under estimates, work costs what it always cost, we're not doing more stuff or better stuff, and my margins are the same. The only difference I can see is that I've had to negotiate a crapton of contractual amendments to allow my teams to use AI in their work.
I still think it's only good for demos and getting a prototype up and running which is like 5% of any project. Most technical work in enterprise isn't going from zero to something, it's maintaining something, or extending a big, old thing. AI stinks at that (today). You startup people with clean slates may have a different perspective.
https://en.wikipedia.org/wiki/Stakhanovite_movement
>In 1988, the Soviet newspaper Komsomolskaya Pravda stated that the widely propagandized personal achievements of Stakhanov actually were puffery. The paper insisted that Stakhanov had used a number of helpers on support work, while the output was tallied for him alone.
Bloody hell. That feels like getting into borderline religious territory.
> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too.
Now they're on record as pro-AI while the zeitgeist is all about it, but simultaneously also having plausible deniability if the whole AI thing crumbles to ashes: "we only said to use it if it helped productivity!"
Do you see? They cannot be wrong.
Before you make any decision, ask yourself: "Is this good for the company?"
The people making the decisions are maybe 5% of the org; they delegate to managers, who delegate to their teams, and so on all the way down.
Decision makers (not the guy who thinks corner radius should be 12 instead of 16, obviously) want higher ROI and they see AI working for them for high level stuff.
At low level things are never sane.
Before AI it was offshore. Now it’s offshore with AI.
Prepare for chaos, the machine priests have thrown open the warp gate. May the Emperor have mercy on us.
It seems to me like too many yearly bonuses are tied to AI implementation, due to FOMO amongst C-levels. The hype trickles down to developers afraid that they won't get hired in the new AI economy.
I don't think there's a conspiracy, just a storm front of perverse incentives.
I'm not normally on LinkedIn but was recently, and the "look at me" spam around AI seems an order of magnitude more absurd than usual.
I suspect a lot of companies that go that route are pushing a marketing effort since they themselves have a stake in AI.
But I'd love to hear from truly customer-only businesses, where AI is pure cost with no upside unless it genuinely pays for itself in business impact. Are they also stuck in a loop of justifying the added cost to make their decision look good no matter what, or are they being more careful?
That is where the AI comes into full use.
Everything sounded very mandatory, but a couple of months later nobody was asking about reports anymore.
My report was entirely unacknowledged along with other reports that had negative findings. The team in charge published a self-report about the success rate and claimed over 90% perfect results.
About a year later, upper management switched to this style of hard-requiring LLM usage: to the point of associating LLM API calls from your IntelliJ instance with the git branch you were on, and requiring 50% LLM usage on a per-PR basis, otherwise you would be PIP'd.
This is abusive behavior aimed at generating a positive response the c suite can give to the board.
Identify a real issue with the technology, then shift the blame to a made-up group of people who (supposedly) aren't trying hard enough to embrace the technology.
Embody a pilot mindset, with high agency and optimism
Thanks for the career advice.
Fly away from here at high speed
If you're a low-level office drone, you are not a pilot.
When you say "AI cannot do my job, [insert whatever reason you find compelling]," execs only hear "I am trying to protect my job from automation."
The executives have convinced themselves that the AI productivity benefits are real, and generally refuse to listen to any argument to the contrary. Especially from their own employees.
This impedes their ability to evaluate productivity data: if a worker fails to show productivity gains, it can't be that AI is bad, because that would mean the executives are wrong about something. It must be that the employee is sabotaging our AI efforts.
After all if they weren't stupid and lazy they would be important execs, not unimportant workers
Executives don't care about any of that and just want to make the organization more efficient. They don't care at all if the net effect is reducing headcount. In fact, they want that -- smaller teams are easier to manage and cheaper to operate in nearly every way. From an executive's standpoint, they have nothing to lose: the absolute worst-case scenario is it ends up over-hyped and in the process of rolling it out they learned who's willing to attempt change and who's not. They'll then get rid of the latter people, as they won't want them on the team because of that personality trait, and if AI tooling is broadly useful they won't even bother backfilling.
Today, I had a discussion with a product manager who insists on attaching AI-generated prototypes to PRDs without any design sessions for exploration or refinement (I'm a UX designer). These prototypes contain many design issues that I must review and address each time.
Worse still, they look polished, which creates the illusion that the work is nearly complete. So instead of moving faster, we end up with more back-and-forth about the AI's misinterpretations.
The project was renaming the HR department ...
After that I sent all executive emails to a folder and did not read them; my mood improved drastically by not reading those emails.
This question applies whether it's written by an AI or not.
I think a lot about the concept that the AI output is still 99% regression to a mean of some kind. In that sense, the part it can generate for you is all the boring stuff - what doesn't add value. And to be sure, if you're writing an email etc, a huge amount of that is boring filler, most of the time. But the part it specifically cannot do is the only part that matters - the original, creative part.
The filler was never important anyway. Physically typing text was never the barrier. It's finding time and space to have the creative thought necessary to put into the communication that is the challenge. And the AI really doesn't help at all with that.
Last week he was telling me about a PR he'd received. It should have been a simple additional CRUD endpoint, but instead it was a 2,000+ LOC rat's nest adding hooks that manually manipulated their cache system to make it appear to work without actually working.
He spent most of his day explaining why this shouldn't be merged.
More and more I think Brandolini's law applies directly to AI-generated code
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it.
He wants to build a website that will turn him into a bazillionaire.
He asks AI how to solve problem X.
AI provides direction, but he doesn't quite know how to ask the right questions.
Still, the AI manages to give him a 70% solution.
He will go to his grave before he learns enough programming to do the remaining 30% himself, or to understand the first 70%.
Delegating to AI isn't the same as delegating to a human. If you mistrust the human, you can find another one. If you mistrust the AI, there aren't many others to turn to, and each comes with an uncomfortable learning curve.
Once GPS became ubiquitous, I started relying on it, and over about a decade, my navigational skills degraded to the point of embarrassment. I've lived in the same major city now for 5 years and I still need a GPS to go everywhere.
This is happening to many people now, where LLMs are replacing our thinking. My dad thinks he is writing his own memoirs. Yeah pop, weird how you and everyone else just started using the "X isn't Y, it's Z" trope liberally in your writing out of nowhere.
It's definitely scary. And it's definitely sinister. I maintain that this is intentional, and the system is working the way they want it to.
You don’t even get the same ‘human’ with the same AI, as you can see with various prompting.
It’s like doing a lossy compression of an image, and then wondering why the color of a specific pixel isn’t quite right!
With the 70% you then pitch "I have this" and some Corp/VC will buyout the remaining 30%.
They then in turn hire engineers who are willing to lap up the 70% slop and fix the rest with more AI slop.
Your brother dies happily, having achieved his dream of being a bazillionaire by doing nothing more than typing a few sentences into a search bar.
"Explain to me in detail exactly how and why this works, or I'm not merging."
This should suffice as a response to any code the developer did not actively think about before submitting, AI generated or not.
> AI-generated slop his non-technical boss is generating
It’s his boss. The type of boss who happily generates AI slop is likely to be the type of person who wants things done their way. The employee doesn’t have the power to block the merge if the boss wants it, thus the conversation on why it shouldn’t be merged needs to be considerably longer (or they need to quit).
https://www.joelonsoftware.com/2000/04/06/things-you-should-... (read the bold text in the middle of the article)
These articles are 25 years old.
I see this in code reviews, where AI tools like code-rabbit and greptile produce workslop in enormous quantities. It sucks up an enormous amount of human energy just reading the nicely formatted BS these tools put out, all for the occasional nugget that turns out to be useful.
1. Create a branch and vibe code a solution until it works (I'm using codex cli)
2. Open new PR and slowly write the real PR myself using the vibe code as a reference, but cross referencing against existing code.
This involved a fair few concepts that were new to me, but had precedent in the existing code. Overall I think my solution was delivered faster and of at least the same quality as if I'd written it all by hand.
I think it's disrespectful to PR a solution you don't understand yourself. But this process feels similar to my previous non-AI-assisted approach, where I would often code spaghetti until the feature worked, then start again and do it 'properly' once I knew the rough shape of the solution.
I like the quote in the middle of the article: "creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces". I believe that orgs that fall back on the AI lie, who insist on schlepping slop from one side to the other, will be devoured by orgs that see through the noise.
It's like code. The most bug-free code are those lines that are never written. The most productive workplace is the one that never bothers with that BS in the first place. But, promotions and titles and egos are on the line so...
AI in its current form, like the swirling vortex of corporate bilge that people are forced to swim through day after day after day, can't die fast enough.
Also the problem where someone has bullet-points, they fluff them up in an LLM, send the prose, and then the receiver tries to use an LLM to summarize it back down to bullet-points.
I may be over-optimistic in predicting that eventually everyone involved will rage-flip the metaphorical table, and start demanding/sending the short version all the time, since there's no longer anything to be gained by prettying it up.
Once you allow AI to replace the process, you kind of reveal that the process never mattered to you. If you want a faster pace at the expense of other things, you don't need to pay for AI; just drop the unnecessary process.
I feel AI is now just a weird excuse: you're pretending you haven't lowered the quality, haven't stopped writing proper documents, professional emails, and full test suites, or properly reviewing each other's code. No, you still do all of this, just not you personally; it's "automated".
It's like cutting corners but being able to pretend like the corner isn't cut, because AI still fully takes the corner :p
Our manager is so happy to report that he's using AI for everything. Even in cases where I think completeness and correctness is important. I honestly think it's scary how quickly that desire for correctness is gone and replaced with "haha this is cool tech".
Us devs are much more reluctant. We don't want to fall behind, but in the end when it comes to correctness and accountability, we're the ones responsible. So I won't brainlessly dump my work into an LLM and take its word for granted.
You do have the option to spend your time elsewhere - if you can handle every NPC friend and family member thinking you've lost your mind when you quit that cushy corporate gig and go work a low status, low pay job in peace and quiet - something like a night time security guard.
I love programming, but I also love building things. When I imagine what having an army of mid-level engineers that genuinely only need high level instruction to reliably complete tasks, and don't require raising hundreds of millions while become beholden to some 3rd party, would let me build... I get very excited.
New flow: please run the RCA through ChatGPT and forward it to your manager, who will run it through ChatGPT and send it to the customer.
The RCA is now 10x longer, only 10% accurate, and took 3x longer to reach the customer.
I'm looking for a new job.
Even the OpenAI or Claude $200 plan doesn't give you enough tokens to make you truly productive. The true ROI measurement should compare the token cost against your hourly wage times the hours you saved versus doing it by hand.
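Taking that suggestion literally, the break-even check is just (hourly wage × hours saved) ÷ token cost. A minimal sketch, with entirely made-up illustrative numbers (the function name and figures are my own, not from any vendor's pricing):

```python
def ai_roi(token_cost_usd: float, hours_saved: float, hourly_wage_usd: float) -> float:
    """Ratio of labor value saved to token spend.

    > 1.0 means the tokens paid for themselves; < 1.0 means they didn't.
    """
    return (hours_saved * hourly_wage_usd) / token_cost_usd

# e.g. a $200/month plan that saves 2 hours for a $75/hour developer:
print(ai_roi(token_cost_usd=200, hours_saved=2, hourly_wage_usd=75))  # 0.75, a net loss
```

By this crude measure the plan only breaks even once it reliably saves more than token_cost / wage hours per billing period; everything else (review time spent on slop, rework) pushes the real ratio lower.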
Paragraphs of content spread out over pages and pages of nothing. Laziness at enterprise scale.
Deep research reports are even worse. They will cite AI-generated content in their work, which makes verifying their sources an O(n^2) task, since I now have to find the sources _those_ sources cited to find the truth.
This generation of AI is the worst thing to happen to humanity since social media.
The thing about companies asking for slop is that a middle manager maintaining the usual stream of vacuous text used to be a proxy for that person paying attention to a given set of problems and alerting others. AI becomes a problem because now someone can maintain the vacuous text stream without that attention.
So it's likely to become an arms-race.
And this is my real fear with the current crop of AI. That rather than improving the system to require less junk, we just use AI to generate junk and AI to parse AI-generated junk that another AI summarises ad infinitum…
Like, instead of building a framework that reduces boilerplate code, we use an LLM to generate that boilerplate instead (in a more complex and confusing manner than using a traditional template would).
Or, when dealing with legal or quasi-legal matters (like certifications), 1. Generate AI slop to fill a template 2. Use AI to parse that template and say whether it’s done 3. Use another to write a response.
Lots of paperwork happening near instantly, mostly just burning energy and not adding to either the body of knowledge or getting to a solution.
Now that AI makes my programming 10x more efficient, I will work 5x less, destroying "half of my" productivity.
AI is functionally equivalent to disinformation: it automates the dark matter of communication/language, transfers the verification work back to the recipient, teaches receivers that a unit's contents are no longer valid in general, and demands a tapeworm format to replace what it is being trained on.