Code takes 6-12 months to make it from commit to production. Development speed was never the bottleneck; it's all the other processes that take time: infra provisioning, testing, sign-offs, change management, deployment scheduling, etc.
AI makes these post-development bottlenecks worse. Changes are now piling up at the door waiting to get on a release train.
Large enterprises need to learn how to ship software faster if they want to lock in ROI on their token spend. Unshipped code is a liability, not an asset.
They haven't even learned that "less code is better" yet; I wouldn't hold my breath waiting for them to suddenly learn "more advanced" things like that before they've mastered the basics.
I would argue that any sufficiently large system reaches a point where more code is in fact the opposite of what it needs.
Nutrition and calories are only useful up to a point, and then we get diminishing and later negative returns.
Even though it is not the best analogy, since we are describing two different systems, it helps build a mental model around the fact that churning out more is often less.
Side note: I got feedback from a customer today that while our documentation is complete and very detailed, they find it overwhelming. It turns out a few bullet points that get the idea across are better than a five-page document. In hindsight it is obvious.
TL;DRs, quick references / quickstarts / cheat sheets, and FAQs are also things they're great at generating.
[0] https://marketoonist.com/2023/03/ai-written-ai-read.html
SAFe is poison.
AI/LLMs aren't innovation the way TCP/IP, linux, or postgres were. To be clear: claude/codex/gemini/grok/whatever exist for profit, to squeeze the last drop of productivity out of you until there's nothing left, and then you're disposable (laid off).
If you like AI, use open source models, use them in your side projects.
2) It's been a hard lesson for me to learn because I'm naturally a contrarian, but you are hired to do what management wants you to do. If you resist, your best bet is to hope they don't notice or care, but it's not going to change much.
Now of course, you may think you are such a good engineer that companies will kill for you… perhaps that's true now, but it's not true for 90% of the engineers out there. And as the pool of engineers shrinks, the chances that you are not as good as you thought go up. So the real question is: can you (can we all) still make a good living by not using LLMs? You know, support each other and fuck the higher-ups? No, we can't. We are full of ourselves, full of elitism (this is HN). We are rational folks, we believe in numbers, in data; we know what we deserve. Fuck the rest. The ones who win are the higher-ups, of course, not us.
To me, it's pretty simple. I have things to do. This makes it easier for me to do those things. Sometimes that means I can do more things, and sometimes it means I can spend less time on my work, and often both.
I have no idea what the future will hold. But to me, it would be very odd to avoid using extremely useful tools for my current work, because of that uncertainty about the future.
Is this a thing? Are there companies out there that don't want to go faster?
There's still an opportunity for engineers to eat their boss's lunch and just start their own company. It's never been easier to start a lower-cost competitor.
Employment isn't a social law of nature: it's a transaction of money for "units of work", just like the business might have with other vendors. Governments should be making it easier to become a vendor.
The juniors are eliminated and the seniors indulge in cognitive surrender because it feels good.
The revealed preference is very far in the opposite direction at the moment.
Serious question. I think the reason that there's such a disconnect among AI-for-work users about whether it's a panacea or bullshit accelerator is that different software developers have massively different duties and conceptions about what their job is or should be.
2. Figure out what components already exist and what new things we need to build and how things should be integrated.
3. Actually build the things according to what was figured out in the previous step.
4. Review my own work and other people's work.
5. Release things and make sure they work.
6. Respond to emergent issues in things that have been released.
I find the current generation of AI tooling to be very useful for all of these tasks. Less so for task #1 than the others.
What are other people doing that is different?
Anyone remember what SCO did to the industry as it went under?
The part I still don't get is how enterprises are dumping internal 'secrets' (code, processes, customer needs, internal politics, leadership dreams) into the hands of startups and untrustworthy conglomerates. MS used to be famous for NDA and deal abuse.
I don’t believe for a second the LLM giants would be shy about training on corporate materials and lying about it. And if they start going under? This gold rush might have a long, ugly, tail.
Quite honestly, the people being fired are the ones who are not adopting these technologies. If that's you, you're quite literally just putting yourself in scope.
Just read the Coinbase news today. They are culling those who are not adopting the future, because they get in the way of progress. They don't help, they don't push things forward, and they hold back those who do.
And that is why the hate exists. You as the CEO know nothing about how your business works. You neither actually try to understand it nor have the technical background to understand it. So you substitute gamed numbers. And in doing so, you set up your company to tank the industry that props up the world economy. And then you act like you are the rational one while doing it. There is nothing rational about how most CEOs act. There is a reason companies do better under dev founders than in any other circumstance. There is a reason dev CEOs do better than non-dev CEOs. Yet despite this, you will tank both your company and a substantial part of the industry just so you can get yours. That's why you are getting the hate. Ignorant indifference is just as objectionable as the caricature of a CEO you see in these posts.
My company set up a “prompt of the week” award and brown-bag sessions to help spread adoption. We also have teams meant to develop these workflows. Clearly, they set these events up to play it off as their own productivity. Without a real (read “monetary”) incentive or job security, the risk and cost of spreading the knowledge falls squarely on the developer.
If developers are worried about their jobs with the way the market currently is, they should treat their personal workflows as trade secrets. My example was not specific to AI, but it applies just as much to AI workflows. In a worker's market, it was sometimes fun to share that kind of knowledge with an organization. In an employer's market, they can pay me if they want access to my personal choices.
That sounds like a toxic environment. Sharing those types of things is how I got the recognition to get ahead in my career and I have never once regretted it.
So while it might be nice to say I won't share, boss-man can certainly make it so I must share.
Boss-man actually has a very difficult time turning a theoretical legal right into actual deliverables.
But when I've been stuck for a while on a dysfunctional team, I've definitely seen the flip side: other people find ways to take a lot of credit for minor iterations on my work; management rewards my productivity with high expectations and high pressure to continue the trajectory they perceive in a single idea; and when the tool becomes a support burden because too many people think it should solve all of their other problems too, I'm now perceived as the owner of this thing they depend on.
If your only goal is to maintain a performance lead on your peers, you either need to gain and keep an advantage or find ways to actively make your coworkers disadvantaged (or both). And if you're already doing 1) then 2) isn't a far stretch.
> would you like to work on a team full of people like you?
If their team is already like this, what choice do they have? It's a prisoner's dilemma where everyone else is defecting and I'm the sole cooperator.
IMO the onus for solving this is on the business owner, either through establishing a knowledge sharing culture or more comprehensive performance evaluation that rewards these innovations.
Nice passive aggressive dig!
I mean, according to your employment agreement, that code is owned by your employer, since you wrote it as an employee for use at work. They could easily demand that you share it, if they knew it existed.
This just illustrates that smart people figure out their own productivity/time-saving shortcuts at work, and little scripts and tools like this are part of it. Happens all the time. Other employees don't, and just plod through whatever manual process they were trained to do.
And I'm not a "at work we're a family!" guy, but I wish we could just be excellent at our jobs and share it with each other without worrying if I'm digging my own grave.
If your employer is expecting that you selflessly share your time for free, you’re getting fucked. Most people are paid to do their job. They are, of course, then expected to work for their employers while on the clock.
What I find strange about this is that in 2020 nobody would be this openly cynical and selfish about, say, good Python idioms, a useful emacs configuration, git shortcuts, etc. This attitude of "your job is to deliver value for the customer, anything else is a distraction, and if you share your hard-earned value-delivery techniques with others then you are a sucker" - this is new, and very disconcerting.
I understand there's not much we can do to stop the cyberpunk dystopia, but do we have to leap in head-first?
If they gave immediate raises or bonuses for stuff like this, then things would change.
None of it is actually so crazy that everyone else couldn't think it up.
What I've noticed in my own experience here is that even when I do share my own prompts/skills few people use them (or alternatively they were so basic that everyone already had their own version).
e.g. If someone doesn't care about xyz before AI, they probably won't after AI, even if I serve it to them on a silver platter.
Does that person rationally go find more work to take on with that reclaimed time? Probably not unless it's their company or exceptional motivating circumstances exist.
Yet I don't see anyone questioning whether management will be just as excited to see that less work is needed, and whether that'd just result in layoffs.
Contrast to remote work where the benefit was extended to all regardless of performance, thus becoming a large target for management to cut.
I think the talk about management & capital demanding ROI will be the inflection point to watch, as a downstream effect could be AI haves & have-nots, depending on open weight models' competitiveness and local capability relative to the SOTA models.
At what point is inspiration and thought just devalued and worthless in the name of doing things instantly. The work has no soul.
It really comes into its own when you treat it as a tool that can build other tools. For example, having it build tools that force it to keep going until its work reaches a certain quality, or that run compliance checks on its outputs and tell it where it needs to fix things. Then, and only then, can you trust its work.
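A minimal sketch of that loop, assuming a hypothetical my-agent CLI and using ruff/pytest as the quality gate (swap in whatever agent and checks you actually use):

    import subprocess

    MAX_ROUNDS = 5

    def run_quality_checks() -> str:
        """Run our own quality gate; return "" on success, findings otherwise."""
        findings = []
        for cmd in (["ruff", "check", "."], ["pytest", "-q"]):
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                findings.append(result.stdout + result.stderr)
        return "\n".join(findings)

    for _ in range(MAX_ROUNDS):
        findings = run_quality_checks()
        if not findings:
            print("quality gate passed")
            break
        # Feed the gate's findings back as the next prompt (hypothetical CLI).
        subprocess.run(["my-agent", "fix", "--prompt",
                        f"Fix these issues and nothing else:\n{findings}"])
    else:
        raise SystemExit("gave up: the work never passed the quality gate")

The trust then lives in the gate, not in the model.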
Right now most current roles & workflows are designed around wrangling the tools you’re given to do a certain job. In that regime AI can only slide in at the edges.
In the old model, performance and OKRs were anchored in disciplines, job titles, and role-specific expectations. In the AI era, those boundaries are starting to collapse. The deeper issue is psychological and organizational: people are constantly negotiating the line between “this is my job” and “this is not my responsibility.”
That creates a key adoption problem: what is the upside of being visibly recognized as an expert AI user? If people learn that I can do faster, better, and more cross-functional work, why would I reveal that unless the company also creates a clear system for recognition, compensation, or career growth?
We are definitely struggling with the same issues the author describes, but even worse: the leaders down at the Crowd level have some perverse need to achieve reuse across their teams, rather than letting their Crowd experiment. One team does something interesting, and we must stop and get that thing out to all teams in that group, so everyone "benefits". This is a scarcity mindset, which made sense pre-AI, when code was costly and ideas were more valuable.
At the same time, everyone not only has to do their work, they need to be 25% more efficient from AI (new KPIs), and so their own learnings slow to a halt, and the team with the cool idea has to give presentations instead of hacking.
The CEO has a youtube style platinum token plaque for their office.
The bias in the assumptions here is absolutely bonkers.
Problem: GenAI is not generating any visible return on investment.
"Solution": rearrange your entire development organization around the technology and start inventing new tooling.
What's entirely obvious is that the point of such articles is not the stuff they purportedly discuss, but the normalization of assumptions those discussions are based on.
But the internet was a simpler concept for businesses. Basically it was: you can now sell to people from their computers. AI's promise is what? It can approximate reasoning about things? That is a much more challenging implementation puzzle to truly solve.
I don’t know that I’ve seen anything of real substance outside coding tasks yet.
I propose employees create self-training byproducts as a result of any AI interaction, and then work with their human manager to make sure that these self-training byproducts are part of their growth plan. This can guarantee growth without losing the opportunity to interact with the intelligent AI system (on topics that are relevant to the company's short-, mid-, and long-term strategic advantage).
While I do believe higher developer productivity can lead to faster reacting to market forces or more A/B testing, that won't necessarily lead to a successful business. Because ultimately it rarely is the software that's the issue there.
Debugging and developing first fixes is also one of the spaces where current LLMs are the biggest force multipliers, especially if you have reproduction cases the LLM can test on its own.
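The repro case doesn't have to be elaborate for this to work. A sketch of the kind of failing test an agent can re-run unattended after every attempted fix (the parse_config bug and module are made up):

    # Hypothetical repro: empty config sections are silently dropped.
    # An agent can loop on `pytest -q repro_test.py` until it goes green.
    from mypkg.config import parse_config  # hypothetical module under test

    def test_empty_section_is_preserved():
        cfg = parse_config("[db]\n[cache]\nttl = 60\n")
        assert cfg["db"] == {}             # fails on the buggy build
        assert cfg["cache"]["ttl"] == 60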
But long-term it might look very different, as more and more of the code becomes LLM-written.
It already has; ship has sailed.
https://blog.pragmaticengineer.com/the-pulse-tokenmaxxing-as...
I'm staunchly pro-AI as a technology, but I do think the bubble is going to pop in the next year or two just because the business value won't materialize for most companies fast enough.
AI content has a look and feel people sense immediately.
It’s amazing to see how quickly things shifted from “wow this is so cool, AI is going to change everything” to folks calling out “you lazy bum, this just looks like some slop you threw together with AI… let’s get some real thinking please.”
We are firmly heading into “trough of disillusionment” territory on the hype cycle.
The more I use AI, the more I see mistakes. I've noticed others see these same mistakes, correct them, then when queried say "Oh, it gets it right all of the time!". No, having to point out "you got this wrong, re-write that last bit" isn't "getting it right". And it's not that the code is overtly wrong; it's subtle. Not using a function correctly, not passing something through that it should (and the default happens to just work -- during testing), and more. LLMs are great at subtle bugs.
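To make the "default happens to just work" class concrete, a contrived sketch (all names made up):

    # The call site forgets to pass something through, and the default
    # keeps every test green.
    from dataclasses import dataclass

    def fetch_user(user_id: int, timeout: float = 30.0) -> dict:
        # Imagine a network call that honors `timeout`.
        return {"id": user_id, "timeout_used": timeout}

    @dataclass
    class Request:
        user_id: int
        timeout: float  # caller-supplied deadline

    def handle_request(req: Request) -> dict:
        # Subtle bug: should be fetch_user(req.user_id, timeout=req.timeout).
        # The 30s default "just works" in fast local tests, then shows up as
        # mystery hangs once production calls get slow.
        return fetch_user(req.user_id)

    print(handle_request(Request(user_id=7, timeout=2.0)))  # timeout_used: 30.0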
So moving forward, this isolation you mention ensures that maybe the guy in the company, the 'answer guy' about a thing, never actually appears. Maybe he doesn't even get to know his own code well enough to be the answer guy.
And so when an LLM writes a weird routine, instead of being able to say "No, re-write that last bit", you'll have to shrug and say "the code looks fine, right?", because you, and the answer guy, if he exists, don't know the code well enough to see the subtle mistakes.
AI can get a pretty good picture, near instantly, whenever you need it.
It’s not just competent-sounding; it is reasonably competent, and certainly very useful for tasks like that.
Gone are the days of mandatory corporate "synergy" and after-work bar gatherings to promote "team building."
AI is showing people in the tech industry that they're just interchangeable cogs. AI is bringing the offshored Indian work environment to Silicon Valley.
> I do not want to make this a cost panic story, that would be the least interesting way to think about “rented intelligence”. The question is not how to minimize token spend in the abstract, any more than the question of software delivery was ever how to minimize keystrokes.
If tokens were as cheap as keystrokes (that is, effectively free), then "How do we minimize token spend?" wouldn't be a question that anyone asks. It's because keystrokes are effectively free that you only ask "How do we minimize the number of keys pressed during the software development process?" if you're looking for an entertaining weekend project. If keystrokes cost as much per unit of work done as the (currently heavily subsidized) cost of tokens from OpenAI and Anthropic, you'd see a lot of focus on golfing everything under the sun all the damn time.
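Back-of-envelope, with every number an assumption for illustration only:

    # All figures assumed, not measured.
    KEYSTROKES_PER_FEATURE = 50_000      # typing to hand-write one feature
    TOKENS_PER_FEATURE = 5_000_000       # agent tokens for the same feature
    COST_PER_KEYSTROKE = 0.0             # effectively free once you own a keyboard
    DOLLARS_PER_MILLION_TOKENS = 10.0    # assumed blended (subsidized) rate

    typing_cost = KEYSTROKES_PER_FEATURE * COST_PER_KEYSTROKE
    token_cost = TOKENS_PER_FEATURE / 1_000_000 * DOLLARS_PER_MILLION_TOKENS
    print(f"typing: ${typing_cost:.2f}, tokens: ${token_cost:.2f}")
    # typing: $0.00, tokens: $50.00 -- a real per-feature line item

Multiply that by every feature in flight and the golfing instinct follows naturally.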
Our mental models of developments like the industrial revolution, literacy, printing or suchlike tend to be a lot more straightforward than how things play out in practice.
When a bottleneck is eliminated... you tend to shortly find the next bottleneck.
Meanwhile, there is an underlying assumption everyone seems to make that "more software, more value" is the basic reality. But... I'm skeptical.
To do lists, wishlists, buglists and road maps may be full of stuff but...
Visa or Salesforce have already exploited all their immediate "more software, more money" opportunities.
The ones in a position to easily leverage AI are upstarts. They're starting with nothing. No code. No features. No software. With AI, presumably, they can produce more software and create value.
Also... I think overextended market rationalism leads people to see everything as an industrial revolution... which, in real life, is much more of an exception.
The networked personal computing revolution put a PC on every desk. It digitized everything. Do we have way better administration for less cost? Not really. Most administrations have grown.
Did law fundamentally change due to digital efficiency? No. Not really.
If you work on a terrible enterprise codebase... it's very possible that software quality/quantity isn't actually that important to your organization.
It's possible capitalism will drive all enterprise to terrible codebases.
This is just sales copy for various AI companies, laundered through an "influencer". It might as well be the CIA sending their article to be published in Daily Post Nigeria, so that the NYT can quote it as "sources".
The title is just clickbait. The rest of the content are fluffy bunnies and rainbows. It's all summed up as "continue to consume product, but remember to also do X". Sales copy + HBR MBA bait.
The closest thing to an honest, less-than-rosy example is the "junior person" who has no idea about the code they committed.
What about the "senior person" who has no idea about the code they committed? What about the CISO who doesn't understand that pasting proprietary documents willy nilly into the LLM's gaping maw might have legal/security/common sense implications, and that it is his job to set policy on such behavior? What about the middle manager who doesn't even try to retain the most experienced dev in the company because "we don't need the headcount anymore, now that Claude is so fast"? What about the company eating its own seed corn because every single junior position has been eliminated and there are no plans for the future anymore? What about the filesystem developer who fell in love with his chatbot girlfriend and is crashing out on Discord?
Oh wait, scratch that last one. He left the company and is crashing out on his own.
Carry on, then.
Fear not: he has a place to feel welcome and included!
https://www.newsweek.com/inside-world-first-ai-dating-cafe-1...
While AI tools have been provided pretty quickly (over a year ago; I initially used gemini cli, then copilot once it added anthropic models), the management is absolutely clueless about them.
The top wants agents. Every team is asked a few times a week "what autonomous agents will you build next?", and answers that the current AI lacks the agency required not to mess up critical long-running tasks, and would just generate even more work, are falling on deaf ears.
(Also, ideas such as "why don't we set up a wiki page where teams can post their repetitive tasks, and we can use AI to script them" are considered "not fast enough" - just build it... but we are the automation team, we automated everything we do years ago :-)
Middle managers, on the other hand, suddenly started giving juniors seniors' work and asking seniors to "tell them (the juniors) how to prompt it".
Seriously? How about I prompt it myself instead? Oh, but it makes a shitload of architectural errors and booby traps that the junior will fail to find... So now, instead of a cursory glance, I have to spend an hour reviewing a small PR from them.
And any questions about "why are you creating a new X for this instead of extending the existing one?" are met with blank panicked stares...
The essence of this BS is contained in my description of the recent "Copilot Review" incident.
We sometimes merge the same GitHub workflow files (10-line files) into dozens of repos. We have to obtain approvals for the PRs from a bunch of teams working in different timezones, but the merge has to be done everywhere at once and coordinated with other work.
On the day of one such task, some "helpful hand" enabled Copilot PR reviews for the whole org.
Copilot helpfully opened 7 or 8 discussions on each PR, giving us such precious advice as "your concurrency group uses the commit sha as a differentiating factor, this will allow multiple runs to proceed concurrently", to which one is tempted to answer "no shit, Sherlock".
We suddenly had almost 200 conversations to "resolve" an hour before the merge, and a bunch of approvers didn't give their approvals because "there is a discussion".
Thankfully we had Copilot, which wrote us a script in 5 minutes to resolve the problem it had caused itself...
Maybe our next overnight agent can go over all our open PRs and close Copilot Review conversations with appropriate messages?
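For the curious, a sketch of what such a script can look like, via gh api graphql and the resolveReviewThread mutation (the repo details and the Copilot bot login here are assumptions; check what your org actually sees):

    import json, subprocess

    OWNER, REPO, PR = "my-org", "my-repo", 1234    # hypothetical PR
    BOT = "copilot-pull-request-reviewer"          # assumed Copilot login

    QUERY = """
    query($owner: String!, $repo: String!, $pr: Int!) {
      repository(owner: $owner, name: $repo) {
        pullRequest(number: $pr) {
          reviewThreads(first: 100) {
            nodes { id isResolved comments(first: 1) { nodes { author { login } } } }
          }
        }
      }
    }"""

    out = subprocess.run(
        ["gh", "api", "graphql", "-f", f"query={QUERY}", "-f", f"owner={OWNER}",
         "-f", f"repo={REPO}", "-F", f"pr={PR}"],
        capture_output=True, text=True, check=True).stdout
    threads = json.loads(out)["data"]["repository"]["pullRequest"]["reviewThreads"]["nodes"]

    for t in threads:
        author = t["comments"]["nodes"][0]["author"] or {}
        if t["isResolved"] or author.get("login") != BOT:
            continue
        # Mark the bot's thread resolved so approvers stop seeing "a discussion".
        subprocess.run(["gh", "api", "graphql", "-f",
                        "query=mutation($id: ID!) { resolveReviewThread("
                        "input: {threadId: $id}) { thread { isResolved } } }",
                        "-f", f"id={t['id']}"], check=True)
        print("resolved", t["id"])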
Not a problem if the hired "AI" now does that job. /i