I don't think the amount of software is what determines whether a company does well.
I don't think the quantity of context captured is that important either.
Now, quality of context. How well do the humans reason?
Then, attitude. How well do the humans respond to bad situations?
Then, resource management. How well does the company treat people and money?
Finally, luck. How much of the uncontrollables are in our favor?
Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
The bottleneck for making software applications better at being used by (non-software) businesses is making sure the software does all the software things that actually benefit the business. Save time. Make humans more productive. Reduce human error. Make the business more efficient. Increase profit margins.
All of those things are a bit difficult to predict and quantify. You start with ideas about what might help the business; maybe you design, prototype, and trial. Ultimately you build or enhance software applications, then try to measure how well they're making the business better.
In all of this, making sure software is addressing the right problem in the right way, and ultimately making the business better - that's a hard problem! Regardless of how fast and easy it is to make software.
But yes, the speed can really help. You can prototype and trial and improve the feedback loop.
Based on what I’ve seen, prototyping has always been easy. You don’t even have to build software for the first iteration; for UI stuff you can use a wire-framing tool.
What has happened is that we abandoned the faster iteration methods (design think tanks, quick demos, UX research, …) and went all in on building the first idea that came along, then foisting it on the users. That process is very slow and more often than not goes wrong.
Tsunami?
Code changes. Not necessarily features, but also bug fixes, plain old maintenance, and even refactoring to improve testability.
With AI coding assistants, what in the past were considered junior dev tasks are now implemented with a quick prompt and an agent working in the background.
These junior dev tasks are now effortlessly delivered by coding assistants, with barely any human intervention. Backlogs are cleared faster than new items are added. And new items are added more and more because capacity to clear them is no longer an issue. The challenge is now keeping up with the volume of changes. I see this first-hand at my org.
> Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.
Just because you can think of other bottlenecks doesn't mean that generating code wasn't a bottleneck, and isn't the bottleneck today. The mere existence of a backlog demonstrates that it is one.
They can't all be equally important bottlenecks; a bottleneck is, by definition, the single component or sub-system that most limits the system's output.
What are we trying to output from our businesses? Code?
What is this magical context floating around every business that will unlock AI agents to produce ... what?
[Edit] I apologize for my tone. You're right, dealing with the speed of code generation is an unprecedented problem. I was making the argument that it's not the most important problem for the business, and that the rate of code change is very rarely the top concern. But that doesn't mean it's not the most important problem for someone. For the developers dealing with the system, it is.
This is a pointless statement though. The fact that writing code is a bottleneck, and a critical one, doesn't mean it's the only thing standing between us and fixing or implementing something.
It's like downplaying the time taken by international flights because people also spend time passing through security.
The truth of the matter is that the entire software development process is built around how slowly code is written. Now that code is ceasing to be a bottleneck, the current software dev process is starting to look inadequate.
Totally depends on what kind of product and codebase.
Last time I checked, the number of open issues in the Claude Code repo had increased.
And I have seen tons of tickets that have been open for years. Not because they're technically hard or anything; an intern could do them. Those tickets aren't closed because nobody wants to deal with what comes after.
The Claude Code repo's bug reports are a mishmash of complaints about prompt output, backend responses, documentation updates, browser extensions, etc.
Still, over the last week the repository shows ~2k closed issues vs ~1.3k new ones.