But why wouldn't responsible AI users -- actual engineers using it to accelerate grunt work, not vibe coders -- use the AI tooling to increase their capacity to do all the work it takes to avoid breaking things while still moving relatively fast?
Testing a new incremental feature against the entire extant codebase, not just the bits of it that they had the bandwidth to tackle within the deadline, seems like exactly the sort of thing well-disciplined engineering teams would use AI to do.
For one, you architect your codebase into separate layers and logical chunks that are self-contained and can be reasoned about independently. That's not always possible, but you draw as many firm boundaries as you can. You don't ever want to be in the position where you have to test an entire codebase against your new change. That's a horrible nightmare scenario.
So you don't "test as much of the codebase as you have time for"; you write tests for your code and for the interface between it and other systems. Maybe integration or FE tests too, depending on what you have.
So testing against a whole codebase is rarely the problem, and if it is, you have bigger issues.
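To make the "test the interface, not the whole codebase" point concrete, here's a minimal sketch of a boundary/contract-style test. The names (`OrderService`, `InventoryClient`) are made up for illustration, and it assumes plain Python with no particular test framework:

```python
# Hypothetical sketch: test at a module boundary instead of against the whole codebase.
# OrderService and InventoryClient are invented names for illustration only.
from dataclasses import dataclass
from typing import Protocol


class InventoryClient(Protocol):
    """The firm boundary: the only thing OrderService knows about inventory."""
    def reserve(self, sku: str, qty: int) -> bool: ...


@dataclass
class OrderService:
    inventory: InventoryClient

    def place_order(self, sku: str, qty: int) -> str:
        # Business logic under test; it only touches the interface, never the real system.
        if qty <= 0:
            return "rejected"
        return "accepted" if self.inventory.reserve(sku, qty) else "backordered"


class FakeInventory:
    """Stand-in for the real inventory system; keeps the test self-contained."""
    def __init__(self, in_stock: bool) -> None:
        self.in_stock = in_stock

    def reserve(self, sku: str, qty: int) -> bool:
        return self.in_stock


def test_order_service_contract() -> None:
    assert OrderService(FakeInventory(in_stock=True)).place_order("SKU-1", 2) == "accepted"
    assert OrderService(FakeInventory(in_stock=False)).place_order("SKU-1", 2) == "backordered"
    assert OrderService(FakeInventory(in_stock=True)).place_order("SKU-1", 0) == "rejected"


if __name__ == "__main__":
    test_order_service_contract()
    print("boundary tests passed")
```

Because the service only ever talks to the boundary, an incremental change gets verified against a fake at that boundary rather than against everything downstream of it.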
Also, LLMs don't make mistakes like humans do. They fuck up in weird, unpredictable ways, which means you kinda have to treat them like a hostile adversary trying to sneak in subtle backdoors. That slows things down.
Also, actually writing code is usually the fast and easy part. It's all the other bits -- getting the requirements, building mockups, planning, review, standing up new infra etc etc etc. LLMs can't help with most of that.
Right, all of this goes without saying.
> So testing against a whole codebase is rarely the problem, and if it is, you have bigger issues.
Read "whole codebase" as a qualitative descriptor, not a quantitative one, where the codebase as deployed is the reference point for the overall business logic flow, including where different processes interact with or block each other.
The point is to use the AI tools to ensure that new features and functionality are implemented in a way that is consistent with the technical and business constraints that emerge from the entire tech stack, precisely so that adding something new in context A doesn't break functionality in context B.
> Also, actually writing code is usually the fast and easy part. It's all the other bits -- getting the requirements, building mockups, planning, review, standing up new infra etc etc etc. LLMs can't help with most of that.
Yes, that's true, but those are the perennial challenges of solution design before you even get to implementation. The "break things" problem happens when solutions to context-bound problems are implemented in ways that spill outside their context -- narrowly focused goals are often pursued in ways that create externalities elsewhere in the organization, precisely because of the limited focus available to the people working on those goals.
The point here is that the AI tools now make it possible to bring much broader contextual awareness into the solution design and implementation phase of every narrowly targeted goal. If the AI has a reasonably accurate model of how systems and solutions affect each other across the broader organization, it can predict how a proposed new feature might impact all of the other business functions, do so in near real time, and head off 90% of the "break things" impacts that you'd otherwise need multiple rounds of meetings, testing sessions, buy-ins and sign-offs to avoid. That's what would get you moving fast with much less collateral damage.