> reduce vectors for bugs (which are always based on misunderstandings)
Is that really impossible to measure? (For cheap, that is: cheap measure, cheap estimate, cheap confidence.) Is that really impossible to monitor?
Grab a random sample of 100 bugs fixed in the past 2 years. Go through them one by one. How many do you seriously think would have been avoided? If it's not much more work, give each one a weighting for confidence and impact or something. For how much additional work? Once you notice what you could count, restart from the top (re-randomize?) - it's only 100 bugs. Is it really 100%, or, now that you are looking at the data, is it more like 10% at best? Was it really impossible to get data?
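The tally described above is cheap enough to sketch in a few lines. The snippet below is a hypothetical illustration, not real data: it fakes 100 bug records (in practice each record would come from reading a real ticket) and computes both the naive avoidable rate and a confidence-discounted, impact-weighted estimate. The field names and the ~15% base rate are assumptions made up for the example.

```python
import random

# Hypothetical bug records. In practice, "avoidable", "confidence", and
# "impact_hours" come from a human reading 100 real tickets, not from a
# random generator; the randomness here just stands in for that judgment.
random.seed(42)
bugs = [
    {
        "avoidable": random.random() < 0.15,      # judged avoidable by the fix vector?
        "confidence": random.uniform(0.5, 1.0),   # how sure the reviewer is (0..1)
        "impact_hours": random.choice([1, 2, 4, 8, 16]),  # rough cost of the fix
    }
    for _ in range(100)
]

def naive_rate(sample):
    """Plain fraction of bugs judged avoidable."""
    return sum(b["avoidable"] for b in sample) / len(sample)

def weighted_estimate(sample):
    """Impact-weighted, confidence-discounted share of avoidable effort."""
    saved = sum(b["impact_hours"] * b["confidence"] for b in sample if b["avoidable"])
    total = sum(b["impact_hours"] for b in sample)
    return saved / total

print(f"naive avoidable rate:      {naive_rate(bugs):.0%}")
print(f"weighted avoidable effort: {weighted_estimate(bugs):.0%}")
```

The gap between the two numbers is the point: a headcount of "avoidable" bugs and the share of actual effort they represent can differ a lot, which is exactly what re-reading the sample after you notice what to count tends to reveal.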
Now that you have an estimate, write it up and circulate it. It's risky: you could be volunteered to fix the problem, and maybe you don't want that.
Would it really be impossible to monitor over the next year? (Still cheap data, cheap results - except if you really estimated 100%, because now you were able to get a real budget - even if too small - to attack it.)
Estimates, targets, budgets, and deadlines are all different concepts. A fraught but carefully worked-up estimate is rarely impossible.
Entire businesses get founded and funded on "impossible" estimates.
Although to be clear, the estimates will be "this is where we think we will be saving money"
followed by a review (in 12 months' time) that will be "this is what we think the result is, but it will include improvements from other vectors, such as better communication from the business"
I'm not arguing that we shouldn't; I'm arguing that the business cannot put a number on it that it can rely on. That's fine if the budget is available, but if there's no budget - almost always for reasons beyond the engineer's responsibility (VC money, market resizing, etc.) - then you really struggle to justify it.