Sure. Let's take that idea:
> reduce vectors for bugs (which are always based on misunderstandings)
Is that really impossible to measure? (For cheap, that is: a cheap measurement, a cheap estimate, cheap confidence.) Is it really impossible to monitor?
Grab a random sample of 100 fixed bugs from the past 2 years. Go through them one by one. How many do you seriously think would have been avoided? If it's not much more work, give each one some weighting for confidence and impact. For how much additional work? Once you notice what you could count, restart from the top (re-randomize?) - it's only 100 bugs. Is it really 100%, or, now that you're looking at the data, is it more like 10% at best? Was it really impossible to get data?
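If counting by hand gets tedious, the bookkeeping fits in a few lines. A minimal sketch, assuming hypothetical field names (`bug_id`, `avoidable`, `confidence`, `impact`) rather than any real tracker's schema:

```python
import random

# One record per sampled bug, filled in as you read each one.
# All field names here are assumptions for illustration.
reviews = [
    {"bug_id": "BUG-1042", "avoidable": True,  "confidence": 0.8, "impact": 3},
    {"bug_id": "BUG-0871", "avoidable": False, "confidence": 0.9, "impact": 1},
    # ... ~100 entries, one per sampled bug
]

def sample_bugs(all_fixed_bug_ids, n=100, seed=None):
    """Draw a random sample of fixed bugs to review (re-randomize by changing seed)."""
    rng = random.Random(seed)
    return rng.sample(all_fixed_bug_ids, min(n, len(all_fixed_bug_ids)))

def avoidability_estimate(reviews):
    """Raw fraction judged avoidable, plus a confidence- and impact-weighted version."""
    n = len(reviews)
    raw = sum(r["avoidable"] for r in reviews) / n
    total_impact = sum(r["impact"] for r in reviews)
    weighted = sum(r["confidence"] * r["impact"] for r in reviews if r["avoidable"]) / total_impact
    return raw, weighted

raw, weighted = avoidability_estimate(reviews)
print(f"raw: {raw:.0%}, confidence/impact weighted: {weighted:.0%}")
```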
Now that you have an estimate, write it up and circulate it. It's risky: you could be volunteered to fix the problem, and maybe you don't want that.
Would it really be impossible to monitor over the next year? (Still cheap data, cheap results - except if you really did estimate 100%, because then you were able to get a real budget - even if too small - to attack it.)
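Monitoring can be the same kind of cheap: tag each newly fixed bug as misunderstanding-driven or not, and tally the rate per month. A sketch along those lines, again with made-up field names:

```python
from collections import Counter
from datetime import date

# Hypothetical ongoing log: when each bug is closed, record the date and
# whether the fixer judged it misunderstanding-driven.
closed_bugs = [
    {"closed": date(2024, 1, 14), "misunderstanding": True},
    {"closed": date(2024, 1, 28), "misunderstanding": False},
    # ... appended as bugs get fixed over the year
]

def monthly_rate(closed_bugs):
    """Fraction of closed bugs per month judged misunderstanding-driven."""
    totals, hits = Counter(), Counter()
    for b in closed_bugs:
        key = (b["closed"].year, b["closed"].month)
        totals[key] += 1
        hits[key] += b["misunderstanding"]
    return {key: hits[key] / totals[key] for key in sorted(totals)}

for (year, month), rate in monthly_rate(closed_bugs).items():
    print(f"{year}-{month:02d}: {rate:.0%}")
```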
Estimates, targets, budgets, and deadlines are all different concepts. A fraught but carefully worked-up estimate is rarely impossible.
Entire businesses get founded and funded on "impossible" estimates.