Partly because that's often not what we're supposed to do; the stuff under the hood "just works" and we're meant to use it to write features, not worry about optimising the stuff that happens under the hood.
And partly it's because the stuff under the hood is increasingly weird and bizarre. Branch prediction is weird, and I still don't understand why that extra print statement changes the branch prediction. Why does it predict `v > maxV` is true when the alternative is to print something, but it doesn't predict that when the alternative is to do nothing?
Is it because printing is expensive, and the branch predictor therefore strongly prefers to avoid it? It's weird that we'd basically have to trick the compiler into producing a more performant form of our code.
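To make the cost of prediction concrete, here's a sketch of the classic experiment (in Go, since I'm not showing the original code here; `countAbove` and the data are my own invention, not the code from earlier). The loop body contains a `v > maxV` branch just like the one above. Run over random data the branch is unpredictable; run over the same data after sorting, the predictor guesses right almost every time, and the sorted pass is typically noticeably faster even though the work is identical.

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

// countAbove counts elements greater than maxV. The `v > maxV` branch
// inside the loop is what the CPU's branch predictor has to guess.
func countAbove(data []int, maxV int) int {
	count := 0
	for _, v := range data {
		if v > maxV {
			count++
		}
	}
	return count
}

// timeIt runs countAbove repeatedly and reports the elapsed wall time.
func timeIt(data []int, maxV int) time.Duration {
	start := time.Now()
	for i := 0; i < 100; i++ {
		countAbove(data, maxV)
	}
	return time.Since(start)
}

func main() {
	const n = 1 << 20
	data := make([]int, n)
	for i := range data {
		data[i] = rand.Intn(256) // values in [0, 256), threshold at 128
	}

	// Same data, same branch, same total work -- only predictability differs.
	unsorted := timeIt(data, 128)
	sort.Ints(data)
	sorted := timeIt(data, 128)

	fmt.Printf("unsorted: %v, sorted: %v\n", unsorted, sorted)
}
```

On most hardware the sorted pass wins by a wide margin, which is exactly the kind of invisible, data-dependent effect that makes reasoning about "the stuff under the hood" so slippery.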
I don't want to have to second-guess the compiler.