The fact that the exponents tend to be small in the algorithms that our puny brains were able to produce does not convince me one bit that all problems in P have algorithms with small exponents.
Why are you so sure that P vs NP is important? Does it really relate to so many parts of computer science?
I do agree that complexity analysis is a very (perhaps the most) important part of CS. But that does not necessarily mean that P vs NP is a fundamental question.
RSA wouldn't necessarily break even if P = NP. Even if we came up with a polynomial-time algorithm for integer factorisation, it could have a much higher exponent than the multiplication required to verify the result.
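To make the asymmetry concrete, here is a toy sketch (my illustration, not a claim about real RSA): verifying a claimed factorisation costs a single multiplication, while even the most naive way of *finding* the factors, trial division, does vastly more work. A hypothetical polynomial-time factoriser with a huge exponent would be "efficient" in the same nominal sense only.

```python
def verify(n, p, q):
    """Check a claimed factorisation of n: essentially one multiplication."""
    return 1 < p and 1 < q and p * q == n

def trial_division(n):
    """Naive factoriser: exponential in the bit-length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

# A small semiprime (toy example; real RSA moduli are ~2048 bits):
n = 3 * 1_000_003
p, q = trial_division(n)       # many divisions to find the factors...
print(verify(n, p, q))         # ...one multiplication to check them: True
```

The point of the sketch is only that "solver" and "verifier" can both be polynomial while sitting at wildly different points on the cost scale.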
This is what bothers me the most: people saying that P = NP implies that solving a problem is as difficult as verifying a solution. That's simply not true: P = NP implies that solving a problem is as difficult as verifying a solution (modulo polynomial reductions).
The part about ignoring polynomial reductions seems very important to me. Otherwise I could just as well say "all even numbers are the same", when what I actually mean is "all even numbers are the same (modulo 2)".
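The analogy can be spelled out in a couple of lines (a toy illustration of my own, nothing more): reducing mod 2 collapses every even number into a single class, just as measuring cost "up to polynomial factors" collapses running times like n and n^100 into the single class "polynomial".

```python
# Coarse equivalence relations collapse genuinely different objects:

# "All even numbers are the same (modulo 2)" -- one residue class:
evens = [2, 4, 100, 12_345_678]
print({x % 2 for x in evens})          # {0}

# Likewise, any fixed exponent counts as "polynomial", whether the
# running time is n**1 or n**100 -- one complexity class:
def classify(exponent):
    """Toy classifier: n**exponent for a fixed exponent is polynomial."""
    return "polynomial"

print(classify(1) == classify(100))    # True
```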
Would you care to convince me why we chose to ignore polynomial reductions and not some other complexity class?
The best explanation I can see is the fact that the different Turing machine formulations (single-tape, multi-tape, one-way infinite, two-way infinite, ...) as well as random access machines all have polynomial reductions between them. But even this does not justify, in my eyes, the importance we give to P vs NP.