No — it used to be the case that there were architectures that represented signed integers in ones' complement (or sign-magnitude), so portable code could not rely on signed integer overflow wrapping around: there could be two bit patterns representing zero, or the all-ones pattern might have special meaning, like a trap representation.
Exploiting this UB for optimization is a "relatively" new thing (GCC started doing it in the 00s, which IIRC broke a lot of code in the Linux world).
It _is_ true that somebody did the research (I can't find the source right now) and found that defining signed integer overflow as wrapping did make some code run slower. I'm skeptical that it matters in practice.