> Same for arbitrary-precision calculations like big rationals. That just gives you as much precision as your computer can fit in memory. You will still run out of precision, later rather than sooner.
Oh, absolutely. This actually shows that floats are (in some sense) more rigorous than the more idealised mathematical approaches, because they explicitly deal with finite memory.
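
To see the blow-up concretely, here's a minimal sketch using Python's standard `fractions` module (the particular map and starting value are arbitrary, just picked to make the growth visible). Every step is computed exactly, with no rounding anywhere, but the denominators roughly square at each iteration:

```python
from fractions import Fraction

# Iterate the logistic map x -> r*x*(1 - x) with exact rationals.
# No rounding ever happens, but the denominator roughly squares
# at every step, so the bits needed grow exponentially.
x = Fraction(1, 3)
r = Fraction(7, 2)
for step in range(1, 21):
    x = r * x * (1 - x)
    if step % 5 == 0:
        bits = x.denominator.bit_length()
        print(f"step {step}: denominator needs {bits} bits")
```

After 20 steps the denominator already needs over a million bits; roughly another 15 iterations and a single number would outgrow a typical machine's RAM. The results stay exact, you just pay for the exactness in memory.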
Oh, I remembered! There's also interval arithmetic, and variants of it like affine arithmetic. At least you know when you're losing precision. Why don't these get used more? These seem more ideal, somehow.
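
For concreteness, here's a toy sketch of what interval arithmetic looks like (just an illustration, not a real library like MPFI; a correct implementation would also use directed rounding, rounding `lo` down and `hi` up, so the bounds are actually guaranteed):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst case over both ranges: nothing records that the
        # two operands might be the same quantity.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

    def width(self):
        return self.hi - self.lo

x = Interval(0.9, 1.1)
print((x - x).width())   # 0.4 -- even though x - x is exactly 0

y = x
for _ in range(5):
    y = y * y            # bounds balloon under iteration
print(y.width())         # roughly 21
```

Though I can see one commonly cited downside even in this toy: plain intervals forget correlations between operands (the `x - x` case is the classic "dependency problem"), so the bounds can become uselessly wide fast. Affine arithmetic tracks those linear correlations precisely to tame this, at the cost of extra bookkeeping per value.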