Ok, but in the context of this thread, it is important. Remember that I'm replying to the statement "For added assurance, just round to the nearest cent after each operation." That is misleading advice and the behavior of floats is just context for why.
First of all, a lot of languages don't include rounding to an arbitrary number of decimal places in their math libraries at all, only rounding to integers. Second, the docs of Python, which does have arbitrary-precision rounding, specifically say:
Note: The behavior of round() for floats can be surprising: for example, round(2.675, 2) gives 2.67 instead of the expected 2.68. [...]
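You can see why with the decimal module: the literal 2.675 has no exact binary representation, and the nearest double happens to land slightly below it, so there is never actually a tie to round up. A quick sketch:

```python
from decimal import Decimal

# Passing the float to Decimal shows the exact binary value behind the
# literal 2.675 -- it is slightly *below* 2.675:
print(Decimal(2.675))  # 2.67499999999999982236431605997495353221893310546875

# round() operates on that binary value, so "round half to even" never
# sees a half at all and rounds down:
print(round(2.675, 2))  # 2.67, not 2.68
```

So round() itself is doing exactly what it should; it's the stored value that isn't what the source code suggests.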
Thus I think what I said stands: you cannot reliably round to the nearest cent all the time, assuming a cent means 0.01. The only rounding you can sort of trust is display rounding, because it happens after converting to base 10 (Python's repr picks the shortest decimal string that round-trips back to the same float). That's why 2.675 prints as 2.675 in Python even though round() doesn't treat it the way you'd expect. But you'd only do that once, at the end of a chain of operations.
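To illustrate the "round once at the end" point, here's a small sketch (the repeated addition of 0.10 is just an assumed example of a chain of operations that accumulates binary error):

```python
# Summing 0.10 a hundred times accumulates representation error,
# so the result is close to, but not exactly, 10.0:
total = 0.0
for _ in range(100):
    total += 0.10

print(total == 10.0)      # False -- the binary sum drifted slightly
print(f"{total:.2f}")     # "10.00" -- display rounding at the very end
```

Formatting once at the end absorbs the tiny accumulated error; rounding after every addition would just mean trusting round() a hundred times instead.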
In a lot of cases, errors like these don't matter, but that's exactly the point: if the errors don't matter, then they don't need to be "assured" away by dubious rounding either.