Most often, yes; the probability distribution of your number inside that interval is not uniform. It is typically highly concentrated around a specific value inside the interval, not necessarily its center. After a few million iterations, the probability of the correct number lying close to the boundary of the interval is smaller than the probability of all your atoms suddenly rearranging themselves into an exact copy of Julius Caesar. According to the laws of physics, that probability is strictly greater than zero. Would you consider it "unsafe" to ignore the likelihood of this event? I'm sure you wouldn't, yet it is certainly possible. The same goes for the correct number sitting near the boundaries of the interval produced by interval arithmetic.
Meanwhile, the computation using classical floating point typically produces a value that is, in practice, very close to the exact solution.
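To make this concrete, here is a small sketch (not a real interval library; just outward rounding via `math.nextafter`, which is a pessimistic over-approximation) that sums the double closest to 0.1 a million times. It compares the guaranteed interval width against the error the plain floating-point sum actually commits:

```python
import math
from fractions import Fraction

def iv_add(a, b):
    """Interval addition with crude outward rounding: after the (correctly
    rounded) bound additions, bump the lower bound one ulp down and the
    upper bound one ulp up, so the true sum is guaranteed to stay inside."""
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return (lo, hi)

n = 1_000_000
x = 0.1  # the double nearest to 0.1; not exactly representable in binary

# Plain floating-point sum.
s = 0.0
for _ in range(n):
    s += x

# Interval sum: carries guaranteed lower/upper bounds at every step.
iv = (0.0, 0.0)
for _ in range(n):
    iv = iv_add(iv, (x, x))

# Exact result of summing the double x a million times, via rationals.
exact = Fraction(x) * n

width = iv[1] - iv[0]
err = abs(Fraction(s) - exact)

print(f"interval width: {width:.3e}")
print(f"actual |error|: {float(err):.3e}")
```

The exact value does lie inside the interval, but the interval's width is roughly an order of magnitude larger than the error the naive float sum actually makes: the true answer sits well away from the boundaries.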
> It seems like you'd have to trust FP rounding to always cancel itself out in the long run instead of potentially accumulating more and more bias with each iteration. Is that the case?
The whole subject of numerical analysis deals with this very problem. It is extremely well known which kinds of algorithms you can trust and which are dangerous (the numerically unstable ones, especially when applied to ill-conditioned problems).
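A classic textbook illustration of the stable/unstable distinction: evaluating e^(-x) for large x by summing the Taylor series directly is unstable, because huge alternating terms cancel and leave only rounding noise, while computing 1/e^x with the same series is stable. A minimal sketch:

```python
import math

def exp_taylor(x, terms=120):
    """Naive partial sum of the Taylor series for e**x."""
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= x / k
        s += t
    return s

x = 30.0
unstable = exp_taylor(-x)     # alternating terms up to ~1e11 cancel: rounding noise dominates
stable = 1.0 / exp_taylor(x)  # same series, all terms positive: no cancellation
exact = math.exp(-x)

print(f"unstable: {unstable:.6e}")
print(f"stable:   {stable:.6e}")
print(f"exact:    {exact:.6e}")
```

The unstable variant is off by many orders of magnitude in relative terms, while the stable variant agrees with `math.exp` to near machine precision. The rounding errors did not "cancel out in the long run"; the point is that numerical analysis tells you in advance which formulation lets them accumulate catastrophically.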