When you're doing something like pi + sqrt(2) ≈ 3.14159 + 1.41421 = 4.5558, you're taking known-good approximations of these two real numbers and adding them up. The heavy lifting was done over thousands of years to produce these good approximations. It's not the arithmetic on the decimal representations that's doing the heavy lifting; it's the algorithms needed to produce these good approximations in the first place that are the magic here.
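One of the oldest such algorithms is Heron's (Babylonian) method for square roots. A minimal sketch (the function name is mine, not from any library):

```python
# Heron's method: repeatedly average the guess with x/guess.
# Each iteration roughly doubles the number of correct digits.
def heron_sqrt(x, iterations=6):
    guess = x  # crude starting guess
    for _ in range(iterations):
        guess = (guess + x / guess) / 2
    return guess

print(heron_sqrt(2.0))  # converges toward 1.41421356...
```

Six iterations already exhaust double precision; this is the kind of machinery hiding behind a casual "sqrt(2) ≈ 1.41421".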
And it would be just as easy to compute this if I told you that pi ≈ 314159/100000, and sqrt(2) ≈ 141421/100000, so that their sum is 455580/100000, which is clearly larger than 4553/1000.
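That fraction arithmetic can be done exactly with Python's fractions module, as a quick sanity check:

```python
from fractions import Fraction

pi_approx = Fraction(314159, 100000)
sqrt2_approx = Fraction(141421, 100000)

total = pi_approx + sqrt2_approx
print(total)                          # 22779/5000, i.e. 455580/100000 reduced
print(total > Fraction(4553, 1000))   # True
```

No rounding happens anywhere here; the comparison against 4553/1000 is exact.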
I'm curious if they had a better one that we don't know of yet—their best known approximation of sqrt(2) is significantly more accurate.
3.1415 < pi < 3.1416 and 1.4142 < sqrt(2) < 1.4143
=> 4.5557 < pi + sqrt(2) < 4.5559
=> 4.553 < 4.5557 < pi + sqrt(2)
=> 4.553 < pi + sqrt(2)

I prefer comparing it to complex numbers, where I can't have "i" apples but I can calculate the phase difference between 2 power supplies in a circuit using such notation.
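The interval argument above (add the lower bounds, add the upper bounds) can be sketched with exact rationals so no floating-point rounding muddies the bound:

```python
from fractions import Fraction

# Interval addition: [a, b] + [c, d] = [a + c, b + d].
pi_lo, pi_hi = Fraction(31415, 10000), Fraction(31416, 10000)
s2_lo, s2_hi = Fraction(14142, 10000), Fraction(14143, 10000)

sum_lo, sum_hi = pi_lo + s2_lo, pi_hi + s2_hi
print(sum_lo, sum_hi)                  # 45557/10000 45559/10000
print(sum_lo > Fraction(4553, 1000))   # True: pi + sqrt(2) > 4.553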
Nobody really cares about the 3rd decimal place when taking about a speeding car at a turn but they do when talking about electrons in an accelerator, so accuracy and precision always feel mucky to talk about when dealing with irrationals (again my opinion).
What? The opposite is the case. Anything you want to do something with, you can only measure inaccurately; arithmetic doesn't have any use if you can't apply it to inaccurate measurements. That's what we use it for!
Catastrophic cancellation and other failures are serious issues to consider when doing numerical analysis and can often be avoided completely by using symbolic calculation instead. You can easily end up with wrong results, especially when composing calculations. This would make it difficult to, for example, match your theoretical model against actual measurement results; particularly if the model includes expressions that don't have closed-form solutions.