Yeah I doubt this is anything novel in the purely mathematical realm. It sounds like what's patented is a practical design for doing this on a chip.
Not sure if you were taught a different method, but I envision this being similar to counting "significant digits" in scientific notation, and it sounds like that's very similar to the approach he took. I wonder if that explains their statements about tracking to the last "digit". They obviously can't do that for irrational numbers, so maybe they mean the last "significant" digit as far as the underlying floating point implementation is concerned.
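A rough way to see what "last significant digit" could mean for a double, sketched in Python. This agreement-counting trick is just my own illustration of the idea, not whatever the patent actually does:

```python
import math

pi = math.pi
# The next representable double above pi -- anything between the two
# is unrepresentable, so digits past their point of agreement can't
# be trusted.
nxt = math.nextafter(pi, math.inf)

# Count how many leading decimal digits the two neighbors agree on;
# those are the digits the format can actually vouch for.
s1, s2 = f"{pi:.20f}", f"{nxt:.20f}"
agree = 0
for a, b in zip(s1, s2):
    if a != b:
        break
    if a.isdigit():
        agree += 1
print(agree)  # 16 -- roughly the significant decimal digits of a double
```

So for a 64-bit double the "last significant digit" is around the 16th decimal digit, and a hardware scheme could plausibly track where that boundary lands after each operation.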