What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=3808168 - April 2012 (3 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1982332 - Dec 2010 (14 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1746797 - Oct 2010 (2 comments)
Weekend project: What Every Programmer Should Know About FP Arithmetic - https://news.ycombinator.com/item?id=1257610 - April 2010 (9 comments)
What every computer scientist should know about floating-point arithmetic - https://news.ycombinator.com/item?id=687604 - July 2009 (2 comments)
The five key ideas from that book, enumerated by the author:
(1) the purpose of computing is insight, not numbers
(2) study families and relationships of methods, not individual algorithms
(3) roundoff error
(4) truncation error
(5) instability
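Ideas (3) and (4) pull in opposite directions, which a short numerical-differentiation experiment makes concrete (a sketch; the function, point, and step sizes are arbitrary choices): shrinking the step h reduces truncation error but amplifies roundoff, so a forward-difference derivative is most accurate at an intermediate h, not the smallest one.

```python
import math

# Forward-difference approximation of d/dx sin(x) at x = 1.
# Truncation error shrinks like O(h); roundoff error grows like
# O(eps/h) as the subtraction cancels more and more leading digits.
x = 1.0
exact = math.cos(x)
for h in (1e-1, 1e-4, 1e-8, 1e-12, 1e-15):
    approx = (math.sin(x + h) - math.sin(x)) / h
    print(f"h={h:.0e}  error={abs(approx - exact):.2e}")
```

The error falls until roughly h ~ 1e-8 (about sqrt(machine epsilon)) and then climbs again as roundoff takes over.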
I worked through what fp6 (e3m2) would look like, doing manual additions and multiplications, showing cases where the operations are non-associative, etc., and then I wanted something more rigorous to read.
For anyone interested in floating point numbers, I highly recommend working through fp6 as an activity! Felt like I truly came away with a much deeper understanding of floats. Anything less than fp6 felt too simple/constrained, and anything more than fp6 felt like too much to write out by hand. For fp6 you can enumerate all 64 possible values on a small sheet of paper.
For anyone not (yet) interested in floating point numbers, I’d still recommend giving it a shot.
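The fp6 exercise is also easy to mechanize as a cross-check of the hand calculations. A minimal sketch, assuming an IEEE-style e3m2 layout (1 sign, 3 exponent, 2 mantissa bits) with bias 3 and the top exponent reserved for inf/NaN; other fp6 definitions spend that exponent on finite values instead:

```python
BIAS = 3  # 2**(3-1) - 1 for a 3-bit exponent

def fp6_value(bits):
    """Decode a 6-bit pattern (0..63) into its real value."""
    sign = -1.0 if bits & 0b100000 else 1.0
    exp = (bits >> 2) & 0b111
    mant = bits & 0b11
    if exp == 0:                         # subnormal: no implicit leading 1
        return sign * (mant / 4) * 2.0 ** (1 - BIAS)
    return sign * (1 + mant / 4) * 2.0 ** (exp - BIAS)

# All finite representable values (exponent 7 reserved for inf/NaN here).
FINITE = sorted({fp6_value(b) for b in range(64) if (b >> 2) & 0b111 != 7})

def fp6_round(x):
    """Round a real number to the nearest representable fp6 value."""
    return min(FINITE, key=lambda v: abs(v - x))

def fp6_add(a, b):
    """Add two fp6 values, rounding the exact sum back into fp6."""
    return fp6_round(a + b)

# Non-associativity: near 7 the spacing between fp6 values is 1, so a
# 0.375 addend is absorbed twice on the left, but survives on the right
# when the two small addends are summed first.
a, b, c = 7.0, 0.375, 0.375              # all exactly representable in fp6
print(fp6_add(fp6_add(a, b), c))         # (7 + 0.375) + 0.375 -> 7.0
print(fp6_add(a, fp6_add(b, c)))         # 7 + (0.375 + 0.375) -> 8.0
```

Enumerating FINITE also reproduces the small-sheet-of-paper table from the comment above.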
> 0.1 + 0.1 + 0.1 == 0.3
False
I always tell my students that if they (might) have a float and are using the `==` operator, they're doing something wrong.

   0x1.999999999999ap-4 ("0.1")
  +0x1.999999999999ap-4 ("0.1")
  ---------------------
  =0x3.3333333333334p-4 ("0.2")
  +0x1.999999999999ap-4 ("0.1")
  ---------------------
  =0x4.cccccccccccd0p-4 ("0.30000000000000004")
 !=0x4.cccccccccccccp-4 ("0.3")

[1]: https://www.cs.uaf.edu/2011/fall/cs301/lecture/11_09_weird_f... (division result matrix)
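The same bit patterns can be pulled straight out of a Python prompt with `float.hex()` (note Python normalizes to a leading hex digit of 1, so the exponents read p-4/p-2 rather than a fixed p-4):

```python
# Exact hex representations of the doubles behind the decimal literals.
print((0.1).hex())              # 0x1.999999999999ap-4
print((0.1 + 0.1 + 0.1).hex())  # 0x1.3333333333334p-2  ("0.30000000000000004")
print((0.3).hex())              # 0x1.3333333333333p-2  ("0.3")
```

The last two differ in the final hex digit, which is exactly why `==` comes back False.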
Anything doing “approximately close” comparisons is much slower and prone to even subtler bugs (often trading immediately visible bugs for ones that are much harder to find and fix).
For example, I routinely write unit tests with inputs designed so the answers are perfectly representable, so the tests can do bit-exact compares and ensure algorithms work as designed.
I’d rather teach students there’s subtlety here with some tradeoffs.
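As a sketch of that testing style (`average` is a hypothetical stand-in for the algorithm under test): the inputs are chosen so every intermediate sum and the final quotient are exactly representable in binary, which makes a plain `==` assertion legitimate.

```python
import unittest

def average(xs):
    """Hypothetical algorithm under test."""
    return sum(xs) / len(xs)

class TestAverage(unittest.TestCase):
    def test_exactly_representable_inputs(self):
        # 0.25 + 0.5 = 0.75, then + 0.75 = 1.5: every partial sum is
        # exact, and 1.5 / 3 = 0.5 is representable, so the correctly
        # rounded IEEE result must be bit-identical to 0.5.
        self.assertEqual(average([0.25, 0.5, 0.75]), 0.5)

if __name__ == "__main__":
    unittest.main()
```

Had the inputs been 0.1, 0.2, 0.3 instead, the same assertion would depend on accumulated rounding and could fail for a perfectly correct implementation.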
You should be using == for floats when they're actually equal. 0.1 just isn't an actual number.
> 1.25 * 0.1
0.1250000000000000069388939039

A finitist computer scientist only accepts those numbers as real that can be expressed exactly in finite base-two floating point?
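The quoted digits look like `decimal.Decimal` output rather than a raw double: `Decimal(f)` converts a float to its exact binary value, so the product shows the pre-rounding result to the context's default 28 significant digits, even though the IEEE double product happens to round back to exactly 0.125. A sketch reproducing the quoted digits:

```python
from decimal import Decimal

# Decimal(f) is the *exact* value of the binary double f.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(1.25) * Decimal(0.1))  # 0.1250000000000000069388939039
print(1.25 * 0.1 == 0.125)           # True: the double product rounds
                                     # back down to exactly 0.125
```

So whether `1.25 * 0.1` looks wrong depends on whether you inspect the rounded double or the exact product of the two doubles.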
  double m_D{}; [...]
  if (m_D == 0) somethingNeedsInstantiation();

can avoid having to carry around, set, and check some extra m_HasValueBeenSet booleans. Of course, it might not be something you want to overload beginner programmers with.
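The same trick reads naturally in Python terms (a sketch; `Cache` and `_compute` are made-up names): a float default-initialized to 0.0 is bit-exact, so `== 0.0` is a sound "not yet set" check, provided no legitimately computed value can be exactly zero.

```python
class Cache:
    def __init__(self):
        self._d = 0.0          # sentinel: exact, never produced by _compute

    def value(self):
        if self._d == 0.0:     # exact compare against the literal is safe
            self._d = self._compute()
        return self._d

    def _compute(self):
        """Hypothetical expensive computation; assumed never to return 0.0."""
        return 2.5

c = Cache()
print(c.value())  # 2.5
```

The caveat in the comment stands: this only works because the sentinel was stored, not computed, so no rounding ever touches it.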