What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=3808168 - April 2012 (3 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1982332 - Dec 2010 (14 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1746797 - Oct 2010 (2 comments)
Weekend project: What Every Programmer Should Know About FP Arithmetic - https://news.ycombinator.com/item?id=1257610 - April 2010 (9 comments)
What every computer scientist should know about floating-point arithmetic - https://news.ycombinator.com/item?id=687604 - July 2009 (2 comments)
The five key ideas from that book, enumerated by the author:
(1) the purpose of computing is insight, not numbers
(2) study families and relationships of methods, not individual algorithms
(3) roundoff error
(4) truncation error
(5) instability
(From "An Essay on Numerical Methods" p 3 of the mentioned text; emphasis authors)
So many of us spend so much time getting enamoured with technical solutions to problems that no one cares about.
I worked through what fp6 (e3m2) would look like, doing manual additions and multiplications, showing cases where the operations are non-associative, etc., and then I wanted something more rigorous to read.
For anyone interested in floating point numbers, I highly recommend working through fp6 as an activity! Felt like I truly came away with a much deeper understanding of floats. Anything less than fp6 felt too simple/constrained, and anything more than fp6 felt like too much to write out by hand. For fp6 you can enumerate all 64 possible values on a small sheet of paper.
For anyone not (yet) interested in floating point numbers, I’d still recommend giving it a shot.
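If you want to poke at this without pencil and paper, here's a minimal sketch in Python. The thread doesn't pin down the exact fp6 layout, so this assumes an IEEE-754-style e3m2 encoding (1 sign, 3 exponent, 2 mantissa bits, exponent bias 3, subnormals, top exponent reserved for inf/NaN); other e3m2 conventions exist.

    # Enumerate all 64 bit patterns of an assumed IEEE-style fp6 (e3m2).
    def fp6_value(bits):
        s = (bits >> 5) & 0b1        # sign bit
        e = (bits >> 2) & 0b111      # biased exponent (bias 3)
        m = bits & 0b11              # mantissa (fraction) bits
        sign = -1.0 if s else 1.0
        if e == 0b111:               # all-ones exponent is reserved
            return sign * float("inf") if m == 0 else float("nan")
        if e == 0:                   # subnormals: no implicit leading 1
            return sign * (m / 4) * 2.0 ** (1 - 3)
        return sign * (1 + m / 4) * 2.0 ** (e - 3)  # normal numbers

    for bits in range(64):
        print(f"{bits:06b}  {fp6_value(bits)}")

Under those assumptions the whole number line fits on an index card: two zeros, subnormals from 0.0625 to 0.1875, normals from 0.25 up to 14, two infinities, and six NaN bit patterns.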
The gag here is that perhaps that isn't the best dividing line for programming talent.
It's like slicing off the top 0.0001% of Mt. Everest and saying that you have evenly split the world.
> 0.1 + 0.1 + 0.1 == 0.3
False
I always tell my students that if they (might) have a float and are using the `==` operator, they're doing something wrong.

     0x1.999999999999ap-4 ("0.1")
    +0x1.999999999999ap-4 ("0.1")
    ---------------------
    =0x3.3333333333334p-4 ("0.2")
    +0x1.999999999999ap-4 ("0.1")
    ---------------------
    =0x4.cccccccccccd0p-4 ("0.30000000000000004")
   !=0x4.cccccccccccccp-4 ("0.3")

The only leaky abstraction here is our bias towards decimal. (Fun fact: "base 10" is meaningless, because every base calls itself base 10.)
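You can reproduce the digits above in Python, which exposes the exact bits of its IEEE binary64 floats via float.hex() (Python normalizes the leading digit, so the same values appear with p-3/p-2 exponents rather than the p-4 alignment used above):

    print((0.1).hex())              # 0x1.999999999999ap-4
    print((0.1 + 0.1).hex())        # 0x1.999999999999ap-3, exactly 0.2's bits
    print((0.1 + 0.1 + 0.1).hex())  # 0x1.3333333333334p-2
    print((0.3).hex())              # 0x1.3333333333333p-2
    print(0.1 + 0.1 + 0.1 == 0.3)   # False: the last bit differs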
Storage, retrieval, transmission, and serialization/deserialization systems should be able to transmit and round-trip floats without losing any bits at all.
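As a sketch of what bit-exact round-tripping looks like in Python (the hex float form preserves every bit, and so does repr(), which emits the shortest decimal that parses back to the same double):

    import struct

    x = 0.1 + 0.2                  # 0.30000000000000004
    s = x.hex()                    # '0x1.3333333333334p-2' as text
    y = float.fromhex(s)           # parse it back
    assert struct.pack("<d", x) == struct.pack("<d", y)  # same 64 bits
    assert float(repr(x)) == x     # the decimal form round-trips too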
[1]: https://www.cs.uaf.edu/2011/fall/cs301/lecture/11_09_weird_f... (division result matrix)
(a / b) * (c / d) * (e / f)
to (a * c * e) / (b * d * f)
as a performance optimization. The result of each division in the original was roughly one, due to how the variables were computed, but the latter was sometimes unstable because the products could produce denormalized numbers (see the sketch after this comment).

Anything done "approximately close" is much slower and prone to even more subtle bugs (often trading less-immediate bugs for much harder-to-find-and-fix bugs).
For example, I routinely write unit tests with inputs designed so the answers are perfectly representable; the tests then do bit-exact compares, to ensure algorithms work as designed.
I’d rather teach students there’s subtlety here with some tradeoffs.
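A minimal sketch of that underflow failure mode in Python (the magnitudes are made up, chosen only to push the grouped products through the subnormal range), plus an example of the bit-exact test style:

    a = c = e = 1e-160
    b = d = f = 1e-160

    print((a / b) * (c / d) * (e / f))  # 1.0: each ratio is computed safely
    print(a * c * e)                    # 0.0: 1e-480 underflows via subnormals
    # The grouped form then divides 0.0 by 0.0 -- nan in C/C++;
    # Python raises ZeroDivisionError instead of returning the nan.

    # Unit-test style with exactly representable inputs: == is exact here.
    assert 0.25 + 0.125 == 0.375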
You should be using == for floats when they're actually equal. 0.1 just isn't an actual float value.
> 1.25 * 0.1
0.1250000000000000069388939039

Yes.

1.25 = 2^0 + 2^-2, so it is representable.
0.125 = 2^-3, so it is representable.
1.25 / 10.0 = 0.125, so the result is representable. (10.0 = 2^3 + 2^1.)
The exact product 1.25 * 0.1 is not representable, because 0.1 is not representable, and those low-order bits show up in the multiplication.
A finitist computer scientist only accepts as real those numbers that can be expressed exactly in finite base-two floating point?
0.1 is just as non-representable in floating point as is pi, as is 100^100 in a 32-bit integer.
Dyadic rationals (i.e., those with terminating binary expansions, up to limits based on float size) are the representable values.
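One way to see the representability claims concretely in Python is exact rational arithmetic on the stored values:

    from fractions import Fraction

    # Fraction(x) shows the exact value a float actually stores.
    print(Fraction(1.25))   # 5/4: a dyadic rational, stored exactly
    print(Fraction(0.125))  # 1/8: stored exactly
    print(Fraction(0.1))    # 3602879701896397/36028797018963968, not 1/10

    # The exact product of the two stored values needs more bits than
    # a double has; this is the long decimal quoted above.
    print(Fraction(1.25) * Fraction(0.1))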
    double m_D{}; [...]
    if (m_D == 0) somethingNeedsInstantiation();

can avoid having to carry around, set, and check some extra m_HasValueBeenSet booleans.

Of course, it might not be something you want to overload beginner programmers with.
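A Python rendering of the same idea (the original snippet is C++, and the names here are hypothetical): exact 0.0 works as a "not set yet" sentinel precisely because zero is exactly representable and == against it is a bit-level test, provided 0.0 can never be a legitimate computed value.

    class LazyValue:
        def __init__(self):
            self._d = 0.0           # value-initialized, like double m_D{}

        def get(self):
            if self._d == 0.0:      # exact compare is safe for the sentinel
                self._d = self._compute()
            return self._d

        def _compute(self):
            return 42.0             # stand-in for the real computation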
But I regret not making an exception for the constant zero, because it's one of the cases where you probably should accept it. I.e. if (f != 0.0) {...}
The linter wouldn't know where f came from, so it should flag all floating point equality cases, and have some way that you can annotate it for "yeah this one is okay."
if (f == 0.0) means "is f exactly zero, so it's not initialized" 99 times for every one time it means "is f zero-ish because of a cancellation/degeneracy/whatever".
I just found that I have now annotated it for "yeah this one is ok" about 100 times, and caught zero cases where I meant to do a comparison to zero-or-very-nearly-so but accidentally wrote == 0.0.
So my conclusion is: I would have had less noise in my code with that exception in the linter, and the linter would have been equally useful.