The decimal number 0.1 has an infinitely repeating binary fraction.
Consider how 1/3 in decimal is 0.33333… If you truncate that to some finite prefix, you no longer have 1/3. Now suppose we know, in some context, that we'll only ever have a finite number of digits — say 5 digits after the decimal point. Then, if someone asks "what fraction is equivalent to 0.33333?", it is reasonable to reply "1/3". That might sound like lying, but remember that we agreed that, in this context of discussion, we have a finite number of digits. The value 1/3 outside this context has no way of being represented faithfully inside it, so we can only assume the person is asking about the nearest approximation of "1/3 as it means outside this context". If the person asking feels lied to, that's on them for not keeping the base assumptions straight.
So back to floating point, and the case of 0.1 represented as a 64-bit floating-point number. In base 2, the decimal number 0.1 looks like 0.0001100110011… (the 0011 repeated infinitely). But we don't have an infinite number of digits. The finite truncation of that is the closest we can get to the decimal number 0.1, and by the same rationale as earlier (where I said that equating 1/3 with 0.33333 is reasonable), your programming language will likely parse "0.1" as an f64 and print it back out as such. However, if you try something like (a=0.1; a+a+a) you'll likely be surprised at what you find.
I very much doubt it. My day job is writing symbolic-numeric code. 0.1+0.1+0.1 is not exactly equal to 0.3, but for rounding to bring it up to 0.31 (i.e. rounding causing an error of 1 cent), you would need to accumulate at least 0.005 of error, which will not happen unless you lose 13 of your 16 digits of precision, which will not happen unless you do something incredibly stupid.
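To put a rough number on that claim, here is a quick sketch in plain Python (no special libraries): even after summing 0.1 a million times with naive float addition, the accumulated error stays many orders of magnitude below the half-cent threshold.

```python
# Sum 0.1 a million times with plain float addition and measure
# how far the result drifts from the mathematically exact 100000.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

error = abs(total - 100_000)
print(error)            # tiny: on the order of 1e-6 on IEEE-754 doubles
assert error < 0.005    # nowhere near enough to move a cent-rounded result
```

The error is nonzero (so `total != 100000.0` exactly), but it would take a spectacularly bad algorithm to amplify it to the point where rounding to cents goes wrong.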
$ python3
Python 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 0.1
>>> a + a + a
0.30000000000000004
By way of explanation: the algorithm most languages use these days to render a floating-point number as text is to find the shortest string representation that will parse back to an identical bit pattern. This has the direct effect of causing a REPL to print what you typed in. (Well, within certain ranges of "reasonable" inputs.) But this doesn't mean that the language stores what you typed in — just an approximation of it.

Edit: Now it does it fine after inputting floats:
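That round-trip property is easy to see from standard Python: `repr` gives the shortest string that parses back to the same bits, while asking for more digits exposes the stored approximation.

```python
a = 0.1
s = repr(a)                  # shortest string that round-trips
print(s)                     # prints: 0.1
assert float(s) == a         # parses back to the identical bit pattern

# Forcing 17 significant digits reveals the underlying approximation:
print(format(a, ".17g"))     # prints: 0.10000000000000001
```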
puts [ expr { 1.0/7.0 } ]
eForth on top of SUBLEQ, a very small and dumb virtual machine:
1 f 7 f f/ f.
0.143 ok
Still, using rationals where possible (and mod operations otherwise) gives great 'precision', except for irrationals.

I interpreted "directly representable" as "uniquely representable": all decimals with fewer than 15 significant digits are uniquely represented in fp64, so it is always safe to round-trip between those decimals and f64 — though indeed this guarantee is lost once you perform any math.
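For the "use rationals where possible" point, Python's stdlib `fractions` module is one way to sketch it: arithmetic stays exact as long as you stay in rationals, and `Fraction` can also recover the exact rational a float actually stores.

```python
from fractions import Fraction

a = Fraction(1, 10)
print(a + a + a)             # prints: 3/10 — exact, unlike the float version
assert a + a + a == Fraction(3, 10)

# The float detour loses exactness:
assert 0.1 + 0.1 + 0.1 != 0.3

# The exact rational value hiding behind the float literal 0.1:
print(Fraction(0.1))         # prints: 3602879701896397/36028797018963968
```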
Stop at any finite number of bits, and you get an approximation. On most machines today, floats are approximated using a binary fraction with the numerator using the first 53 bits starting with the most significant bit and with the denominator as a power of two. In the case of 1/10, the binary fraction is 3602879701896397 / 2**55 which is close to but not exactly equal to the true value of 1/10.
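You can check that exact ratio from within Python itself: `float.as_integer_ratio` returns the numerator and the power-of-two denominator described above.

```python
# The exact numerator/denominator pair stored for the float 0.1:
num, den = (0.1).as_integer_ratio()
print(num, den)              # prints: 3602879701896397 36028797018963968
assert num == 3602879701896397
assert den == 2**55
assert num / den == 0.1      # the float 0.1 is exactly this ratio
```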
Many users are not aware of the approximation because of the way values are displayed. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display:

0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead:

>>> 1 / 10
0.1
That being said, double should be fine unless you're aggregating trillions of low-cost transactions. (API calls?)

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
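One gotcha worth noting with `decimal`: constructing a `Decimal` from the float `0.1` inherits the binary approximation, while constructing it from the string `'0.1'` gives exactly one tenth.

```python
from decimal import Decimal

# From a float: you inherit the exact value of the binary approximation.
print(Decimal(0.1))    # prints the long 0.1000000000000000055511... value
# From a string: you get exactly one tenth.
print(Decimal("0.1"))  # prints: 0.1

# Decimal arithmetic on string-constructed values behaves as expected:
assert Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3")
assert Decimal(0.1) != Decimal("0.1")
```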