This doesn’t solve the general problem that binary floating point cannot exactly represent decimal fractions.
The first example
new Double('0.3').sub(new Double('0.1')).toNumber()
does the equivalent of
float f = (float)(0.3 - 0.1);
which happens to produce a ‘float32’ that prints as “0.2”; the library does the same thing one level up, with ‘float64’ in place of ‘float32’ and ‘float128’ in place of ‘float64’.
I expect that adding a sufficient number of zeroes, as in
double f = (double)((float128)0.00003 - (float128)0.00001);
(the number of zeroes shown is deliberately too low, to prevent wrapping on phone screens) will surface the problem again.