units = 2
price = 3.17
2 * 3.17 = ERROR
It blows my mind that so much effort has been put into things like functions taking functions as arguments, and the characteristics of classes - yet a computer can't handle basic calculator math out of the box. PHP, Ruby, Swift, OCaml... what gives?
What is the complexity behind this?
Computer numbers are not math numbers, and it's better to always remember this fact.
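A quick way to see this for yourself, sketched in Ruby (the same behavior appears in any language whose default is an IEEE 754 binary double):

```ruby
# Neither 0.1 nor 0.2 has an exact binary representation, so each
# literal becomes the nearest double, and the sum of those doubles
# is not the nearest double to 0.3.
sum = 0.1 + 0.2
puts sum          # => 0.30000000000000004
puts sum == 0.3   # => false

# The error is tiny but real:
puts (sum - 0.3).abs > 0   # => true
```

This isn't a bug in any particular language; it's the arithmetic the hardware provides.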
Here are links to get you started:
- https://stackoverflow.com/questions/1089018/why-cant-decimal...
- https://stackoverflow.com/questions/5098558/float-vs-double-...
> It blows my mind that so much effort has been put into things like functions taking functions as arguments, and the characteristics of classes - yet a computer can't handle basic calculator math out of the box.
Much effort was put into it, but no amount of work allows a computer to violate the laws of math. The fact that computers use binary and have limited memory isn't something that can be hand-waved away.
You're also talking about two totally different types of problems. Classes make programs easier to organize (at least theoretically) -- it's easy to change the syntax of a language, but hard to know exactly how it should look to be the easiest to use.
And you know what, why haven't they invented human-level AI yet? Are they just that lazy?
Don't even get me started on chemists... too lazy and stupid to learn a simple method to turn iron into gold.
In any higher-level language the default numeric type should be an exact decimal (or rational), not a binary int or float. That doesn't mean ints and binary floats shouldn't exist, but they shouldn't be the default.
Meaning when a programmer writes

    number x = 2/3;

x should equal 2/3, not an approximation of 2/3.

Because historically CPUs worked efficiently and natively on integers, and everything else was done in software. (Modern CPUs also work efficiently and natively on floats, but with a different representation and different operations.)
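For what it's worth, Ruby already ships a rational type that gives the exact 2/3 asked for above -- it just isn't the default for `/` on integers, which truncates. A minimal sketch:

```ruby
# Integer division truncates by default:
puts 2 / 3               # => 0

# But Rational keeps the exact value:
x = Rational(2, 3)       # the literal 2/3r means the same thing
puts x                   # => 2/3
puts x * 3 == 2          # => true: exact, no rounding error
puts x + Rational(1, 3)  # => 1/1, i.e. exactly one
```

The cost is that every operation normalizes with a gcd and allocates, which is part of why it isn't the default.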
Note that not all languages (even on your list) have the problem you describe, though; plenty do automatic coercion with numeric operators so that, e.g., float times int works (doing a float operation). Ruby, for instance, does this.
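Concretely, Ruby's numeric classes coerce mixed operands automatically (via the `coerce` protocol), so the original poster's example just works as a Float operation:

```ruby
units = 2               # Integer
price = 3.17            # Float
total = units * price   # Integer#* coerces itself to Float first

puts total.class        # => Float
# Multiplying a double by 2 only bumps the exponent, so this
# particular product happens to round-trip exactly:
puts total == 6.34      # => true
```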
And many also do the calculation you present exactly (not merely without surprises), because they treat decimal literals as exact numbers (using either a decimal or a rational type rather than binary floating point). Perl 6 and Scheme, for instance. I think Haskell can too, if the right priority for numeric literal types is set.
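Ruby doesn't treat literals that way, but its standard-library BigDecimal shows what exact decimal arithmetic looks like when you parse the literal as a decimal string:

```ruby
require "bigdecimal"

price = BigDecimal("3.17")             # parsed as an exact decimal
puts price * 2 == BigDecimal("6.34")   # => true, no rounding

# The classic Float failure also goes away in decimal:
puts BigDecimal("0.1") + BigDecimal("0.2") == BigDecimal("0.3")  # => true
puts 0.1 + 0.2 == 0.3                                            # => false
```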
> What is the complexity behind this?
It's not really complex; it's a matter of languages prioritizing performance over accuracy and generality, with a dash of programming history shaping expectations.
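To get a rough feel for that performance side, here is a small sketch using Ruby's Benchmark module (absolute numbers vary by machine; hardware Float adds are typically far cheaper than software Rational adds, which normalize and allocate on every step):

```ruby
require "benchmark"

N = 200_000

float_time = Benchmark.realtime do
  sum = 0.0
  N.times { sum += 0.1 }              # one hardware FPU add per step
end

rational_time = Benchmark.realtime do
  sum = Rational(0)
  N.times { sum += Rational(1, 10) }  # software: gcd, normalize, allocate
end

puts "Float:    #{float_time}s"
puts "Rational: #{rational_time}s"
```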