> Most capable programmers just don't think to use a == b for floating points, because it isn't going to work a lot of the time, but for someone who isn't aware of that minefield they can wander in and get some very stupid bugs.
Well, there are two separate issues here:
1) The default for programming should be decimal floating point instead of binary floating point. Suddenly, your strange "a == b" for floating point works just fine because 0.1 is exactly representable. Thus, addition, subtraction, and multiplication behave just like integer arithmetic (see the Python sketch after this list). Computers are now fast enough that any language in which you don't explicitly specify a floating-point type should default to decimal. This covers 99% of the silly cases for beginners, and benefits people doing actual monetary calculations.
2) Programmers need to be able to specify when they want binary floating point for speed along with all the weirdness.
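A minimal sketch of the difference in Python, using the standard decimal module as a stand-in for a language that defaults to decimal (the module and literals here are just for illustration):

    from decimal import Decimal

    # Binary floating point: 0.1 is not exactly representable in base 2,
    # so the "obvious" comparison fails.
    print(0.1 + 0.2 == 0.3)                                   # False

    # Decimal floating point: 0.1, 0.2 and 0.3 are all exact, so the same
    # comparison behaves the way a beginner expects.
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True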
Decimal floating point is really no better than binary floating point, and suggesting that people should use it instead suggests that you don't actually understand what floating point is really about.
If you're dealing with money, for example, you'll want to use fixed-point, not any kind of floating point. Floating point expands its range by effectively representing numbers uniformly across the log space. If you need to deal with calculations that contain both 1e-12 and 1e12, you really want that logarithmic property. But money has a fixed scale: you don't need to represent any numbers between 0 and 0.01¢ (fractional cents are useful, but I think 1/100 is the smallest you'd need). A 64-bit binary floating-point number at that scale can only precisely store every number up to a tad over $900 billion, and even decimal floating point can only scooch that up to a tad under $1 trillion. A 64-bit unsigned integer in fixed point could handle $1.8 quadrillion precisely.
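A rough back-of-the-envelope check of those figures in Python, assuming a fixed-point unit of 1/100 of a cent (1/10,000 of a dollar), which is the scale used above:

    UNITS_PER_DOLLAR = 10_000          # one unit = 1/100 of a cent

    # binary64 represents every integer exactly only up to 2**53
    print(2**53 / UNITS_PER_DOLLAR)          # ~9.0e11, a tad over $900 billion

    # decimal64 carries 16 significant decimal digits
    print((10**16 - 1) / UNITS_PER_DOLLAR)   # just under 1e12, ~$1 trillion

    # a 64-bit unsigned integer at the same fixed-point scale
    print(2**64 / UNITS_PER_DOLLAR)          # ~1.8e15, ~$1.8 quadrillion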
Decimal floating point doesn't fix rounding errors. It fixes the superficial rounding error that comes from 0.1 terminating in base 10 but not in base 2, but that's rarely the problem that actually bites. It doesn't fix the fact that subtraction and division have the potential to destroy significant digits, and it doesn't fix the inability to accurately mix large and small numbers--both of which cause many more problems in practice, particularly when you try to do things like invert a matrix.
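To illustrate in Python, here is the decimal module with its context precision pinned to 16 digits to roughly mimic a decimal64 format (the stock context uses 28 digits; the precision choice is just for the demo):

    from decimal import Decimal, getcontext

    getcontext().prec = 16             # ~decimal64's 16 significant digits

    big, small = Decimal(10) ** 16, Decimal(1)

    # Mixing large and small: the small addend is absorbed by rounding,
    # so the "recovered" value is zero instead of 1.
    print((big + small) - big)         # 0E+1, i.e. zero

    # Cancellation: subtracting nearly equal numbers leaves almost no
    # significant digits, regardless of the radix.
    a = Decimal("1.000000000000001")
    b = Decimal("1.000000000000000")
    print(a - b)                       # 1E-15 -- one significant digit left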
I find that the typical developer’s typical work doesn’t require irrational numbers at all. Of course, those specifically doing numeric computing would be best served with traditional floating point, but I truly think most don’t need it. Maybe my experience is biased, then.
The typical developer's typical work mostly requires integers, and when they need reals, 64-bit floats are nearly always sufficient. I would even say that arbitrary-size integers are needed more often than true rationals.
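For what it's worth, a quick Python illustration of where 64-bit floats stop covering the integers and arbitrary-size integers take over (2**53 + 1 is the first integer a binary64 float cannot hold):

    n = 2**53 + 1

    print(float(n) == float(n - 1))    # True: both round to 2**53 as floats
    print(n == n - 1)                  # False: Python ints are arbitrary size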