Floating point is all about relative precision. f64 gives you 53 bits of mantissa, so the relative precision is about 1.1e-16: almost 16 significant decimal digits. The 17th digit, and sometimes the 16th, gets clobbered.
Integers have absolute precision, at the expense of either limited range (say, i64) or arbitrarily growing size (bigints).
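A quick Rust sketch of the tradeoff (using f64/i64 since those are the types mentioned above): 2^53 + 1 fits in an i64 exactly, but as an f64 it rounds back down to 2^53:

```rust
fn main() {
    let n: i64 = (1i64 << 53) + 1; // 9007199254740993, exact as an integer
    let x = n as f64;              // rounds to the nearest representable f64
    println!("{n}");               // 9007199254740993
    println!("{}", x as i64);      // 9007199254740992 — the +1 was clobbered
    assert_eq!(x, (1i64 << 53) as f64);
}
```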
> What's really surprising is that this number is only ~16 million in single precision floats.
What does this mean? Surely even single-precision floating point can represent a number far closer to the original than 16 million? Edit: it appears the closest single-precision value is 9007199254740992.0 (2^53 itself, which is exactly representable even in f32).
Edit2: Oh I see: you mean that the first integer that cannot be represented exactly by a single-precision float is ~16 million (2^24 + 1 = 16,777,217).
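To make the f32 case concrete (again a small Rust sketch): with 24 bits of mantissa, 2^24 = 16,777,216 is where consecutive integers stop being representable, so 16,777,217 rounds back down:

```rust
fn main() {
    let a = 16_777_216i32; // 2^24
    let b = a + 1;         // 16_777_217, the first integer f32 can't hold exactly
    println!("{}", b as f32);             // prints 16777216
    assert_eq!(a as f32, b as f32);       // both collapse to the same f32
    assert_ne!((a - 1) as f32, a as f32); // below 2^24, integers are still distinct
}
```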