Uh yeah, I was referring to the 6.499999... In the calculator he had 1.1 - 1, so it's still a floating-point rounding error... and again, that's why I use DECIMAL instead of single/float/double in code.
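To make that concrete, here is a minimal C sketch (nothing platform-specific, just ordinary IEEE doubles) of what 1.1 - 1 actually produces:

```c
#include <stdio.h>

int main(void)
{
    /* Neither 1.1 nor 0.1 is exactly representable in binary,
       so the subtraction exposes the representation error. */
    double d = 1.1 - 1.0;

    printf("%.17g\n", d);      /* prints 0.10000000000000009 */
    printf("%d\n", d == 0.1);  /* prints 0: not equal to 0.1 */
    return 0;
}
```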
For every number system, you'll find numbers you cannot represent exactly; that goes for binary just as it does for decimal (1/3, for example, is exact in neither, while 1/10 is exact in decimal but not in binary). The problem with decimal is that the average round-off error is larger than for binary. Or, to be more precise, the binary system minimizes the round-off error among all (symmetric) number systems. So, in that sense, it is the better system.
You would, in typical implementations, convert binary to decimal, work out the precision limit of the binary implementation in decimal digits, and then round to the last valid decimal digit. For double precision numbers you get a bit under 16 valid decimal digits (53 · log10(2) ≈ 15.95), so the last fully valid decimal digit is the 15th; everything beyond it needs to be rounded away. As long as you do not make a round-trip from decimal back to binary, all is fine and you do not accumulate errors.
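A quick C sketch of that output rounding, and of why the round-trip back to binary is the dangerous part:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double d = 1.1 - 1.0;
    char buf[32];

    /* Round to 15 significant decimal digits on output:
       the binary round-off disappears from the printed text. */
    sprintf(buf, "%.15g", d);
    printf("%s\n", buf);        /* prints 0.1 */

    /* But converting that text back to binary yields a
       *different* double than d -- the round-trip is where
       errors start to creep in. */
    double back = strtod(buf, NULL);
    printf("%d\n", back == d);  /* prints 0 */
    return 0;
}
```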
What we have here, however, is a platform that pretends to implement double precision, hence the rounding algorithm in the code (provided by the compiler, in this case) assumes (correctly, so far) that there are 15 valid decimal digits, inspects this digit, and rounds from this digit on. Unfortunately, the CPU only computes with 44 valid mantissa bits instead of the 53 an IEEE double (a 64-bit format) requires, so it has fewer valid mantissa bits and thus fewer valid decimal digits: 44 bits correspond to only about 13 of them (44 · log10(2) ≈ 13.2), so the 15th digit the rounding code inspects is already noise, and the rounding is off.
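If you want to see what an FPU actually delivers, a classic machine-epsilon probe will tell you; a sketch (the probe is generic, the 44-bit figure is the claim from above):

```c
#include <stdio.h>

/* Probe the effective mantissa precision of 'double' arithmetic:
   halve eps until 1.0 + eps, stored back into a double, equals 1.0.
   True IEEE double gives 53 bits; an FPU carrying only 44 valid
   mantissa bits would stop at roughly 44. */
int main(void)
{
    volatile double one = 1.0;
    volatile double sum;
    double eps = 1.0;
    int bits = 0;

    for (;;) {
        sum = one + eps;  /* volatile store forces rounding to double,
                             even on FPUs with wider registers */
        if (sum == one)
            break;
        eps /= 2.0;
        bits++;
    }

    printf("effective mantissa bits: %d\n", bits);  /* 53 for IEEE double */
    return 0;
}
```

On the platform described above, this should stop at around 44 rather than 53.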
Hence, not even double precision here, let alone the full 80-bit extended precision. Too bad. If you want reliable numerics (in the sense of: conforming to what the system expects) you would be left with single precision, whose 24-bit mantissa fits comfortably within the 44 valid bits. Unfortunately, I know exactly zero programs that use it, and not even the SAS/C compiler supports IEEE single precision.