@Thomas: I believe there's a place for binary as well. I wasn't saying there isn't. It depends upon your purpose.
Neither binary nor decimal floating point can represent 1/3 or 1/7 precisely.
Fractions aren't the issue.
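A quick sketch with Python's stdlib `decimal` and `fractions` modules makes this concrete (illustrative only, not anyone's production code): 1/3 has no finite representation in base ten either, so decimal floating point has to round it, and only a rational type holds it exactly.

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 28
third = Decimal(1) / Decimal(3)  # rounds 1/3 to 28 decimal digits
print(third)                     # 0.3333333333333333333333333333

# Multiplying back exposes the rounding error: not exactly 1.
print(third * 3)                 # 0.9999999999999999999999999999

# A rational type represents 1/3 exactly, so round-tripping works.
print(Fraction(1, 3) * 3)        # 1
```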
Is anyone really using decimal in hardware these days?
It's been in IBM POWER processors since the POWER6. I imagine it's in the Amiga's new processors, but I need to confirm.
(Correcting myself.... In my zeal, I accidentally overstated this case. Its current hardware implementations include POWER, SparcX, z10, and some kind of processor from SilMinds.)
Then this is usually done in software.
That's because it wasn't available in the hardware. It's being done in software because it's needed.
Binary is more precise than decimal.
Not if you're trying to represent decimal numbers. For example, there's no exact representation of the decimal value 0.1 in binary floating point. Within its available precision, decimal floating point can represent any decimal number exactly; the same can't be said of binary floating point.
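To illustrate (a small Python sketch using the stdlib `decimal` module, not tied to any particular hardware): 0.1 stored as a binary float is really the nearest base-2 fraction, and the error shows up in plain arithmetic, while the decimal type stays exact.

```python
from decimal import Decimal

# Binary float: 0.1 is silently rounded to the nearest base-2 fraction,
# so adding it ten times does not give exactly 1.0.
print(sum([0.1] * 10) == 1.0)    # False
print(f"{0.1:.20f}")             # shows the binary approximation of 0.1

# Decimal floating point: 0.1 is represented exactly, so the sum is exact.
print(sum([Decimal("0.1")] * 10) == Decimal("1.0"))  # True
```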