@Thomas: I believe there's a place for binary as well. I wasn't saying there isn't. It depends upon your purpose.
Exactly. It's a matter of the requirements, which is pretty much what Olsen was asking for. I understand that the financial industry (if you call that an industry) has requirements for decimal, but not because of its precision (which is, as I said, lower than binary's), but for legacy reasons: the whole system evolved around the decimal system, and it is hard to switch without introducing additional rounding steps when converting from one system to the other. These rounding steps are not a problem of the binary system; they are a problem of backwards compatibility with a legacy. Concerning requirements: if the requirement really is financial applications, I highly doubt there is any serious need to run them on Amiga hardware. There certainly is for PCs.
It's been in IBM POWER processors since the POWER6. I imagine it's in the Amiga's new processors, but I need to confirm.
I cannot tell you about the POWER architecture, but PowerPC (which is related to, but not identical with, POWER) does not have it. As far as I can tell, there is neither an integer BCD instruction (the 68K does have some elementary BCD operations), nor specific decimal floating point instructions, nor a decimal datatype (the 68881/82 have them). So it's all done in software. Which is, actually, the standard way these days, probably with the exception of some specialized hardware. Not saying that there is no need for it, but I don't quite understand why Amiga land needs a hardware-based solution for it. Let alone a software-based solution.
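To make "done in software" concrete, here is a minimal sketch of packed-BCD addition in Python, loosely modeled on what the 68K's ABCD instruction does one digit pair at a time (the function name and structure are mine, purely for illustration):

```python
def bcd_add(a: int, b: int) -> int:
    """Add two packed-BCD integers nibble by nibble.

    Each 4-bit nibble holds one decimal digit (0-9); when a digit
    sum exceeds 9 we subtract 10 and carry into the next nibble,
    roughly what a BCD add instruction does in hardware.
    """
    result, shift, carry = 0, 0, 0
    while a or b or carry:
        digit = (a & 0xF) + (b & 0xF) + carry
        carry = 1 if digit > 9 else 0
        if carry:
            digit -= 10
        result |= digit << shift
        shift += 4
        a >>= 4
        b >>= 4
    return result

print(hex(bcd_add(0x19, 0x23)))  # 0x42, i.e. 19 + 23 = 42 in BCD
```

Without a BCD instruction, every decimal add pays for this digit loop (or a table/correction trick) in software, which is exactly the overhead dedicated hardware would hide.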
That's because it wasn't available in the hardware. It's being done in software because it's needed.
In Amiga land? By whom? What's the application?
Not if you're trying to represent decimal numbers. For example, there's no exact representation for decimal 0.1 in binary. Within its precision, decimal floating point can represent any decimal number exactly; the same can't be said of binary floating point.
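A quick sketch in Python (using the stdlib `decimal` module) that shows the point:

```python
from decimal import Decimal

# Binary double: 0.1 is not exactly representable,
# so three of them don't sum to exactly 0.3.
print(0.1 + 0.1 + 0.1 == 0.3)   # False
print(f"{0.1:.20f}")            # 0.10000000000000000555

# Decimal floating point represents 0.1 exactly.
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True
```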
And your point is? Sorry, but the base of ten is a rather arbitrary choice. Neither can decimal represent the fraction 0.1 of the ternary system precisely, so this is hardly a criterion. Otherwise, we should probably use a ternary system, or perhaps a system whose base has a larger set of prime divisors than ten. What about base 12 or 60 (both used historically)? If representation of fractions is your goal, these systems are much better than the decimal system... The problem with decimal is that rounding errors accumulate faster than in binary - that's precisely why I posted the link above. It's worth reading. For scientific applications, binary is really better. For financial applications, decimal is only better due to legacy, not because "math is easier". It is not.
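To illustrate the ternary point: 0.1 in base 3 is 1/3, which decimal cannot represent exactly either. A quick sketch with Python's `decimal` module:

```python
from decimal import Decimal, getcontext

# Work with 28 significant decimal digits (the module's default).
getcontext().prec = 28

third = Decimal(1) / Decimal(3)   # ternary 0.1, rounded to 28 decimal digits
print(third)                      # 0.3333333333333333333333333333
print(third * 3)                  # 0.9999999999999999999999999999, not 1
print(third * 3 == Decimal(1))    # False
```

So "can represent 0.1 exactly" is a property relative to the chosen base, not an inherent advantage of decimal.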