The reason decimal is used for banking is that it can represent decimal fractions (as used by humans) precisely, but that's a rather arbitrary choice. As soon as you need to calculate interest rates and the like, both formats generate loss, necessarily.
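For instance (a minimal sketch using Python's decimal module; any decimal type behaves the same way): 1/3 has no finite decimal expansion, so decimal arithmetic must round it to the working precision, and the loss shows up when you multiply back.

```python
from decimal import Decimal

# 1/3 has no finite decimal expansion, so the division is rounded
# to the context precision (28 significant digits by default).
third = Decimal(1) / Decimal(3)
print(third)      # a long run of 3s, not the exact value 1/3
print(third * 3)  # a long run of 9s, not 1
```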
Russia had the first decimal currency in 1704, and the UK only switched in 1971. It's not an arbitrary choice.
Obviously, if you are calculating a percentage and the result is finer than 0.01, you cannot represent that as money, as there are no fractions of a penny. That isn't lossy, since you aren't losing anything that could be represented by real money.
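A hedged sketch of that rounding step (the figures and names here are my own illustration, not from the thread): compute 1% of a balance exactly in decimal, then quantize back to whole pennies.

```python
from decimal import Decimal, ROUND_HALF_EVEN

balance = Decimal("12.34")   # pounds, held exactly
rate = Decimal("0.01")       # 1% interest
interest = balance * rate    # 0.1234 -- finer than a penny
# Round to the nearest penny; banker's rounding is common in finance.
payable = interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(payable)  # 0.12
```

Nothing representable as real money is lost here: the sub-penny part never existed as currency in the first place.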
Floating point, on the other hand, cannot represent decimal currency at all. Most two-digit decimal fractions cannot be stored exactly in a binary floating point number. All you can do is store the closest representable value and round it when displaying. However, this causes problems even for something as mundane as adding two numbers together.
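A minimal illustration in Python (the same behaviour appears with any IEEE-754 binary double):

```python
# 0.1 has no exact binary representation; what is stored is the
# nearest double, visible when printed to enough digits.
print(f"{0.1:.20f}")       # 0.10000000000000000555...
# The rounding error surfaces in mundane arithmetic:
print(0.1 + 0.2 == 0.3)    # False
print(0.1 + 0.2)           # 0.30000000000000004
```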
You are right that you can't represent 1/3 in decimal, but that isn't a problem at all for accountants. Not being able to add 0.10 and 0.10 and get 0.20 is. (Caveat: I don't know if that exact pair is a real example, but there are situations where two trivial values cannot be added correctly; I was once given the task of fixing a system using floating point. I replaced it with 64-bit integer maths, on a Z80 without a 64-bit maths package.)
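A sketch of that integer approach (my own minimal version, not the original Z80 code): keep every amount as a whole number of pennies in an integer, so addition is exact, and only convert to pounds-and-pence for display.

```python
def to_pennies(amount: str) -> int:
    """Parse a decimal string like '0.10' into whole pennies."""
    pounds, _, pence = amount.partition(".")
    return int(pounds) * 100 + int(pence.ljust(2, "0")[:2])

def format_pennies(p: int) -> str:
    """Render a penny count back as pounds.pence for display."""
    return f"{p // 100}.{p % 100:02d}"

a = to_pennies("0.10")
b = to_pennies("0.10")
print(format_pennies(a + b))   # 0.20 -- exact, no rounding involved
```

This sketch ignores negative amounts and currencies with other subdivisions, but the core idea is just that integer addition never rounds.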
I hope you're not trying to justify using floating point for financial calculations as that would be worrying.