This has happened on every platform I've coded on (from the C64 to x64), and it's why in Windows I now use "decimal" instead of floats or doubles...
And when you say "in Windows", you mean .NET, right?
How the various float types are defined varies with the language used, and how the various languages used on the Amiga will deal with this when running on the AC68080 is anyone's guess, I suppose. Is there even a way for software to know whether it is running on a "full" AC68080 FPU or just the limited, stripped-down variant in the current Vampire boards? Is it clear exactly what kind of accuracy the Apollo Core FPU for V2 Vampires is operating with? I have only seen guesses by various programmers (alb42); the information on the "official sites" does not offer much insight. Is the current FPU implementation a "done deal", or will it be improved upon further (aside from outright bug fixes)?
In any case, we don't have the luxury of a thriving and active developer community on the Amiga, let alone commercial support from Microsoft and the like. What we have is a library of legacy software for which the source code is mostly lost or not available for legal/copyright reasons. And even when sources are available, porting them to current toolchains can quickly become a daunting task in itself.