No, the Mandelbrot computations only use addition, subtraction and multiplication. Thus, MuRedox makes no difference here. The only traps that may occur are due to non-normalized results where the FPU requires some help. IEEE uses the same instructions, but adds software overhead to load the numbers from the CPU registers into the FPU registers and back. While that typically makes no difference (the called function is long, the register ping-pong is short - intuition!), it does make a difference here: the called function is short (a single add, sub or mul) and the overhead is large compared to the actual work. For your average everyday purpose it will hardly make any difference, indeed. But for that purpose, you don't need an FPU in the first place either.
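For reference, here is a minimal sketch of the classic Mandelbrot inner loop (mandel_iter is an illustrative name, not taken from any particular program); it shows that only add, sub and mul appear in the hot path, so no 6888x transcendental instructions are involved:

/* Sketch of the Mandelbrot escape-time iteration: z = z*z + c.
   Only add, sub and mul are used, hence no MuRedox emulation is needed. */
static int mandel_iter(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    int i;

    for (i = 0; i < max_iter; i++) {
        double zr2 = zr * zr;        /* mul */
        double zi2 = zi * zi;        /* mul */
        if (zr2 + zi2 > 4.0)         /* add + compare: escape test */
            break;
        zi = 2.0 * zr * zi + ci;     /* mul, mul, add */
        zr = zr2 - zi2 + cr;         /* sub, add */
    }
    return i;
}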
I thought that Mandelbrot used 6888x logarithm instructions, but I see that the basic algorithm uses mostly normal fp math.
Do you mean it uses IEEE for compiling, or IEEE for the running program? The latter is switchable, but the former is pretty critical. To parse floating point constants in C code correctly, you need a *higher* precision than the one used for computing in the program (otherwise you get an additional loss in the compilation phase, which you want to avoid). For optimizing, the C compiler should run exactly the same computations as the code would have performed, so that's not good news either. GCC has its own math library for emulating various FPUs and math models, and yes - for good compilation and optimization, this is really required.
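To make the constant-folding point concrete, here is a small hedged example (the volatile trick is just one way to force the run-time path): if the compiler folds the expression at a different precision than the target FPU uses at run time, the constant baked into the binary no longer matches what the program would compute itself.

#include <stdio.h>

int main(void)
{
    /* Usually folded to a constant by the compiler at build time: */
    double folded = 0.1 + 0.2;

    /* volatile operands force the same addition at run time on the FPU: */
    volatile double a = 0.1, b = 0.2;
    double computed = a + b;

    /* If the compiler's internal fp emulation matches the target, the two
       hex images are identical; folding at a different precision breaks
       this equality. */
    printf("folded   %a\n", folded);
    printf("computed %a\n", computed);
    return 0;
}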
This may be true for compiling direct FPU code with the IEEE-library-using version of vbcc, but not so when compiling IEEE versions of programs, where the lower precision becomes the standard precision. Yes, it would be good to make direct-FPU-using compiles of vbcc available as well. I will suggest this when the new version is finalized. I could always make unofficial compiles of the new version of vbcc publicly available as well.
Most GCC-compiled code opens the IEEE double precision math libs (mixing IEEE lib and direct FPU code), which automatically changes the FPCR to double precision rounding. GCC also likes to use the FD and FS instructions, which are good for IEEE compliance but give less precision than the 68k FPU supports. Vbcc uses regular F instructions even for the 68040 and 68060 FPU libraries, so the code will execute on the 68881-68060. This gives extra intermediate precision and backward compatibility at the cost of IEEE compliance, but the extra precision may be lost at function calls where double precision fp values are passed to functions (except where inlines can maintain extended precision). The FPCR rounding precision can be changed to double precision using C99 functions for better IEEE compliance. Vbcc 68k may eventually get fp register passing libraries with full extended precision, as the overhead of passing extended precision values on the stack is expensive. My point is that there isn't any current 68k compiler that I am aware of which is capable of maintaining full extended precision. You will get considerably less with direct FPU compiled code or with the IEEE libraries.
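As a hedged illustration of the function-call issue (scale_external is a made-up helper name, assumed not to be inlined), this shows how an intermediate kept in extended precision loses its extra bits once it is passed as a plain double to another function:

#include <stdio.h>

/* Assume this helper is not inlined (e.g. it lives in another module):
   the argument arrives as a 64-bit double, so any extra intermediate
   precision has already been rounded away. */
double scale_external(double x)
{
    return x * 3.0;
}

int main(void)
{
    long double acc = 1.0L / 3.0L;    /* may be held in extended precision */
    double as_double = (double)acc;   /* explicit rounding to double       */

    /* On targets where long double is wider than double (e.g. 68881-68060
       extended precision), the low bits of the two results can differ. */
    printf("extended  %.21Lg\n", acc * 3.0L);
    printf("via call  %.21g\n", scale_external(as_double));
    return 0;
}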
True, except that handling the files or exchanging modules within the kickstart is harder, i.e. the overall user experience is not quite as good for updates. Otherwise, as far as I remember the Natami here, it booted so fast that it made no difference whether it went through another reset or not, so I don't need an updated ROM for this machine in the first place. Protecting modules can be done easily by MuProtectModules, no need for a ROM actually.
There is more effort for developers in compiling kickstarts, but the result is easy to distribute (like any archive) and there should be fewer install and corruption issues. The new FPGA hardware will initially not have MMUs, but it can still have MAPROM support with write protection.