I don't think anyone involved with the Phoenix/Apollo project has considered a software library for 100% ColdFire compatibility up to ISA_C (excluding MAC, EMAC and FPU) but it could be done if there was a specific purpose and enough demand.
I wonder what that would be good for. Are there any serious software applications on CF that are worth looking at? CF is rather an embedded processor, and as such, rarely interacts with the user in a desktop system. Probably in your washing machine. (-:
There are libraries of ColdFire code and compilers which are more modern than what the 68k has. There is a ColdFire embedded market which is larger than the total 68k market (although probably shrinking) and needs a replacement which could be Phoenix if it was compatible enough.
Question would be whether there is anything valuable in the libraries that can be used in an Amiga application - and whether it is worth the incompatibility.
For compiled code to take advantage, the compiler support and backend code would need to be updated. Adding FPU registers can be done in an orthogonal way (as I proposed anyway) which would make this job much easier.
And break the exec scheduler, and programs - mostly debuggers - that depend on the stack layout of the exec scheduler. If you want to extend the FPU, it must be done in a backwards (or rather forwards) compatible way, i.e. programs that use only 8 registers continue to use the same register layout in exec.
There could be issues with precision in the vclib code (based on the 68060FPSP) if Gunnar reduces the FPU to 64 bits. Extended precision makes it possible to avoid tricks which are needed to maintain maximum precision with 64 bits only. Personally, I would prefer to stay with extended precision for compatibility, but double precision is considerably faster in an FPGA.
Me, too... If you want precise low-level arithmetic, you sometimes need access to some of the lower-order bits; e.g. for 64-bit math, one needs three additional bits to avoid round-off errors. The Mot-CPUs of course have these bits internally when they manipulate the registers, and round them away when storing the data - but if you need to work on the bits yourself (rarely!), you would want to have access to them.
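One such trick, of the kind that becomes necessary when only 64-bit doubles are available, is Dekker's Fast2Sum: it recovers the exact round-off error of an addition using only ordinary double operations. A minimal sketch, assuming IEEE-754 doubles and |a| >= |b|:

```c
/* Fast2Sum (Dekker): for |a| >= |b|, computes sum = round(a + b) and
 * err = the exact round-off error, so that a + b == sum + err exactly.
 * With 80-bit extended precision, the extra bits are often simply
 * there; with 64-bit doubles you have to reconstruct them like this. */
static void fast2sum(double a, double b, double *sum, double *err)
{
    double t;
    *sum = a + b;   /* rounded sum */
    t = *sum - a;   /* the part of b that made it into sum */
    *err = b - t;   /* the part that was rounded away */
}
```

The pair (sum, err) can then be carried through a longer computation (compensated summation) to keep the full precision of the inputs.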
While I see limited use for PC-relative writes, I think the encoding space used cannot effectively be used for other purposes, and the protection provided by not allowing PC-relative writes is a joke.
That is not quite the point. Not having PC-relative writes is a logical and orthogonal choice with the Harvard architecture Motorola follows, i.e. separate program and data space. IOWs, d(PC) is a program (code) access and as such *should not* create write cycles. It is a restriction that comes from a higher design principle, and such a principle, as a leading tradition, should hopefully be followed.
I doubt compilers would bother with creating a new model like small data or small code but it should be possible to make tiny programs a little smaller and more efficient with PC relative writes opened.
Question is: Is it worth the potential incompatibility, i.e. forking the 68K ISA for such little returns? An ISA fork always creates problems in software support and OS support, users installing the wrong code on the wrong machine, not knowing what they have... I would do that only for features that are "worth the investment", and not for minor stuff like that.
I would be willing to go along with whatever makes the 68k most acceptable.
Acceptable for whom and for what? I mean, how many users of the ISA will there be? Essentially, only a handful of Amiga folks.
The ColdFire MVS and MVZ instructions can be used very effectively by compilers and peephole-optimizing assemblers (an important consideration for an ISA). The support is already available (ready to turn on), as most compilers share the same 68k and CF backend. I'm confident that Frank Wille could have support working in vasm, with a partial benefit, in a matter of hours. Turning on CF code generation in the backend would be a little more work and requires more testing. Sure, it's not going to make a major difference, but few integer ISA changes will (the exceptions are those that improve branch performance). The applications are obvious enough. Look at the code your layers.library produces and see how many places the MVS and MVZ instructions could be used. The intuition.library is another example where these instructions would be very useful. Of course, the gain is probably only going to be a few percent in performance and code density, but it comes easily, as compilers can use it. Some code would barely use them at all, though.
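For the record, the C patterns involved are ordinary narrowed loads; the function names below are illustrative only, and the instruction sequences in the comment are the typical ones, not what any particular compiler is guaranteed to emit:

```c
/* C patterns that map onto ColdFire MVS/MVZ (introduced in ISA_B).
 * A plain-68k compiler needs two or three instructions per access:
 *   sign:  move.b (a0),d0 + ext.w d0 + ext.l d0  ->  mvs.b (a0),d0
 *   zero:  moveq  #0,d0   + move.b (a0),d0       ->  mvz.b (a0),d0  */
long sign_extend_byte(const signed char *p)
{
    return *p;      /* load a byte, sign-extend to full register width */
}

unsigned long zero_extend_byte(const unsigned char *p)
{
    return *p;      /* load a byte, zero-extend to full register width */
}
```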
But look, once again: The improved performance by these instructions is rather minimal. The gain in code density is minimal. Is this worth forking the ISA? I really doubt it. Or rather, put another way: Would anyone be willing to compile a software library in two versions, one with and one without the added instructions, and support both of them, in a "market" as small as the Amiga? I personally would not. So the "cost" here is the support a software vendor has to pay (for users that use the wrong code), and the benefit is "almost not measurable".
I'm surprised you never got into compilers. Your assumptions may be true most of the time, but sometimes the 68020 addressing modes and ISA changes do make a big difference. For example, you say the 64-bit multiplication instructions are rare, and they are for SAS/C, but GCC has been using them since the '90s to convert division by a constant into a multiplication. Simple code like the following compiled for the 68020 with GCC will generate a 64-bit integer instruction.
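The listing referred to here did not survive in the post, but the pattern GCC applies is the standard strength reduction of an unsigned divide by a constant; a minimal reconstruction for division by 10 (the magic constant is ceil(2^35 / 10), a well-known value for this transformation):

```c
#include <stdint.h>

/* What GCC makes of "n / 10" for unsigned 32-bit n: a widening
 * 32x32->64 multiply by a magic reciprocal, then a shift.  On the
 * 68020+ the multiply is a single MULU.L; on the 68060 and ColdFire,
 * which dropped the 64-bit forms, it must be emulated or avoided. */
uint32_t div10(uint32_t n)
{
    return (uint32_t)(((uint64_t)n * 0xCCCCCCCDu) >> 35);
}
```

The identity holds for every 32-bit n, which is exactly why the compiler can substitute it unconditionally.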
All nice, but again, see above... Different story. Back then, when Motorola introduced the 68020 ISA, the market was active, and market participants had enough power and interest to invest some of their product development resources into porting to the new features. This is no longer the case nowadays. We are talking about a different situation. A small hobby project like this does not have the power to gain sufficient attraction to make it worthwhile.
Perhaps, allow me to tell you another story: There is another (older) hobby platform, the Atari 8-bits. There were a couple of user-made extensions using the 65816 (or so), a backwards-compatible extension of the 6502 in this machine. They also added instructions, 16-bit registers... all nice. Never went anywhere, no applications...
You do not get new applications by offering a 64-bit multiply or a "move byte extended" instruction. You get new applications by identifying current holes in the ISA that enable "killer features" and that address clearly identified performance bottlenecks. A "MOVEZ" does not fill a single hole. It's a nice instruction, but only that. Nice.
The GCC optimization saves quite a few cycles. The 68060 ISA designers failed to recognize that GCC was already using this effectively.
A 64-bit multiply can, at some point, help a lot with performance (I'm talking about the quantizer in many video/audio/picture codecs, been there, done that). So yes, that is a useful addition. Now, where is the bottleneck for other additions?
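To make the quantizer point concrete, here is a sketch of the usual reciprocal-multiply scheme (the names are illustrative, not from any particular codec): the division by the quantizer step is hoisted out of the inner loop, so each coefficient costs one widening multiply plus a shift, which is exactly where a fast 32x32->64 multiply pays off.

```c
#include <stdint.h>

/* Precompute M = ceil(2^32 / qstep) once per quantizer step. */
static uint64_t quant_recip(uint32_t qstep)
{
    return 0xFFFFFFFFull / qstep + 1;   /* ceil(2^32 / qstep) */
}

/* Per-coefficient work: one 32x32->64 multiply and a shift.
 * Sign handled separately so truncation rounds toward zero, as
 * codecs expect.  Exact for coefficient magnitudes below 2^16. */
static int32_t quantize(int32_t coef, uint64_t recip)
{
    uint32_t mag = (uint32_t)(coef < 0 ? -coef : coef);
    int32_t q = (int32_t)((mag * recip) >> 32);   /* mag / qstep */
    return coef < 0 ? -q : q;
}
```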
The current vclib is compiled only for the 68000, which you apparently think is good enough for your programs.
68K is good enough in *most* cases. I may, and sometimes do, use extensions from the 68020 extended ISA. Rarely, but sometimes. I would probably never compile an entire program for the 68020+ only. I would probably provide, for some applications, specially tuned versions of some critical routines that are hand-optimized, and then add a dispatcher in the code itself. This way, you create a robust application whose users do not have to worry about which version they have to install.
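The dispatcher pattern described here boils down to selecting a function pointer once at startup. A minimal sketch; cpu_has_020() is a stub for illustration - on AmigaOS one would test the AFF_68020 bit of SysBase->AttnFlags instead, and the tuned routine would be real hand-optimized code:

```c
#include <string.h>

static int cpu_has_020(void) { return 0; }   /* stub: report a plain 68000 */

/* Two implementations of the same hot routine.  Both are plain
 * memcpy here for illustration; in practice copy_020 would be the
 * hand-tuned 68020+ version. */
static void copy_generic(void *d, const void *s, unsigned n) { memcpy(d, s, n); }
static void copy_020(void *d, const void *s, unsigned n)     { memcpy(d, s, n); }

/* The dispatcher: every call site goes through this pointer. */
static void (*fast_copy)(void *, const void *, unsigned);

void init_dispatch(void)
{
    fast_copy = cpu_has_020() ? copy_020 : copy_generic;
}
```

One binary ships, the right code runs, and the user never has to know which CPU variant is installed.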
The Amiga "market" is too small to support "68020+"-only programs, let alone "Apollo only" applications. On more active markets, that is an option, but not here.
It is important to consider what the compiler developers think. They know what they need and can use. They should be part of the process of ISA development, but the hardware developers (or should we say Gunnar) dictate what they will get.
I don't object, but in the end, the compiler writers should also understand what the user base is. Does it even make sense to support an "Apollo-tuned" compiler given the small user base? As said, I as a developer would probably not care, except for specialized instructions or heavy-duty code, and in that case, I would probably not use a compiler in the first place.
We can see that the ISA creation process has become secretive, as shown by Gunnar refusing to answer questions in public (I showed how it is possible to mark an ISA with a disclaimer saying that it is for evaluation and subject to change). I tried to create an open ISA early for debate, to try to avoid exactly these types of problems, but most of the feedback I got was "there is no need yet". Even if my foresight is better than most people's hindsight, it wouldn't do me any good, because nobody listens to me no matter how right I am. The truth doesn't seem to matter anymore.
I personally had no problem when talking to Gunnar. But in the end, it's his time he is investing, so it is his project. I neither tell you how to write the compiler nor how to write support libraries for it. All I can ask is that you get in contact with him. I don't need to agree with him. I can only give hints, ehem, with my, ehem, usual "charm".