@matt
Two matters that make things a bit less scary:
1. FPGA isn't set-in-stone hardware; it can be reprogrammed and adjusted according to the reaction it meets. Reacting to the feedback would be wise on the part of the developers. But there will always be different opinions, that's for sure.
Yes, this is true. Phil "meynaf" also did not like the direction of the new ISA but challenged for it to be created as a test. It is possible to learn from doing things the hard way. The few people who write code for the new ISA would be disappointed if it changed significantly. The new ISA is unlikely to be adopted, even partially, in the TG68 or other 68k FPGA cores, or eventually in UAE (I believe Toni's position would change if multiple FPGA implementations were using the same ISA). It is even less likely that a non-standard, complex 68k ISA for a single FPGA CPU would gain widespread support in compilers. A core which is more compatible with both the 68k and ColdFire is more likely to interest embedded developers. Enough money could probably buy a custom ISA, and then we could have a thousand variations of an ISA like ARM, but this is what I hoped to avoid.

People thought I was too early in pushing for the creation of a standardized ISA and trying to get input from others. I tried to create a standards committee/group by bringing people into our discussions, including inviting ThoR, Frank Wille and Dave Alsup (of Innovasic). I would have loved to bring in people like Toni Wilen, Jason McMullan, Volker Barthelmann, Kalms and maybe even Karlos, who understand an ISA from different viewpoints. I guess people are too busy or believe the Amiga is too dead to care anymore. At least Gunnar is doing something.
2. Any extensions beyond what the legacy 68k provides will not have much effect until compiler backends assimilate them, which will not happen any time soon, giving time to reconsider. The legacy instruction set and its execution efficiency are what count at the moment.
ISA decisions make a big difference in how easily and quickly changes can be adopted. ColdFire enhancements are the easiest to adopt because they already exist and only need to be switched on in a compiler backend and in peephole-optimizing assemblers. I bet Frank Wille could have ColdFire support in vasm working in a few days and already making a noticeable difference in shrinking program sizes (see the sketch at the end of this post for the kind of peephole rewrites I mean). ColdFire support in a compiler backend could take a few weeks to add and test, as that is a more delicate process. Taking advantage of the current ISA with more registers in the backend would likely take many months, and bugs could turn up for years. Few developers are knowledgeable and familiar enough with a compiler to add this kind of support. Are they going to dedicate that kind of time to a non-standard ISA in an FPGA CPU sold in the hundreds or low thousands at most, when they could be improving a compiler target with tens of thousands of hard processors? I don't think so. Phoenix is not going to set the world on fire immediately.

IMO, it's better to have an easy-to-adopt standard with a few benefits and incremental improvements than a core-specific non-standard with theoretical high performance that will never be fully utilized by compilers. Splits seem to be the Amiga way, though. I'm tired of arguing and trying to create something better. Gunnar did make the right decision to add better 68020 compatibility (all addressing modes without trapping), and we have this as a base, which is the most important thing. We are moving past the 68060 in performance with this too. I should be thankful, as we need new 68k hardware to revitalize the Amiga. I would have liked to create something like a cross between an Amiga Raspberry Pi, a Natami and a CD32+, but there is not enough cooperation, at least not yet.
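For anyone curious what those assembler-level wins look like, here is a minimal sketch in Motorola/vasm-style syntax. It uses the ColdFire ISA_B instructions MVZ and MOV3Q, with byte counts from the standard 68k/ColdFire encodings; it only illustrates the kind of substitution a peephole pass could make, not actual vasm output:

        ; Zero-extend a word from memory into a 32-bit register.
        ; Plain 68k needs two instructions (4 bytes total):
        moveq   #0,d0           ; clear all 32 bits        (2 bytes)
        move.w  (a0),d0         ; load the low word        (2 bytes)
        ; ColdFire ISA_B does it in one instruction (2 bytes):
        mvz.w   (a0),d0         ; move with zero-extension (2 bytes)

        ; Store a small long-word constant to memory.
        ; Plain 68k (6 bytes):
        move.l  #1,(a0)         ; 2-byte opcode + 4-byte immediate
        ; ColdFire ISA_B (2 bytes):
        mov3q   #1,(a0)         ; 3-bit "quick" immediate: -1, 1..7

MVS is the sign-extending counterpart of MVZ, replacing a move.w plus ext.l pair the same way. Small substitutions like these are exactly where a peephole-optimizing assembler shrinks existing 68k sources without any compiler backend work.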