Amiga.org

Amiga computer related discussion => Amiga Hardware Issues and discussion => Topic started by: wildstar1063 on September 26, 2007, 08:53:31 PM

Title: ColdFire Project?
Post by: wildstar1063 on September 26, 2007, 08:53:31 PM
Is the ColdFire project still active at all? The web page is still there, but it seems it has not been updated for several years.


Thanks

Wildstar1063
Title: Re: ColdFire Project?
Post by: Zac67 on September 26, 2007, 09:00:21 PM
And probably won't be updated at all. There's broad consensus here that a ColdFire creates major software compatibility problems and seems to be no good choice for an Amiga running legacy software. It's not impossible, but once you've mastered all the problems, it's probably not much faster than an '060 - if at all.
Title: Re: ColdFire Project?
Post by: little on September 26, 2007, 09:10:45 PM
Quote
but once you've mastered all problems, it's probably not much faster than an '060 - if at all.


If you relegate it to the role of emulating a 68k then it is not a good solution, but if (and only if) AmigaOS and a bunch of key applications were compiled for the ColdFire processor, it could achieve quite nice speed improvements, especially if the 400 MHz V5 ColdFire were used.

Title: Re: ColdFire Project?
Post by: Zac67 on September 26, 2007, 09:24:54 PM
Of course, that would provide a tremendous speed increase. But you'd have to recompile, which isn't an option for the vast amount of legacy software. Another way would be a patch database, but I doubt the user base could reach the critical mass for that.
For the emulation option it's a lot smarter to start with a faster and more readily available CPU like a cheap & fast x86.

One day someone will do an '060 simulation in FPGA. In the VERY far future...
Title: Re: ColdFire Project?
Post by: A1260 on September 26, 2007, 09:30:17 PM
It's dead........
Title: Re: ColdFire Project?
Post by: AJCopland on September 26, 2007, 10:06:55 PM
Quote
wildstar1063 wrote:
Is the Coldfire project still active at all?


I wish. I think Oli got it quite far but ran into problems that really would have required a lot more work to resolve, and that's before he got into the software side of things.

It's been suggested that using the ColdFire as a co-processor instead of a replacement 68k might make more sense, much like the PPC cards, but if you're going down that route you might as well use PPC.

Anyway, the topic is dead in most people's minds, though I'd like to see one made as a replacement for the 68060/40 cards, since those are so expensive, old and hard to find... I mean, which would people rather have? A slightly-slower-than-an-060 card based on a ColdFire? Or a full-speed 060-based card... that you can never afford off eBay?

(assuming that Oli finished the ColdFire project, of course)

Andy
Title: Re: ColdFire Project?
Post by: rkauer on September 26, 2007, 11:12:52 PM
I think the best solution for the ColdFire instruction set issue is adding a parallel "small" Spartan to catch the instructions that are not compatible with the 68k.

That could be a way to fix all the problems in a snap.

Then we get the best of both worlds: the speed and low cost of a ColdFire (compared to an 060) and the complete 68k instruction set, with no issues.

 My two cents.
Title: Re: ColdFire Project?
Post by: Piru on September 26, 2007, 11:28:42 PM
@rkauer
Quote
I think the best solution for the ColdFire instruction set issue is adding a parallel "small" Spartan to catch the instructions that are not compatible with the 68k.

How would that work exactly? How can some external chip know what the CPU is doing?
Title: Re: ColdFire Project?
Post by: Ragoon on September 26, 2007, 11:32:53 PM
The ColdFire since the V4e core is fully compatible with the 68040. The first generation (5102) was already fully compatible with the 68EC040. It is an enhanced version of the original 68k (410 MIPS @ 260 MHz for the current ColdFire versus 110 MIPS @ 75 MHz for the 68060).
The ColdFire embeds network, PCI, USB, ... controllers. It is a great integrated processor, but not powerful enough to face the PowerPC. It is a good solution for building an Amiga-classic-compatible computer with the dedicated chipset in an FPGA. But the development costs would be too expensive for the people interested.
Title: Re: ColdFire Project?
Post by: rkauer on September 26, 2007, 11:53:19 PM
Quote

Piru wrote:

How would that work exactly? How can some external chip know what the CPU is doing?


I mean use a Spartan as a translation table to catch the instructions the ColdFire can't process in the right way. Indeed it will generate some slowdowns, but in the end we can get an operational CPU that behaves like an 060 at ~44 MHz, which is a major upgrade from the old accelerators around (I mean the 030s and 040s).

Title: Re: ColdFire Project?
Post by: Piru on September 27, 2007, 01:18:23 AM
@Ragoon
Quote
The ColdFire since the V4e core is fully compatible with the 68040. The first generation (5102) was already fully compatible with the 68EC040. It is an enhanced version of the original 68k (410 MIPS @ 260 MHz for the current ColdFire versus 110 MIPS @ 75 MHz for the 68060).

They are not compatible.

Supervisor mode is totally different. User mode is different, and even with emulation library loaded the user mode has a small difference which cannot be trapped.
Title: Re: ColdFire Project?
Post by: Piru on September 27, 2007, 01:19:52 AM
@rkauer
Quote
I mean use a Spartan as a translator table to catch the instructions the Coldfire can't process in the right way.

But how can you do that? How can you make the Spartan interface with the CPU in such a way?
Title: Re: ColdFire Project?
Post by: rkauer on September 27, 2007, 01:50:02 AM
Quote

Piru wrote:

But how can you do that? How can you make the Spartan interface with the CPU in such a way?


What follows is only a well-intentioned guess.

This is almost the same way a PPC board works. Those Spartan FPGAs have nearly double the I/O pins of the 68k CPUs, so the hardware connection is not a big worry, but the software must be (I think "nightmare" is the correct word).

The voltage level is the same, clocks can be decoupled, and instructions can be interpreted BEFORE being thrown to the ColdFire...

 My 2 cents...
 
Title: Re: ColdFire Project?
Post by: Plaz on September 27, 2007, 01:50:35 AM
The ColdFire mailing list still has some activity, as discussions continue about possible solutions. Basically there are about six instructions that behave differently on ColdFire and can't be trapped. I think it can be solved (Elbox must have done it) but it would take a lot more work, using a card much different from the current prototype.

Plaz
Title: Re: ColdFire Project?
Post by: downix on September 27, 2007, 01:58:40 AM
Well, could a recompilation system do it?  Using AROS you could make an OS which would, rather than JIT emulation, re-compile the apps.  The coldfire is similar enough that it could work.
Title: Re: ColdFire Project?
Post by: Plaz on September 27, 2007, 02:09:09 AM
Quote
Well, could a recompilation system do it? Using AROS you could make an OS which would, rather than JIT emulation, re-compile the apps. The coldfire is similar enough that it could work.


The trouble is those wayward instructions may also exist in the OS, not just the apps. So, just like Pokémon, you've got to catch them all. :-P

Some say JIT is the only way and why bother, but I think a prescan-and-replace solution could work. It would be similar to the way VMware handles code on not-100%-compatible x86 families.

Plaz
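Plaz's prescan-and-replace idea can be pictured with a toy sketch in Python. The opcode values and replacement table here are invented for illustration, not real 68k or ColdFire encodings (real 68k instructions are variable-length, which is exactly what makes a real tool much harder):

```python
# Toy prescan-and-replace: walk a flat instruction stream and rewrite
# opcodes the target CPU handles differently. The opcode numbers are
# invented; a real tool must decode variable-length 68k instructions
# and fix up branch offsets after every replacement.

# Hypothetical table: offending opcode -> safe replacement sequence
REPLACEMENTS = {
    0xA1: [0xB1, 0xB2],   # one "bad" instruction becomes two safe ones
    0xA2: [0xB3],         # a straight one-for-one substitution
}

def prescan_replace(code):
    """Return a new stream with every offending opcode rewritten."""
    out = []
    for op in code:
        out.extend(REPLACEMENTS.get(op, [op]))
    return out
```

Note that replacing one opcode with two changes every address after it, so relative branches and jump tables would need patching too - one reason automated patching of binaries is far trickier than patching source.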
Title: Re: ColdFire Project?
Post by: downix on September 27, 2007, 02:12:37 AM
Quote

Plaz wrote:
Quote
Well, could a recompilation system do it? Using AROS you could make an OS which would, rather than JIT emulation, re-compile the apps. The coldfire is similar enough that it could work.


The trouble is those wayward instructions may also exist in the OS, not just the apps. So, just like Pokémon, you've got to catch them all. :-P

Some say JIT is the only way and why bother, but I think a prescan-and-replace solution could work. It would be similar to the way VMware handles code on not-100%-compatible x86 families.

Plaz

How would an OS compiled for the ColdFire not run on the ColdFire? Note, I said AROS, not AmigaOS, so you could compile it for the right CPU family.
Title: Re: ColdFire Project?
Post by: Plaz on September 27, 2007, 02:36:14 AM
Quote
How would an OS compiled for the ColdFire not run on the ColdFire? Note, I said AROS, not AmigaOS, so you could compile it for the right CPU family.


Oh, I misunderstood. Yes, that would be different. You'd just have to edit the 68k source code before compiling, replacing the offending code with ColdFire-acceptable code. There are instructions on the web about which instructions you need to replace, and how, in order to compile 68k code for ColdFire.

Plaz
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 02:42:20 AM
Quote

Piru wrote:
@Ragoon
Quote
The ColdFire since the V4e core is fully compatible with the 68040. The first generation (5102) was already fully compatible with the 68EC040. It is an enhanced version of the original 68k (410 MIPS @ 260 MHz for the current ColdFire versus 110 MIPS @ 75 MHz for the 68060).

They are not compatible.

Supervisor mode is totally different. User mode is different, and even with emulation library loaded the user mode has a small difference which cannot be trapped.

Piru is correct on this. You need to rewrite the ROM to be able to take advantage of even a V4e-core ColdFire.

FWIW, I don't think the author of the ColdFire project had the proper background to complete such a complex design, but that's just my opinion.

The biggest problem is that to really use the ColdFire you need the OS source code, so you can make as much of it ColdFire-native as possible. The OS would also take up more space, since more instructions are needed to do the same thing.

*IF* you rewrite the exec, timer.device and some other stuff that's needed just to get the computer started, then you can think about using the illegal-instruction trap to emulate the missing instructions. BTW, someone took the commented disassembly of the exec I made and got it running on a V2 developer board, but I haven't spoken with him in a LONG time so I'm not sure what ever became of it.

The addressing modes that can't be emulated were slow and, I'm guessing, rarely used (certainly not by most compilers), but the math instructions that are legal yet function differently are bound to create some problems. Since identical behavior isn't always required, some software would work and some wouldn't.

People who claim a ColdFire wouldn't be faster than an '060 either haven't looked at the benchmark info from Motorola or are just ignoring it. Extensive analysis of millions of lines of code showed *at most* a 30% performance drop when emulating existing 68k code on older ColdFire chips (V2 core). I also believe that was 68000 code, not 060 code, which has fewer incompatibilities. Even if their estimates are optimistic, the V4e has added many more instructions and addressing modes over the V2, so it should run 68k code much faster than the V2. Software should be over twice as fast on the 260 MHz V4e as on the 75 MHz 060. Anything that could be recompiled to native ColdFire code would run over three times as fast, and that's if they don't introduce faster clock speeds. They have mentioned faster clock speeds in the past, so I'd expect to see 400+ MHz parts within a few years.
Title: Re: ColdFire Project?
Post by: little on September 27, 2007, 02:56:49 AM
Quote
I'd expect to see 400+MHz parts within a few years.

400 MHz is here! There are already 400 MHz V5 ColdFire CPUs inside HP printers. Oddly enough there is no information about them on the Freescale site, so I suppose Freescale is working closely with HP and has not updated the site, since they are selling their entire production to HP (remember, new chips are always produced in smaller quantities while they "debug" the production line).
Title: Re: ColdFire Project?
Post by: Plaz on September 27, 2007, 03:06:03 AM
Quote
There are already 400 MHz V5 ColdFire CPUs inside HP printers; oddly enough there is no information about them on the Freescale site


Freescale will make custom ColdFires to suit your needs. Maybe HP walked in with a bag of money and said... "we need one that does this", and Freescale built them one to HP's specs. Too bad Amigaland doesn't have the same kind of resources. We're stuck with normal off-the-shelf wares. ........ Heeeeyyyyy, what if DiscreetFX.......... nah.

Plaz
Title: Re: ColdFire Project?
Post by: little on September 27, 2007, 03:55:06 AM
Quote
But you'd have to recompile, which isn't an option for the vast amount of legacy software.

But if the (AR)OS is ColdFire-native then you could emulate legacy software - look at how Apple emulated 68k software and ran System 9 inside PPC OS X. This will be even better, because the ColdFire is a progression of the 68k architecture, not a complete rewrite like the PPC. Maybe the older games, or games that used a lot of tricks, might not run or might run slowly, but bugs in the emulation can be removed over time, and you would have a modern OS with modern (at least open-source) applications and maybe even some games. Of course, we need someone to build such a machine, with a PowerVR chipset if possible to deliver modern graphics. At least I think it would be cheaper to build a complete machine anyone can buy than to make an upgrade card aimed at a shrinking market.
Title: Re: ColdFire Project?
Post by: tomazkid on September 27, 2007, 03:55:07 AM
What about the Dragon then?
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 07:00:33 AM
Quote

little wrote:
Quote
But you'd have to recompile, which isn't an option for the vast amount of legacy software.

But if the (AR)OS is ColdFire-native then you could emulate legacy software - look at how Apple emulated 68k software and ran System 9 inside PPC OS X. This will be even better, because the ColdFire is a progression of the 68k architecture, not a complete rewrite like the PPC. Maybe the older games, or games that used a lot of tricks, might not run or might run slowly, but bugs in the emulation can be removed over time, and you would have a modern OS with modern (at least open-source) applications and maybe even some games. Of course, we need someone to build such a machine, with a PowerVR chipset if possible to deliver modern graphics. At least I think it would be cheaper to build a complete machine anyone can buy than to make an upgrade card aimed at a shrinking market.


Well, there could be several kinds of emulation.  

1. Most instructions are run natively, and most unsupported instructions or addressing modes are trapped and emulated by an illegal-instruction trap; execution then continues normally until the next one. This would be very fast but would probably have a 10%-20% slowdown versus ColdFire-safe code, and it won't run everything. The code to do this already exists.

2. 100% emulation of the program, but OS calls are executed natively by the emulator. Faster than full emulation but slower than the first or last method. The code could probably be extracted from an existing Amiga emulator.

3. 100% emulation with JIT generation of equivalent instruction sequences. It would run any 68k code but would require a lot of work. OS calls would be native and the JIT wouldn't need to examine them.
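Option 1 (trap-and-emulate) can be sketched as a toy interpreter in Python; the opcodes and their semantics are invented, but the control flow mirrors what an illegal-instruction trap does:

```python
# Toy trap-and-emulate loop. "Native" opcodes run directly; anything
# missing from the native table raises, which stands in for the CPU's
# illegal-instruction exception, and a trap handler emulates it.
# Opcodes and semantics are invented for illustration.

def inc_d0(state):
    state["d0"] += 1

def dec_d0(state):
    state["d0"] -= 1

NATIVE = {0x01: inc_d0, 0x02: dec_d0}

def emulate_missing(op, state):
    """Trap handler: emulate an instruction the CPU lacks."""
    if op == 0x10:            # pretend 0x10 is a "double d0" instruction
        state["d0"] *= 2
    else:
        raise ValueError(f"cannot emulate opcode {op:#x}")

def run(program, state):
    for op in program:
        try:
            NATIVE[op](state)            # fast path: runs at full speed
        except KeyError:                 # "illegal instruction" trap
            emulate_missing(op, state)   # slow path: software emulation
    return state
```

For example, run([0x01, 0x01, 0x10], {"d0": 0}) leaves d0 at 4: two native increments, then one trapped and emulated instruction. The catch, as the thread notes, is that an instruction which is legal on both CPUs but behaves differently never traps at all.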
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 07:04:20 AM
Quote

Plaz wrote:
Quote
There are already 400 MHz V5 ColdFire CPUs inside HP printers; oddly enough there is no information about them on the Freescale site


Freescale will make custom ColdFires to suit your needs. Maybe HP walked in with a bag of money and said... "we need one that does this", and Freescale built them one to HP's specs. Too bad Amigaland doesn't have the same kind of resources. We're stuck with normal off-the-shelf wares. ........ Heeeeyyyyy, what if DiscreetFX.......... nah.

Plaz

I knew the V5 had been scheduled to be done by now but wasn't aware it was out yet. HP will probably have first dibs on the chips for a while.
If I remember right, the V5 was supposed to be fully superscalar, so it should be faster than the V4e at the same clock speed.

It's hard to compete with a company that orders 10,000 or more chips at a shot.
Title: Re: ColdFire Project?
Post by: Piru on September 27, 2007, 08:15:22 AM
@jdiffend
Quote
People that claim a Coldfire wouldn't be faster than an 060 either haven't looked at the benchmark info from Motorola or they are just ignoring it. Extensive code analysis of millions of lines of code showed *at most* a 30% performance drop emulating existing 68K code on older Coldfire chips (V2 core). I also believe that was 68000 code, not 060 code which has fewer incompatibilities. Even if their estimates are optimistic the 4e has added many more instructions and addressing modes over the V2 so it should run 68k code much faster than the V2. Software should be over twice as fast on the 260MHz 4e than on the 75MHz 060.

So how come the Elbox Dragon is at 040 speeds, then?
Title: Re: ColdFire Project?
Post by: Donar on September 27, 2007, 08:23:21 AM
Quote
What about the Dragon then?


Oh, I wrote some e-mails to Elbox; one answer said they were polishing drivers, the next that they were readying production.
And that was, umm, half a year ago.

I fear they showed a Dragon on which only the PCI part was working, probably using the 020 of the host A1200. Everybody mentioned that it was slow, e.g. at loading icons.

Quote
I said AROS, not AmigaOS, so you could compile it for the right CPU family.

Anybody here who wants to bring the AROS 68k port out of unmaintained status, where it has sat for a long time now?
AROS Kickstart ROM Replacement Phase I (http://thenostromo.com/teamaros2/index.php?number=23)



Quote

You just have to edit the AROS 68k source code before compiling, replacing the offending code with ColdFire-acceptable code.

I think the AROS code is in C or C++, so a simple recompile to ColdFire or mixed 68k/ColdFire code should do the trick without editing the source. That would only be needed if the source is in assembler.
Title: Re: ColdFire Project?
Post by: Plaz on September 27, 2007, 01:19:11 PM
Quote
I think the AROS code is in C or C++, so a simple recompile to ColdFire or mixed 68k/ColdFire code should do the trick without editing the source. That would only be needed if the source is in assembler.


Generally, ColdFire and 68k code are equal. But some 68k instructions don't exist on ColdFire. In other cases (and I'm mega-simplifying here), InstructionA makes a ColdFire turn left, while the same instruction makes a 68020 turn right. To make clean ColdFire builds, you'll need to edit legacy 68k code to replace these missing and misinterpreted instructions. Luckily, the list of edits will probably be small.

Plaz

Title: Re: ColdFire Project?
Post by: Plaz on September 27, 2007, 01:23:10 PM
Quote
So how come the Elbox Dragon is at 040 speeds, then?


Some say they are probably burning many of their cycles running a JIT compiler to handle those instruction differences I mentioned between CF and 68k.

Plaz
Title: Re: ColdFire Project?
Post by: Piru on September 27, 2007, 02:23:22 PM
Quote
Some say they are probably burning many of their cycles running a JIT compiler to handle those instruction differences I mentioned between CF and 68k.

Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75, just by using the stock emulation library (no JIT). That just doesn't add up... :-)

Lies, bigger lies, benchmarks by the manufacturer...
Title: Re: ColdFire Project?
Post by: Donar on September 27, 2007, 03:05:35 PM
Quote
To make clean ColdFire builds, you'll need to edit legacy 68k code to replace these missing and misinterpreted instructions. Luckily the list of edits will probably be small.


From C++ source you only need to set ColdFire as the target for the compiler - it will avoid "bad" instructions then.

If you have assembler source, you have to run it through "PortASM" (a tool provided by Freescale) or pick the bad ones out by hand and replace them.

Another option, if you don't have the source, would be to disassemble an executable, pick out the "baddies", and reassemble it into an executable, or provide a patch file for the original executable. That is probably the least desirable option.

That's at least how I understood it.
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 04:02:29 PM
Quote

Piru wrote:
Quote
Some say they are probably burning many of their cycles running a JIT compiler to handle those instruction differences I mentioned between CF and 68k.

Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75, just by using the stock emulation library (no JIT). That just doesn't add up... :-)

Lies, bigger lies, benchmarks by the manufacturer...

Those benchmarks were for the CPU running native code for the interrupt handlers with 68k applications.

People say it's running at 020 speeds... hmmmm... and it's in a machine with an 020. Makes you wonder, doesn't it? Was the ColdFire even doing the work?

Perhaps they are running 100% emulation, which would be the slowest. They probably couldn't get enough compatibility to run the OS with the stock emulation library. Which is why I said you need the OS source so you can rewrite it.

I worked on a project based on the 64180 where illegal instructions were used to replace instruction sequences commonly generated by a C compiler, and the slowdown was about 20%, so I don't doubt the manufacturer's estimates. However, if the exec also has to use the lib it will be slower... I think that slowdown was 50%. That's why I said the OS had to be ColdFire-native code.

Even if some of the OS has been converted to native code, if certain parts haven't been, you might start stacking interrupts or have other unpredictable interactions between hardware and software.

Remember, any instruction that is native to the ColdFire runs at 266 MHz. That's actually most of the 68k instruction set and addressing modes. Then you have, oh... say a penalty of 20+ instructions when you hit one that isn't. But remember... the V4e is partially superscalar, so it averages around one instruction per clock or better. Even if it takes 20 clocks to emulate one instruction, it's still going to average out faster than an 020.
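That averaging argument can be sanity-checked with a toy formula. The cycle counts and trap fraction below are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope model of trap-and-emulate throughput.
# Assumptions: 1 cycle per native instruction, a fixed cycle penalty
# for each trapped instruction, and a given fraction of trapped ones.

def effective_mips(clock_mhz, trap_fraction, trap_cycles=20):
    """Average MIPS when trap_fraction of instructions cost trap_cycles."""
    avg_cycles = (1 - trap_fraction) * 1 + trap_fraction * trap_cycles
    return clock_mhz / avg_cycles
```

Under those assumptions, even with 5% of instructions trapped at 20 cycles each, a 266 MHz core still averages about 136 MIPS (266 / 1.95) - far beyond a 14 MHz 020, which supports the point that trapping alone shouldn't drag it down to 020 speeds.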

I noticed Oli posted that the V5 core was ready for release in 2002.  I'd guess HP offered big bucks to have an exclusive on it for several years.
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 04:06:50 PM
Quote

Donar wrote:
Quote
To make clean ColdFire builds, you'll need to edit legacy 68k code to replace these missing and misinterpreted instructions. Luckily the list of edits will probably be small.


From C++ source you only need to set ColdFire as the target for the compiler - it will avoid "bad" instructions then.

If you have assembler source, you have to run it through "PortASM" (a tool provided by Freescale) or pick the bad ones out by hand and replace them.

Another option, if you don't have the source, would be to disassemble an executable, pick out the "baddies", and reassemble it into an executable, or provide a patch file for the original executable. That is probably the least desirable option.

That's at least how I understood it.

It was my understanding that PortASM didn't perform any optimization, so it increases code size significantly. And I'm not sure it was ever updated for the V4e core... I haven't looked at it in years, though.
Title: Re: ColdFire Project?
Post by: little on September 27, 2007, 04:11:07 PM
Quote
Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75

I am no expert, but this probably has to do with the ColdFire having to use the s-l-o-w chip RAM inside the A1200; that is another reason to put the ColdFire inside a completely new AROS machine.
Title: Re: ColdFire Project?
Post by: little on September 27, 2007, 04:15:58 PM
Quote
1. Most instructions are run natively

I think this is the best method for any application

Quote
2. 100% emulation of the program with native OS calls

I do not understand what advantage this method has over #1 or #3 :-?

Quote
3. 100% emulation with JIT

IMO this would be the best method for running games: create a virtual Amiga on the Workbench, like WinUAE but integrated into the OS.
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 04:23:30 PM
Quote

little wrote:
Quote
Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75

I am no expert, but this probably has to do with the ColdFire having to use the s-l-o-w chip RAM inside the A1200; that is another reason to put the ColdFire inside a completely new AROS machine.

If the ColdFire has to use 25 MHz RAM, it's going to run at 25 MHz whenever it accesses it. ROM access will be slow as well (and that's where most of the OS is). If the ColdFire board has fast RAM on it, it should run at full speed there.
You'd have to move the ROM into RAM to get any kind of speed out of the thing.

Really, the Amiga was never designed with that fast a CPU in mind. To completely take advantage of one you almost have to redo the entire architecture. At the very least you'd have to run the bus at several times the speed and give the chipset access to one out of however many cycles, to make it run at stock speed in compatibility mode.
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 04:32:11 PM
Quote

little wrote:
Quote
1. Most instructions are run natively

I think this is the best method for any application

Yes, but illegal instructions are trapped and emulated. Not everything can be trapped, and it's slower than ColdFire-safe code.

Quote
Quote
2. 100% emulation of the program with native OS calls

I do not understand what advantage this method has over #1 or #3 :-?

Not all differences between the ColdFire and the 68k can be accounted for with #1. Some addressing modes can't be emulated because the illegal-instruction interrupt doesn't occur at a place where you can go back and decode the instruction properly. And some math instructions behave differently with regard to setting flags, but they are perfectly legal, so they can't be trapped.
100% emulation avoids those problems, but it's slow.

Quote
Quote
3. 100% emulation with JIT

IMO this would be the best method for running games: create a virtual Amiga on the Workbench, like WinUAE but integrated into the OS.

Everyone probably agrees... but it's complex and takes time to write.
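The untrappable flag case jdiffend describes can be illustrated abstractly with two toy "cores". The flag rules here are invented; they are not the actual 68k or ColdFire semantics:

```python
# Two toy "cores" execute the same, perfectly legal add instruction.
# Both produce the same 8-bit result, so no trap ever fires -- but they
# leave the carry flag in different states, so a later conditional
# branch on carry silently diverges. Flag rules are invented.

def add_core_a(a, b):
    result = (a + b) & 0xFF
    carry = (a + b) > 0xFF      # core A: carry reflects the overflow
    return result, carry

def add_core_b(a, b):
    result = (a + b) & 0xFF
    carry = False               # core B: this instruction never sets carry
    return result, carry
```

add_core_a(0xFF, 1) and add_core_b(0xFF, 1) both return a result of 0, yet one sets carry and the other doesn't. Since nothing illegal was executed, no trap handler ever gets a chance to fix it up - only recompilation or full emulation catches this class of difference.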
Title: Re: ColdFire Project?
Post by: Hans_ on September 27, 2007, 05:05:11 PM
Wasn't there a tool to convert 68k assembly to coldfire assembly? If there is one, it should be possible to create coldfire binaries for the whole OS using a suitable disassembler and this tool.

Hans
Title: Re: ColdFire Project?
Post by: mumule on September 27, 2007, 05:22:36 PM

jdiffend wrote:
Quote


Well, there could be several kinds of emulation.  

1. Most instructions are run natively and most unsupported instructions or addressing modes are trapped and emulated by an illegal instruction trap, then execution continues normally until the next one.  This would be very fast but probably have 10%-20% slowdown from coldfire safe code and it won't run everything.  The code to do this already exists.

2. 100% emulation of the program but OS calls are executed natively by the emulator.  Faster than full emulation but slower than the first or last method.  The code could probably be extracted from an existing Amiga emulator.


3. 100% emulation with JIT generation of equivalent instruction sequences.  It would run any 68k code but would require a lot of work.  OS calls would be native and the JIT wouldn't need to examine them.


4. Put the CPU into an FPGA and debug it. At the end of the day it is easier to do a CPU right than to debug all the application code ...
Title: Re: ColdFire Project?
Post by: Plaz on September 27, 2007, 05:25:02 PM
Quote
Anybody here who wants to bring the AROS 68k port out of unmaintained status, where it has sat for a long time now?


I'm told there is someone who's started some work again. I've also stuck my nose in for a look.

Plaz
Title: Re: ColdFire Project?
Post by: Piru on September 27, 2007, 06:05:52 PM
@Hans_
Quote
Wasn't there a tool to convert 68k assembly to coldfire assembly? If there is one, it should be possible to create coldfire binaries for the whole OS using a suitable disassembler and this tool.

Not easily.

You'd need to manually disassemble everything in a way it can be recompiled. This is highly demanding work which requires tons of assembly knowledge and time. It cannot be automated.
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 06:21:19 PM
Quote

mumule wrote:

4. Put the CPU into an FPGA and debug it. At the end of the day it is easier to do a CPU right than to debug all the application code ...

And you are stuck running at under 40 MHz because it's not an ASIC.
Title: Re: ColdFire Project?
Post by: potis21 on September 27, 2007, 06:31:36 PM
Why not ask Freescale just to... scale a 68040 down to a new integration technology and just implement more L1 and L2?

Wouldn't that be the most compatible solution?

A smaller photomask is the way to go.

New technologies would allow a raw 400 MHz out of the processor just by scaling the design and reducing the operating voltage, and relative performance would scale right along with it.
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 06:34:45 PM
Quote

Piru wrote:
@Hans_
Quote
Wasn't there a tool to convert 68k assembly to coldfire assembly? If there is one, it should be possible to create coldfire binaries for the whole OS using a suitable disassembler and this tool.

Not easily.

You'd need to manually disassemble everything in a way it can be recompiled. This is highly demanding work which requires tons of assembly knowledge and time. It cannot be automated.

Actually, the disassembler I used to make the commented 2.? exec disassembly generated code that could be reassembled.

A program could be written to take that, make the patches with the existing tool and then optimize the code to remove stuff that isn't needed.  
It would need to use register tracking similar to a modern optimizing C++ compiler AND would need to trace flag usage.  
It would take a LONG time to build such a beast from scratch.  I've spent some time in the guts of a few C compilers and it's not easy work... then add some stuff those don't have to do and it's a huge undertaking.  All for a computer that has been off the market for over a decade.
Title: Re: ColdFire Project?
Post by: koaftder on September 27, 2007, 06:39:29 PM
Quote

potis21 wrote:
Why not ask Freescale just to... scale a 68040 down to a new integration technology and just implement more L1 and L2?

Wouldn't that be the most compatible solution?

A smaller photomask is the way to go.

New technologies would allow a raw 400 MHz out of the processor just by scaling the design and reducing the operating voltage, and relative performance would scale right along with it.


They won't do it because there is no market for such a product large enough to cover the development costs and fab time.
Title: Re: ColdFire Project?
Post by: mumule on September 27, 2007, 06:44:45 PM
Quote

jdiffend wrote:
Quote

mumule wrote:

4. Put the CPU into an FPGA and debug it. At the end of the day it is easier to do a CPU right than to debug all the application code ...

And you are stuck running at under 40 MHz because it's not an ASIC.


Not really. By next year you should already hit 100-200 MHz.
There are enough 32-bit CPUs in FPGAs to prove that.
Title: Re: ColdFire Project?
Post by: Piru on September 27, 2007, 06:45:20 PM
@jdiffend
Quote
Actually, the disassembler I used to make the commented 2.? exec disassembly generated code that could be reassembled.

A program could be written to take that, make the patches with the existing tool and then optimize the code to remove stuff that isn't needed.
It would need to use register tracking similar to a modern optimizing C++ compiler AND would need to trace flag usage.
It would take a LONG time to build such a beast from scratch. I've spent some time in the guts of a few C compilers and it's not easy work... then add some stuff those don't have to do and it's a huge undertaking. All for a computer that has been off the market for over a decade.

Exec is quite trivial, it only has a couple of arrays in it (which have relative offsets in them).

It gets much hairier with complex code that has baserel references, or even worse references with varying base. Those are pretty much impossible to resolve without actually executing the code.

Another problem is that you can't always be sure what is code and what is data. Automation will not get that right every time.

So while it might work for some random samples, it could easily produce source code that builds OK but is actually broken, in the worst cases in a way that doesn't crash, but just generates bogus results.

There is no way to work around this with automated software alone. Human interaction and guidance are required.
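The code-versus-data ambiguity can be illustrated with a toy sketch. The one-byte "ISA" below is invented purely for illustration: a naive linear sweep decodes an embedded data table as instructions, while a recursive traversal that only follows actual control flow skips it. Note that recursive traversal only rescues this particular case; with jump targets computed at runtime it fails too, which is exactly why human guidance is needed:

```python
# Toy illustration of the code-vs-data problem. The "ISA" is invented:
# one-byte opcodes, JMP takes a one-byte absolute target.

JMP, RTS, NOP = 0x10, 0x20, 0x30

# addr:  0    1     2     3     4    5
MEM = [JMP, 0x04, 0x99, 0x77, NOP, RTS]  # bytes 2-3 are a data table

def linear_sweep(mem):
    """Decode every byte in order, data or not."""
    decoded, pc = [], 0
    while pc < len(mem):
        op = mem[pc]
        if op == JMP:
            decoded.append((pc, "jmp")); pc += 2
        elif op in (RTS, NOP):
            decoded.append((pc, "nop" if op == NOP else "rts")); pc += 1
        else:
            decoded.append((pc, "???")); pc += 1  # data misread as code
    return decoded

def recursive_traversal(mem, entry=0):
    """Decode only what control flow can actually reach."""
    decoded, work, seen = [], [entry], set()
    while work:
        pc = work.pop()
        if pc in seen or pc >= len(mem):
            continue
        seen.add(pc)
        op = mem[pc]
        if op == JMP:
            decoded.append((pc, "jmp")); work.append(mem[pc + 1])
        elif op == NOP:
            decoded.append((pc, "nop")); work.append(pc + 1)
        elif op == RTS:
            decoded.append((pc, "rts"))
        else:
            decoded.append((pc, "???"))
    return decoded

print(linear_sweep(MEM))         # misdecodes addresses 2 and 3
print(recursive_traversal(MEM))  # never touches the data bytes
```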
Title: Re: ColdFire Project?
Post by: jdiffend on September 27, 2007, 06:47:19 PM
Quote

potis21 wrote:
why not ask Freescale just to... scale the 68040 onto a new integration technology and add more L1 and L2 cache?

Wouldn't that be the most compatible solution?

A smaller photomask is the way to go.

New process technology would allow a raw 400 MHz out of the processor just by scaling the design and reducing the operating voltage, and performance would scale accordingly.


Amiga Community - "hey freescale, could you recreate the 68040 in a modern die process so we can run it at higher speeds?  And add some more cache while you are at it."

Freescale - "The die needs a rework to switch to a smaller die process since things don't just directly translate as is and it would need to be modified anyway to change the cache.  None of the original designers are still with us and it would require orders for several hundred thousand to make it economically feasible.  It would also take several man years of work.  How many did you want?"

Amiga Community - "A couple hundred... maybe even a thousand!"

Freescale - "Have you looked at the coldfire?"



Just switching the die process does not guarantee 400MHz.  All other CPUs have undergone significant design changes to the architecture to accommodate faster speeds.  And often products are delayed for months to adapt to a new die process because not everything works the same.
Remember, they abandoned the 68K line BECAUSE it wasn't scalable.

Oh, and that's before you have to interface the beast to the Amiga.  Just plugging it into an existing 040 board gains you nothing.
Title: Re: ColdFire Project?
Post by: Donar on September 27, 2007, 06:51:40 PM
Quote
Wasn't there a tool to convert 68k assembly to coldfire assembly?

The name of the tool is "PortASM" from MicroAPL.

Quote
If there is one, it should be possible to create coldfire binaries for the whole OS using a suitable disassembler and this tool.


I had the same idea but I got several errors; I must admit I know nothing about what I was doing there...
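A table-driven translator of the kind PortASM implements might look like this minimal sketch. The two limitations shown are real (ColdFire ALU instructions such as ADD are longword-only, and ROXL was dropped), but the replacement sequences, the scratch-register choice, and the helper-routine name are simplified assumptions for illustration, not actual PortASM output:

```python
# Hedged sketch of table-driven 68k -> ColdFire rewriting. The rules
# reflect real ColdFire limits, but the expansions are illustrative
# only: flag behavior differs from the originals and is ignored here.

REWRITES = {
    # ColdFire ADD operates on longwords only, so a word-sized add
    # needs a sequence that leaves d0's upper word untouched
    # (d2 is assumed to be a free scratch register).
    "add.w d1,d0": [
        "move.w d0,d2      ; copy low word of d0",
        "add.l  d1,d2      ; longword add; low 16 bits are the result",
        "move.w d2,d0      ; write back only the low word",
    ],
    # ROXL (rotate through X) does not exist on ColdFire; a real
    # translator expands it inline or calls a runtime support routine.
    "roxl.l #1,d0": ["jsr __roxl_l_1_d0  ; hypothetical helper routine"],
}

def translate(lines):
    """Pass unchanged lines through; expand the ones in the table."""
    out = []
    for line in lines:
        out.extend(REWRITES.get(line.strip(), [line]))
    return out

src = ["move.l #5,d1", "add.w d1,d0"]
for line in translate(src):
    print(line)
```

A real translator also has to cope with the problems raised earlier in the thread: it only works on assembly source, so the disassembly has to be correct first.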

 
Title: Re: ColdFire Project?
Post by: little on September 27, 2007, 09:36:26 PM
Quote
100% emulation avoids those problems but it's slow.

I suppose this would be a good option for badly behaved (OS-wise) applications or well-behaved games :-D
Title: Re: ColdFire Project?
Post by: jdiffend on September 28, 2007, 04:36:03 AM
Quote

Piru wrote:
Exec is quite trivial, it only has a couple of arrays in it (which have relative offsets in them).

LOL, trivial he says.  

Quote
It gets much hairier with complex code that has baserel references, or even worse references with varying base. Those are pretty much impossible to resolve without actually executing the code.

Another problem is that you can't always be sure what is code and what is data. Automation will not get that right every time.

So while it might work for some random samples, it could easily produce source code that builds OK but is actually broken, in the worst cases in a way that doesn't crash, but just generates bogus results.

There is no way to work around this with automated software alone. Human interaction and guidance are required.

Well, I wouldn't ever say "no workaround" as an absolute but from some of the stuff I've disassembled... it definitely requires some human involvement.  Resource(?) was a pretty good tool but I think it could have had more automation than it did.

As Amiga becomes more and more worthless we could buy up the company and have the source code.   :-D
Title: Re: ColdFire Project?
Post by: jdiffend on September 28, 2007, 04:54:18 AM
Quote

mumule wrote:
Quote

jdiffend wrote:
Quote

mumule wrote:

4. Put the CPU into an FPGA, and debug it. At the end of the day it is easier to do a CPU right than to debug all application code ...

And you are stuck running at under 40MHz because it's not an ASIC.  


Not really. By next year you should already hit 100-200 MHz.
There are enough 32-bit CPUs in FPGAs to prove that.

The most the affordable FPGAs seem to be able to manage with 8-bit CPU cores is 25 MHz.  I've heard of one doing 40 MHz, but I'm sure it's on a faster FPGA too.  The fastest FPGAs are very expensive and *might* make those speeds possible, but the FPGA alone would cost much more than most people would be willing to spend.

Not only that but just because there are cores that do it doesn't mean it was easy getting one to do it.  One of the biggest problems in achieving those speeds with a CPU core is you just about have to have a pipelined architecture with cache, burst mode RAM access... all sorts of stuff.  It's not just a simple core like most of the ones you can download.

Now, if you could afford to have a custom ASIC made... there is a Z80 core that has been made to run at over 400 MHz, so it's definitely possible with an ASIC.  And that *is* a core you can download.