
Author Topic: ColdFire Project?  (Read 7379 times)



Offline Piru

  • \' union select name,pwd--
  • Hero Member
  • Join Date: Aug 2002
  • Posts: 6946
    • http://www.iki.fi/sintonen/
Re: ColdFire Project?
« Reply #29 from previous page: September 27, 2007, 02:23:22 PM »
Quote
Some say they are probably burning much of their cycles running a JIT compiler to handle those instruction differences I mentioned between CF and 68K.

Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75, by just using the stock emulation library (no JIT). That just doesn't add up... :-)

Lie, bigger lie, benchmark by the manufacturer...
 

Offline Donar

  • Full Member
  • Join Date: Aug 2006
  • Posts: 168
Re: ColdFire Project?
« Reply #30 on: September 27, 2007, 03:05:35 PM »
Quote
To make clean ColdFire builds, you'll need to edit legacy 68K code to replace these missing and misinterpreted instructions. Luckily the list of edits will probably be small.


From C++ source you only need to set ColdFire as the target for the compiler - it will then avoid the "bad" instructions.

If you have assembler source you have to run it through "PortASM" (a tool provided by Freescale) or pick the bad ones out by hand and replace them.

Another option, if you don't have the source, would be to disassemble the executable, pick out the "baddies", and reassemble it into a new executable or provide a patch file for the original one. That is probably the least desirable approach.

That's at least how I understood it.
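To give a feel for the compiler route, here is a trivial sketch. The cross-compiler name and the -mcpu values are only an assumption about a GCC-style m68k toolchain; the exact spelling depends on the compiler you actually use.

Code:
/* Same C source, two CPU targets: hypothetical GCC-style invocations:
 *   m68k-amigaos-gcc -mcpu=68060 -O2 -c scale.c    (classic 68k build)
 *   m68k-amigaos-gcc -mcpu=5475  -O2 -c scale.c    (ColdFire V4e build)
 * With the ColdFire CPU selected, the code generator simply never emits
 * the byte/word ALU forms and addressing modes the V4e core lacks.      */
#include <stddef.h>

void scale(short *samples, size_t n, short gain)
{
    /* Word-sized math in the source is fine; the ColdFire back end
     * widens it to 32-bit operations internally. */
    for (size_t i = 0; i < n; i++)
        samples[i] = (short)((samples[i] * gain) >> 8);
}

Nothing in the C source itself has to change between the two builds.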
<- Amiga 1260 / CD ->
Looking for:
A1200/CF CFV4/@200,256MB,eAGA,SATA,120GB,AROS :D
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #31 on: September 27, 2007, 04:02:29 PM »
Quote

Piru wrote:
Quote
Some say they are probably burning much of their cycles running a JIT compiler to handle those instruction differences I mentioned between CF and 68K.

Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75, by just using the stock emulation library (no JIT). That just doesn't add up... :-)

Lie, bigger lie, benchmark by the manufacturer...

Those benchmarks were for the CPU running native code for the interrupt handlers with 68k applications.

People say it's running at 020 speeds... hmmmm... and it's in a machine with an 020.  Makes you wonder, doesn't it?  Was the ColdFire even doing the work?

Perhaps they are running 100% emulation, which would be the slowest.  They probably couldn't get enough compatibility to run the OS with the stock emulation library, which is why I said you need the OS source so you can rewrite it.

I worked on a project based on the 64180 where illegal instructions were used to replace instruction sequences commonly generated by a C compiler; the slowdown there was about 20%, so I don't doubt the manufacturer's estimates.  However, if the exec also has to use the emulation library it will be slower... I think that slowdown was 50%.  That's why I said the OS had to be ColdFire-native code.

Even if some of the OS has been converted to native code, if certain parts aren't, you might start stacking interrupts or have other unpredictable interactions between hardware and software.

Remember, any instruction that is native to the ColdFire runs at 266MHz.  That's actually most of the 68k instruction set and addressing modes.  Then you have, oh... say a penalty of 20+ instructions when you hit one that isn't.  But remember... the 4e is partially superscalar so it gets around 1 instruction per clock cycle or better.  Even if it takes 20 clocks to emulate 1 instruction it's still going to average out to be faster than an 020.
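A quick back-of-the-envelope check of that, using only the numbers above (266MHz clock, roughly 1 instruction per clock natively, roughly 20 clocks per trapped instruction). The trapped-instruction percentages are just made-up examples:

Code:
#include <stdio.h>

int main(void)
{
    const double clock_mhz   = 266.0;  /* ColdFire V4e clock              */
    const double native_cpi  = 1.0;    /* ~1 instruction per clock        */
    const double trap_cost   = 20.0;   /* ~clocks per trapped instruction */
    const double fractions[] = { 0.01, 0.05, 0.10, 0.20 };

    for (int i = 0; i < 4; i++) {
        double f   = fractions[i];
        double cpi = (1.0 - f) * native_cpi + f * trap_cost;  /* weighted average */
        printf("trapped %4.0f%% -> ~%.0f MHz worth of native-rate work\n",
               f * 100.0, clock_mhz / cpi);
    }
    return 0;
}

Even with 20% of the instructions trapping, that still works out to roughly 55MHz worth of native-rate work, comfortably ahead of a 14MHz 020.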

I noticed Oli posted that the V5 core was ready for release in 2002.  I'd guess HP offered big bucks to have an exclusive on it for several years.
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #32 on: September 27, 2007, 04:06:50 PM »
Quote

Donar wrote:
Quote
To make clean ColdFire builds, you'll need to edit legacy 68K code to replace these missing and misinterpreted instructions. Luckily the list of edits will probably be small.


From C++ source you only need to set ColdFire as the target for the compiler - it will then avoid the "bad" instructions.

If you have assembler source you have to run it through "PortASM" (a tool provided by Freescale) or pick the bad ones out by hand and replace them.

Another option, if you don't have the source, would be to disassemble the executable, pick out the "baddies", and reassemble it into a new executable or provide a patch file for the original one. That is probably the least desirable approach.

That's at least how I understood it.

It was my understanding that PortASM didn't perform any optimization, so it increases code size significantly.  And I'm not sure it was ever updated for the 4e core... I haven't looked at it in years though.
 

Offline little

  • Full Member
  • Join Date: Sep 2007
  • Posts: 223
Re: ColdFire Project?
« Reply #33 on: September 27, 2007, 04:11:07 PM »
Quote
Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75

I am no expert, but this probably has to do with the ColdFire having to use the s-l-o-w chip RAM inside the A1200. That is another reason to put the ColdFire inside a completely new AROS machine.
 

Offline little

  • Full Member
  • Join Date: Sep 2007
  • Posts: 223
Re: ColdFire Project?
« Reply #34 on: September 27, 2007, 04:15:58 PM »
Quote
1. Most instructions are run natively

I think this is the best method for any application

Quote
2. 100% emulation of the program with native OS calls

I do not understand what advantage this method has over #1 or #3 :-?

Quote
3. 100% emulation with JIT

IMO this would be the best method for running games: create a virtual Amiga in the Workbench, like WinUAE but integrated into the OS.
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #35 on: September 27, 2007, 04:23:30 PM »
Quote

little wrote:
Quote
Possibly. However, according to jdiffend it should be at least twice as fast as an 060@75

I am no expert, but this probably has to do with the ColdFire having to use the s-l-o-w chip RAM inside the A1200. That is another reason to put the ColdFire inside a completely new AROS machine.

If the ColdFire has to use 25MHz RAM it's going to run at 25MHz whenever it accesses it.  ROM access will be slow as well (and that's where most of the OS is).  If the ColdFire board has fast RAM on it, it should run at full speed there.
You'd have to move the ROM to RAM to get any kind of speed out of the thing.

Really, the Amiga was never designed with that fast a CPU in mind.  To completely take advantage of that fast a CPU you almost have to redo the entire architecture.  At the very least you'd have to run the bus at several times the speed and give the chipset access to 1 out of however many cycles to make it run at stock speed when in compatibility mode.
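To put rough numbers on that, here is a sketch that just averages access times between on-board fast RAM and the slow motherboard bus. The 266MHz and 25MHz figures are from above; the access mixes are invented for illustration:

Code:
#include <stdio.h>

int main(void)
{
    const double fast_mhz = 266.0;   /* on-board fast RAM speed        */
    const double slow_mhz =  25.0;   /* motherboard chip RAM / ROM bus */
    const double mix[]    = { 0.0, 0.10, 0.25, 0.50, 1.0 };

    for (int i = 0; i < 5; i++) {
        double m = mix[i];
        /* average time per access, weighted by where the access goes */
        double t = (1.0 - m) / fast_mhz + m / slow_mhz;
        printf("%3.0f%% of accesses on the slow bus -> effective ~%.0f MHz\n",
               m * 100.0, 1.0 / t);
    }
    return 0;
}

With even a quarter of the accesses going to the slow bus the effective rate drops below 80MHz, which is why moving the ROM into fast RAM matters so much.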
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #36 on: September 27, 2007, 04:32:11 PM »
Quote

little wrote:
Quote
1. Most instructions are run natively

I think this is the best method for any application

Yes, but illegal instructions are trapped and emulated.  Not everything can be trapped, and it's slower than ColdFire-safe code.

Quote
Quote
2. 100% emulation of the program with native OS calls

I do not understand what advantage this method has over #1 or #3 :-?

Not all differences between the ColdFire and 68k can be accounted for with #1.  Some addressing modes can't be emulated because the illegal instruction trap doesn't occur at a point where you can go back and decode the instruction properly.  And some math instructions have different behavior with regard to setting flags, but they are perfectly legal, so they can't be trapped.
100% emulation avoids those problems, but it's slow.
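For anyone wondering what method #1 looks like in practice, here is a very rough host-side sketch in C of the trap-and-emulate idea. The cpu_state layout, the fictional opcode encoding and the choice of a byte-sized add as the trapped instruction are all invented for illustration; a real handler would run in supervisor mode and work on the exception stack frame.

Code:
#include <stdio.h>
#include <stdint.h>

struct cpu_state {
    uint32_t d[8];      /* data registers D0-D7        */
    uint32_t pc;        /* program counter             */
    uint8_t  ccr;       /* condition codes (Z bit = 4) */
};

#define CCR_Z 0x04

/* Pretend this word is an instruction the ColdFire core refused to run:
 * "add the low byte of Dx into the low byte of Dy" (fictional encoding). */
static void emulate_trapped_insn(struct cpu_state *cpu, uint16_t opcode)
{
    unsigned src = (opcode >> 0) & 7;
    unsigned dst = (opcode >> 3) & 7;

    uint8_t result = (uint8_t)(cpu->d[dst] + cpu->d[src]);

    /* Only the low byte of the destination changes, as on a real 68k. */
    cpu->d[dst] = (cpu->d[dst] & 0xFFFFFF00u) | result;

    /* Re-create the 68k flag behaviour (only Z shown here). */
    if (result == 0) cpu->ccr |= CCR_Z; else cpu->ccr &= (uint8_t)~CCR_Z;

    cpu->pc += 2;       /* skip the emulated word and resume */
}

int main(void)
{
    struct cpu_state cpu = { .d = { 0x01, 0xFF }, .pc = 0x1000, .ccr = 0 };

    emulate_trapped_insn(&cpu, 0x0008);   /* "add D0's byte into D1" */

    printf("D1=%08X  Z=%d  PC=%05X\n",
           (unsigned)cpu.d[1], (cpu.ccr & CCR_Z) != 0, (unsigned)cpu.pc);
    return 0;
}

Note the handler only ever sees instructions the core refused to execute; an instruction that runs legally but sets the flags differently never reaches it, which is exactly the gap described above.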

Quote
Quote
3. 100% emulation with JIT

IMO this would be the best method for running games: create a virtual Amiga in the Workbench, like WinUAE but integrated into the OS.

Everyone probably agrees... but it's complex and takes time to write.
 

Offline Hans_

Re: ColdFire Project?
« Reply #37 on: September 27, 2007, 05:05:11 PM »
Wasn't there a tool to convert 68k assembly to ColdFire assembly? If there is one, it should be possible to create ColdFire binaries for the whole OS using a suitable disassembler and this tool.

Hans
Join the Kea Campus - upgrade your skills; support my work; enjoy the Amiga corner.
https://keasigmadelta.com/ - see more of my work
 

Offline mumule

  • Newbie
  • Join Date: Sep 2007
  • Posts: 25
Re: ColdFire Project?
« Reply #38 on: September 27, 2007, 05:22:36 PM »

jdiffend wrote:
Quote


Well, there could be several kinds of emulation.  

1. Most instructions are run natively and most unsupported instructions or addressing modes are trapped and emulated by an illegal instruction trap, then execution continues normally until the next one.  This would be very fast but would probably have a 10%-20% slowdown compared to ColdFire-safe code, and it won't run everything.  The code to do this already exists.

2. 100% emulation of the program but OS calls are executed natively by the emulator.  Faster than full emulation but slower than the first or last method.  The code could probably be extracted from an existing Amiga emulator.


3. 100% emulation with JIT generation of equivalent instruction sequences.  It would run any 68k code but would require a lot of work.  OS calls would be native and the JIT wouldn't need to examine them.


4. Put the CPU into an FPGA, and debug it. At the end of the day it is easier to get a CPU right than to debug all the application code ...
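To make the quoted methods 2 and 3 a bit more concrete, here is a minimal made-up interpreter loop in C: ordinary instructions are decoded and executed one at a time, while a recognised jump into the OS is short-circuited to a native routine. The opcodes and os_call() are invented placeholders, not real AmigaOS or emulator interfaces.

Code:
#include <stdint.h>
#include <stdio.h>

enum { OP_MOVEQ = 0x70, OP_JSR_LVO = 0x4E, OP_STOP = 0xFF };

struct m68k { uint32_t d0; uint32_t pc; };

/* Hypothetical "native OS call": the emulator recognises a jump into the
 * OS and runs ColdFire-native code instead of interpreting 68k code. */
static void os_call(struct m68k *cpu, uint8_t vector)
{
    printf("native OS routine %u called, D0=%u\n",
           (unsigned)vector, (unsigned)cpu->d0);
}

static void run(struct m68k *cpu, const uint8_t *mem)
{
    for (;;) {
        uint8_t op  = mem[cpu->pc];
        uint8_t arg = mem[cpu->pc + 1];
        cpu->pc += 2;

        switch (op) {
        case OP_MOVEQ:   cpu->d0 = arg;      break;   /* interpret  */
        case OP_JSR_LVO: os_call(cpu, arg);  break;   /* go native  */
        case OP_STOP:    return;
        default:
            printf("unknown opcode %02X\n", op);
            return;
        }
    }
}

int main(void)
{
    /* moveq #42 -> call a fictional OS vector -> stop */
    const uint8_t program[] = { OP_MOVEQ, 42, OP_JSR_LVO, 3, OP_STOP, 0 };
    struct m68k cpu = { .d0 = 0, .pc = 0 };

    run(&cpu, program);
    return 0;
}

A JIT (method 3) replaces the switch with ColdFire code generated per block of 68k instructions, but the idea of handling OS calls natively stays the same.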
 

Offline Plaz

Re: ColdFire Project?
« Reply #39 on: September 27, 2007, 05:25:02 PM »
Quote
Anybody here who wants to bring the AROS 68k port out of its unmaintained state, where it has been sitting for a long time now?


I'm told there is someone who's started some work again. I've also stuck my nose in for a look.

Plaz
 

Offline Piru

  • \' union select name,pwd--
  • Hero Member
  • Join Date: Aug 2002
  • Posts: 6946
    • http://www.iki.fi/sintonen/
Re: ColdFire Project?
« Reply #40 on: September 27, 2007, 06:05:52 PM »
@Hans_
Quote
Wasn't there a tool to convert 68k assembly to ColdFire assembly? If there is one, it should be possible to create ColdFire binaries for the whole OS using a suitable disassembler and this tool.

Not easily.

You'd need to manually disassemble everything in such a way that it can be reassembled. This is highly demanding work which requires tons of assembly knowledge and time. It cannot be automated.
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #41 on: September 27, 2007, 06:21:19 PM »
Quote

mumule wrote:

4. Put the CPU into an FPGA, and debug it. At the end of the day it is easier to get a CPU right than to debug all the application code ...

And you are stuck running at under 40MHz because it's not an ASIC.  
 

Offline potis21

  • Newbie
  • Join Date: Feb 2007
  • Posts: 18
Re: ColdFire Project?
« Reply #42 on: September 27, 2007, 06:31:36 PM »
Why not ask Freescale just to... scale a 68040 down to a newer process technology and just implement more L1 and L2 cache?

Wouldn't it be just the most compatible solution?

A smaller photomask is the way to go.

Newer process technology would allow a raw 400 MHz out of the processor just by scaling the design and reducing the operating voltage, and performance would scale proportionally.
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #43 on: September 27, 2007, 06:34:45 PM »
Quote

Piru wrote:
@Hans_
Quote
Wasn't there a tool to convert 68k assembly to ColdFire assembly? If there is one, it should be possible to create ColdFire binaries for the whole OS using a suitable disassembler and this tool.

Not easily.

You'd need to manually disassemble everything in such a way that it can be reassembled. This is highly demanding work which requires tons of assembly knowledge and time. It cannot be automated.

Actually, the disassembler I used to make the commented 2.? exec disassembly generated code that could be reassembled.

A program could be written to take that, make the patches with the existing tool and then optimize the code to remove stuff that isn't needed.  
It would need to use register tracking similar to a modern optimizing C++ compiler AND would need to trace flag usage.  
It would take a LONG time to build such a beast from scratch.  I've spent some time in the guts of a few C compilers and it's not easy work... then add some things those compilers don't have to do and it's a huge undertaking.  All for a computer that has been off the market for over a decade.
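As a sketch of the "trace flag usage" step, here is a toy backward pass in C over a made-up instruction list: it marks which instructions' condition codes are actually read later, so a translator knows where a ColdFire-safe substitute must reproduce 68k flag behaviour and where it only has to get the data result right. The instruction record and the example sequence are invented.

Code:
#include <stdbool.h>
#include <stdio.h>

enum kind { SETS_FLAGS, USES_FLAGS, NEUTRAL };

struct insn {
    const char *text;
    enum kind   kind;
    bool        flags_live;   /* filled in by the pass below */
};

/* Walk backwards: the flags an instruction produces are "live" only if
 * some later instruction reads the CCR before another one overwrites it. */
static void trace_flag_usage(struct insn *code, int n)
{
    bool live = false;                 /* nothing reads flags after the end */
    for (int i = n - 1; i >= 0; i--) {
        code[i].flags_live = live;
        if (code[i].kind == USES_FLAGS) live = true;
        if (code[i].kind == SETS_FLAGS) live = false;  /* overwritten here */
    }
}

int main(void)
{
    struct insn code[] = {
        { "add.w  d1,d0", SETS_FLAGS, false },
        { "move.l d0,a0", NEUTRAL,    false },
        { "cmp.w  #5,d0", SETS_FLAGS, false },
        { "beq    done",  USES_FLAGS, false },
    };
    int n = (int)(sizeof code / sizeof code[0]);

    trace_flag_usage(code, n);
    for (int i = 0; i < n; i++)
        if (code[i].kind == SETS_FLAGS)
            printf("%-14s flags %s\n", code[i].text,
                   code[i].flags_live ? "read later: must match the 68k"
                                      : "dead: free to differ");
    return 0;
}

In the example the cmp.w feeding the beq has to keep exact 68k flag behaviour, while the add.w whose flags are overwritten before anything reads them does not - the kind of information a hand porter uses implicitly.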
 

Offline koaftder

  • Hero Member
  • Join Date: Apr 2004
  • Posts: 2116
    • http://koft.net
Re: ColdFire Project?
« Reply #44 on: September 27, 2007, 06:39:29 PM »
Quote

potis21 wrote:
Why not ask Freescale just to... scale a 68040 down to a newer process technology and just implement more L1 and L2 cache?

Wouldn't it be just the most compatible solution?

A smaller photomask is the way to go.

Newer process technology would allow a raw 400 MHz out of the processor just by scaling the design and reducing the operating voltage, and performance would scale proportionally.


They won't do it because there is no market for such a product large enough to cover the development costs and fab time.