
Author Topic: ColdFire Project?  (Read 2568 times)


Offline koaftder

  • Hero Member
  • Join Date: Apr 2004
  • Posts: 2116
    • http://koft.net
Re: ColdFire Project?
« Reply #44 from previous page: September 27, 2007, 06:39:29 PM »
Quote

potis21 wrote:
why not ask freescale just to... scale a 68040 into new integration technology and just implement more L1 and L2 ?

Wouldn't it be just the most compatible solution?

A smaller photomask is the way to go.

New process technology would allow a raw 400 MHz out of the processor just by scaling the design and reducing the operating voltage, and performance would scale analogously.


They won't do it because there is no market for such a product that is large enough to cover development costs and fab time.
 

Offline mumule

  • Newbie
  • Join Date: Sep 2007
  • Posts: 25
Re: ColdFire Project?
« Reply #45 on: September 27, 2007, 06:44:45 PM »
Quote

jdiffend wrote:
Quote

mumule wrote:

4. Put the CPU into an FPGA, and debug it. At the end of the day it is easier to do a CPU right than to debug all application code ...

And you are stuck running at under 40MHz because it's not an ASIC.  


Not really. By next year you should already hit 100-200 MHz.
There are enough 32-bit CPUs in FPGAs to prove that.
 

Offline Piru

  • \' union select name,pwd--
  • Hero Member
  • Join Date: Aug 2002
  • Posts: 6946
    • http://www.iki.fi/sintonen/
Re: ColdFire Project?
« Reply #46 on: September 27, 2007, 06:45:20 PM »
@jdiffend
Quote
Actually, the disassembler I used to make the commented 2.? exec disassembly generated code that could be reassembled.

A program could be written to take that, make the patches with the existing tool and then optimize the code to remove stuff that isn't needed.
It would need to use register tracking similar to a modern optimizing C++ compiler AND would need to trace flag usage.
It would take a LONG time to build such a beast from scratch. I've spent some time in the guts of a few C compilers and it's not easy work... then add some stuff those don't have to do and it's a huge undertaking. All for a computer that has been off the market for over a decade.
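The flag-usage tracing mentioned in the quote can be sketched in miniature. This is a hypothetical toy, not anything from an actual converter: instructions are just tuples recording whether they set or read the condition flags, and a backward liveness scan finds flag updates that are never consumed (and could therefore be optimized away by a translator).

```python
# Hypothetical mini-IR: each instruction records whether it sets or
# reads the CPU condition flags. A backward scan finds flag-setting
# instructions whose flags are clobbered before anything reads them;
# those flag results are dead, so a translator could skip emitting
# the extra flag-fixup code for them.

def dead_flag_setters(prog):
    """Return indices of instructions whose flag results are never read."""
    dead = []
    flags_live = False          # are the current flags read later on?
    for i in range(len(prog) - 1, -1, -1):
        op, sets, reads = prog[i]
        if sets and not flags_live:
            dead.append(i)      # flags overwritten before any use
        if sets:
            flags_live = False  # this instruction defines fresh flags
        if reads:
            flags_live = True   # e.g. a Bcc consumes the flags
    return sorted(dead)

# (mnemonic, sets flags, reads flags) -- mnemonics are illustrative only
prog = [
    ("move.l d0,d1", True,  False),   # flags clobbered below: dead
    ("add.l  d2,d1", True,  False),   # flags tested by the beq: live
    ("beq    skip",  False, True),    # conditional branch reads flags
    ("clr.l  d3",    True,  False),   # flags set, never read again: dead
]
print(dead_flag_setters(prog))        # indices of removable flag updates
```

A real tool would also have to track flag liveness across branches and subroutine calls, which is exactly where the "huge undertaking" part comes in.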

Exec is quite trivial; it only has a couple of arrays in it (which have relative offsets in them).

It gets much hairier with complex code that has baserel references, or, even worse, references with a varying base. Those are pretty much impossible to resolve without actually executing the code.

Another problem is that you can't always be sure what is code and what is data. Automation will not get that right every time.

So while it might work for some random samples, it could easily produce source code that builds OK but actually generates broken code; in the worst cases it doesn't crash, but just produces bogus results.

There is no way to work around this with automated software. Human interaction and guidance is required.
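The code-versus-data problem described above can be shown with a toy linear-sweep disassembler over an invented two-byte "ISA" (the opcodes and the image below are made up purely for the demo): a data table embedded in the instruction stream decodes into a perfectly plausible instruction, and nothing in the bytes themselves reveals the mistake.

```python
# Toy illustration of the code-vs-data problem: a linear-sweep
# disassembler decodes every byte position as an instruction, so a
# data table embedded in the code stream gets decoded as bogus but
# plausible-looking instructions. The opcode table is invented.

OPCODES = {0x60: ("bra", 1), 0x4E: ("nop", 0), 0x70: ("moveq", 1)}

def linear_sweep(image):
    out, pc = [], 0
    while pc < len(image):
        op = image[pc]
        if op in OPCODES:
            name, nargs = OPCODES[op]
            out.append((pc, name))
            pc += 1 + nargs          # skip the operand bytes too
        else:
            out.append((pc, "???"))  # byte with no decoding
            pc += 1
    return out

# Layout: bra with offset 2, then a 2-byte data table (0x70, 0x4E)
# that real execution jumps over, then actual code (nop).
image = bytes([0x60, 0x02, 0x70, 0x4E, 0x4E])
listing = linear_sweep(image)
# The table byte 0x70 decodes as a plausible "moveq"; the sweep has
# no way to tell it apart from a real instruction.
print(listing)
```

Only executing (or at least path-tracing) the code reveals that bytes 2-3 are never fetched as instructions, which is Piru's point.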
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #47 on: September 27, 2007, 06:47:19 PM »
Quote

potis21 wrote:
why not ask freescale just to... scale a 68040 into new integration technology and just implement more L1 and L2 ?

Wouldn't it be just the most compatible solution?

A smaller photomask is the way to go.

New process technology would allow a raw 400 MHz out of the processor just by scaling the design and reducing the operating voltage, and performance would scale analogously.


Amiga Community - "hey freescale, could you recreate the 68040 in a modern die process so we can run it at higher speeds?  And add some more cache while you are at it."

Freescale - "The die needs a rework to switch to a smaller die process since things don't just directly translate as is and it would need to be modified anyway to change the cache.  None of the original designers are still with us and it would require orders for several hundred thousand to make it economically feasible.  It would also take several man years of work.  How many did you want?"

Amiga Community - "A couple hundred... maybe even a thousand!"

Freescale - "Have you looked at the coldfire?"



Just switching the die process does not guarantee 400MHz.  All other CPUs have undergone significant design changes to the architecture to accommodate faster speeds.  And often products are delayed for months to adapt to a new die process because not everything works the same.
Remember, they abandoned the 68K line BECAUSE it wasn't scalable.

Oh, and that's before you have to interface the beast to the Amiga.  Just plugging it into an existing 040 board gains you nothing.
 

Offline Donar

  • Full Member
  • Join Date: Aug 2006
  • Posts: 168
Re: ColdFire Project?
« Reply #48 on: September 27, 2007, 06:51:40 PM »
Quote
Wasn't there a tool to convert 68k assembly to coldfire assembly?

The name of the tool is "PortASM" from MicroAPL.

Quote
If there is one, it should be possible to create coldfire binaries for the whole OS using a suitable disassembler and this tool.


I had the same idea, but I got several errors - I must admit I know nothing about what I was doing there...
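The kind of work such a 68k-to-ColdFire converter has to do can be sketched as a table-driven rewriter. This is a simplified illustration in the spirit of such a tool, not PortASM's actual behavior or output; for example, the DBRA rule below assumes a non-negative long counter and ignores the 68k's word-sized decrement semantics.

```python
# Toy sketch of a 68k->ColdFire assembly rewriter: ColdFire dropped
# some 68k instructions, so they must be rewritten as equivalent
# sequences (or flagged for hand conversion). Rules are illustrative
# simplifications, not what a real converter emits.
import re

REWRITES = [
    # DBRA Dn,label: ColdFire has no DBcc, so decrement and re-test.
    # (Approximation: assumes a non-negative 32-bit counter.)
    (re.compile(r"dbra\s+(\w+),(\w+)"),
     lambda m: [f"subq.l #1,{m.group(1)}", f"bpl.s {m.group(2)}"]),
    # ROXL (rotate with extend) is absent from ColdFire; flag it.
    (re.compile(r"roxl\b.*"),
     lambda m: [f"; TODO hand-convert: {m.group(0)}"]),
]

def convert(lines):
    out = []
    for line in lines:
        for pat, fix in REWRITES:
            m = pat.fullmatch(line.strip())
            if m:
                out.extend(fix(m))   # expand into replacement sequence
                break
        else:
            out.append(line.strip()) # instruction also exists on ColdFire
    return out

src = ["move.l d1,d0", "dbra d2,loop"]
print(convert(src))
```

Even this toy shows why Donar hit errors: every rewrite rule changes instruction sizes and sometimes flag behavior, so blindly converting a whole disassembled OS is bound to break something.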

 
<- Amiga 1260 / CD ->
Looking for:
A1200/CF CFV4/@200,256MB,eAGA,SATA,120GB,AROS :D
 

Offline little

  • Full Member
  • Join Date: Sep 2007
  • Posts: 223
Re: ColdFire Project?
« Reply #49 on: September 27, 2007, 09:36:26 PM »
Quote
100% emulation avoids those problems but it's slow.

I suppose this would be a good option for badly behaved (OS-wise) applications, or for well-behaved games :-D
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #50 on: September 28, 2007, 04:36:03 AM »
Quote

Piru wrote:
Exec is quite trivial; it only has a couple of arrays in it (which have relative offsets in them).

LOL, trivial he says.  

Quote
It gets much hairier with complex code that has baserel references, or, even worse, references with a varying base. Those are pretty much impossible to resolve without actually executing the code.

Another problem is that you can't always be sure what is code and what is data. Automation will not get that right every time.

So while it might work for some random samples, it could easily produce source code that builds OK but actually generates broken code; in the worst cases it doesn't crash, but just produces bogus results.

There is no way to work around this with automated software. Human interaction and guidance is required.

Well, I wouldn't ever say "no workaround" as an absolute but from some of the stuff I've disassembled... it definitely requires some human involvement.  Resource(?) was a pretty good tool but I think it could have had more automation than it did.

As Amiga becomes more and more worthless we could buy up the company and have the source code.   :-D
 

Offline jdiffend

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 302
Re: ColdFire Project?
« Reply #51 on: September 28, 2007, 04:54:18 AM »
Quote

mumule wrote:
Quote

jdiffend wrote:
Quote

mumule wrote:

4. Put the CPU into an FPGA, and debug it. At the end of the day it is easier to do a CPU right than to debug all application code ...

And you are stuck running at under 40MHz because it's not an ASIC.  


Not really. By next year you should already hit 100-200 MHz.
There are enough 32-bit CPUs in FPGAs to prove that.

The most that affordable FPGAs seem to be able to manage with 8-bit CPU cores is 25MHz.  I've heard of one doing 40MHz, but I'm sure that's on a faster FPGA too.  The fastest FPGAs are very expensive and *might* make those speeds possible, but the FPGA alone would cost much more than most people would be willing to spend.

Not only that, but just because there are cores that do it doesn't mean it was easy getting one to do it.  One of the biggest problems in achieving those speeds with a CPU core is that you just about have to have a pipelined architecture with cache, burst-mode RAM access... all sorts of stuff.  It's not just a simple core like most of the ones you can download.
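The point about pipelining can be shown with back-of-the-envelope arithmetic: the clock limit is set by the longest combinational path between registers, and pipelining splits that path into shorter stages. The delay numbers below are made up purely for illustration.

```python
# Back-of-the-envelope illustration of why pipelining raises FPGA
# clock speed: fmax is limited by the slowest (critical) path, and
# splitting one long path into balanced stages shortens it.
# Delay figures are invented for the example.

def fmax_mhz(stage_delays_ns):
    """Clock limit in MHz = 1 / (longest stage delay in ns)."""
    return 1000.0 / max(stage_delays_ns)

# Unpipelined core: fetch+decode+execute in one 25 ns path
print(fmax_mhz([25.0]))           # 40 MHz ceiling

# Same logic cut into three roughly balanced pipeline stages
print(fmax_mhz([9.0, 8.0, 9.0]))  # ~111 MHz, at the cost of latency
```

The catch, as the post says, is that cutting a CPU into stages introduces hazards, stalls, and the need for caches and burst RAM access to keep the pipeline fed, which is why fast cores are much harder to build than slow ones.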

Now, if you could afford to have a custom ASIC made... there is a Z80 core that has been made to run at over 400MHz, so it's definitely possible with an ASIC.  And that *is* a core you can download.