Amiga.org
Amiga computer related discussion => Amiga Hardware Issues and discussion => Topic started by: Iggy on April 07, 2012, 02:00:47 PM
-
Freescale is sending my company two MCF54452VR266 samples.
These are V4 Coldfire processors that run at 266 MHz with built-in support for USB and PCI.
I'm going to examine how hard it would be to interface these with an FPGA based board.
-
Just wondering - what'll you use the Coldfires for?
-
Just wondering - what'll you use the Coldfires for?
I'm not even sure.
I originally requested them to explore building a replacement for an old 68K design my company used to sell.
When I realized Freescale still had a record of the request I asked them to ship them.
I don't think approaches like the Firebee make much sense, but I would like to see how well this V4 variant supports PCI, USB and networking.
And it's cheap.
-
Worth having a look at the Atari Coldfire project. It's almost finished and looks amazing! It also uses the v4e Coldfire.
Also worth googling the Elbox Dragon (Amiga Coldfire upgrade prototype)
-
Worth having a look at the Atari Coldfire project. It's almost finished and looks amazing! It also uses the v4e Coldfire.
Also worth googling the Elbox Dragon (Amiga Coldfire upgrade prototype)
I'm familiar with both. The V4 variant I'm exploring is a little different from the one used in the Firebee.
I think the Elbox project didn't go anywhere since OS3.1-3.9 source code is not available for re-compilation.
Of course now we have AROS68K.
That might make a good base for a Coldfire based Amiga.
-
Hi,
Great news!!
See if you can get one of those chips running on an old 3640 Amiga 4000 accelerator board. I know lots of people that would be interested. A lot of people here on Amiga.org would be interested in an upgrade to the old 68000 chip in their machines.
march on old valiant one.
smerf
-
Hi,
Great news!!
See if you can get one of those chips running on an old 3640 Amiga 4000 accelerator board. I know lots of people that would be interested. A lot of people here on Amiga.org would be interested in an upgrade to the old 68000 chip in their machines.
march on old valiant one.
smerf
THAT is an interesting idea Smerf.
How about a replacement for the 3640?
-
A 266 MHz CPU would severely starve on motherboard RAM. There's no way you can avoid a local memory subsystem. While you're at it, don't forget to provide means to use the PCI bus (at minimum, a connector enabling you to route the bus to a replacement daughter board).
Coldfires have several software compatibility issues, so it'll be quite a feat to make present software run without problems.
-
A 266 MHz CPU would severely starve on motherboard RAM. There's no way you can avoid a local memory subsystem. While you're at it, don't forget to provide means to use the PCI bus (at minimum, a connector enabling you to route the bus to a replacement daughter board).
Coldfires have several software compatibility issues, so it'll be quite a feat to make present software run without problems.
How about a Coldfire processor linked to its own memory and an FPGA that emulates the Amiga chipset?
Running a re-compiled version of AROS68K with the CF68KLib library to help run 68K code.
-
See if you can get one of those chips running on an old 3640 Amiga 4000 accelerator board. I know lots of people that would be interested. A lot of people here on Amiga.org would be interested in an upgrade to the old 68000 chip in their machines.
Unfortunately this is not possible, as there are differences between the 68k and the ColdFire.
-
How about a Coldfire processor linked to its own memory and an FPGA that emulates the Amiga chipset?
Running a re-compiled version of AROS68K with the CF68KLib library to help run 68K code.
This won't help. The incompatibilities also include the actual applications, most of which you cannot recompile.
The Coldfire just doesn't work for running Amiga 68k applications.
-
In theory, you could modify the binary loader to patch everything it loads by means of a database.
While this wouldn't catch anything loaded by other methods or self-modifying code, it would probably work on 95% of the software. However, we'd have a very hard time building the patch database (which would require automated code search and visual inspection of candidates).
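To make the idea a bit more concrete, here is a very rough sketch of the loader-side lookup (everything here is invented for illustration: the structures, the simple checksum, and the hypothetical LoadSeg() replacement that would call it):

#include <stdint.h>
#include <stddef.h>

struct patch {
    uint32_t offset;       /* byte offset into the code hunk        */
    uint8_t  old_byte;     /* expected original byte (sanity check) */
    uint8_t  new_byte;     /* ColdFire-safe replacement             */
};

struct patch_entry {
    uint32_t            code_sum;   /* identifies a known 68k binary */
    size_t              npatches;
    const struct patch *patches;
};

/* a simple rotating checksum stands in for a real CRC here */
static uint32_t hunk_checksum(const uint8_t *code, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (sum << 1 | sum >> 31) ^ code[i];
    return sum;
}

/* called by the hypothetical LoadSeg() replacement on every code hunk */
int apply_known_patches(uint8_t *code, size_t len,
                        const struct patch_entry *db, size_t db_len)
{
    uint32_t sum = hunk_checksum(code, len);

    for (size_t i = 0; i < db_len; i++) {
        if (db[i].code_sum != sum)
            continue;
        for (size_t j = 0; j < db[i].npatches; j++) {
            const struct patch *p = &db[i].patches[j];
            if (p->offset < len && code[p->offset] == p->old_byte)
                code[p->offset] = p->new_byte;     /* apply the fix */
        }
        return 1;   /* known binary, patches applied */
    }
    return 0;       /* unknown binary, load as-is    */
}

The hard part is not this lookup but filling the database in the first place, as noted above.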
-
I assume Elbox abandoned the Dragon because they couldn't overcome the 68k compatibility issues. There was a show held in Poland a few years ago; Elbox were in attendance and demonstrated the Dragon, but as I recall it appeared to be running slower than a bog-standard 040 card.
-
In theory, you could modify the binary loader to patch everything it loads by means of a database.
While this wouldn't catch anything loaded by other methods or self-modifying code, it would probably work on 95% of the software. However, we'd have a very hard time building the patch database (which would require automated code search and visual inspection of candidates).
While I disagree with Piru that it wouldn't work entirely, he does have a point that compatibility is limited. Also, you've brought up something that always really bugged me: self-modifying code. Without a doubt one of the worst programming practices I've ever run into.
And impossible to patch for.
The first thing anyone following this idea of mine is going to have to accept is that OS' like MorphOS will have BETTER compatibility with Amiga 68K code than a Coldfire-based re-implementation.
The MorphOS JIT compiler can handle on the fly translation of 68K instructions.
The approaches needed for a slower processor like the Coldfire include the CF68KLib library to trap unsupported instructions, patching binaries before execution, possible 68K emulation software (much slower than a PPC), and recompilation.
It can be done, but some software simply isn't going to run on a system like this.
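For what it's worth, the trap route works roughly like this. A hedged C sketch (the frame layout and decoding are made up for illustration and this is not the real CF68KLib interface): a 68K-only instruction such as MOVEP raises an illegal-instruction exception on the Coldfire, and the handler emulates it before resuming.

#include <stdint.h>

struct cpu_frame {
    uint32_t d[8], a[8];   /* data and address registers     */
    uint32_t pc;           /* address of the faulting opcode */
    uint16_t sr;           /* status register                */
};

/* MOVEP.W Dx,(d16,Ay) was dropped from ColdFire: it writes the low word of
 * Dx to alternate bytes at Ay+d16 and Ay+d16+2.  Only this one form is
 * handled here; a real library covers dozens of cases. */
static int emulate_movep_w_to_mem(struct cpu_frame *f, uint16_t op)
{
    uint32_t dreg = (op >> 9) & 7;
    uint32_t areg =  op       & 7;
    /* 16-bit displacement follows the opcode (big-endian target assumed) */
    int16_t  disp = *(int16_t *)(uintptr_t)(f->pc + 2);
    uint8_t *dst  = (uint8_t *)(uintptr_t)(f->a[areg] + (int32_t)disp);

    dst[0] = (f->d[dreg] >> 8) & 0xff;   /* high byte to even address  */
    dst[2] =  f->d[dreg]       & 0xff;   /* low byte two bytes later   */
    f->pc += 4;                          /* skip opcode + displacement */
    return 1;
}

/* hooked into the illegal-instruction exception vector */
int illegal_insn_handler(struct cpu_frame *f)
{
    uint16_t op = *(uint16_t *)(uintptr_t)f->pc;

    if ((op & 0xf1f8) == 0x0188)          /* MOVEP.W Dx,(d16,Ay) */
        return emulate_movep_w_to_mem(f, op);

    return 0;   /* genuinely unknown: hand it to the OS's own handler */
}

Every trapped instruction costs an exception round-trip, which is why heavily affected code crawls compared to recompiling it.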
-
Joska mentions about 90% compatibility with the ColdFire on the Atari with its patching and traps:
http://www.amiga.org/forums/showthread.php?t=60771&page=3
A 68k fpga processor can handle the self modifying code better than the 68040, 68060 or ColdFire. They should be more compatible also but we will have to see what the performance is like. I expect the Natami Apollo fpga CPU will be competitive with a fast 68060. A ColdFire V4 should be faster with ColdFire code. Maybe the ColdFire could be used for I/O and (DSP like) sound processing. The fpga could then run the Apollo core if your fpga is big enough or the fpgaArcade core otherwise.
-
Joska mentions about 90% compatibility with the ColdFire on the Atari with its patching and traps:
http://www.amiga.org/forums/showthread.php?t=60771&page=3
A 68k fpga processor can handle the self modifying code better than the 68040, 68060 or ColdFire. They should be more compatible also but we will have to see what the performance is like. I expect the Natami Apollo fpga CPU will be competitive with a fast 68060. A ColdFire V4 should be faster with ColdFire code. Maybe the ColdFire could be used for I/O and (DSP like) sound processing. The fpga could then run the Apollo core if your fpga is big enough or the fpgaArcade core otherwise.
Interesting idea. I wonder how well a 68K softcore and another processor could co-exist.
-
Interesting idea. I wonder how well a 68K softcore and another processor could co-exist.
That would be the tricky part. I expect you would run a kind of simple service and overseer OS on the CF. This isn't unlike the ARM CPU on the fpgaArcade. The Natami may be able to have the 68060 and fpga Apollo active at the same time. Synchronizing and sharing data is the difficult part. It's already done with the Amiga CPU and the custom chips' "processors" like the copper and blitter.
-
@Iggy
You've piqued my interest. What company do you work for, some sort of embedded computer company?
Aside from that, I am very interested in this.
If the Atari Coldfire system is pretty good with compatibility, wouldn't it be possible to use a sort of application layer that could translate instructions to Coldfire-specific code?
If not, another model would be to recompile the applications we do have the source for, put the Coldfire on a board with a 68k, and have a "wrapper" that is tested with applications before release; based on the results, work out a way to send certain applications to the 68k and others to the Coldfire.
-
Recompiling source code will be fine - you 'just' require the source. However, some instructions behave differently, so without code analysis you will never know if the code at hand works OK.
-
Recompiling source code will be fine - you 'just' require the source. However, some instructions behave differently, so without code analysis you will never know if the code at hand works OK.
If you recompile the code for ColdFire, you could put a ColdFire identifier for the code in the executable. This is how PPC code in an Amiga executable works. There is probably already a ColdFire ELF code identifier if using AROS. vbcc/vasm may already be able to compile and assemble it. I have assembled ColdFire code on my Amiga into an executable with vasm. I disassembled it, which verified that it did use the ColdFire instructions.
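For classic hunk executables the marking could look something like this rough sketch (the HUNK_CF_CODE value is invented here by analogy with how PPC code hunks were marked, and a real loader validates far more than this):

#include <stdint.h>
#include <stdio.h>

#define HUNK_HEADER  0x3F3
#define HUNK_CODE    0x3E9
#define HUNK_CF_CODE 0x4EA   /* hypothetical ColdFire code hunk type */

/* hunk files are big-endian, so read longwords byte by byte */
static uint32_t get32(FILE *f)
{
    uint8_t b[4] = {0};
    fread(b, 1, 4, f);
    return (uint32_t)b[0] << 24 | (uint32_t)b[1] << 16 |
           (uint32_t)b[2] << 8  | (uint32_t)b[3];
}

/* returns 1 for ColdFire code, 0 for plain 68k code, -1 if unreadable */
int first_hunk_is_coldfire(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    if (get32(f) != HUNK_HEADER) { fclose(f); return -1; }

    /* skip resident library names: longword-counted strings, 0-terminated */
    for (uint32_t n; (n = get32(f)) != 0; )
        fseek(f, 4L * n, SEEK_CUR);

    get32(f);                                    /* table size     */
    uint32_t first = get32(f);
    uint32_t last  = get32(f);
    fseek(f, 4L * (last - first + 1), SEEK_CUR); /* per-hunk sizes */

    uint32_t type = get32(f) & 0x3FFFFFFF;       /* strip memory flag bits */
    fclose(f);

    if (type == HUNK_CF_CODE) return 1;
    if (type == HUNK_CODE)    return 0;
    return -1;
}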
-
If you recompile the code for ColdFire, you could put a ColdFire identifier for the code in the executable. This is how PPC code in an Amiga executable works. There is probably already a ColdFire ELF code identifier if using AROS. vbcc/vasm may already be able to compile and assemble it. I have assembled ColdFire code on my Amiga into an executable with vasm. I disassembled it, which verified that it did use the ColdFire instructions.
He was clearly referring to the fact that you cannot execute 68k code in a transparent manner, even if using the compatibility library, not on how to detect CF binaries.
-
He was clearly referring to the fact that you cannot execute 68k code in a transparent manner, even if using the compatibility library, not on how to detect CF binaries.
You guys should pay attention to Piru's comments and my own on compatibility issues.
The Firebee crew has recompiled their entire OS and uses three different methods/workarounds to address Coldfire incompatibility.
And still their system only runs about 50% of their software.
They're currently considering replacing the CF68KLib library with a more effective tool.
A Coldfire implementation is NOT going to have the compatibility of a 68K system, an FPGA-based system, or emulation.
It will be significantly faster on software that can be recompiled, patched, or trapped with effective software.
But it's still a Coldfire-native system, not a 68K.
Oh, sorry, I forgot to mention: from the mid '80s to early '90s I worked for Delmar Co in Middletown, DE, where we developed and sold 68K-based systems running Microware's OS9.
I developed a relationship with Motorola Semiconductor back then (obtaining early XC samples for all kinds of things like the 6829MMU for the 6809 processor).
Since then, I've maintained my Delaware business license and still do some consulting work (much less than in the past because it doesn't pay as well as it used to).
Freescale still offers me samples of specific items I want to work with. The last time I asked them for something was when I was investigating the MPC8640/8641.
Curiously enough, it was Paul Gentle (@ Varisys) that convinced me to look at Freescale's QorIQ line instead of focusing on the e600 core.
Currently, of all the PPCs in production, Freescale's products based on the new 64-bit e5500 and e6500 cores look the most promising.
-
@Iggy.
I've been paying attention, and I now think your best bet would be to try and do this, if you are interested in pursuing it. Somehow set up this CF board as a co-processor, in Phase5 card fashion or similar, and from there get AROS recompiled for CF and run it similarly to 3.9/WarpOS. But I think it would be better to develop an Amiga PCI card for other systems and have it emulate, say, an ECS system. It would pair well with MOS, and you could have slots for the CPU and the chipsets from an Amiga, granting otherwise dead Amigas a new lease on life.
-
@Iggy.
I've been paying attention, and I now think your best bet would be to try and do this, if you are interested in pursuing it. Somehow set up this CF board as a co-processor, in Phase5 card fashion or similar, and from there get AROS recompiled for CF and run it similarly to 3.9/WarpOS. But I think it would be better to develop an Amiga PCI card for other systems and have it emulate, say, an ECS system. It would pair well with MOS, and you could have slots for the CPU and the chipsets from an Amiga, granting otherwise dead Amigas a new lease on life.
I like all the positive ideas, but right now I just need something I can develop with.
I'm currently looking at 68K compiler tools I can run on my x86 system, and packages like PortAsm/68K for Coldfire which will allow me to move code to a Coldfire processor.
-
Wouldn't it be better to look at the V5?
-
Wouldn't it be better to look at the V5?
It absolutely would, but Freescale won't sell you one.
The only place you find the V5 is in laser printer engines.
I'm not into salvaging such a complex BGA.
-
Ohh, didn't know that. That sucks.
-
I still don't understand why someone can't simply create a compiler that treats a 68k executable and libraries as intermediate code and creates a compiled native CF version.
In this method, you could then just throw files at it and produce CF binaries, swap out binaries in your app folder and just launch the native CF version(s)...
-
I still don't understand why someone can't simply create a compiler that treats a 68k executable and libraries as intermediate code and creates a compiled native CF version.
There is no way to tell which part of the binary is code and which is data. If you translate data in a similar manner you'll just corrupt it.
It is also extremely difficult if not impossible to handle code that calculates a checksum of the code itself, or does dynamic relocation and/or modification to the code before execution (not all self-modifying code is bad; patching the code before running it is quite common).
In short, it cannot be done.
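As a trivial illustration of the checksum problem (plain C rather than 68k; the expected value and the function-pointer arithmetic are purely illustrative): a program that sums its own code bytes will reject any statically rewritten copy of itself, even when every instruction was translated correctly.

#include <stdint.h>
#include <stdio.h>

#define EXPECTED_SUM 0x1234u   /* value baked in when the original shipped */

static unsigned protected_work(void) { return 42; }
static unsigned end_marker(void)     { return 0;  }

int main(void)
{
    /* walk our own machine code (non-portable, for illustration only) */
    const uint8_t *p   = (const uint8_t *)protected_work;
    const uint8_t *end = (const uint8_t *)end_marker;
    unsigned sum = 0;

    while (p < end)
        sum += *p++;

    if (sum != EXPECTED_SUM) {
        puts("code changed (or translated) - refusing to run");
        return 1;
    }
    return (int)protected_work();
}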
-
There is no way to tell which part of the binary is code and which is data. If you translate data in a similar manner you'll just corrupt it.
What you do know is that the program entry is code. You can start from there and put traps in sections where you are not sure (e.g. jmp tables, etc.). You could combine it with a cache that remembers translated code.
In a later stage you can make tools that make binary patches from this cache, so this information can be distributed or updated and the loader doesn't need to find out each time it loads a program.
Non-trivial job, I agree, but what would we hobby programmers do otherwise?
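To make the cache part concrete, a minimal sketch (all names invented, and the actual translator is only stubbed out): translated blocks are remembered by their original 68k address, and a trap stub asks the cache first, only translating a block the first time it is reached.

#include <stdint.h>
#include <stdlib.h>

#define CACHE_SIZE 1024   /* open-addressed hash table, power of two */

struct xlat_entry {
    uint32_t src_pc;      /* original 68k address             */
    void    *cf_code;     /* translated ColdFire-native block */
};

static struct xlat_entry cache[CACHE_SIZE];

/* placeholder: a real translator decodes 68k instructions starting at
 * src_pc and emits equivalent ColdFire code; here it only reserves space */
static void *translate_block(uint32_t src_pc)
{
    (void)src_pc;
    return malloc(32);
}

void *lookup_or_translate(uint32_t src_pc)
{
    uint32_t slot = (src_pc >> 1) & (CACHE_SIZE - 1);

    while (cache[slot].cf_code != NULL) {
        if (cache[slot].src_pc == src_pc)
            return cache[slot].cf_code;          /* already translated */
        slot = (slot + 1) & (CACHE_SIZE - 1);    /* linear probing     */
    }

    cache[slot].src_pc  = src_pc;
    cache[slot].cf_code = translate_block(src_pc);
    return cache[slot].cf_code;
}

/* the stub planted at every unresolved jump target ends up here with the
 * 68k address the program tried to reach, then jumps to the result */
void *trap_dispatch(uint32_t target_68k_pc)
{
    return lookup_or_translate(target_68k_pc);
}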
greets,
Staf.
-
What you do know is that the program entry is code. You can start from there and put traps in sections where you are not sure (e.g. jmp tables, etc.). You could combine it with a cache that remembers translated code.
In a later stage you can make tools that make binary patches from this cache, so this information can be distributed or updated and the loader doesn't need to find out each time it loads a program.
Non-trivial job, I agree, but what would we hobby programmers do otherwise?
greets,
Staf.
I think the key here is that any work you would need to do to get 68k code executing on the ColdFire would apply equally to faster, cheaper processors.
I still think Iggy should have some fun with this project (this is the whole point of AROS after all), but be under no illusions of good or even passable 68k compatibility :)
-
What you do know is that the program entry is code. You can start from there and put traps in sections where you are not sure (e.g. jmp tables, etc.). You could combine it with a cache that remembers translated code.
In a later stage you can make tools that make binary patches from this cache, so this information can be distributed or updated and the loader doesn't need to find out each time it loads a program.
No, it still won't work. This doesn't account for code that dynamically jumps into various parts of code or performs run-time modifications to the code. Static analysis cannot account for these.
The only way to do this reliably is to perform the translation run-time, that is JIT.
-
What you do know is that the program entry is code. You can start from there and put traps in sections where you are not sure (e.g. jmp tables, etc.).
This is what I was referring to - you should be aware that this is a manual task as exhaustively doing this in software is not possible (except for trivial programs). Someone has to check/patch the 68k code and (hopefully) share his findings with others through a database on the 'net which the binary loader uses in turn to patch code while it's being loaded. Not likely to happen.
-
No, it still won't work. This doesn't account for code that dynamically jumps into various parts of code or performs run-time modifications to the code. Static analysis cannot account for these.
The only way to do this reliably is to perform the translation run-time, that is JIT.
Piru is absolutely right that self-modifying code will never work via a translation process, thus requiring JIT.
And as bloodline has pointed out, this is better suited to a faster processor.
And again, bloodline has pointed out that this is just a project for the fun of it.
Don't expect me to leave behind NG OS' anytime soon.
-
No, it still won't work. This doesn't account for code that dynamically jumps into various parts of code or performs run-time modifications to the code. Static analysis cannot account for these.
For run-time modification I agree, but a dynamic jmp should be trappable by adding trap code like a debugger does.
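Roughly the same trick a debugger uses for breakpoints; a small sketch (illustrative only, names made up): plant a TRAP opcode at every address a computed jump might reach, remember the word it replaced, and let the trap handler look the original up when it fires.

#include <stdint.h>
#include <stddef.h>

#define TRAP_OPCODE 0x4E40u   /* 68k TRAP #0 */
#define MAX_BREAKS  256

struct breakpoint {
    uint16_t *addr;       /* where the trap was planted    */
    uint16_t  original;   /* the instruction word it hides */
};

static struct breakpoint breaks[MAX_BREAKS];
static size_t nbreaks;

/* plant a trap at a possible jump target found during analysis */
int plant_trap(uint16_t *addr)
{
    if (nbreaks >= MAX_BREAKS)
        return 0;
    breaks[nbreaks].addr     = addr;
    breaks[nbreaks].original = *addr;
    *addr = TRAP_OPCODE;      /* execution here now enters our handler */
    nbreaks++;
    return 1;
}

/* called from the TRAP exception handler with the faulting address;
 * returns the original word so it can be translated or emulated there */
uint16_t original_word_at(const uint16_t *addr)
{
    for (size_t i = 0; i < nbreaks; i++)
        if (breaks[i].addr == addr)
            return breaks[i].original;
    return TRAP_OPCODE;   /* not ours: a genuine TRAP #0 in the program */
}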
greets,
Staf.
-
For run-time modification I agree, but a dynamic jmp should be trappable by adding trap code like a debugger does.
greets,
Staf.
I agree that that could be trapped, leaving only self-modifying code as a problem.
Besides, it's inevitable that some software simply isn't going to run on a system like this (unless it also supports 68k emulation or a soft core).