
Author Topic: in case you are interested to test new fpga accelerators for a600/a500  (Read 39209 times)


Offline alphadec

  • Full Member
  • ***
  • Join Date: Oct 2003
  • Posts: 118
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #179 on: March 30, 2015, 08:08:11 PM »
Quote from: Lurch;787017
I would love to be able to do that; I don't see the issue with this. Why are we even saying this is a bad thing?

I don't understand the negativity; the price is right, the performance is right, and being able to slowly add features by firmware update?

An Amiga owner's dream.


I totally agree. I am totally speechless at the negativity I have been reading in this thread today. I thought every Amiga user wanted something new. Now we will soon have it, and look how some react.....
Amiga 4Ever
 

Offline wawrzon (Topic starter)

Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #180 on: March 30, 2015, 08:25:58 PM »
Negativity? Discussion and feedback are necessary, especially if we all want the project to succeed.
 

Offline matthey

  • Hero Member
  • *****
  • Join Date: Aug 2007
  • Posts: 1294
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #181 on: March 30, 2015, 08:52:03 PM »
Quote from: ElPolloDiabl;786990
@Matt Hey and Gunnar. Can you make the Coldfire compatible via a software library?

I don't think anyone involved with the Phoenix/Apollo project has considered a software library for 100% ColdFire compatibility up to ISA_C (excluding MAC, EMAC and FPU), but it could be done if there were a specific purpose and enough demand. The focus was to make the core as instruction-level (not necessarily binary) compatible with CF as is practical. Assembler source code could be converted through the use of aliases and macros, but some hand modification would likely be required. For example, MOV3Q is in A-line, which is not good for 68k compatibility, but an ISA alias could convert it to assemble as a new sign-extended longword addressing mode. It would be helpful for CF compatibility if the stack (A7) alignment could be configured to word or longword alignment, but I don't know how difficult this would be to do in hardware. The DIVSL/DIVUL encoding conflict and the different CC flags for multiplication mean that it is not possible to have 100% binary compatibility with CF in an enhanced 68k CPU.

Quote from: biggun;786995
To which ColdFire do you want to be compatible?
Which model - which ColdFire ISA?

For me to understand - can you explain why you want this?

There are libraries of ColdFire code, and compilers, that are more modern than what the 68k has. There is a ColdFire embedded market which is larger than the total 68k market (although probably shrinking) and which needs a replacement; that could be Phoenix if it were compatible enough.

Quote from: psxphill;787014
It seems like a bait and switch. Yeah, you can have 400MHz 68060 speed, except you need to port your code to it and the new code won't run on a real 68060.

For compiled code to take advantage, the compiler support and backend code would need to be updated. Adding FPU registers can be done in an orthogonal way (as I proposed anyway), which would make this job much easier. The main changes would be interleaving the FPU instructions using the additional registers and coming up with an ABI which passes FPU arguments to functions in registers instead of on the stack. I created the vbcc vclib m060.lib math library, fixed a lot of bugs and added many new C99 math functions in a few months. There could be issues with precision in the vclib code (based on the 68060FPSP) if Gunnar reduces the FPU to 64 bits. Extended precision makes it possible to avoid tricks which are needed to maintain maximum precision with only 64 bits. Personally, I would prefer to stay with extended precision for compatibility, but double precision is considerably faster in an FPGA.
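To illustrate the precision point, here is a minimal sketch (assuming the compiler maps long double to the FPU's extended format, as m68k GCC and vbcc can do) of a value that survives in extended precision but is rounded away in a 64-bit double:

Code: [Select]
#include <stdio.h>

int main(void)
{
    /* 2^60 + 1 fits in the 64-bit mantissa of extended precision,
       but not in the 53-bit mantissa of a 64-bit double. */
    long double x = 1152921504606846976.0L;   /* 2^60, extended */
    double      y = 1152921504606846976.0;    /* 2^60, double   */

    x += 1.0L;
    y += 1.0;

    printf("extended keeps the +1: %d\n", (x - 1152921504606846976.0L) == 1.0L);
    printf("double drops the +1:   %d\n", (y - 1152921504606846976.0)  == 1.0);
    return 0;
}
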

Quote from: Thomas Richter;787015
However, can anyone explain to me the use case for "move.l dx,d(PC)", or the use case for "move zero-extended to register"? Sure, that's probably all neat, but the number of applications where such an instruction is useful enough to increase speed by an amount that makes an observable difference is near zero. Let alone without new compilers or assemblers around. Yes, I can imagine that for special applications like decoding, a hand-tuned inner decoder logic could be tremendously useful and worth the manual work. But seriously, is anyone saying "OK, I'll now rewrite my application because I now have a move to dx with zero-extend instruction available, and THAT was exactly what I was missing"?

While I see limited use for PC-relative writes, I think the encoding space used cannot effectively be used for other purposes, and the protection provided by not allowing PC-relative writes is a joke. I doubt compilers would bother with creating a new model like small data or small code, but it should be possible to make tiny programs a little smaller and more efficient with PC-relative writes opened up. I would be willing to go along with whatever makes the 68k most acceptable.

The ColdFire MVS and MVZ instructions can be used very effectively by compilers and peephole-optimizing assemblers (an important consideration for an ISA). The support is already available (ready to turn on), as most compilers share the same 68k and CF backend. I'm confident that Frank Wille could have support working in vasm, with a partial benefit, in a matter of hours. Turning on CF code generation in the backend would be a little more work and requires more testing. Sure, it's not going to make a major difference, but few integer ISA changes will (the exceptions being those that improve branch performance). The applications are obvious enough. Look at the code your layers.library produces and see how many places the MVS and MVZ instructions could be used. The intuition.library is another example where these instructions would be very useful. Of course, the gain is probably only going to be a few percent in performance and code density, but it's easy because compilers can use it. Some code would barely use them at all, though.
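As a rough illustration (just a sketch; the exact instruction selection is up to the compiler, and the assembly in the comment is what I would expect rather than verified output), here is the kind of zero-extending byte access where MVZ pays off:

Code: [Select]
/* summing bytes: every load has to be zero-extended to 32 bits */
unsigned long sum_bytes(const unsigned char *p, unsigned long n)
{
    unsigned long s = 0;
    while (n--)
        s += *p++;   /* plain 68k: a clear (or AND.L #$FF) plus MOVE.B (An)+,Dx;
                        a ColdFire-style MVZ.B does the zero-extending load in one go */
    return s;
}
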

I'm surprised you never got into compilers. Your assumptions may be true most of the time, but sometimes the 68020 addressing modes and ISA changes do make a big difference. For example, you say the 64-bit multiplication instructions are rare, and they are for SAS/C, but GCC has been using them since the '90s to convert division by a constant into a multiplication. Simple code like the following compiled for the 68020 with GCC will generate a 64-bit integer instruction.

Code: [Select]
int d;                          /* read an int and print d / 3 */
scanf("%i", &d);
printf("d / 3 = %d\n", d / 3);

The GCC optimization saves quite a few cycles. The 68060 ISA designers failed to recognize that GCC was already using this effectively. I'm working on a similar but improved magic number constant generator which I hope can be incorporated into vbcc. It's possible to use magic number constants for 16-bit integer division, which GCC does not do. I may be a cycle counter because I know cycles add up, but I still go after the big fish. I pay close attention to what compilers can do and where they fail. One thing I can't fix is where programmers fail. Another example of where the 68020 makes a huge difference is something we recently fixed in vbcc. The current vclib is compiled only for the 68000, which you think is good enough for your programs. Using ldiv() generated four divisions (lacking the 68020 32-bit division instructions, and doing a division for the quotient and again for the remainder) and included a 256-byte table used for clz (lacking the 68020 BFFFO instruction). The next version of vbcc should have 68020 and maybe 68060 compiled versions of vclib, but for now I fixed ldiv() with a single inline DIVSL.L in stdlib.h when compiling for the 68020+.
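For the curious, here is a sketch of the transformation itself (my own illustration rather than GCC's actual output; it assumes a 32-bit int, long long support and an arithmetic right shift, as on the compilers discussed here). Division by 3 becomes one 32x32->64 multiply, which is a single MULS.L on the 68020+, plus a shift and a sign correction:

Code: [Select]
#include <stdio.h>

/* signed division by the constant 3 via a reciprocal "magic number" */
static int div3(int n)
{
    const long long M = 0x55555556LL;        /* (2^32 + 2) / 3          */
    int q = (int)((M * n) >> 32);            /* high 32 bits of product */
    return q + (int)((unsigned)n >> 31);     /* +1 corrects negative n  */
}

int main(void)
{
    int d;
    scanf("%i", &d);
    printf("d / 3 = %d (reference: %d)\n", div3(d), d / 3);
    return 0;
}
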

It is important to consider what the compiler developers think. They know what they need and what they can use. They should be part of the process of ISA development, but the hardware developers (or should we say Gunnar) dictate what they will get. The ISA creation process has become secretive, as can be seen by Gunnar refusing to answer questions in public (I showed how it is possible to mark an ISA with a disclaimer saying that it is for evaluation and subject to change). I tried to create an open ISA early for debate, to try to avoid exactly these types of problems, but most of the feedback I got was "there is no need yet". Even if my foresight is better than most people's hindsight, it wouldn't do me any good, because nobody listens to me no matter how right I am. The truth doesn't seem to matter anymore.
 

Offline ferrellsl

Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #182 on: March 30, 2015, 08:55:22 PM »
Quote from: Thomas Richter;787012
I'll keep this quote for the time you make some incompatible change. But honestly, why all the "extended integer unit"? What's that stuff good for?


If you don't like it, don't use it or buy it. No one is forcing you to participate in testing or making you purchase it. You obviously have a personal axe to grind based on some of your comments, or you feel you could do a better job than Gunnar. If that's the case, then put up or shut up.
 

Offline ferrellsl

Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #183 on: March 30, 2015, 09:06:13 PM »
Quote from: Thomas Richter;787000

And a nightmare for any software provider. Gunnar, we are not talking about the Intel enterprise here, where we have enough market share to force software companies into using another random extension. It's a very small scene, and as soon as instructions are added, you are also creating incompatibilities.


Seriously? Amiga hobbyists, tinkerers and programmers are in most cases intelligent enough to use said extensions or to avoid them completely. And who would be forcing you or any other programmer to use the new CPU extensions that Gunnar is adding? You act as if Gunnar were standing over you with a gun to your head or something. Again, if you don't want to use them, then don't use them and stop coming up with pointless arguments for the sake of being difficult!

As for incompatibilities, again, which ones?  Old software won't be calling any of the new extensions so your argument is moot.

If you feel so strongly about what Gunnar is doing, then why not participate in the testing and help him develop an even better solution rather than slinging mud?
« Last Edit: March 30, 2015, 09:09:16 PM by ferrellsl »
 

guest11527

  • Guest
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #184 on: March 30, 2015, 09:12:13 PM »
Quote from: ferrellsl;787027
If you don't like it, don't use it or buy it. No one is forcing you to participate in testing or making you purchase it. You obviously have a personal axe to grind based on some of your comments, or you feel you could do a better job than Gunnar. If that's the case, then put up or shut up.

Neither, nor. Nor do I have an axe to grind somewhere. The problem when modifying the ISA is the value/price ratio. The value of the above extensions is minimal; the cost is potential software incompatibility, potentially causing a lot of useless support requests for whoever creates software. There are really better ways to spend the ISA space.

Really, Gunnar and I chat frequently, and in a friendly way. But that still does not mean that one cannot have an argument from time to time. I personally would not extend the ISA in that way, or at least not for such small returns.
 

guest11527

  • Guest
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #185 on: March 30, 2015, 09:18:46 PM »
Quote from: ferrellsl;787029
Seriously?   Amiga hobbyists, tinkerers and programmers are in most cases intelligent enough to use said extensions or to avoid them completely.  
Unfortunately, history tells another story.  
Quote from: ferrellsl;787029
And who would be forcing you or any other programmer to use the new CPU extensions that Gunnar is adding?  
It's not me who is using such extensions. But it's me whose programs may or may not be impacted by users installing programs that may or may not use such extensions inappropriately. One thing I have learned over the years is that one can rarely point at a single piece of software and say "look! here is your problem!". It is often a combination of various factors, and now another complexity enters the picture that one has to take care of as a developer. That's something I would prefer to avoid.

As a user, this is probably hard to follow, I understand.  
Quote from: ferrellsl;787029
As for incompatibilities, again, which ones?  Old software won't be calling any of the new extensions so your argument is moot.
No, but users might be tempted to install "some software" on old machines that interacts with some other software on the same old machine - or they might not know or care what state their board is currently in. An ISA had better be stable.
Quote from: ferrellsl;787029
If you feel so strongly about what Gunnar is doing, then why not participate in the testing and help him develop an even better solution rather than slinging mud?

You probably don't know... I wrote the Phoenix support library for the small Vampire...
 

Offline wawrzon (Topic starter)

Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #186 on: March 30, 2015, 09:31:57 PM »
Quote from: ferrellsl;787027
If you don't like it, don't use it or buy it. No one is forcing you to participate in testing or making you purchase it. You obviously have a personal axe to grind based on some of your comments, or you feel you could do a better job than Gunnar. If that's the case, then put up or shut up.


You obviously don't know that Thor collaborates with Gunnar on the project and contributes to it, as he did to Natami.
 

Offline biggun

  • Sr. Member
  • ****
  • Join Date: Apr 2006
  • Posts: 397
    • http://www.greyhound-data.com/gunnar/
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #187 on: March 30, 2015, 09:52:47 PM »
People, all is good here.
Thor and I are old friends, so to say.
There is nothing wrong with having a technical agreement or disagreement sometimes.

Thomas's point is that as soon as you have a new CPU, people might compile software tuned for it.
In the old days we had this problem.
People compiled some application for the 68020.
Let's say, for example, AmIRC, which is not really a CPU eater.
It should also run on a 68000.
But compiled for the 020, it won't run on a 68000.
I think this is Thomas's point:
that maybe some applications for which a 68030 would be fast enough
will be compiled for Apollo, and the executable will no longer run on a 68030.
I fully agree with Thomas that this would be stupid and unneeded.

I agree with Matt that a few percent improvement here and
a few percent improvement there will add up.
An instruction like MVZ will give a percent more speed here and there,
and better register usage will give some percent too.
Maybe we get 10-15% more speed this way.
10-15% more speed might not sound like much,
but mind that the 10% alone is already the speed of a full 68030@50MHz.

I think there will be cases where it's all or nothing.
Like, for example, an H.264 decoder. There are use cases
where we know that no old system is fast enough to use them in a sensible way.
Then compiling for Phoenix and getting 15% more speed makes sense.

guest11527

  • Guest
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #188 on: March 30, 2015, 09:54:45 PM »
Quote from: matthey;787026
I don't think anyone involved with the Phoenix/Apollo project has considered a software library for 100% ColdFire compatibility up to ISA_C (excluding MAC, EMAC and FPU), but it could be done if there were a specific purpose and enough demand.
I wonder what that would be good for. Are there any serious software applications on CF that are worth looking at? CF is rather an embedded processor, and as such, rarely interacts with the user in a desktop system. Probably in your washing machine. (-:  
Quote from: matthey;787026
There are libraries of ColdFire code, and compilers, that are more modern than what the 68k has. There is a ColdFire embedded market which is larger than the total 68k market (although probably shrinking) and which needs a replacement; that could be Phoenix if it were compatible enough.
Question would be whether there is anything valuable in the libraries that can be used in Amiga application - and worth the incompatibility.  
Quote from: matthey;787026
For compiled code to take advantage, the compiler support and backend code would need to be updated. Adding FPU registers can be done in an orthogonal way (as I proposed anyway), which would make this job much easier.
And break the exec scheduler, and programs - mostly debuggers - that depend on the stack layout of the exec scheduler.  If you want to extend the FPU, it must be done in a backwards (or rather forwards) compatible way, i.e. programs that use only 8 registers continue to use the same register layout in exec.  
Quote from: matthey;787026
There could be issues with precision in the vclib code (based on the 68060FPSP) if Gunnar reduces the FPU to 64 bits. Extended precision makes it possible to avoid tricks which are needed to maintain maximum precision with only 64 bits. Personally, I would prefer to stay with extended precision for compatibility, but double precision is considerably faster in an FPGA.
Me too... If you want precise low-level arithmetic, you sometimes need access to some of the lower-order bits; i.e. for 64-bit math, one needs three additional bits to avoid round-off errors. The Motorola CPUs of course have these bits internally when they manipulate the registers, and round them away when storing the data - but if you need to work on the bits yourself (rarely!), you would want to have access to them.
Quote from: matthey;787026
While I see limited use for PC-relative writes, I think the encoding space used cannot effectively be used for other purposes, and the protection provided by not allowing PC-relative writes is a joke.
That is not quite the point. Not having PC-relative writes is a logical and orthogonal choice with the Harvard architecture Motorola follows, i.e. separate program and data space. In other words, d(PC) is a program (code) access and as such *should not* create write cycles. It is a restriction that comes from a higher design principle, and such a principle, as a leading tradition, should hopefully be followed.
Quote from: matthey;787026
I doubt compilers would bother with creating a new model like small data or small code, but it should be possible to make tiny programs a little smaller and more efficient with PC-relative writes opened up.
The question is: is it worth the potential incompatibility, i.e. forking the 68K ISA, for such small returns? An ISA fork always creates problems in software support and OS support, users installing the wrong code on the wrong machine, not knowing what they have... I would do that only for features that are "worth the investment", and not for minor stuff like that.
Quote from: matthey;787026
I would be willing to go along with whatever makes the 68k most acceptable.
Acceptable for whom and for what? I mean, how many users of the ISA will there be? Essentially, only a handful of Amiga folks.
Quote from: matthey;787026
The ColdFire MVS and MVZ instructions can be used very effectively by compilers and peephole-optimizing assemblers (an important consideration for an ISA). The support is already available (ready to turn on), as most compilers share the same 68k and CF backend. I'm confident that Frank Wille could have support working in vasm, with a partial benefit, in a matter of hours. Turning on CF code generation in the backend would be a little more work and requires more testing. Sure, it's not going to make a major difference, but few integer ISA changes will (the exceptions being those that improve branch performance). The applications are obvious enough. Look at the code your layers.library produces and see how many places the MVS and MVZ instructions could be used. The intuition.library is another example where these instructions would be very useful. Of course, the gain is probably only going to be a few percent in performance and code density, but it's easy because compilers can use it. Some code would barely use them at all, though.
But look, once again: the performance improvement from these instructions is rather minimal. The gain in code density is minimal. Is this worth forking the ISA? I really doubt it. Or rather, put another way: would anyone be willing to compile a software library in two versions, one with and one without the added instructions, and support both of them, in a "market" as small as the Amiga? I personally would not. So the "cost" here is the support a software vendor has to pay (for users that use the wrong code), and the benefit is "almost not measurable".
Quote from: matthey;787026
I'm surprised you never got into compilers. Your assumptions may be true most of the time, but sometimes the 68020 addressing modes and ISA changes do make a big difference. For example, you say the 64-bit multiplication instructions are rare, and they are for SAS/C, but GCC has been using them since the '90s to convert division by a constant into a multiplication. Simple code like the following compiled for the 68020 with GCC will generate a 64-bit integer instruction.
All nice, but again, see above... different story. Back then, when Motorola introduced the 68020 ISA, the market was active, and market participants had enough power and interest to invest some of their product development resources into porting to the new features. This is no longer the case nowadays. We are talking about a different situation. A small hobby project like this does not have the power to gain sufficient traction to make it worthwhile.

Perhaps allow me to tell you another story: there is another (older) hobby platform, the Atari 8-bits. There were a couple of user-made extensions using the 65816 (or so), a backwards-compatible extension of the 6502 in that machine. They also added instructions, 16-bit registers... all nice. It never went anywhere; no applications...

You do not get new applications by offering a 64-bit multiply or a "move byte extended" instruction. You get new applications by identifying current holes in the ISA that enable "killer features" and that address clearly identified performance bottlenecks. A "MOVEZ" does not fill a single hole. It's a nice instruction, but only that. Nice.
Quote from: matthey;787026
The GCC optimization saves quite a few cycles. The 68060 ISA designers failed to recognize that GCC was already using this effectively.  
A 64-bit multiply can, at some point, help a lot with performance (I'm talking about the quantizer in many video/audio/picture codecs; been there, done that). So yes, that is a useful addition. Now, where is the bottleneck for the other additions?
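For context, a toy sketch of the kind of quantizer step I mean (purely illustrative, not taken from any particular codec); on a 68020+ the multiply below is a single MULS.L with a 64-bit result instead of a call into a software 64-bit multiply:

Code: [Select]
/* Q16.16 fixed-point quantizer step: coeff scaled by a reciprocal step size */
static int quantize(int coeff, int inv_qstep_q16)
{
    return (int)(((long long)coeff * inv_qstep_q16) >> 16);
}
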
Quote from: matthey;787026
The current vclib is compiled only for the 68000, which you think is good enough for your programs.
The 68K is good enough in *most* cases. I may, do and will, in some cases, use extensions from the 68020 extended ISA. Rarely, but sometimes. I would probably never compile an entire program for the 68020+ only. I would probably provide, for some applications, specially tuned, hand-optimized versions of some critical routines, and then add a dispatcher in the code itself (see the sketch below). This way, you create a robust application whose users do not have to worry about which version they have to install.

The Amiga "market" is too small to support "68020+" only programs, leave alone "Apollo only" applications. On more active markets, that is an option, but not here.  
Quote from: matthey;787026
It is important to consider what the compiler developers think. They know what they need and what they can use. They should be part of the process of ISA development, but the hardware developers (or should we say Gunnar) dictate what they will get.
I don't object, but in the end, the compiler writers should also understand what the user base is. Does it even make sense to support an "Apollo-tuned" compiler given the small user base? As said, I as a developer would probably not care, except for specialized instructions or heavy-duty code, and in that case I would probably not use a compiler in the first place.
Quote from: matthey;787026
The ISA creation process has become secretive, as can be seen by Gunnar refusing to answer questions in public (I showed how it is possible to mark an ISA with a disclaimer saying that it is for evaluation and subject to change). I tried to create an open ISA early for debate, to try to avoid exactly these types of problems, but most of the feedback I got was "there is no need yet". Even if my foresight is better than most people's hindsight, it wouldn't do me any good, because nobody listens to me no matter how right I am. The truth doesn't seem to matter anymore.


I personally had no problem when talking to Gunnar. But in the end, it's his time he is investing, so it is his project. Nor do I tell you how to write the compiler, or the support libraries for the compiler. All I can ask is that you get in contact with him. I don't need to agree with him. I can only give hints, ehem, with my, ehem, usual "charm".
 

Offline ferrellsl

Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #189 on: March 30, 2015, 10:06:30 PM »
Quote from: wawrzon;787034
You obviously don't know that Thor collaborates with Gunnar on the project and contributes to it, as he did to Natami.

You are correct and I apologize to Thomas for sounding harsh. It's clear now that he wants to avoid the same mistakes that were made in the past regarding the progression of 68K processors and the Amiga. But these problems are not restricted to Amigas/68K systems at all. These problems are inherent to any CPU and its forward progression. Intel and the x86 world struggled with these same issues back in the day, especially when the 80286 and 80386 lines were released. The same problems also occurred when programmers wrote x86 code that required an FPU, but an FPU wasn't present. Back in the day, an FPU for an x86 system was an expensive option. The point I'm trying to make is that yes, there are pitfalls, but they aren't show-stoppers. And the beauty of an FPGA-based 68K system is that it can be patched at will. If problems are found, they can easily be addressed and the FPGA updated with a new flash. No need to run around with our hair on fire. If Gunnar were committing his design to hard silicon, then yes, I'd understand the concern, but he's using an FPGA for good reason.
« Last Edit: March 31, 2015, 12:12:31 AM by ferrellsl »
 

Offline kolla

Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #190 on: March 31, 2015, 12:18:28 AM »
MacOS in emulation on Phoenix/Apollo - it's like Schrödinger's cat, it seems. I'd love to see examples of applications that truly benefit from a fast CPU used with Phoenix, applications like Lightwave 3D, DPaint, Brilliance, Imagine, World Construction Set etc. I seriously do not care about Doom clones _at all_, nor do I care about WHDLoad versions of games that already work perfectly fine on an unexpanded A500. There is one video on YouTube showing X-DVE on the Vampire 600, and it is totally unimpressive.
« Last Edit: March 31, 2015, 12:28:10 AM by kolla »
B5D6A1D019D5D45BCC56F4782AC220D8B3E2A6CC
---
A3000/060CSPPC+CVPPC/128MB + 256MB BigRAM/Deneb USB
A4000/CS060/Mediator4000Di/Voodoo5/128MB
A1200/Blz1260/IndyAGA/192MB
A1200/Blz1260/64MB
A1200/Blz1230III/32MB
A1200/ACA1221
A600/V600v2/Subway USB
A600/Apollo630/32MB
A600/A6095
CD32/SX32/32MB/Plipbox
CD32/TF328
A500/V500v2
A500/MTec520
CDTV
MiSTer, MiST, FleaFPGAs and original Minimig
Peg1, SAM440 and Mac minis with MorphOS
 

Offline ElPolloDiabl

  • Hero Member
  • *****
  • Join Date: May 2009
  • Posts: 1702
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #191 on: March 31, 2015, 02:42:39 AM »
@above
There isn't really an Amiga market. There would be about three programs that you might actually purchase. The rest would be free utilities.
Go Go Gadget Signature!
 

Offline Lurch

  • Lifetime Member
  • Hero Member
  • *****
  • Join Date: Dec 2003
  • Posts: 1716
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #192 on: March 31, 2015, 07:37:39 AM »
I think it's just time to cut loose and go all out, none of this half-arsed stuff. It's 2015; time for the MHz to hit three figures, and then maybe new software and games will start making an appearance.

Just look at all the new games being released on EAB because the new cards produced by Jens are readily available.

Again, ready to get involved and see where this goes :-)

Oh, I have a Rev6 A500 and another A500 Plus on its way :-)
-=[LurcH]=-
A500 Plus Black 030@40MHz 128MB | A1200T 060@80MHz 320MB | Pegasos II G4@1GHz 1GB  | Amiga Future Sub
 

Offline xboxOwn

  • Jr. Member
  • **
  • Join Date: Mar 2015
  • Posts: 97
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #193 on: March 31, 2015, 07:41:35 AM »
Quote from: Lurch;787052
I think it's just time to cut loose and go all out, none of this half-arsed stuff. It's 2015; time for the MHz to hit three figures, and then maybe new software and games will start making an appearance.

Just look at all the new games being released on EAB because the new cards produced by Jens are readily available.

Again, ready to get involved and see where this goes :-)

Oh, I have a Rev6 A500 and another A500 Plus on its way :-)

Honestly, I do not mind if, after the release of the Vampire XXX series, all games from that point on demand the new CPU, even if it is not pure 68k anymore.

I have no objection to that at all. I just want to see new games coming to my A500 again. I... I want to BUY games for my A500. I want that box again, that manual again, that disk again, that price tag again. I spend 70 dollars on PS4 games... you don't think I can cough up 40 bucks for an Amiga game?? I will treat my A500 as a console, that is all. Another console to buy games for.

I hope this dream actually becomes reality.
 

Offline Lurch

  • Lifetime Member
  • Hero Member
  • *****
  • Join Date: Dec 2003
  • Posts: 1716
Re: in case you are interested to test new fpga accelerators for a600/a500
« Reply #194 on: March 31, 2015, 07:49:37 AM »
Quote from: xboxOwn;787053

I just want to see new games coming to my A500 again. I... I want to BUY games for my A500. I want that box again, that manual again, that disk again, that price tag again. I spend 70 dollars on PS4 games... you don't think I can cough up 40 bucks for an Amiga game?? I will treat my A500 as a console, that is all. Another console to buy games for.


I think there is a market for that too; it's kind of why I got the Amiga Future sub. I had forgotten how good it was to read a real magazine.

I also think there is room in the market for new games that take advantage of faster CPUs. I would love to see some more taking advantage of the 030s out there.

Even if they are just improvements over old-school platformers like Ruff 'n' Tumble, Bubble 'n Squeak, Bubba 'n Stix or The Chaos Engine I/II. Push what the 030 is capable of.

I'd buy an updated version of any of those, especially Ruff 'n' Tumble.
-=[LurcH]=-
A500 Plus Black 030@40MHz 128MB | A1200T 060@80MHz 320MB | Pegasos II G4@1GHz 1GB  | Amiga Future Sub