It was a beneficial architecture when it was introduced.
So it seemed, at least. There is nothing fundamentally wrong with the observation that a simpler CPU could run faster, but what Motorola and IBM underestimated is that memory bandwidth did not grow proportionally to CPU clock speed, and that compiler performance and optimization did not improve as much as they had hoped. Intel's IA64 is a failure as well, for related reasons. Nowadays, the CPU - due to its increased complexity - can perform the "compilation" of "abstract code" (say, x86 assembly) to its "raw form" much better than static compiler analysis can (because it sees code and execution statistics at run time), and shorter instructions also help to cut down bandwidth requirements. All of that, of course, only works if you have the power budget to drive this complex machinery.
not a fashion statement at all.
Oh, c'mon. Back in those days, it was "RISC" here, "RISC" there, all around. If you believe that engineering does not have fashion movements, you've probably not yet observed one. Currently, it's the "IoT" business and "Cloud everything" all around. Back then, "RISC" was the thing to do.
What happened over time, however, is that the fraction of a chip occupied by the RISC-vs-CISC distinction became so small that, for computers, the advantage went to whichever chip had market traction, which was x86 and is now x64.
Which is, deep inside, also a RISC design, but (wisely) with a backwards compatible high-level just-in-time CISC compiler. (-: Yes, that means added complexity.
The phone market has proved that when other factors are in play (like power usage, and licensing the CPU for use in a SoC), a RISC CPU like ARM can still sell in large numbers.
Oh, sure. But there is also a reason why ARM has Thumb code, you know? (-;
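To make the code-density point concrete, here is a back-of-the-envelope sketch. The instruction counts and the 20% Thumb overhead are invented assumptions for illustration; the only hard facts used are the encoding widths (classic ARM: 32 bits, Thumb: 16 bits).

```python
# Rough code-density comparison between classic ARM and Thumb.
# The routine size and the Thumb instruction overhead are assumed
# numbers, not measurements; only the encoding widths are real.

arm_insns = 100               # assume a routine needs 100 ARM instructions
arm_size = arm_insns * 4      # classic ARM: fixed 32-bit (4-byte) encodings

thumb_insns = 120             # assume ~20% more instructions in Thumb,
                              # since the compact encoding is less expressive
thumb_size = thumb_insns * 2  # Thumb: 16-bit (2-byte) encodings

print(arm_size, thumb_size)   # 400 240: Thumb fetches fewer bytes overall
```

Even with the extra instructions, the Thumb version moves fewer bytes through the fetch path, which is exactly the bandwidth argument.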
The Pentium Pro was Intel's first chip where the code was translated into another form, which is effectively executed by a RISC CPU. The translated code is cached, so loops are fast, etc.
Clearly. And it avoids the problem of the PPC RISC design, namely overly long instructions and low code density, by using an "abbreviated high-level" assembler syntax.
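The "translated code is cached, so loops are fast" idea can be sketched as a toy model. Everything here - the instruction names, the micro-op translations, the cache shape - is invented for illustration and is not Intel's actual front-end design; the point is only that a loop body pays the translation cost once.

```python
# Toy model of a decoded micro-op cache, in the spirit of the
# Pentium Pro front end. All encodings here are made up.

# Hypothetical mapping from "CISC-style" instructions to micro-ops.
TRANSLATIONS = {
    "add mem, reg": ["load tmp, mem", "add tmp, reg", "store mem, tmp"],
    "inc reg":      ["add reg, 1"],
    "jmp loop":     ["branch loop"],
}

class Frontend:
    def __init__(self):
        self.uop_cache = {}   # address -> already-translated micro-ops
        self.decodes = 0      # how often the slow decoder actually ran

    def fetch(self, address, instruction):
        # Hit: this loop body was translated on an earlier iteration.
        if address in self.uop_cache:
            return self.uop_cache[address]
        # Miss: pay the translation cost once, then cache the result.
        self.decodes += 1
        uops = TRANSLATIONS[instruction]
        self.uop_cache[address] = uops
        return uops

# A three-instruction loop executed 1000 times:
program = [(0, "add mem, reg"), (1, "inc reg"), (2, "jmp loop")]
fe = Frontend()
for _ in range(1000):
    for addr, insn in program:
        fe.fetch(addr, insn)
print(fe.decodes)  # 3: each instruction is decoded only once
```

The 3000 fetches cost only 3 translations, which is why the decode stage's complexity matters so much less inside hot loops.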
I'm not talking about making it as fast as a modern CPU. I'm talking about making it fast enough to run late 90's Amiga PPC software. That sounds pretty retro to me.
But, unlike the 68K, you *can* buy fast PPC chips. So, again, what's the point? Or, what's the point with PPC on Amiga anyhow? As said, the software library is not exactly "huge".
Arguing to use RTG graphics instead of custom chip graphics seems a little odd, in a thread about the Vampire, which has its own custom graphics.
Who argues against RTG graphics?
Motorola wasn't there to help fight off Intel; they were there to make money. Apple had decided to ditch the 680x0 CPUs, but Aquarius and all the other internal projects had failed. Motorola's own RISC CPU (the 88000) was a disaster, so after IBM contacted Apple and got them excited about POWER, joining up with IBM was Motorola's only chance to hold onto some of the pie.
At the time, the choice of course made sense. I certainly don't argue against it. CISC seemed to be running into a dead end, and RISC was a fashionable new toy that made huge promises - and all the arguments for it were quite reasonable, no question. The problem is just that the development did not quite work out as expected. Memory speed fell behind raw CPU power, and bandwidth and compatibility with legacy applications became more important than simplicity of the CPU design, so RISC did not meet its expectations.
I suspect they regretted choosing PPC for a long time, it was only the Pentium 4 failing that gave them any cause for celebration.
Not only Apple, but also AMD. The Pentium 4 was really a big disaster, designed more by the needs of the marketing department than by smart engineering, driven by the need to sell CPUs by the GHz number printed on them. This broke down when execution speed hit the brick wall at 4GHz, a speed at which a signal takes several clock cycles to travel from one edge of a chip to the other... Actually, it was not *that* unexpected, as the physical limits were known. I'm not clear which miracle Intel was actually hoping for.
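A quick sanity check on the "several clock cycles across a die" claim. The die size and the effective on-chip signal speed below are rough assumptions, not measured values - real long wires are RC-limited and can be slower still - but even these generous numbers give more than one cycle per crossing.

```python
# Back-of-the-envelope: how many 4 GHz clock cycles does a signal
# need to cross a chip? Die size and signal speed are assumptions.

clock_hz = 4e9                  # 4 GHz
cycle_s = 1.0 / clock_hz        # 250 ps per cycle
die_edge_m = 0.02               # assume a ~2 cm die edge
c = 3e8                         # speed of light in vacuum, m/s
signal_speed = 0.1 * c          # assume on-chip wires are far slower
                                # than c (RC delays dominate long wires)

crossing_s = die_edge_m / signal_speed
cycles_to_cross = crossing_s / cycle_s
print(round(cycles_to_cross, 2))  # about 2.67 cycles edge to edge
```

So at 4GHz you cannot even pretend the chip is a single synchronous domain anymore, which is exactly the wall the Pentium 4 pipeline ran into.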
When Intel realised they had to do something serious and went back to the Pentium 3 design and improved it to make Intel Core, then there really was no stopping them. The PPC didn't recover and even the PS3/Xbox360 cpu cores aren't that good.
The market was too small to allow improvement, Apple had to pay $$$ for the chips, and finally, the CPU design also showed its limitations, see above. If bandwidth is limited, many long and simple instructions are not the best choice. ARM targets a completely different market, where raw performance is not important, but performance per watt is. *There*, a simpler design helps, one that can be scaled up to the requirements of the product. That's something ARM is really good at - customizing CPU cores for specific needs.
But we seem to be going off topic. PPC wasn't a great choice, but it was a choice that Phase 5 made, so it would be nice to be able to run PPC software as well as taking advantage of the new software for the Vampire.
No, PPC wasn't a great choice, indeed. x86 would have been a much better choice, but one that wouldn't have been accepted by users who are driven more by ideology than technology. The x86 chips are probably a non-orthogonal mess, but they are still high-performing, powerful chips.