lou_dias wrote:
alexh wrote:
AJCopland wrote:
the Wii is at least as powerful as the XBOX - minus the programmable shaders but with a much more powerful fixed-function pipeline. Gives it a teeny bit of an edge in terms of throughput compared to the XBOX, plus the CPU's a lil' quicker.
All not true. The main 730MHz PowerPC CPU is (according to the games coders) less powerful than the 733MHz Celeron in the XBOX1 (at least at the API level), and the GPU has lower throughput. Add the lack of HD support (XBOX-1 will happily do 720p and, at a push, 1080i) and the Wii is not a true 7th gen console.
LOL! The GC's 485MHz cpu had more cache than that Celeron/P3 hybrid, executed more instructions per clock cycle, and had faster memory access with lower latency.
.
The G3 front-end issues up to 3 instructions per cycle. Gekko supports 64-bit (2x 32-bit) SIMD instruction issue; its SIMD is closer to AMD's 64-bit (2x 32-bit) original 3DNow! (non-Pro) instruction set.
The Pentium III front-end also issues up to 3 instructions per cycle, but it supports 128-bit (4x 32-bit) SIMD instruction issue via SSE.
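To make that width difference concrete, here's a rough C sketch, not console code: the SSE half uses real xmmintrin.h intrinsics, while the paired-single half is just plain C standing in for what one Gekko paired-single add would do (there's no portable intrinsic for it, so treat it as illustration only).

#include <xmmintrin.h>  /* SSE intrinsics, Pentium III class */

/* Paired-single style: one "instruction" worth of work = 2 floats.
   Plain C stand-in for Gekko's paired-single add; not real PPC code. */
static void add2(const float *a, const float *b, float *out)
{
    out[0] = a[0] + b[0];
    out[1] = a[1] + b[1];
}

/* SSE style: one instruction = 4 floats on an XMM register. */
static void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));
}

Same 3-wide front-end on both chips, but each SSE op covers twice as many lanes, which is the 64-bit vs 128-bit point above.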
Also the GC's Star Wars Rogue Squadron III: Rebel Strike from Factor 5 put out 20,000,000 polygons in real time, and the best Xbox game did 18,000,000 polygons. The only thing that held the GC back was RAM (24MB of 1T-SRAM and 16MB of SDRAM, plus 3MB of embedded framebuffer/texture cache). Now the Wii's gpu is internally 3x faster than the GC's and the cpu is 1.5x faster than the GC's. Also the Wii has 2 separate banks of memory (64MB of GDDR3 and 24MB of 1T-SRAM, that 24MB sitting in the gpu package) that can be accessed by both the cpu and gpu, for a total of 88MB; in addition the GC/Wii gpu has 3+MB of EFB + texture cache, akin to the 360's 10MB. The Xbox only had 64MB shared for everything.
.
Due to its CISC nature, x86 has denser instruction encoding, so more instructions fit into each cache line and bus transfer.
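To put a number on that, here are the usual encodings for a plain register-to-register add, quoted from memory so double-check before relying on them: 2 bytes on x86 versus a fixed 4 bytes on PowerPC.

#include <stdio.h>

/* The same "add two registers" operation as raw machine code.
   x86 (variable-length CISC):  add eax, ebx   -> 01 D8
   PowerPC (fixed 32-bit RISC): add r3, r3, r4 -> 7C 63 22 14 */
static const unsigned char x86_add[] = { 0x01, 0xD8 };
static const unsigned char ppc_add[] = { 0x7C, 0x63, 0x22, 0x14 };

int main(void)
{
    printf("x86 add: %u bytes, PPC add: %u bytes\n",
           (unsigned)sizeof x86_add, (unsigned)sizeof ppc_add);
    return 0;
}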
So the Wii outperforms the original Xbox any way you slice it. Most developers aren't familiar with fixed-function T&L operations. The GC's gpu can apply 8 texture operations across its 4 pixel pipelines in a single pass, and the Wii has 8 pipelines (2x the GC's). On traditional gpus the geometry has to be sent through again to apply another texture operation; not on the Wii/GC, and most developers didn't take advantage of this feature (rough sketch of the difference below). That's why you only see PS2-level graphics from most developers.
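For anyone who hasn't worked with fixed-function multitexturing, the difference is roughly the one below. The function and type names (bind_texture, set_combiner_stage, draw_geometry, Mesh, Texture) are made up for illustration, they are not the real GX API; the point is just single-pass layer combining versus redrawing the mesh once per texture.

/* Hypothetical fixed-function renderer API -- names invented for
   illustration, NOT the real GX calls. */
typedef struct Mesh Mesh;
typedef struct Texture Texture;
enum { COMBINE_MODULATE_PREVIOUS };
void bind_texture(int unit, const Texture *t);
void set_blend_additive(int enable);
void set_combiner_stage(int stage, int mode);
void draw_geometry(const Mesh *m);

#define NUM_LAYERS 8

/* Multi-pass (the usual approach on other hardware of that era):
   redraw the mesh once per texture layer and blend each pass into
   the framebuffer, paying vertex and fill cost NUM_LAYERS times. */
void draw_multipass(const Mesh *mesh, const Texture *layers[NUM_LAYERS])
{
    for (int i = 0; i < NUM_LAYERS; i++) {
        bind_texture(0, layers[i]);
        set_blend_additive(i > 0);   /* first pass writes, later passes blend */
        draw_geometry(mesh);         /* full geometry resubmitted per layer */
    }
}

/* Single-pass (TEV-style): configure one combiner stage per texture
   layer, then draw the mesh once and let the hardware blend per pixel. */
void draw_singlepass(const Mesh *mesh, const Texture *layers[NUM_LAYERS])
{
    for (int i = 0; i < NUM_LAYERS; i++) {
        bind_texture(i, layers[i]);
        set_combiner_stage(i, COMBINE_MODULATE_PREVIOUS);
    }
    draw_geometry(mesh);             /* geometry submitted once */
}

The second loop is the shape of what the TEV-style setup buys you once the layer count gets past what a single pass would normally handle.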
Also, the Wii has an ARM core in its Hollywood packaging which mostly handles encryption and de/compression, but probably other things too (such as downclocking the main cpu to GC levels and controlling memory access).
Finally, now that the Wii has been fully hacked for homebrew, you will see its full potential soon enough...
Microsoft published "max" throughput numbers. Per my redline, my car can do 210mph; in the real world it struggles to get over 140mph. Please don't believe the hype... Just look at Sony's and MS's spec reports from E3 2005 for the PS3 and 360... a joke. None of those are in-game, real-world numbers.
Nintendo announced the GC could do 12,000,000 mip-mapped, textured polygons in real time, while Sony said the PS2 could peak at 70,000,000 and MS claimed 120 to 130 million... Meanwhile Factor 5 pumped out 20,000,000 real-time textured polygons, the PS2 capped at 10-12 million, and the Xbox at 18 million. These numbers are based on actual games, not theoretical performance capabilities.
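Assuming those figures are polygons per second (which is how they're usually quoted), converting them to a per-frame budget makes the comparison easier to picture. The per-second numbers below are the in-game ones from this post; the 30/60fps targets are just typical for that generation, not measured values.

#include <stdio.h>

/* Turn quoted polygons-per-second claims into per-frame budgets.
   The per-second figures are the in-game numbers cited above;
   30 and 60 fps are just typical frame-rate targets, for scale. */
int main(void)
{
    const char  *names[]  = { "GameCube (Factor 5)", "PS2", "Xbox" };
    const double persec[] = { 20e6, 12e6, 18e6 };

    for (int i = 0; i < 3; i++)
        printf("%-20s %.0f/s -> ~%.0f per frame @60fps, ~%.0f @30fps\n",
               names[i], persec[i], persec[i] / 60.0, persec[i] / 30.0);
    return 0;
}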
XBOX games usually come with programmable vertex and pixel shader effects, i.e. enough computation power to run Doom 3 and Far Cry.
PowerPC "Gekko" CPU uses 64bit (2X 32bit )SIMD. Pentium III has 64bit (2X32bit integer) MMX SIMD(on X87 registers) and 128bit (4X 32bit integer/FP) SSE1 SIMD(on XMM registers).