Author Topic: ATI Radeon (and others) comparison  (Read 7279 times)


Offline TheJackal (topic starter)

  • Jr. Member
  • Join Date: Oct 2003
  • Posts: 95
Re: ATI Radeon (and others) comparison
« Reply #14 on: October 30, 2003, 10:22:51 AM »
Generally modern gfx cards do transform and lighting in hardware. Thus they have the vertex/index buffers (the geometry) and textures all loaded up into the gfx cards vram at start. Thus the bus only has to cope with instructions on what to do with the geometry, not the actual verts as you would if the CPU did the transform and lighting.

Of course if you start doing dynamic geometry textures on the CPU this changes the situation. Although with vertex/pixel shaders you can do a lot on the GPU.
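As a rough illustration of why hardware T&L takes the load off the bus, here is a back-of-envelope sketch; every figure in it (scene size, vertex layout, per-frame command traffic) is an assumption for illustration, not a number from the post:

```python
# Back-of-envelope bus traffic, CPU T&L vs hardware T&L.
# All figures are assumed for illustration.
VERTS = 100_000        # assumed scene size
BYTES_PER_VERT = 32    # position + normal + one UV set, as floats
FPS = 60

# CPU T&L: every transformed vertex crosses the bus each frame.
cpu_tnl_mb_s = VERTS * BYTES_PER_VERT * FPS / 1e6

# Hardware T&L: geometry stays in VRAM; only draw commands cross
# the bus (assumed ~64 KiB of command traffic per frame).
hw_tnl_mb_s = 64 * 1024 * FPS / 1e6

print(f"CPU T&L: ~{cpu_tnl_mb_s:.0f} MB/s")   # ~192 MB/s
print(f"HW  T&L: ~{hw_tnl_mb_s:.1f} MB/s")    # ~3.9 MB/s
```

Even with generous assumptions, the hardware T&L path needs a tiny fraction of the bus bandwidth, which is the point being made above.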
_________________
Any views, opinions, statements or advice in this message are solely those of the author and do not necessarily represent those of any organisation or individuals.

"Don't make me dance,.... You wouldn't like me when I dance." - The Hulk
 

Offline Cymric

  • Hero Member
  • Join Date: Nov 2002
  • Posts: 1031
Re: ATI Radeon (and others) comparison
« Reply #15 on: October 30, 2003, 11:01:08 AM »
Permit me to refer you all to this link, which gives you some idea of what I had in mind. Two remarks: a G4 is no Athlon 1000, but it is certainly no Athlon XP2700+ either, and there are cases where it really doesn't make much of a difference which CPU you have. Also, you have to realise that lower performance does not automatically mean 'unusable', 'unworkable' or 'unplayable'. My statement was: the CPU is preventing the card from running at full throttle.

Looking back at the data, I realise I may have to withdraw that statement, as the memory bandwidth is much larger in the case of the faster Athlon. So to lay the blame entirely at the feet of the CPU is probably not supported by the data, even though the article does not mention it. However, something is holding the card back for sure. Since the AmigaOne does not have the bandwidth of the faster test system, I will uphold my opinion that it is not necessary (nor financially wise) to plug in the fastest Radeon, especially since it will take a good while before games appear which tax the hardware to its limits.

On a side note, good developers will strive to minimize AGP traffic once the program is running, and utilise every last byte of fast RAM on the card before falling back to much slower main memory. Therefore AGP 8x (or even 16x) doesn't mean an awful lot if the only intensive traffic across the bus is during the initialisation stage. AGP 8x and 16x are marketing tricks.
Some people say that cats are sneaky, evil and cruel. True, and they have many other fine qualities as well.
 

Offline SHADES

  • Sr. Member
  • Join Date: Apr 2002
  • Posts: 355
  • Country: au
Re: ATI Radeon (and others) comparison
« Reply #16 on: October 30, 2003, 11:25:25 AM »
Thanks Cymric, you have explained my point better than I am able to put across. This is exactly the point I was trying to make earlier: that bandwidth is a much larger contributing factor with modern systems, as long as we look at the standards for moving data around, mainly AGP.
To lay the bottleneck entirely at the feet of the G4 CPU is not entirely fair or true. Yes, CPU speed makes a difference, but it's subtle compared to the restricted data paths that different platforms can have. Like having an ATi 9800 on a 33MHz PCI bus, or even the A1's SDRAM instead of DDR.

I do however have to disagree with your statement about AGP being a marketing scheme. Yes, it sells hardware, but there's a bit more to it than that. As I see it, FSB rates could also be labelled a marketing scheme, but they are now proven to increase performance; so much so that AMD have done away with the FSB and gone on-die with their latest Athlon offering.

AGP 8x is now standard for version 3.0 of AGP. AGP 2x and 4x have problems with memory bandwidth. There is a nice but brief article on AGP and its downfalls at
http://www.devhardware.com/hardware/video-cards/212,1/
which hopefully will explain how AGP 3.0 (8x) has tried to circumvent the bandwidth issues of the previous spec, and give some insight into what should follow. The '8x' just refers to the "pump" clock signal / trigger.
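To put numbers on the "pump" multiplier: AGP's base clock is nominally 66.66 MHz over a 32-bit bus, and each speed grade is the number of transfers pumped per clock. A quick sketch of the resulting peak bandwidths (nominal figures only, ignoring protocol overhead):

```python
# Peak AGP bandwidth per speed grade: 66.66 MHz x 4 bytes x transfers/clock.
# Nominal figures; real-world throughput is lower.
BASE_HZ = 66_666_666   # nominal AGP base clock
BUS_BYTES = 4          # 32-bit bus

for mult in (1, 2, 4, 8):
    mb_s = BASE_HZ * BUS_BYTES * mult / 1e6
    print(f"AGP {mult}x: ~{mb_s:.0f} MB/s")
```

AGP 8x works out to roughly 2.1 GB/s, which is the headline figure for AGP 3.0.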
It's not the question, that is the problem, it is the problem, that is the question.
 

Offline Hammer

  • Hero Member
  • Join Date: Mar 2002
  • Posts: 1996
Re: ATI Radeon (and others) comparison
« Reply #17 on: October 30, 2003, 10:25:26 PM »
Quote

Recently I saw a 350MHz machine tested against a multi-GHz machine. The 350MHz machine (when given a good GFX card) could reach very playable framerates, even though the multi-GHz machine had more FPS

That would be the 3DMark benchmark (synthetic) vs UT2003's benchmark (a real-life game), as I recall.
The real-life gaming benchmarks pointed to FPS scaling with CPU scaling**.

The 350MHz x86 machine was powered by Intel's Pentium II (presumably supported by Intel's 440BX chipset).

**Reference
http://www.anandtech.com/video/showdoc.html?i=1650&p=1

PS: Note that the ATI Radeon 8500 flattens out @1.2GHz.
Amiga 1200 PiStorm32-Emu68-RPI 4B 4GB.
Ryzen 9 7900X, DDR5-6000 64 GB, RTX 4080 16 GB PC.
 

Offline Hammer

Re: ATI Radeon (and others) comparison
« Reply #18 on: October 30, 2003, 10:34:43 PM »
 
Quote
but is now proven to increase performance, so much so that AMD have done away with the FSB and gone on-die with their latest offering.


Actually, AMD’s K8 Athlon 64/Athlon FX/Opteron still has a Northbridge <> CPU FSB, as highlighted in AMD's own documentation.

Reference
http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_9485_9488^9494,00.html

"Integrated Northbridge |   Yes, 128-bit data path @ CPU core frequency". - Advanced Micro Devices, Inc.

 
Quote
To pass blame that the G4 CPU is to blame for the bottleneck is not entirly true and an unfair statement to make for the G4.


Note that when one discusses the CPU, its Northbridge <> CPU FSB limitations must be taken into account, since they are part of the CPU’s characteristics.
 

Offline Hammer

Re: ATI Radeon (and others) comparison
« Reply #19 on: October 31, 2003, 12:22:47 AM »
Quote
On a side note, good developers will strive to minimize AGP traffic once the program is running, and utilise every last byte of fast RAM on the card before falling back to much slower main memory.

One should be realistic rather than idealistic in regard to programming scenarios.

Quote

Therefore AGP 8x (or even 16x) doesn't mean an awful lot if the only intensive traffic across the
bus is during the initialisation stage.

AGP8X and AGP texture memory are used as insurance for running future, more complex games.

When the GPU’s memory has been exhausted, a faster AGP bus benefits AGP texture fetching. The benefits of AGP are maximised when it has its own allocated bandwidth, as with nForce2’s 128-bit wide bus, i.e. 64 bits for the CPU and the rest for AGP/IGP/APU/etc. consumption.
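The nForce2 split described here can be sanity-checked with rough numbers; DDR333 memory is assumed below, which was one common nForce2 configuration:

```python
# nForce2's 128-bit bus is two 64-bit DDR channels. Roughly one
# channel's worth of bandwidth matches an Athlon XP FSB333 bus,
# leaving the other half for AGP/IGP/etc. (DDR333 assumed).
MT_PER_S = 333_000_000   # DDR333: 333 mega-transfers/s
CHANNEL_BYTES = 8        # 64-bit channel

total_mb_s = 2 * CHANNEL_BYTES * MT_PER_S / 1e6
cpu_mb_s = CHANNEL_BYTES * MT_PER_S / 1e6     # ~ FSB333's 2.7 GB/s
rest_mb_s = total_mb_s - cpu_mb_s             # left over for AGP/IGP

print(f"total ~{total_mb_s:.0f} MB/s, CPU ~{cpu_mb_s:.0f} MB/s, "
      f"AGP/IGP ~{rest_mb_s:.0f} MB/s")
```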
 

Offline SHADES

Re: ATI Radeon (and others) comparison
« Reply #20 on: October 31, 2003, 01:07:52 AM »

What's your point? Isn't this what I was trying to say anyway? I think my previous point was that not too many new computers or graphics cards use the bandwidth of 8x at this point, but towards the future they probably will. Or are you agreeing with me now? lol

A P2 example has inferior data-flow ability compared to today's memory architecture, and a P2 is hardly up to the spec of a G4 processor. I fail to see the relevance there.
Want to see the specs on my AMIGA 500?? lol

As for the Athlon, I was merely stating that FSB speed has increased and has improved throughput to memory, and thus sped up applications and games and whatever needs shunting around. Isn't this what you have just agreed with me on? The P2 is not very fast at doing that; it would be a bottleneck here, as EDO RAM is to SDRAM.

I don't get it.

The Athlon 64 FX has a dual-channel memory interface as well as 3 "HyperTransport" ports. Or so they were called a while ago. Terminology changes, like "Hammer" as a name, now Athlon and Opteron. Or perhaps Northbridge lol

The standard Athlon 64 only has a single-channel memory interface and 1 HyperTransport port.

Nicknamed "Hammer" but now called Opteron, multi-processor systems include local memory on each CPU, so that the other CPUs can access the memory of these CPUs via the HyperTransport bus. Which is great for any multi-CPU environment, e.g. dual-CPU Linux, mind you software has to be specifically compiled to use that.

Initially, only the high-end version of the Hammer, the "Opteron", will be equipped with two 72-bit wide DDR SDRAM channels. This can lead to a total of eight DIMM slots, which means each processor can address 8 GB.

The dual-channel interface of the Athlon 64 FX-51 offers a theoretical memory bandwidth of 6.4 GB/s.
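The 6.4 GB/s figure checks out arithmetically: PC3200 (DDR400) runs at 400 mega-transfers per second over a 64-bit channel, and the FX-51 has two such channels:

```python
# Theoretical peak for dual-channel PC3200 (DDR400).
MT_PER_S = 400_000_000   # DDR400: 400 mega-transfers/s
CHANNEL_BYTES = 8        # 64-bit channel
CHANNELS = 2             # FX-51's dual-channel interface

gb_s = MT_PER_S * CHANNEL_BYTES * CHANNELS / 1e9
print(f"dual-channel PC3200: {gb_s:.1f} GB/s")  # prints 6.4 GB/s
```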
Mind you, the integration of the memory controller on the die can be considered something of a limitation on flexibility, as it's not going to be upgraded by the purchase of a new chipset (motherboard) to run with.

If it's still called a "Northbridge", so be it. This was to move away from a 4x- or 8x-pumped FSB like Intel have done. Anything on-die is going to be faster, just less upgradable without getting a new CPU.

Still slinging around facts already well documented doesn't change the fact that a G4 still has plenty of muscle, and with the right architecture around it to complement what it is capable of, it would have no problem running even the high-end graphics offerings.

>>
AGP8X and AGP texture memory is use as an insurance for running future and more complex games.

When GPU’s memory has been exhausted, faster AGP bus benefits AGP texture fetching features. The benefits of AGP can be maximise when has it’s own allocated bandwidth; as in nForce2’s 128bit wide bus i.e. 64bit for CPU and rest for AGP/IGP/APU/’etc’ consumption.

That's entirely my point. If there is a bigger or faster route to memory for getting textures etc., then the GPU will be able to process info faster; it's not left to the CPU. Fast Writes in AGP, designed by Nvidia, try to allow direct access to memory to do just that: directly access memory for textures etc. No matter if it's a G4 or P3 or whatever flavour, so long as it's supported.
AGP 3.0 is trying to implement changes so as not to starve the graphics cards of throughput to memory, with GPUs designed to remove CPU intervention and processing.
You know, graphics chips for graphics, sound for sound etc..
G4 for g4 lol

Ok, enough.
 

Offline DonnyEMU

  • Hero Member
  • Join Date: Sep 2002
  • Posts: 650
  • http://blog.donburnett.com
Re: ATI Radeon (and others) comparison
« Reply #21 on: October 31, 2003, 03:59:09 AM »
An important point to make here about graphics cards: I spent the summer learning about Cg shaders and HLSL shaders..

What sets graphics cards apart is what their GPU can do with shaders. Modern graphics cards (Radeon 9800 or greater, and GeForce FX 5x00 and above) contain their own vector processor units for shaders.

Different levels of graphics cards have different hardware shader capabilities, and the better your GPU is, the better the rendering and FX will be in games. The GeForce 3 Ti cards were the first Nvidia hardware with hardware pixel/vertex shaders (and those were limited).

The latest ATI and nVidia cards have great GPU hardware and shader technology (think of it as a CPU just for rendering FX).. Having texture memory and just supporting hardware lighting and texturing are trivial compared to the importance of what your shaders can do and how good the graphics look.

Software these days usually supports three levels of shader capability (Shader Model 1, 2 and 3); a level-three shader is just amazing..

If anyone wants to talk HLSL or pixel and vertex shaders offline, just email me; I love this new stuff. Amiga needs to support this technology in a better way than just OpenGL 2.0..


As far as bandwidth goes, have you all checked out PCI Express?
======================================
Don Burnett Developer
http://blog.donburnett.com
don@donburnett.com
======================================
 

Offline Hammer

Re: ATI Radeon (and others) comparison
« Reply #22 on: October 31, 2003, 05:50:18 AM »
Quote
What's your point? Isn't this what I was trying to say anyway? I think my previous point was that not too many new computers or graphics cards use the bandwidth of 8x at this point,

The importance of AGP8X is greater in mainstream GPU solutions**, i.e.
1. nVidia's IGP (e.g. the integrated GeForce 4 MX 420 in the nForce2 chipset; uses a 128-bit shared memory architecture).
2. Cards with limited onboard graphics memory, e.g. the GeForce 4 Ti 4200 64MB.

**For fleet OEM PCs.

Quote
As for the Athlon, I was merely stating that FSB speed has increased and has improved throughput to memory, and thus sped up applications and games and whatever needs shunting around. Isn't this what you have just agreed with me on?

I don’t agree with "no FSB" statements.

Quote
Nicknamed "Hammer" but now called opteron multi-processor systems includes local memory on elach CPU so that the other CPUs can access the mmory of these CPUs via the HyperTransport bus.

“Sledge Hammer” refers to the current Socket 940 K8 core. "Claw Hammer" refers to the current Socket 754 K8 core.

Quote
Mind youl, the integration of the memory controller on the die can be considered to be kind of a limitation on flexibility, as it's not going to be expanded by a purchase of a new chipset (motherboard) to run with.

Slightly off topic: with K8, the memory controller is upgraded with the CPU. The ASUS nForce3 150 (Socket 940**) supports PC3200 registered ECC RAM via the Athlon FX-51. Note that the Opteron 146 also supports PC3200 registered ECC RAM.

AMD has stated that motherboard vendors can turn off the on-die memory controller and go for the traditional CPU <> External Northbridge <> Southbridge relationship.  

From my personal experience, I have upgraded CPUs more often than motherboards, i.e. I have an unused Athlon T-bird @1.4GHz**.

**Waiting for a K7 motherboard hand-me-down.

Quote
Still slinging around facts already well documented doesn't change the fact that a G4 still has plenty of muscle, and with the right architecture around it to complement what it is capable of, it would have no problem running even the high-end graphics offerings.

A PowerMac G4 @1.4GHz with an ATI 9x00 VPU will show you the results of such a setup, since it's the closest to the ideal A1XE G4 @1.0GHz with an ATI 9x00 VPU.
 

Offline Hammer

Re: ATI Radeon (and others) comparison
« Reply #23 on: October 31, 2003, 05:56:51 AM »
Quote
What sets graphics cards apart is what their GPU can do with shaders. Modern graphics cards (Radeon 9800 or greater, and GeForce FX 5x00 and above) contain their own vector processor units for shaders.

What about the other ATI's DirectX 9 class products (e.g. 9700 Pro, 9600XT)?

Quote
As far as bandwidth goes, have you all checked out PCI Express?

I didn’t say AGP8X would be the end of the evolution...
 

Offline SHADES

Re: ATI Radeon (and others) comparison
« Reply #24 on: October 31, 2003, 06:41:37 AM »
>>
The importance of AGP8X is greater in mainstream GPU solutions**, i.e.
1. nVidia's IGP (e.g. the integrated GeForce 4 MX 420 in the nForce2 chipset; uses a 128-bit shared memory architecture).
2. Cards with limited onboard graphics memory, e.g. the GeForce 4 Ti 4200 64MB.

**For fleet OEM PCs.

What's that got to do with anything?? I never said AGP 8x wasn't important; I even gave links to info on it LOL

>>
I don’t agree with "no FSB" statements.

You don't have to, that's fine. All I said was that speed improvements to HW of this nature have impacted bandwidth, and hence speed, in a major way. You want to call it an FSB, go for it. I like "memory controller". But whatever.


>>Slightly off topic, with K8, memory controller is upgraded with the CPU.

err.....didn't I just say that? ;-)

>>ASUS nForce3 150 (Socket 940**) supports PC3200 registered ECC RAM via Athlon FX 51. Note that Opteron 146 also supports PC3200 registered ECC RAM.

So??? My Asus KT133 supports IDE RAID. :)
Error Correction Circuitry <- great in servers for redundancy, a bit slower, but not too important in this discussion.

>>AMD has stated that motherboard vendors can turn off the on-die memory controller and go for the traditional CPU <> External Northbridge <> Southbridge relationship.

From my personal experience, I have upgraded CPUs more than motherboards i.e. I have unused Athlon Tbird @1.4Ghz**.

Me too, but it's not important. You can have as many memory controllers as you like in the chain, but why would you bother?? On-die is always going to be quicker, and you can't graft more bus lines onto a chip once it's made. But yes, of course you can turn it off; why would you make it incompatible with current offerings? That's not market-smart.

To redesign Northbridge chips without the controller etc., and how they communicate with the rest of the motherboard, would take lots of resources, design and testing. What they have done by saying this is that manufacturers will not need to allocate heaps of R&D to reinventing their chip designs just to run with these new CPUs AMD is offering.

>>The PowerMac G4 @1.4Ghz with ATI 9x00 VPU will show you the results of such a setup since it's the closest to the ideal A1XE G4 @1.0Ghz with ATI 9x00 VPU.

Apple are well known for doing things "their way".
I would like to see another offering with a standard VIA- or nForce-like chipset with an AGP 3.0 spec.
What's the spec on that PC with regards to chipset and AGP offering?

My old Pentium 233 has an ATI Radeon 8500 in it on the AGP bus; what's the point?
What, you can't design a newer chipset around a G4 CPU now? Just because AMD or Nvidia haven't doesn't mean it can't be done. Are you saying that Apple got it right? How old is their mainboard? If there was more of a market, then Asus, Gigabyte etc. would all bring out their flavours.

That doesn't make the G4 a slouch though. The G4 is a very powerful chip, quite easily able to play on a nice fast system bus. Cramp it up on a slow PCI system with SDRAM and a slow memory controller and you will have a slower system. Man, it's not rocket science.

 

Offline Damion

Re: ATI Radeon (and others) comparison
« Reply #25 on: October 31, 2003, 07:46:39 AM »
The G4 by "itself" is definitely still a good chip (according to distributed.net benchmarks), especially when not considering cost. The problem seems to be that the G4 is usually coupled with less than ideal hardware..
 

Offline Hammer

Re: ATI Radeon (and others) comparison
« Reply #26 on: October 31, 2003, 07:55:17 AM »
Quote
I never said AGP 8x wasn't of importance, I even gave links to info on it LOL

"Importance" was referring to "not too many new computers or graphics cards use the bandwidth of 8x at this point".

Quote
I think my previous point was that not too many new computers or graphics cards use the bandwidth of 8x at this point,

There are many computers that use AGP8X bandwidth, due to:
1. nVidia's IGP (e.g. the integrated GeForce 4 MX 420 in the nForce2 chipset; uses a 128-bit shared memory architecture).
2. Limited onboard graphics memory, e.g. the GeForce 4 Ti 4200 64MB.

Quote
On die is always going to be quicker and you can't graft more bus lines to a chip once its made,

Why would you graft more bus lines? Note that you still have a HyperTransport link @DDR1600.
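For reference on that "DDR1600" HyperTransport link: on early K8s it is a 16-bit path in each direction, clocked at 800 MHz and double-pumped to 1600 MT/s. A quick sketch of the per-direction bandwidth:

```python
# K8 HyperTransport: 1600 MT/s over a 16-bit (2-byte) path per direction.
MT_PER_S = 1_600_000_000
LINK_BYTES = 2

gb_s_per_dir = MT_PER_S * LINK_BYTES / 1e9
print(f"HyperTransport: {gb_s_per_dir:.1f} GB/s each way")  # prints 3.2
```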

Quote
Apple are well known for doing things "their way"

Usually, one could re-flash the x86 gfx card’s ROM with a Mac version...

 

Offline SHADES

Re: ATI Radeon (and others) comparison
« Reply #27 on: October 31, 2003, 11:18:26 AM »
>>"Importance" was referring to "not too many new computers or graphics cards use the bandwidth of 8x at this point".
>>There are many computers that use AGP8X bandwidth, due to:
>>1. nVidia's IGP (e.g. the integrated GeForce 4 MX 420 in the nForce2 chipset; uses a 128-bit shared memory architecture).
>>2. Limited onboard graphics memory, e.g. the GeForce 4 Ti 4200 64MB.

That's 1 chip manufacturer. Read the first sentence, where does it say none? I think it says not many. It's still a very new standard. How many computers do not use 8x...........:)

"On die is always going to be quicker and you can't graft more bus lines to a chip once its made" was my comment in response to your "AMD has stated that motherboard vendors can turn off the on-die memory controller and go for the traditional CPU <> External Northbridge <> Southbridg......."

>>Why would you graft more bus lines? Note that you still have a hyper-transport link @DDR1600.

Huh??? You can't graft anything! What the??? You're the one who said: "AMD has stated that motherboard vendors can turn off the on-die memory controller and go for the traditional CPU <> External Northbridge <> Southbridge relationship."
I was the one who said this was so chip vendors don't need to totally redesign their chips without the memory management features, if they wanted to do it themselves. On-die is going to be faster; that's all I replied with, and I gave my reasons as such..

>>
Usually, one could re-flash the x86 gfx card’s ROM with a Mac version...

Yeah.. because that's going to change all the hardware too..

Never mind.. this is trolling and useless. C ya.
 

Offline Hammer

Re: ATI Radeon (and others) comparison
« Reply #28 on: November 01, 2003, 10:35:38 AM »
Quote
That's 1 chip manufacturer. Read the first sentence, where does it say none? I think it says not many. It's still a very new standard. How many computers do not use 8x...........:)

In regard to "many": are you claiming ATI’s Radeon 9800/XT and NVidia’s GeForce FX 5800/5900/5950 are their mass-market products?

To continue...
3. Intel's Extreme Graphics.
4. VIA 's Integrated Savage GPU.
5. ATI's Mobile integrated Radeon GPU.
6. SiS’s Xabre with its limited onboard memory.

For results without AGP8X, refer to:
http://www.theinquirer.net/?article=12428
http://www.cadonline.com/reviews/hardware/1103workstation/

Rankings:
1. Xi Computer's Xi MTower 2P64 Opteron system (with NVIDIA Quadro FX 3000)
2. Polywell Xeon system (with NVIDIA Quadro FX 3000)
3. Monarch’s Opteron system
4. ‘etc’
10. Lucky last, an Athlon MP system, due to missing AGP8X capabilities.

Quote
The Athlon MP system came in a very poor last, but it was the only system that didn't support AGP 8x graphics.
- www.theinquirer.net 2003
NVidia's Quadro FX 3000 didn't rescue the Athlon MP system from being last.
 

Offline Hammer

Re: ATI Radeon (and others) comparison
« Reply #29 on: November 01, 2003, 10:58:22 AM »
Quote
You don't have to, that's fine. All I said was that speed improvements to HW of this nature have impacted bandwidth, and hence speed, in a major way. You want to call it an FSB, go for it. I like "memory controller". But whatever.

In addition to my previous post, one shouldn’t decouple CPU <> Northbridge bandwidth issues from the CPU core. "The PowerPC G4" refers to the chip and its inherent limitations, i.e. the G4-based solution.

Quote
Apple are well known for doing things "their way"
 

Such statements are not substantial enough to warrant a dismissal. Define doing things "their way". Note that Apple boxes run some of the latest games (e.g. UT2003).

Are you implying an A1XE @1GHz (with ATI Radeon 9800) will beat a PowerMac G4 @1.4GHz (with ATI Radeon 9800)?