Amiga.org
Amiga computer related discussion => Amiga Hardware Issues and discussion => Topic started by: chromozone on April 27, 2003, 01:01:59 AM
-
G'Day all,
Has anyone else heard references to a 1.3G XE?
Not sure, but it may even support dual PPCs.
I'm in Australia and heard it over here via a supplier.
Can anyone confirm this is a planned product?
-
The PPC 970 chip has been announced by IBM. It is expected to be released in 1.3 or 1.8 GHz versions, maybe this year, maybe next. Once it has been available for long enough that small orders like ours can be taken, people like Mai will be able to make some plug-in boards for them.
Meanwhile we have to put up with the poor, old-fashioned ordinary lightning-fast 800 MHz G4. Maybe one day we'll get the special "greased" lightning version.
tony
-
Tony, MAI is a fabless chip designer. They don't even make the chips they design. They're certainly not going to make boards of any kind. :-)
-
@chromozone
There will be a dual G4 module available for the XE later on this year. Hyperion have had one for a little while and partial support for it will be in OS4.
Pictures of the module are available here (http://www.soft3.net/pages/dualcpu_full.php).
I believe the card will be available with processors up to the current 1.3GHz top-end CPUs.
-john
-
It will be "on the expensive side" as well :-)
-
It would be nice to have one! :idea:
I want a dual PPC Amiga :-)
-
Don't get your hopes up too much. 800MHz G4s are rare and expensive as it is, without going into 1.3GHz ones. And getting the chips is only the start - they have to be turned into boards, which isn't going to be easy when they're too expensive for all but a few people to afford.
And when they finally do get them, I suppose people will demand 2x 1.8GHz ones instead...
-
KennyR:
And when they finally do get them, I suppose people will demand 2x 1.8GHz ones instead...
;-) Naturally. There never was a future-proof computer, and there never will be.
-
There are a few more things to consider before getting too excited about dual-processor boards.
1. Applications or games have to be written to support dual processors, otherwise there is no benefit.
2. Software written for a single processor will run slower on a dual-processor board.
-
@Billsey:
Mate, you're quite right. I should have said "Mai can design a board."
tony
-
@ Rob
In response to your first point:
If you have two processors, and the operating system has been designed to use multiple processors efficiently (I say 'efficiently' because if SMP is implemented badly you may as well throw the dual-CPU module out the window!), then one application can be busy on one CPU while another application (which hasn't been designed to multi-thread) uses the other. The application doesn't ask to use CPU0; it fires up a process, and the kernel decides which CPU is going to work on that process. So while you don't get the advantage of a multi-threaded application using both CPUs at once, the overall workload still gets done faster on a dual-CPU system.
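That kernel-decides idea can be sketched as a toy model in Python (purely illustrative: a real kernel re-balances at every timeslice, and `schedule` is a made-up name; the point is just that a process never picks its own CPU):

```python
from collections import deque

def schedule(processes, num_cpus):
    """Toy SMP scheduler: hand each ready process to the next idle CPU.
    The process never asks for a particular CPU; the 'kernel' decides."""
    assignment = {}
    idle = deque(range(num_cpus))   # CPUs with nothing to do
    for proc in processes:
        cpu = idle[0]               # pick whichever CPU is up next
        idle.rotate(-1)             # simple round-robin for the toy model
        assignment[proc] = cpu
    return assignment

# Two single-threaded apps on a dual-CPU box: each lands on its own CPU,
# with no SMP awareness in the apps themselves.
print(schedule(["wordproc", "mp3player"], num_cpus=2))
# → {'wordproc': 0, 'mp3player': 1}
```

Neither "application" had to be rewritten for SMP; the scheduler alone spreads them out.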
In response to your second point:
Rubbish. Where did you get that from? I'm not even sure where to begin to correct you on this one. If you explain the logic behind that statement, I might be able to set it straight :-)
If an operating system supports multiple processors, then there is an immediate benefit in running multiple processors. The benefit is greater still if the programs running on the system are written to be multi-threaded. Threads, simply put, are like mini-processes running under the main process. The kernel can, if it wants to, put eight threads from one process onto eight CPUs in the system, and the work effectively gets done up to eight times quicker than on a single-CPU system of the same clock speed.
Another advantage of running a multiple processor system is that if an application decides to saturate one CPU for ages, you've got a spare :-) Yes, a process can saturate both CPUs, but anyway :-)
As an on-the-side note for multi-CPU newbies: having 2x 800MHz CPUs in a system doesn't make it a 1600MHz system!
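That thread picture can be sketched in Python (illustrative only: CPython's global interpreter lock means threads there won't actually speed up CPU-bound work, so for a real speedup you'd split across processes instead, but the work-splitting pattern is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def render_strip(strip_id):
    """Stand-in for one thread's share of the work, e.g. one strip of a frame."""
    return sum(i * strip_id for i in range(1000))

# One process, eight "mini-processes" (threads). On an SMP kernel, each
# runnable thread can be placed on a different CPU at the same time.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(render_strip, range(8)))

print(len(results))  # → 8
```

The application splits its job into independent strips; the kernel is free to run as many of them in parallel as it has CPUs.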
-
Having said all that, I think most 'average' computer users (not doing anything hugely taxing with their system, just average everyday apps) won't benefit a great deal through upgrading to a dual CPU setup.
The time when you do want to upgrade to a dual-CPU setup is when you actually *know* the CPU is a major bottleneck in what you're trying to do on the machine. This isn't necessarily as simple as it sounds to work out, as CPU usage can go way up just because other hardware is bottlenecking a process. Disk I/O is a favourite. Given the choice between a dual-CPU setup or a single-CPU machine with 5 fast hard disks set up as a striped RAID, I'd take the disks :-)
If you think the CPU is the major bottleneck, then using a decent set of performance monitoring tools that can report disk I/O, processor queue length, page faults, etc. is what you need to be sure of your diagnosis of the situation. You need to be able to trace the bottleneck from the beginning to the end of the task being processed.
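One crude version of that diagnosis needs no special tools at all: compare wall-clock time against the CPU time the process actually consumed. If wall time is much larger than CPU time, the processor spent most of the task waiting on something else (disk, network), and a second CPU won't help. A minimal Python sketch of the idea, with stand-in workloads:

```python
import time

def profile(task):
    """Run task() and return the (wall_seconds, cpu_seconds) it took."""
    wall0 = time.perf_counter()
    cpu0 = time.process_time()   # CPU time consumed by this process only
    task()
    return time.perf_counter() - wall0, time.process_time() - cpu0

def cpu_bound():
    sum(i * i for i in range(500_000))   # keeps the processor busy

def io_bound():
    time.sleep(0.2)                      # stand-in for waiting on a slow disk

wall, cpu = profile(cpu_bound)
print(f"cpu_bound: processor busy {cpu / wall:.0%} of the time")  # near 100%
wall, cpu = profile(io_bound)
print(f"io_bound:  processor busy {cpu / wall:.0%} of the time")  # near 0%
```

A task that is busy near 100% of its wall time is a candidate for more CPU; one busy near 0% needs faster disks (or smarter I/O), not a second processor.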
Application support is an important factor as well, though for what Rob suggested to actually happen, it would have to be the most screwed-up application I've ever seen (and I can't think of one that does this) to run slower on a dual-CPU system :-)
-
Well, I don't really know too much about this, so take my words with a pinch of salt, but:
I always thought that you could write the OS to take full advantage of the two processors, so that anything written not to bang the hardware, but to go through the OS, would get the advantage automatically. I know it doesn't work like that in reality, but is this merely because the OSes out there don't take advantage of dual processors like they could?
Second, I know that two 800MHz processors don't equal a 1600MHz processor; they have overhead, which is what stops them, correct?
Do Mac OS computers split up tasks between the two processors, so that while one program runs on one processor, others can use the other one? This sounds really clever to me, but I agree it's truly only useful if you do something on par with rendering two scenes at once in a 3D app. (Although I found that even on 1GHz Macs the barrier seems to be transferring data to disk. Never render straight to a Zip disk or a floppy; hard drives run faster.) A question I have is: would even the faster hard drives out there be the bottleneck nowadays? If you could put their data-writing performance into hertz, what would it be? RPMs mean nothing compared to the processor. If disks are the bottleneck, I wonder what the point of a 2GHz processor is (for apps that write almost constantly).
-
El cheapo IDE hard drives nowadays are doing about 40-50MB per second.
-
I'm almost certain that the 1.3GHz card will be a single-CPU card without L3 cache.
But it's almost as certain that it won't become available during this year... we can hope, but the economics are against it.
In the end, time (& Alan R.) will tell. :)
Also, a dual 750FX @ 1GHz would be a very interesting (fanless?) product...
-
@ jeffimix
I always thought that you could write the OS to take full advantage of the two processors, so that anything written not to bang the hardware, but to go through the OS, would get the advantage automatically.
Basically yes (but also take into account what I said previously in this thread about applications support). Programs hitting the hardware should be a thing of the past with a decent operating system behind the wheel anyway. You can hardly have an SMP-capable operating system and still allow applications to hit the hardware directly.
Second, I know that two 800MHz processors don't equal a 1600MHz processor; they have overhead, which is what stops them, correct?
Not sure what you mean by that. I'll try to explain a little further. Assuming that the 1600MHz processor and the 800MHz processor are made by the same people the same way, and one is simply clocked twice as fast as the other (very theoretical scenario), then you'll have the rough equivalent of the following comparison:
There's a piece of work to be done, and you have the following choices: You can give it to one person, who is especially experienced at doing that work quickly (the equivalent of the 1.6GHz processor), or you can give it to two people who aren't so used to doing it, so if only one of them were doing it, they'd do it at half the pace of the faster person. Now, in the real-life situation, there are many factors involved in finding out which party would get the work done quicker. When two people are working together on something, there can be many bottlenecks to getting the work done quicker. There might only be one pen that they can use, so one would have to wait till the other is finished, as an example :-)
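The "one pen" problem has a standard formalisation, Amdahl's law: if a fraction s of a job is inherently serial, n workers give a best-case speedup of 1 / (s + (1 - s) / n). A quick back-of-envelope in Python (the 20% serial fraction is a made-up figure for illustration):

```python
def amdahl_speedup(serial_fraction, workers):
    """Best-case speedup when serial_fraction of the job can't be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# Two 800MHz CPUs vs one 1.6GHz CPU on a job that is 20% serial.
# Each slow CPU runs at half speed, so halve the parallel speedup:
two_slow_cpus = 0.5 * amdahl_speedup(0.2, 2)   # roughly 0.83x the fast chip
one_fast_cpu = 1.0                             # the 1.6GHz chip as baseline
print(two_slow_cpus < one_fast_cpu)            # → True: one fast worker wins here
```

With no serial part at all the two slow CPUs exactly match the fast one; the more "one pen" moments the job has, the further ahead the single fast worker pulls.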
In computing terms, say you have a UNIX-variant box and you want to compile a hefty program from its source, Mozilla for example. Obviously the CPU is an important part of the equation for compiling quicker, but the process of compiling requires a lot of reading and writing to disk (lots of small files), which means that much of the time the CPU is going to be waiting for the disk. In that case, a dual-CPU machine isn't going to be of much benefit. It's better to improve the compiling process: read a chunk of files from disk, write the compiled output to memory, and only once you have a reasonable-sized chunk to write back to disk, write it.
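That read-a-chunk, write-a-chunk idea looks something like this in Python (a hypothetical `ChunkedWriter`, just to show the buffering pattern; in reality a build system or the OS page cache does this for you):

```python
import io

class ChunkedWriter:
    """Buffer many small writes in memory; flush to the (slow) disk in big chunks."""
    def __init__(self, sink, chunk_size=64 * 1024):
        self.sink = sink
        self.chunk_size = chunk_size
        self.buffer = bytearray()
        self.flushes = 0          # how many actual disk operations we did
    def write(self, data):
        self.buffer += data
        if len(self.buffer) >= self.chunk_size:
            self.flush()
    def flush(self):
        if self.buffer:
            self.sink.write(bytes(self.buffer))
            self.buffer.clear()
            self.flushes += 1

disk = io.BytesIO()                 # stand-in for a real file on disk
writer = ChunkedWriter(disk, chunk_size=1024)
for _ in range(100):
    writer.write(b"x" * 100)        # 100 tiny "compiled object" writes
writer.flush()
print(writer.flushes)               # far fewer disk operations than 100 writes
```

A hundred tiny writes collapse into a handful of big ones, so the CPU spends far less time waiting on the disk between them.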
Hard disks may be cheap nowadays, and they may claim to do 50MB/sec, but compared to RAM or CPU throughput they're like a modem next to broadband. They have *very* high latency on reading small files, which is why something like Windows takes much the same time to boot even when you get a faster hard disk. Only in a very drastic comparison, say a pre-UDMA disk against a UDMA100 one, would you see the kind of difference you'd like when reading and writing small files. Look at 99% of the files used by the operating system, even with a bloater like Windows 2000: they're a few megs maximum. Frankly, who cares if you can read a 50MB file in one second when ten 5MB files take much, much longer? That is why RAID striping makes such a huge difference.
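The small-file point is easy to put rough numbers on: per-file overhead (seek plus rotational latency) is paid once per file, so it dominates when files are tiny, which is exactly what striping and bigger sequential reads attack. A back-of-envelope model in Python (the 12ms seek and 50MB/sec figures are made up, though plausible for the era):

```python
def read_time(file_mb, files, seek_ms=12.0, throughput_mb_s=50.0):
    """Rough seconds to read `files` files of `file_mb` MB each:
    one seek per file plus raw sequential transfer. Illustrative numbers only."""
    seek = files * seek_ms / 1000.0
    transfer = files * file_mb / throughput_mb_s
    return seek + transfer

one_big = read_time(50, 1)           # one 50MB file
many_small = read_time(0.05, 1000)   # a thousand 50KB files: same total data
print(round(one_big, 2), round(many_small, 2))  # → 1.01 13.0
```

Same 50MB of data, over ten times slower when it's split into small files, and almost all of that extra time is seeking, not transferring.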
-
I wonder whether it might be an idea if Eyetech were only to sell certain speeds of CPU in dual CPU module form, so something like:
modules for sale:
1x G4 800MHz
2x G4 800MHz
2x G4 1.3GHz
1x G4 1.8GHz
2x G4 2.2GHz
1x G4 2.6GHz
etc
Might work, but what would I know about chip sales :-)
-
mikeymike wrote:
I wonder whether it might be an idea if Eyetech were only to sell certain speeds of CPU in dual CPU module form, so something like:
modules for sale:
1x G4 800MHz
2x G4 800MHz
2x G4 1.3GHz
1x G4 1.8GHz
2x G4 2.2GHz
1x G4 2.6GHz
etc
Might work, but what would I know about chip sales :-)
Ehm, so far I've seen these speeds mentioned by Alan:
1x 800
2x800
1x1300
I would say that means Alan is _way_ ahead of you here.
Basically, since the different speeds of G4 mean different chips with different pinouts (unlike, for instance, a P4, which usually stays the same for a bit longer), it doesn't make sense to sell incremental speed bumps like you do with x86. Because you'd need a new CPU board for just about every speed bump, it's much better to wait another month and skip a generation :-)
That's what I think, anyway. With a headache and just looking in at the forums while trying to do some work. I could be wrong, but I don't care :-)
-
There was an article in Micro Mart saying that a dual G4 should be out by Christmas (this year, I think?) and that they were bench-testing an A1XE.
-
I think that the poor memory bandwidth of the Articia will be a major bottleneck in a dual 1.3GHz G4 system. Or am I wrong?
-
[Re: 133MHz FSB on the XE] Not a *major* bottleneck, but DDR would make the XE a significantly better performer.
-
Although I'm not all that technically savvy, I found these benchmarks (http://www.barefeats.com/pmddr6.html) interesting.
-
Ok, crap implementations of DDR aside, normally it would make a difference :-)
That's not an issue with the processor, but with the chipset or maybe even the RAM.
-
mikeymike wrote:
Ok, crap implementations of DDR aside, normally it would make a difference :-)
The PowerPC G3/G4's FSB design (and its interface to the outside world) is not aware of DDR technologies (i.e. it's not like the DEC Alpha AXP/AMD Athlon XP's EV6 designs). The gain is not as big as in DDR-equipped x86/Alpha systems.
-
yep, you're totally right. I've been posting too late at night again :-)
-
Bodie wrote:
Although I'm not all that technically savvy, I found these benchmarks (http://www.barefeats.com/pmddr6.html) interesting.
Right, current PPCs don't gain (any) extra speed from using DDR RAM.
It seems that as L3 cache, too, DDR does not perform much (if any) better than SDR L3 cache on PPCs.
Only under some ultimately heavy-duty use does DDR system RAM have any significant effect on PPC systems (simultaneous high PCI, AGP, CPU, HDD, FireWire etc. traffic).
The "killer feature" of DDR for G4/G3 machines could at some point be the lower price of the memory.
The PPC970 CPU will be able to fully utilise DDR memory, but I think it will take a year or two before the PPC970 exists inside Amigas.
...
It seems that the L3 cache does not give any huge speed improvement on 800MHz A1G4XE machines. But on a 1.3GHz system (using a 133MHz FSB) it should give a big boost (~20%), and the same (or even more so) in a high-MHz dual-CPU configuration.
-
I think this is a case of "we're all right, but we're not talking about the same thing"
The Articia S isn't that bad on memory access. It's using SDR, after all. I've seen worse.
A G4 can't utilize DDR, because it's got a 133MHz FSB.
However, two G4 CPUs and an AGP card and a couple of PCI cards could very well manage to fill up the available bandwidth without much trouble. But you would surely need a DDR, AGP 4x, PCI-X and dual-CPU-capable northbridge. Where can you get that for PPC, I hear you ask? Well, we're kinda waiting for MAI to get the Articia P released, and it has all this. I was going to write that it's too bad Motorola doesn't support a 166MHz bus, but it seems from an article about the 970 I just read that Macs are sold with a 167MHz FSB, so I dropped that. If this is indeed correct, you could see an Amiga using a single G3/G4 or dual G4 processors with AGP 4x, PCI-X and PC2700 memory. It all boils down to MAI getting some wheels in motion, I guess... Hopefully they've managed to carry a lot of the fixes done to the Articia S over into the P.
Now, where does the 970 fit in? It doesn't, I'm afraid. It uses two 900MHz 32-bit front-side buses (one for reads and one for writes), and this needs a rather special northbridge. I have yet to see anyone claim to support it. I wish IBM was more like Intel in this regard: new CPU, new chipset released at the same time. At least IBM are going to base their own blade servers on the 970, so they must support it. But you don't need AGP on a blade (though DDR memory and PCI-X for GigE are crucial). I don't know if anyone will start using PCI-X for graphics. Could be a usable solution...
-
@ olegil
PCI-X - I was under the impression that was a potential replacement in the works for the PCI bus, not AGP... AGP has got a few more years in it *at least*, if not a decade :-) AGP 8x is more than most people are going to need for a while yet, and maybe there are a few more enhancements to that bus still to come.
It could be with the IBM Blade servers that an AGP chipset is in use for graphics, in which case the chipset can talk to an AGP bus, in which case someone just needs to slap an AGP slot on the motherboard and play join the dots with some solder :-)
I could be talking out of my backside with that last paragraph, but it seems to me a good deal easier to adapt tried-and-tested technology than to slap a custom new bus barely out of its nappies into something that's supposed to be 100% reliable, i.e. a server. Considering Intel can't even get the PCI/AGP bus implementations right on recent chipsets*, I don't think IBM would jump in that severely at the deep end.
* - an amusing recent situation with an 845xx chipset for the P4 from Intel: a limit of 90MB/sec combined AGP/PCI bandwidth (confirmed by Intel as a product bug)... guess how quickly the system dies if you try to run a reasonable graphics resolution and then do some network transfers? :-) Answer: about five minutes. Nice.