@jeffimix
I always thought that you could write the OS to take full advantage of the two processors, so that anything written not to bang the hardware directly, but to go through the OS, would take advantage of both automatically.
Basically yes (but also take into account what I said previously in this thread about applications support). Programs hitting the hardware should be a thing of the past with a decent operating system behind the wheel anyway. You can hardly have an SMP-capable operating system and still allow applications to hit the hardware directly.
Second, I know that two 800MHz processors don't equal a 1600MHz processor; they have overhead, which is what stops them, correct?
Not sure what you mean by that, so I'll try to explain a little further. Assume the 1600MHz processor and the 800MHz processor are made by the same people in the same way, and one is simply clocked twice as fast as the other (a very theoretical scenario). Then you have the rough equivalent of the following comparison:
There's a piece of work to be done, and you have a choice: you can give it to one person who is especially experienced at doing that work quickly (the equivalent of the 1.6GHz processor), or you can give it to two people who aren't so used to it, each of whom working alone would go at half the pace of the faster person. Now, in real life there are many factors that decide which party gets the work done first. When two people work together on something, there can be plenty of bottlenecks. There might only be one pen between them, so one has to wait until the other is finished, as an example :-)
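To put the "one pen" problem in code, here's a toy sketch in Go (the language and the timings are purely for illustration, not how an OS scheduler actually behaves). Two workers splitting independent work finish in about half the time; make every step go through one shared lock and the second worker buys you almost nothing.

```go
// Toy sketch of the "one pen" bottleneck: two workers (goroutines)
// split CPU-independent work nearly perfectly, but if every step
// requires a shared resource (the mutex), they end up taking turns.
package main

import (
	"fmt"
	"sync"
	"time"
)

func worker(steps int, pen *sync.Mutex, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < steps; i++ {
		if pen != nil {
			pen.Lock() // only one worker may hold the "pen" at a time
		}
		time.Sleep(time.Millisecond) // stand-in for one unit of work
		if pen != nil {
			pen.Unlock()
		}
	}
}

func run(label string, pen *sync.Mutex) {
	var wg sync.WaitGroup
	start := time.Now()
	for w := 0; w < 2; w++ { // two workers = two CPUs
		wg.Add(1)
		go worker(100, pen, &wg)
	}
	wg.Wait()
	fmt.Printf("%s: %v\n", label, time.Since(start))
}

func main() {
	run("independent work ", nil)            // ~100ms: real parallelism
	run("shared pen (lock)", &sync.Mutex{})  // ~200ms: serialized again
}
```

Run it and the independent pass finishes in roughly half the time of the locked pass, which is the whole dual-CPU story in miniature.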
In computing terms: say you have a UNIX-variant box and you want to compile a hefty program from source, Mozilla for example. Obviously the CPU is an important part of compiling quickly, but compiling also means a lot of reading and writing to disk, lots of small files, so much of the time the CPU is just waiting for the disk. In that case a dual-CPU machine isn't going to be of much benefit. It's better to improve the compiling process itself: read a chunk of files from disk, keep the compiled output in memory, and only when you have a reasonably sized chunk write it back to disk.
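Here's a rough sketch of that batching idea (the file name and sizes are invented for illustration, not taken from any real compiler): accumulate the small outputs in a memory buffer so they hit the disk as a few large writes instead of a thousand tiny ones.

```go
// Batching sketch: instead of writing each small compiled object to
// disk as it's produced, buffer the output in memory and flush it in
// large chunks. Names and sizes below are made up for illustration.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os"
)

func main() {
	out, err := os.Create("batch.out") // hypothetical output file
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// A 1MiB in-memory buffer: the thousand small "object files"
	// below get coalesced into a handful of large sequential writes.
	w := bufio.NewWriterSize(out, 1<<20)

	for i := 0; i < 1000; i++ {
		obj := bytes.Repeat([]byte{byte(i)}, 512) // fake 512-byte object
		if _, err := w.Write(obj); err != nil {
			panic(err)
		}
	}
	if err := w.Flush(); err != nil { // push the buffered chunk to disk
		panic(err)
	}
	fmt.Println("wrote 1000 small objects as a few big writes")
}
```

The disk only ever sees the big flushes, so the CPU spends far less time waiting on it, which matters more here than adding a second processor.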
Hard disks may be cheap nowadays, and they may claim to do 50MB/sec, but compared to the throughput of RAM or the CPU they're like a modem next to broadband. *Very* high latency on reading small files, which is why something like Windows takes much the same time to boot even when you get a faster hard disk. Only a very drastic comparison, say a pre-UDMA hard disk against a UDMA100 one, shows the kind of difference you'd like to see when reading and writing small files.

Look at 99% of the files used by the operating system: even with a bloater like Windows 2000, they're a few megs maximum. Quite frankly, who cares if you can read a 50MB file in one second, when reading the same 50MB spread across hundreds of small files takes several times longer. Which is why RAID striping makes such a huge difference.
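To put rough numbers on that (the figures are assumptions for the sake of the arithmetic, not measurements): give the disk 50MB/sec of sustained transfer and roughly 10ms of seek/rotational latency every time it positions for a new file, and the small-file case falls apart quickly.

```go
// Back-of-the-envelope disk timing. Assumed figures, not measured:
// 50 MB/s sustained transfer, ~10 ms positioning cost per file.
package main

import "fmt"

func main() {
	const (
		throughputMBs = 50.0 // assumed sustained transfer rate, MB/s
		seekMs        = 10.0 // assumed positioning cost per file, ms
	)
	// Total read time in seconds for `files` files of `sizeMB` each.
	read := func(files int, sizeMB float64) float64 {
		transferMs := float64(files) * sizeMB / throughputMBs * 1000
		seekTotalMs := float64(files) * seekMs
		return (transferMs + seekTotalMs) / 1000
	}
	fmt.Printf("one 50MB file:   %.2fs\n", read(1, 50))     // ~1.01s
	fmt.Printf("ten 5MB files:   %.2fs\n", read(10, 5))     // ~1.10s
	fmt.Printf("500 100KB files: %.2fs\n", read(500, 0.1))  // ~6.00s
}
```

Same 50MB of data in each case; the only thing that changes is how many times the head has to reposition, and that swamps the transfer time long before the files get truly tiny.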