OK guys, nostalgia is good, but ignorance is not bliss; it's poverty (of the mind and the intellect, in this case).
I don't know of any dual core rated as low as 1.5 GHz. If it's sold as 1.5 GHz, that means BOTH cores run at that speed. Most dual cores are in the 2-2.5+ GHz range right now, which gives you an ADDITIVE or TOTAL of roughly 4-5 GHz worth of compute across the two cores.
Now, the issue is not that those machines aren't 4-5 GHz fast. They *ARE*, if you have the RIGHT software. So the problem is the software. You see, hardware has advanced much faster than software has. And to be honest, the hardware advance wasn't one that required much intellect: take one CPU core and stitch it together with another on the same die. Big whoop-de-doo.
Now back to software: software is DUMB DUMB DUMB (disclaimer: I'm a software engineer). You see, even though these monster machines have 2 or 4 or 8 or however many cores, the software isn't at the stage of intelligence where it can distribute itself across all those cores and take advantage of them.
This is the current state of how multi-cores are being utilized:
Normally, when you run a modern OS, you have multiple applications running. They usually run on different cores (sometimes they migrate between cores, which is time consuming and wasteful; forcing a process to stay on a certain core gets rid of this waste, and that's called CPU affinity). This works well for servers, where multiple instances of a web server (e.g. Apache) or other software are run (for dynamic pages you'd run multiple Python, PHP, Ruby, Lua, etc. scripts, one for each page that gets "hit").
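For the curious, this is roughly what pinning looks like on Linux, using the sched_setaffinity(2) call. This is just a minimal sketch, and core 0 is an arbitrary pick for illustration (compile with g++ on Linux):

    #include <sched.h>    // sched_setaffinity, cpu_set_t (Linux-specific)
    #include <cstdio>

    int main() {
        cpu_set_t set;
        CPU_ZERO(&set);      // start with an empty set of allowed cores
        CPU_SET(0, &set);    // allow only core 0

        // pid 0 means "the calling thread"
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            std::perror("sched_setaffinity");
            return 1;
        }
        std::printf("Pinned to core 0; the scheduler won't migrate us anymore.\n");
        return 0;
    }

From then on the scheduler keeps the process on that core, so you don't pay the migration cost (cold caches and so on) every time it gets rescheduled.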
But if you're a desktop user, you don't normally run many power-hungry apps. Even if you run 10 programs "at the same time", like office productivity tools, they're mostly just WAITING for you to type or move the mouse. The time you *really* want performance is when you've got one power-hungry app, like a hardcore game: Crysis, Quake 4, etc.
As previously mentioned, one way to utilize more than one core in such a circumstance is to use threads. Threads basically let a program run parts of itself concurrently, so those parts can be scheduled on more than one core. The problem, though, is that thread programming is 1) hard (very prone to what are known as race conditions and deadlocks), 2) wasteful, as you need to spend a lot of time synchronizing threads (because they must communicate their results to each other, or to the main thread), and 3) limited: there's only so much you can do with threads before things get unwieldy.
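To make 2) concrete, here's a tiny sketch (C++11 std::thread, with a made-up workload): each worker computes its partial result privately, then has to grab a mutex just to publish it. That lock traffic is pure overhead, not useful work:

    #include <thread>
    #include <mutex>
    #include <vector>
    #include <numeric>
    #include <iostream>

    int main() {
        std::vector<int> data(1000000, 1);   // some made-up work to sum
        long total = 0;
        std::mutex total_mutex;              // guards `total`

        auto worker = [&](std::size_t begin, std::size_t end) {
            // the useful work, done privately by this thread
            long partial = std::accumulate(data.begin() + begin, data.begin() + end, 0L);
            // the synchronization cost: serialize just to publish the result
            std::lock_guard<std::mutex> lock(total_mutex);
            total += partial;
        };

        std::thread a(worker, 0, data.size() / 2);
        std::thread b(worker, data.size() / 2, data.size());
        a.join();
        b.join();

        // without the mutex, `total += partial` would be a textbook race
        std::cout << "total = " << total << "\n";
        return 0;
    }

Forget the lock (or take two locks in opposite orders in two threads) and you've got exactly the race and deadlock conditions from point 1.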
An example of 3) is a game that uses thread A for the main game logic, thread B for sound, and thread C for graphics. If most of your execution time is spent on thread C (typically this is the case) and you have 4 cores, then it's not good enough to have only 3 threads. You really want to break thread C into multiple threads again, so that the work gets divided among all the CPUs. But even with 4 threads, you soon realize that sound processing doesn't really fully use its core. So you want to break the expensive graphics thread down into even more sub-threads. And thus the nightmare begins: should it work on 2 cores? 4 cores? 8 cores? Heck, you say, let's make it dynamic so it adjusts to the number of cores. But then, as already mentioned, you hit the dreaded "iso-efficiency" problems: more "overhead work" versus "useful work", because you're spending all your time distributing work that comes in packets too small for the effort, when it would have been more efficient to keep larger packets of work on fewer cores. This is known as the granularity level. Too fine a granularity and you've got more overhead than actual work being done. Too coarse a granularity and you're under-utilizing your cores, because the work isn't split into enough pieces to fill them.
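Here's what the "make it dynamic" version looks like in its simplest form. It's only a sketch, with a dummy per-element job standing in for the graphics work, split into as many chunks as std::thread::hardware_concurrency() reports; the chunk size is exactly the granularity knob described above:

    #include <thread>
    #include <vector>
    #include <cmath>

    int main() {
        std::vector<float> pixels(1 << 20, 0.5f);        // stand-in for the expensive graphics work
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 1;                       // the call is allowed to return 0

        std::size_t chunk = pixels.size() / cores;       // coarser chunks = less overhead, but fewer of them
        std::vector<std::thread> pool;

        for (unsigned c = 0; c < cores; ++c) {
            std::size_t begin = c * chunk;
            std::size_t end   = (c + 1 == cores) ? pixels.size() : begin + chunk;
            pool.emplace_back([&pixels, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    pixels[i] = std::sqrt(pixels[i]);    // the "useful work"
            });
        }
        for (auto& t : pool) t.join();
        return 0;
    }

Even in this toy the tradeoff shows: spawn a thread per pixel and the spawning costs more than the square roots; spawn only two threads on an 8-core box and six cores sit idle.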
This was, in effect, a very real problem with the PPC Amigas too: without extra effort by the programmers to split the work between the 68k and the PPC, there was no benefit to the "dual CPU" PPC cards. And just as real was the fact that switching between the PPC and the 68k carried a lot of overhead.
As you can see, this quickly becomes a nightmare of huge proportions. So people thought: why not make the computers solve this problem? On to the next section:
One of the *really big* problems that a major portion of the software industry, and lots of us computer science majors, are facing today, as far as advancing towards the multi-core/super-parallel future goes, is the compilers. If compilers were smart enough to break down the execution of code so that the programmer didn't have to spend tons and tons of hours writing parallelizable code, then you could simply recompile your code to use more cores and it would run faster and better. Unfortunately, for some of the reasons mentioned above, such compilers are extremely hard to get right, and no one has really accomplished this to a great extent as of yet.
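The closest thing in everyday use isn't a fully automatic compiler but a halfway house like OpenMP, where the programmer drops a hint and the compiler/runtime do the carving-up. A sketch (compile with something like g++ -fopenmp), not the self-parallelizing compiler described above:

    #include <vector>
    #include <cstdio>

    int main() {
        const int n = 10000000;
        std::vector<double> v(n, 1.0);
        double sum = 0.0;

        // the one-line hint: the compiler/runtime split the loop across the cores
        // and combine the per-core partial sums at the end
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += v[i] * 2.0;

        std::printf("sum = %f\n", sum);
        return 0;
    }

Remove the pragma and the exact same code runs serially, which is the appeal: the source barely changes. But the compiler still only parallelizes what you point it at; it doesn't find the parallelism for you.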
Now some theory (ramblings). Part of the problem, I believe, is in our programming paradigms. We view programming like we always have, using the languages we have always used. As with common speech, I believe language is both an enabler and an inhibitor. If your language isn't capable of letting you express a certain class of thoughts, you might never have thoughts of that class in your brain. It can hold you back. On the other hand, if the language enables thoughts of higher levels, through higher levels of complexity and expression, then I believe you will be endowed with more expressive, and thus more complicated and possibly more intelligent, thoughts. This, I believe, holds for computer languages as well.
We're currently stuck with some very, very bad technologies, as mentioned before. Although I enjoyed my x86 years, and I did years of Intel assembly, the instruction set was horrible compared to the 68000's. As a programmer I always wanted the 16 general-purpose registers the 68000 offered, but only got 8. This stupid x86 ISA is *still* here, in all the 64-bit chips (although they are internally RISC-like: they convert the x86 ISA into micro-ops and execute those).

Another major problem is the Von Neumann architecture. Computers work on the principle of fetch instruction - decode instruction - execute instruction - store result. This limits us to certain subsets of problems, or approaches to problem solving (e.g. SIMD instructions operate on multiple data chunks by default, and this changes the way you program when using SIMD: you think in parallel by default). Think of anti-machines as a totally inverted example of how computing can be achieved (a field whose future looks ever more hopeful with the advent of FPGAs and reconfigurable chips). Then there are biological computing devices, which work on different principles (a small example: the neurons in your brain don't just work on a "flat model" of connections; where on the neuron's three-dimensional surface a connection is made makes a real difference to the result), and quantum computing devices on yet other principles.

Another major problem, in my opinion, is the stagnation of the masses, pioneered by none other than Microsoft and their technologies. They have dominated the software market for decades with C++, an extremely unclean and frankly brain-dead object-oriented language, which has shown very little innovation or ability to break out of the old programming paradigms onto a new playing field.
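Going back to the SIMD point above, this is what "thinking in parallel by default" looks like at the smallest scale: one SSE instruction doing four float additions at once (x86 intrinsics, purely as an illustration):

    #include <xmmintrin.h>   // SSE intrinsics
    #include <cstdio>

    int main() {
        alignas(16) float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        alignas(16) float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        alignas(16) float r[4];

        __m128 va = _mm_load_ps(a);        // load four floats into one 128-bit register
        __m128 vb = _mm_load_ps(b);
        __m128 vr = _mm_add_ps(va, vb);    // one instruction, four additions
        _mm_store_ps(r, vr);

        std::printf("%.0f %.0f %.0f %.0f\n", r[0], r[1], r[2], r[3]);   // 11 22 33 44
        return 0;
    }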
Anyways, enough about the non-existent "multi-core hype". It's no hype. It's real. We've got screaming machines sitting idle, and we "don't know what to do with them" as far as normal desktop use is concerned. Server people and internet companies know very well what to do with them. So do all the physicists and scientists who do massive data crunching. For the desktop, it's primarily games that will be pushing the envelope. From my personal experience, I also believe Apple is heading in the right direction with their new apps and APIs (Core Animation as a small example), exploiting the underlying hardware architecture more and more (not just Time Machine, but LLVM used in the OpenGL stack and other parts).
Now I *really* feel very nostalgic about the good ol' SIMPLE days of single-core, single-CPU, non-memory-protected multitasking! ;-) Sigh.....