I don't think computational power is in any danger of peaking anytime soon.
It will just move to more multiprocessor tech. Maybe the number of cores will begin to double every two years instead of the computational speed of each core - it's still doubling processor power. 8- and 16-core desktops will be commonplace soon enough. But yes, I think this will just lead to less and less efficient software.
I'm not going to say "in the immediate future" or "x years from now" (though I'd bet on sooner rather than later, myself). Maybe it isn't coming until ten, twenty, sixty years from now, but it is coming. Multicore is great (something we should've been doing pervasively a decade ago, honestly), but it's nothing at all like a perfect, problem-solved solution to the issue of peak circuit density.
For one thing, there's still the problem of things needing to be implemented in actual, physical silicon. With multicore it doesn't have to be denser, but if it's not, it's going to take more die space, and the larger the die, the more signal transfer times become an issue - you can't just have a die a foot across filled with 80 quadzillion cores and expect them all to function in perfect sync with each other at gigahertz-plus speeds.

There's also the matter of control logic for the whole thing, like cache-coherence logic - the complexity of which is going to go up with every core by something like N² - N. The further you push N, the more absurd that's going to get.
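To put a rough number on that, here's a quick Python toy (the everyone-talks-to-everyone model is my own simplification - real interconnects and directory schemes vary):

    # Directed core-to-core paths if every core can exchange
    # coherence traffic with every other core: N * (N - 1) = N^2 - N.
    for n in (2, 4, 8, 16, 64, 256):
        print(f"{n:>4} cores -> {n * (n - 1):>6} pairwise paths")
    # 2 -> 2, 4 -> 12, 16 -> 240, 64 -> 4032, 256 -> 65280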
And speaking of complexity, there's the software side to consider - the operating system that has to schedule threads, pass messages, and arbitrate access to system resources across multiple cores. I don't know that that will necessarily be as bad as the hardware side of things, but it's still an issue of complexity that's going to scale at least linearly with the number of cores - and worse, it's one that's going to eat up the very CPU time you gain by adding them! Which makes multicore as a whole ultimately a prospect of diminishing returns.
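And just to sketch what "diminishing returns" looks like - take plain Amdahl's law and bolt on a coordination cost that grows linearly with the core count. The numbers here (95% parallel work, 0.5% overhead per core) are made-up illustrations, not measurements:

    # Amdahl's law plus a linear per-core coordination tax:
    #   speedup(N) = 1 / ((1 - p) + p/N + c*N)
    # where p is the parallel fraction and c the per-core overhead.
    def speedup(n, p=0.95, c=0.005):
        return 1.0 / ((1.0 - p) + p / n + c * n)

    for n in (1, 2, 4, 8, 16, 32, 64, 128):
        print(f"{n:>3} cores: {speedup(n):5.2f}x")
    # Climbs to about 5.3x near 16 cores, then actually declines:
    # by 128 cores the c*N term has swallowed most of the gain (~1.4x).

Tweak p and c however you like; the shape stays the same - a hump, not a line.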
(All of that is, I'm sure, less true for special-purpose tasks like 3D rendering/shading than it is for general-purpose computing - but the problem is that general-purpose computing is exactly where multicore is most being falsely relied on as a fix.)
So no. Multicore processing is great for what it is, but it isn't a solution to the problem of never being able to reach infinity, and relying on it as one will just mean putting off the confrontation further, and making it harder when it finally does hit.
Not that that will stop anybody.