Author Topic: a wacky idea - network generalized distributed computing

Offline Waccoon

  • Hero Member
  • Join Date: Apr 2002
  • Posts: 1057
Re: a wacky idea - network generalized distributed computing
« on: December 25, 2006, 03:19:36 AM »
First, there are efficiency problems.  When working with things like MPEG decoding, the resulting data stream tends to be huge, and that clogs network bandwidth.  Getting data in and out of a CPU is often more of a problem than making the CPU itself fast enough.  Networks are thousands of times worse!
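
To put some rough numbers on that (my own back-of-the-envelope figures, not anything measured in this thread), here's what a fully decoded 1080p/30 stream looks like next to an ideal gigabit link:

Code:
/* Back-of-the-envelope comparison (illustrative numbers, not measurements):
   how big is a *decoded* video stream compared to a gigabit network link? */
#include <stdio.h>

int main(void)
{
    /* assumed stream parameters: 1920x1080, 24-bit colour, 30 frames/s */
    const double width = 1920.0, height = 1080.0;
    const double bytes_per_pixel = 3.0, fps = 30.0;

    double decoded_bytes_per_sec = width * height * bytes_per_pixel * fps;
    double decoded_mbit_per_sec  = decoded_bytes_per_sec * 8.0 / 1e6;
    const double gigabit_mbit    = 1000.0;  /* ideal gigabit Ethernet, zero overhead */

    printf("decoded stream: %.0f Mbit/s\n", decoded_mbit_per_sec);
    printf("gigabit link:   %.0f Mbit/s\n", gigabit_mbit);
    printf("ratio:          %.1fx the link capacity\n",
           decoded_mbit_per_sec / gigabit_mbit);
    return 0;
}

Even before any protocol overhead, the raw decoded frames alone need about one and a half gigabit links, and that's for a single stream.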

Second, distributed computing only works well with massively parallel computations.  Many tasks aren't parallel at all, which is why even multithreaded applications are slow to take advantage of the new dual- and quad-core CPUs.  It's impractical on many levels, and that's exactly why supercomputers are still in demand instead of having long since been replaced by large networked farms of cheap computers.
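
The usual way to put a number on that is Amdahl's law: if only a fraction p of a job can run in parallel, then n CPUs can never give you more than 1 / ((1 - p) + p/n) overall speedup.  A quick sketch (my own illustration, with made-up workload fractions):

Code:
/* Amdahl's law: speedup = 1 / ((1 - p) + p / n).
   Shows why a job that is only partly parallel gains little from more CPUs. */
#include <stdio.h>

static double amdahl(double parallel_fraction, int cpus)
{
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cpus);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.90, 0.99 };   /* assumed workloads */
    const int cpus[] = { 2, 4, 16, 256 };

    for (int f = 0; f < 3; f++)
        for (int c = 0; c < 4; c++)
            printf("p=%.2f, n=%3d CPUs -> %.1fx speedup\n",
                   fractions[f], cpus[c], amdahl(fractions[f], cpus[c]));
    return 0;
}

A job that's only 50% parallel tops out at barely 2x, no matter how many networked machines you throw at it.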

Third, the actual benefit is not really that huge.  It looks good on paper, but real-world results are a different story (the PS3 has this problem, big time).

Quote
ptek:  It seems that you didn't catch my idea. I referred to the existence of a CPU usage manager, so the idea was not to use 100% of the other machine's CPU, but only slices of it. Otherwise it would be unfair.


That's pretty much how they work now, using WAIT instructions as appropriate to reduce waste.  Power distribution tends to be a problem with modern chips, though, and adding transistors to redistribute power just raises peak power usage.  If you crack open a CPU package these days, you're going to see big plates of copper covering the surface of the die to distribute power, rather than an interesting grid of wires.  Promotional photos of microchips rarely reflect the chips actually packaged in real equipment.
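
"As appropriate" here means the usual pattern: a thread that blocks instead of spinning lets the kernel drop the core into its idle loop, which is where the halt/wait instruction actually gets executed.  A minimal sketch, assuming POSIX threads (the names are mine, not anything from this thread):

Code:
/* Sketch: blocking on a condition variable instead of spinning.
   While the consumer is blocked, the core can fall into the kernel's
   idle loop (HLT/WAIT) instead of burning power polling a flag. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static bool data_ready = false;

void wait_for_work(void)                    /* consumer side */
{
    pthread_mutex_lock(&lock);
    while (!data_ready)
        pthread_cond_wait(&ready, &lock);   /* blocked, not spinning */
    data_ready = false;
    pthread_mutex_unlock(&lock);
}

void post_work(void)                        /* producer side */
{
    pthread_mutex_lock(&lock);
    data_ready = true;
    pthread_cond_signal(&ready);            /* wakes the consumer */
    pthread_mutex_unlock(&lock);
}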

Task scheduling is a much-debated problem in modern kernel design.  How does the OS tell the difference between a lazy polling loop and an intensive tight loop?  The application is really responsible for telling the OS how much CPU time it needs, and you can't expect programmers to cooperate like that.  Taking advantage of an external idle CPU, or trying to shy away from peak-performance computing, is very difficult.  There *are* reasons why it's not done.
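
For example, from the scheduler's point of view these two loops are indistinguishable: both peg a core at 100%, but only one of them is doing useful work (an illustrative sketch of my own, not anyone's real code):

Code:
/* Both loops show up as ~100% CPU usage.  The kernel can't tell from
   usage alone that the first one is just waiting around and the second
   one is genuine work. */
#include <stdbool.h>

volatile bool flag = false;        /* set by some other thread, eventually */

/* "Lazy" loop: polls a flag with no sleep or blocking call. */
void lazy_poll(void)
{
    while (!flag)
        ;                          /* burns the CPU doing nothing */
}

/* Intensive tight loop: real computation, identical usage profile. */
double busy_compute(long iterations)
{
    double sum = 0.0;
    for (long i = 0; i < iterations; i++)
        sum += (double)i * 0.5;
    return sum;
}

Unless the application explicitly sleeps, blocks, or lowers its own priority, the scheduler has no way to know which of these deserves the time slice.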