To reduce the inefficiency, you'd have to send large chunks of data instead of swapping bits back and forth.
Not all tasks should be distributed, even if this system were integrated into the OS... That's why you'd need a CPU manager to decide whether a task should be handled locally or not.
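Just to make that idea a bit more concrete, here's a rough sketch of what such a "should we distribute this?" decision could look like. All the numbers and names (payload size, network speed, the assumed remote speedup and overhead) are made-up assumptions, not anything from a real implementation:

```python
# Hypothetical sketch of the "CPU manager" decision: distribute a task only if
# the expected remote time (send data + remote compute) beats doing it locally.
def should_distribute(task_cpu_seconds, payload_bytes, net_bytes_per_sec,
                      remote_speedup=2.0, overhead_seconds=1.0):
    transfer_time = payload_bytes / net_bytes_per_sec
    remote_time = overhead_seconds + transfer_time + task_cpu_seconds / remote_speedup
    return remote_time < task_cpu_seconds

# Example: a 60 s task with a 5 MB payload over a 1 MB/s link is worth sending,
# but a 2 s task with the same payload is not.
print(should_distribute(60, 5_000_000, 1_000_000))   # True
print(should_distribute(2, 5_000_000, 1_000_000))    # False
```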
I think the best approach would be to select only the programs we want to take advantage of this "farm", since there would be an unavoidable delay between a task request and its completion by the other remote computers. Of course, the data to be processed would have to be sent in sizeable quantities and not bit by bit, along with the "instructions" of what to do with it, to the remote computer(s) selected to handle it. There should also be proper management of their resources, so that only close-to-idle computers take on the work, and never at 100% of their CPU! Everyone
who decides to join the "farm" should benefit, like the peers in P2P software such as BitTorrent. (A rough sketch of that worker-side policy follows below.)
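As a sketch of that worker-side policy, the idea would be something like the following: only accept work when the machine is close to idle, and throttle the work loop so it never pegs the CPU. The threshold, duty cycle and the `process_chunk()` helper are all hypothetical placeholders, not part of any real system:

```python
import os
import time

IDLE_THRESHOLD = 0.25   # accept work only if the load per core is below this

def machine_is_near_idle():
    # Unix-only, standard library: 1-minute load average divided by core count
    load_1min, _, _ = os.getloadavg()
    return load_1min / os.cpu_count() < IDLE_THRESHOLD

def run_job(chunks, process_chunk, duty_cycle=0.5):
    """Process chunks, sleeping between them so CPU use stays well below 100%."""
    for chunk in chunks:
        if not machine_is_near_idle():
            break                                # back off if the owner needs the machine
        start = time.monotonic()
        process_chunk(chunk)
        busy = time.monotonic() - start
        # e.g. with a 50% duty cycle, sleep as long as we just computed
        time.sleep(busy * (1 - duty_cycle) / duty_cycle)
```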
Besides, I thought of this working multi-platform. So it could be a good use for Intel/AMD many-GHz machines in their idle time. Classic Amigas with their Motorola CPUs would suffer a bit; maybe AROS running on x86 would be a better scenario.
The difference from SETI@home is that any program/task could be distributed; SETI only handles SETI tasks.

These are just the thoughts of someone (me) who doesn't have any practical or theoretical knowledge of distributed computing, which is why I called the idea "wacky". But who knows, maybe some of what I described here could work.