That's assuming you do streaming only. When you add on mixing, 3D effects, filtering... then your CPU cycles will start to disappear real fast. Piping information is easy. It's processing that's hard.
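To make the point concrete, here's a minimal sketch of software mixing: every output sample requires a multiply-accumulate per input stream plus a clamp, whereas pure streaming is just a memory copy. The function name `mix_streams` and the 16-bit signed sample format are illustrative assumptions, not from any particular API.

```python
import array

def mix_streams(streams, gains):
    """Mix several 16-bit PCM streams into one, with per-stream gain.

    streams: list of array('h') sample buffers (illustrative format)
    gains:   list of float gain factors, one per stream
    """
    n = min(len(s) for s in streams)
    out = array.array('h', [0] * n)
    for i in range(n):
        acc = 0.0
        # One multiply-accumulate per stream, per sample -- this inner
        # loop is where the CPU cycles go once you stop merely piping.
        for s, g in zip(streams, gains):
            acc += s[i] * g
        # Saturate to the 16-bit range instead of wrapping around.
        out[i] = max(-32768, min(32767, int(acc)))
    return out
```

Even this naive version does streams × samples multiplications per buffer; add resampling, 3D panning, or filtering and the per-sample work multiplies again, which is why hardware mixing (or a DSP) was such a big deal.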
Indeed, and we were promised some sort of miracle API for that 5 years ago... but who expects it to be ready? ;-)
That's what many people don't realize about external USB 2.0 hard drives. I was going to get one myself to replace my internal backup hard drive, until I realized that many external drives can suck up between 25% and 50% CPU utilization. Ouch.
This could depend on your USB host controller. Which does mean I'll have to go back and look at what those 50% utilization reviews tested with.
[Laptop...]
Well, given the determination and fanaticism of the Amiga community, I suppose someone will try. But it still won't be practical.
Tough call. If Eyetech sticks with PCI-104 for a while, it could almost make sense. If marketed as a proverbial PAWS-104, it might even have some appeal as 'test equipment' in the broader embedded market. But it probably still wouldn't come cheap... and to be at all applicable to the consumer 'desknote' market, you'd want to come up with some way to clip to a normal PCI slot anyway.
Those mini-PC designs annoy me. Why make a mini tower a foot tall and put it beside the monitor when you can put it behind?
Well, it just hit me that it's a division-of-labor problem; many modern LCDs need either a weight or a complex stand, and it's better to incur the cost of shipping the brick (semi-literally) with the high-margin display than the low-margin case.
But inspired by photos claiming to be the new iMac, as well as Gateway's old attempt to compete with it... Therein lies the solution: make an ultralight ITX enclosure that can bolt down to the VESA mounting points on the back of the monitor. You won't be able to fit a Prescott with heatpipes in it, of course, but an A1 or C3, active cooling (for reasons of weight), a 12V supply, and a 2.5" drive should live... Might want to stick with an external optical drive.
(That said, that's one of the reasons for the shoeboxes -- they're flexible enough to be used if you *do* swingarm or wall-mount the display... and they riff on the Walker, so we should feel flattered, right?)
But there's no retaining mechanism for something that heavy, so welcome to catch-22.
That's the limiting factor with the "module" design. OK, you got your shiny new G4 or G5 in your AmigaOne. How do you cool the beast with something that won't crack the chip or fall off?
True, true... Except I lied. There are still through-holes for the standard GPU cooler spring-clip mount, right? The only problem is that nobody makes GPU coolers that tall (well, except nVidia, and those aren't passive), because they'd interfere with every other slot in the system. Now, if you *drilled out* a nice Socket 7 sink, you might be able to clip it on there... and if you stick with aluminum, it wouldn't be heavy enough to pull out that MegArray... but you'd have to pay someone to do the drilling, and that'd add enough in labor to beat the claimed $10 markup on $7 coolers.
It doesn't help that the PPC is used in embedded applications where custom heatsinks are used. IBM isn't capping their cores except on the low-end G3, which doesn't really need a heatsink, anyway.
How exactly do you cap a microBGA or whatever these things are? A better question would be why, if they're so easy to design around, they seem to need more discrete components around the chip than even an Intel-based design does.