Hmm, guess it's IA64-targeted. Which, again, makes it look like much more of an 'enterprise' feature, and much less like something that'll impact the home user in any way. (In other words, putting it on "the good ship Itanic" means they can focus on the RAM, disk, and network I/O problems first, and put graphics performance on the back burner. Servers will do well enough even if they're stuck with easily-buffered/virtualized VESA framebuffers for video.)
http://www.pcworld.com/news/article/0,aid,112550,00.asp

New Scientist's infographic was obviously cooked up by their own staff... Mac OS? To my knowledge, Intel has never said one word about instruction set virtualization. (And the whole *point* of Vanderpool is to have multiple physical cores, negating the need for it: you just run the same native x86/IA64 code directly on one of the cores, and make what are supposed to be relatively minor modifications to the way you handle I/O. The idea is that *having the second CPU core* does the work and provides the isolation, sidestepping all the bottlenecks of a software emulator like VMware or FX!32.)
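If Vanderpool surfaces the way earlier additions (MMX, SSE) did, software would presumably probe for it via a CPUID feature flag before falling back to pure emulation. A minimal sketch in C, assuming a hypothetical 'VMX' bit in CPUID leaf 1's ECX -- the encoding is my guess, not anything Intel has published here:

#include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1 returns the standard feature bits (MMX, SSE, etc.). */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 unsupported -- very old CPU");
        return 1;
    }

    /* Hypothetical: treat ECX bit 5 as a 'VMX' hardware
       virtualization flag. The real encoding is Intel's call. */
    if (ecx & (1u << 5))
        puts("hardware virtualization: supported");
    else
        puts("hardware virtualization: absent, emulate in software");

    return 0;
}

Same pattern apps already use for MMX/SSE detection, so nothing exotic would be needed on the software side.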
--
Now, Intel doesn't *talk* about instruction set virtualization, because that would put them out of business, but they do it when they need to; "IA32" is just a big legacy abstraction atop any of today's superscalar chips, and unless you're Transmeta, all that translation back and forth gets done in hardware, or microcode at worst.
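To make "doing it in software" concrete: a translator in the FX!32/Transmeta mold is, at bottom, a decode-once, cache, and re-dispatch loop. A toy sketch with a made-up two-instruction guest ISA -- nothing like any real product's design, just the shape of the technique:

#include <stdio.h>
#include <stdint.h>

/* Toy 'guest' ISA: opcode 0 = add immediate to accumulator, 1 = halt. */
enum { OP_ADD = 0, OP_HALT = 1 };

/* A real translator would emit native code into a buffer and jump
   into it; this sketch just caches the decoded form so each guest
   instruction is only 'translated' once. */
typedef struct { int decoded; uint8_t op; int8_t imm; } TransCacheEntry;

int main(void)
{
    const uint8_t code[] = { OP_ADD, 5, OP_ADD, 7, OP_HALT, 0 };
    TransCacheEntry cache[sizeof code / 2] = {0};
    int acc = 0;

    for (unsigned pc = 0; pc < sizeof code / 2; pc++) {
        TransCacheEntry *e = &cache[pc];
        if (!e->decoded) {                 /* translate on first touch */
            e->op  = code[pc * 2];
            e->imm = (int8_t)code[pc * 2 + 1];
            e->decoded = 1;
        }
        if (e->op == OP_HALT)
            break;
        acc += e->imm;                     /* execute the cached form */
    }

    printf("accumulator = %d\n", acc);     /* prints 12 */
    return 0;
}

The payoff is the same one hardware gets from its decoded-uop machinery: you pay the translation cost once per instruction, not once per execution.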
Until Itanium. Here, as I understand it, Intel keeps going back and forth: the original tried to pretend the native VLIW core could be abused as an IA32 backend, then performance was so bad that they picked up FX!32 people and announced btrans, a software translator. Itanium 2 has been out for a while (I have no idea which solution is used, or even available, there), and from The Register/The Inquirer I gather there's some argument over just wedging a P4 or Xeon core onto the chip for III or IV. (Not hard, considering that, *whew*, like I said above, P4 and Xeon are going multicore in their standard 32-bit/maybe-AMD64-compatible variants as well, meaning the individual cores should get smaller and smaller relative to chip densities that scale with Moore's Law. Now that they've run out of extensions to use up die space with -- MMX, SSE, SSE2, HyperThreading -- they can let manufacturing processes get denser until a single x86 core only fills 1/3 of what's an economical size for a chip... then use the rest of the space for 2 more x86s, or 1 Itanium.)
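To put rough numbers on that die-area handwaving -- my own back-of-envelope, assuming a fixed core design on a fixed-size die, with density doubling every ~2 years:

#include <stdio.h>

/* If transistor density doubles roughly every 2 years (Moore's Law)
   and the core design stays fixed, its share of a constant-size die
   halves each generation. Purely illustrative, not a roadmap. */
int main(void)
{
    double share = 1.0;                    /* core fills the whole die today */
    for (int year = 0; year <= 6; year += 2) {
        printf("year %d: core fills %.0f%% of the die\n",
               year, share * 100.0);
        share /= 2.0;                      /* one density doubling */
    }
    return 0;
}

By that math the "1/3 of the die" point arrives somewhere between the first and second shrink -- soon enough to matter for exactly the roadmaps being argued about.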
None of this has anything to do with 'crossplatform' solutions off Intel's own hardware, unless you really expect to be running SoftWindows or some other x86 (or IA64?) emu on your cellphone. (At which point it'd be cheaper and more performant to buy a $300 x86 box from Wal-Mart and use the phone to VNC into it. Which is itself a model '3G' providers are looking at: 'walled-garden' services versus DE or anything-that-looks-like-an-OS 'freedom,' because the walled garden preserves an excuse to charge you by airtime or by the byte every time you play a game.)
And before anyone forms an idea from this post without reading the prior one, I have to pat my pile of AMD stock and reiterate that Opteron and Athlon 64 are poised to go multicore, too... In fact, you can argue the way they've designed the existing line leaves more of the pieces in place than Intel's SMT approach does. (Opteron is all crossbar'd up and ready to play; I'm not sure what Intel's up to these days, but they seem to like redesigning buses, and now they have those questionable SMT units to preserve per-core...)