I'm not doing OS development, so I understand my exposure is limited, but I would think that if the design were done right, it should make things MORE cross-platform.
It would be nice if the system were approached from a virtualization/emulation standpoint. What I mean is that the base OS could determine what processors were available when it initialized, and then pass off tasks to the processors or subsystems most proficient at each particular task. This would require a base virtual machine/emulator as a fallback for when the target platform didn't exist.
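To make the idea concrete, here's a minimal sketch of that kind of dispatch: probe what backends exist at init, route each task to a backend that advertises the capability, and fall back to software emulation otherwise. All the names (`Backend`, `Dispatcher`, the task kinds) are my own illustrative assumptions, not any real OS API.

```python
class Backend:
    """A processor or subsystem that can natively handle certain task kinds."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def run(self, task_kind, payload=None):
        return f"{task_kind} executed natively on {self.name}"


class SoftwareFallback(Backend):
    """The base virtual machine/emulator: handles anything, just more slowly."""
    def __init__(self):
        super().__init__("software-emulator", set())

    def run(self, task_kind, payload=None):
        return f"{task_kind} emulated in software"


class Dispatcher:
    def __init__(self, backends):
        self.backends = backends          # discovered when the OS initialized
        self.fallback = SoftwareFallback()

    def dispatch(self, task_kind, payload=None):
        # Prefer a backend that claims this capability; otherwise emulate.
        for b in self.backends:
            if task_kind in b.capabilities:
                return b.run(task_kind, payload)
        return self.fallback.run(task_kind, payload)


# Usage: a GPU-like backend plus a general-purpose CPU backend.
d = Dispatcher([
    Backend("gpu0", {"shade", "blit"}),
    Backend("cpu0", {"int-math", "io"}),
])
print(d.dispatch("blit"))   # handled natively by gpu0
print(d.dispatch("fft"))    # no backend claims it, so it's emulated
```

Porting to a new platform then only means teaching the dispatcher about new backends; everything keeps working through the fallback in the meantime.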
This is one thing that AmigaDE had right in concept. We have actually seen this implemented with graphics cards. I don't know if it still works this way, but at one time, any features of DirectX that were not implemented in the graphics card would be emulated in software. We already have very effective asymmetric multiprocessing (AMP) in most systems, with one processor running graphics and a completely different architecture running the rest of the code.
As long as there is a fallback virtual machine, the effort to port to new platforms should be trivial compared to porting the entire OS. A new platform would require a small (in comparison) effort to get the base system up and running, and anything after that would be optimization.
I suspect that in the short run this would carry a performance hit, but in the long run it would keep the system from being tied to any one platform. It would also allow an unprecedented level of backward compatibility, since you could always run the old system as a subsystem of the new one.