Again, though, this is entirely possible to accomplish without mechanisms that could be co-opted to enforce software lock-in.
Interesting. So, what you're telling me is that a firmware-based rootkit can be reliably detected by software that has to implicitly trust the very hardware that has been infected? The TPM can work directly with the CPU to examine code before it is executed and halt the CPU if that code doesn't pass muster, with no software interaction whatsoever. This virtually eliminates the hardware's ability to lie to the operating system, since the TPM store is hardened against manipulation without the proper key.
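The measure-then-execute idea can be sketched with a TPM-style PCR extend. This is a rough Python model, not real TPM code; the stage names and images are invented for illustration:

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_value = SHA-256(old_value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot(stage_images):
    """Hash each boot stage and fold it into the PCR, in order."""
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for image in stage_images:
        pcr = extend_pcr(pcr, hashlib.sha256(image).digest())
    return pcr

# Hypothetical stage images; a real chain measures the BIOS, option ROMs,
# and bootloader before any of them run.
clean    = measure_boot([b"bios-1.2", b"nic-rom", b"bootloader"])
infected = measure_boot([b"bios-1.2-rootkit", b"nic-rom", b"bootloader"])

# Any firmware tampering changes the final PCR, so the platform can refuse
# to continue (or to release sealed secrets) before any software runs.
assert clean != infected
```

Because the extend operation is one-way and order-sensitive, infected firmware cannot reconstruct the clean PCR value after the fact, which is the property the argument above relies on.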
I'm beginning to build TC-based systems and I expect to use a private key which I generate for each customer.
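As a rough sketch of that per-customer provisioning step (all names hypothetical; a real TC deployment would generate an asymmetric keypair inside each customer's TPM and export only the public half, rather than handling raw key material like this):

```python
import secrets
import hashlib

def provision_customer_key():
    """Generate fresh private-key material for one customer, plus a
    fingerprint for tracking which customer was issued which key."""
    key = secrets.token_bytes(32)  # kept offline by the vendor
    fingerprint = hashlib.sha256(key).hexdigest()[:16]
    return key, fingerprint

key_a, fp_a = provision_customer_key()
key_b, fp_b = provision_customer_key()

# Each customer's key is unique, so a single compromise stays contained.
assert fp_a != fp_b
```

The design point is blast-radius containment: with one key per customer, a stolen key compromises one deployment rather than every system the vendor ever shipped.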
And how many of them couldn't have been stopped with a less problematic method of boot security?
This does not in any way rebut the efficacy of the platform. Granted, on a well-locked-down system it's possible the attack simply would never have worked; I don't have enough information to speak with that level of certainty. Even so, I wouldn't argue that I shouldn't bother to put a dead-bolt on my door just because the window can be kicked in.
That's entirely my point - boot security isn't a question of whether you're using signed or unsigned software from an external source, it's whether you allow the option of booting software from an external source at all. If security is a concern and yet you leave open the possibility of booting the system from a removable drive, you're doing it wrong.
I would say it's a matter of both. If the signatures of the BIOS, option ROMs, boot block, etc. don't match what the TPM knows they should be, then it all stops. And in this scenario you could still use external boot devices under the purview of the platform, so long as the external source has been identified to the TPM.
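That enrollment-based check might look something like this. A Python sketch with invented component names and images; real known-good measurements would live in TPM NVRAM or a PCR policy, not a dictionary:

```python
import hashlib

# Known-good measurements the TPM holds (hypothetical values, recorded
# when the platform owner enrolled each component).
approved = {
    "bios":       hashlib.sha256(b"vendor-bios-1.2").hexdigest(),
    "option-rom": hashlib.sha256(b"nic-rom-3.0").hexdigest(),
    "usb-boot":   hashlib.sha256(b"enrolled-rescue-image").hexdigest(),
}

def verify(component: str, image: bytes) -> bool:
    """Allow a component only if its hash matches the enrolled measurement."""
    return approved.get(component) == hashlib.sha256(image).hexdigest()

# An enrolled external boot device passes; an unknown one halts the boot.
assert verify("usb-boot", b"enrolled-rescue-image")
assert not verify("usb-boot", b"random-live-cd")
```

This is the distinction being drawn above: the policy is "only what the owner enrolled," not "nothing external, ever."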
Simply preventing booting from external devices is not sufficient.
You do have a fair point about BIOS passwords not being flawless - so make it a jumper setting on the board or something. It's not that hard to come up with solutions other than "put the entire system under the control of a licensing authority that, for all you know, can be bought off or cracked itself" (see, for example, DigiNotar's recent breach).
A general compromise like that is always a problem. Look at the code signing certificates for Microsoft which were used to sign trojan drivers, or the sound card manufacturer whose signing keys were pilfered. Keeping your private keys private is absolutely a concern -- vendors are being encouraged to protect them as though lives depend upon their safety -- but it's no reason to junk the entire idea. And it can be done; AFAIK, no one has yet cracked the encryption used to apply microcode patches to Intel processors.
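Why a leaked signing key is so catastrophic can be shown with a toy model. Here an HMAC stands in for a real asymmetric code-signing key (all names invented); the point is that verification proves only possession of the key, not good intent:

```python
import hmac
import hashlib

signing_key = b"vendor-secret"  # stand-in for an asymmetric private key

def sign(image: bytes, key: bytes) -> bytes:
    """Produce a tag over a driver image with the signing key."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify(image: bytes, tag: bytes, key: bytes) -> bool:
    """Accept any image whose tag was made with the key."""
    return hmac.compare_digest(sign(image, key), tag)

driver = b"legit-driver"
tag = sign(driver, signing_key)
assert verify(driver, tag, signing_key)

# If the key leaks, an attacker's trojan verifies identically to the
# vendor's own driver -- exactly the trojan-driver scenario above.
trojan_tag = sign(b"trojan-driver", signing_key)
assert verify(b"trojan-driver", trojan_tag, signing_key)
```

Hence the emphasis on protecting the key material itself: the entire trust chain collapses into whoever holds the private key.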
I asked the engineers present at the conference from AMD and Intel about the BIOS jumpers. It's a good idea, I think, but still not the magic bullet.
To summarize: password-protecting the BIOS, setting the BIOS jumper to "write protect," and disabling external boot devices is not enough anymore.