If you're wondering what they mean by this, [1] has been around since 2018. It's not unusual for a motherboard to put the TPM on a removable module, so you don't even have to desolder the chip to MITM the communications.
The most recent Intel and AMD CPUs have "firmware TPMs" that run in the CPU's so-called "trusted execution environment" so there's no I2C to interpose. Of course, that doesn't mean you're protected against attackers who have physical access to the machine; they can simply install a keylogger.
But CPU-side software needs to use it, and without default well-known keys...
You either need an interactively provided PIN, or a TPM integrated into the CPU/SoC to be secure in such a scenario.
fTPM is a firmware-based TPM, usually implemented by a coprocessor (or a TrustZone-style enclave) inside the CPU, yes. It's unrelated to which TPM standard it implements.
You can also have external TPM 2.0-compliant devices (commonly referred to as dTPMs; the naming was probably borrowed from iGPU/dGPU), and in fact many of the options offered for making desktops fully compliant with Windows 11 (which requires TPM 2.0) involve a dedicated TPM 2.0 chip.
Ultimately, the TPM standard does not care where the chip is; it just provides mechanisms for its use, which do include an encrypted, tamper-protected interface... if one wants to use it.
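For the curious: the encrypted sessions mentioned here derive their keys with the TPM 2.0 "KDFa" construction, which is an SP800-108 counter-mode HMAC KDF over the session salt and the nonces exchanged at session start. A minimal sketch in Python (the salt and nonce values below are placeholders, not a real TPM exchange):

```python
import hmac
import hashlib

def kdfa(hash_alg, key, label, context_u, context_v, bits):
    """SP800-108 counter-mode KDF as used by TPM 2.0 ("KDFa").

    Per the spec, each HMAC iteration covers a 32-bit big-endian counter,
    the label followed by a single 0x00 byte, both context values, and the
    requested output length in bits.
    """
    fixed = label + b"\x00" + context_u + context_v + bits.to_bytes(4, "big")
    out = b""
    counter = 1
    while len(out) * 8 < bits:
        out += hmac.new(key, counter.to_bytes(4, "big") + fixed, hash_alg).digest()
        counter += 1
    return out[: bits // 8]

# Placeholder inputs: in a real salted session the salt is a secret the
# caller encrypts to a TPM-resident key, so an interposer never sees it.
salt = b"\x01" * 32
session_key = kdfa(hashlib.sha256, salt, b"ATH", b"nonceTPM", b"nonceCaller", 256)
```

Because the salt is delivered encrypted to a key only the TPM holds, a bus interposer observes the nonces but cannot reconstruct `session_key`, which is what makes the session more than obfuscation.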
BitLocker is traditionally the implementation susceptible to this attack, but for that I'll just defer to Chris Fenner.
I suppose a PIN is a slight improvement over a regular password, but a big appeal of TPM FDE, in my opinion, is unattended unlock.
I think discrete TPMs don't really have a future in systems that need robust system state attestation (both local and remote) against attackers with physical access. TPMs should be integrated into the CPU/SoC to defend against such attacks.
What are your thoughts on Microsoft Pluton and Google OpenTitan as TPM alternatives/emulators?
Should system attestation roots of trust be based on open-source firmware?
Recent AI/Copilot PCs based on Qualcomm SDXE/Oryon/Nuvia, AMD Zen5 and Intel Lunar Lake include Microsoft Pluton.
this is completely incorrect: encrypted sessions defeat TPM interposers when there is a factory burned-in, processor-side secret to use. lol at calling it just "obfuscation" because you could spend $5M to decap the chip, fetch the key, and put the processor back into working order for the attack.
that just requires a vertically integrated device instead of a consumer part-swappable PC.
Phrases like this give me the shivers, as it translates into "mandatory surveillance by some authority telling me what I can and can't do with my computer".
TPM is an evil concept. Physical access should be final.
How would that attack work if someone stole my Ryzen powered laptop with full disk encryption, TPM2.0 and secure boot with firmware password enabled?
The screen/keyboard is not authenticated to the user, and TPM is not capable of fixing that.
It doesn't require some state actor to do that. Just money.
User is required to not enter the second password if the wrong security image is displayed.
You can still attack it with a fancy radio transmitter: after the first password is entered on the stolen laptop, relay the security image it displays over to the second laptop.
All bets are off if your attacker is determined and has physical access.
Firmware TPMs (fTPMs) are faster, but I doubt they're really fast enough to use as an HSM.
There are TPM APIs for Java, so you can do this, but it's not surprising that the Java keystore providers lack built-in support, given the performance issues.
Ideally fTPMs would ship with EK certs and platform certificates; they would be very fast and as secure as (or more secure than) dTPMs. Then using fTPMs as HSMs might take off.
I know it's awful, but probably not as awful as a hardcoded passphrase.
If the chain is protected by the TPM, this method, if implemented correctly through the whole chain, should protect your cert and private key.
That being said, _should_ is the keyword. I don't think any platform has really managed to escape all attacks, though a lot in this area do require hardware access (like the tweezers previously employed by the author :)).
This is a common misconception.
It's just not widely used for other applications.
For some people, this is a useful increase in security. Those people set up their own TPM according to their own rules. For the rest of us, who had one forced on us by Microsoft, it's just more anti-right-to-repair.
Also, I'd be pretty frustrated if I was sharing a PC with someone and they got me banned from a game.
Stallman was right: https://www.gnu.org/philosophy/can-you-trust.en.html
(Last few paragraphs.)
See also: https://gabrielsieben.tech/2022/07/29/remote-assertion-is-co...
Of course TPMs can be (ab)used for DRM, but the same is true of many ideas in cryptography. We still don't say AES or RSA are tools designed to restrict your rights.
In reality, TPMs are almost always used to (attempt to) protect the user's data rather than to restrict the user.
I would argue that the discrete-chip variant isn't very good at this (and even worse at DRM), but a lousy implementation doesn't mean the concept is bad. (As Foxboron mentioned earlier in this thread, discrete TPMs can still act as reasonably good "discounted" smartcards, but they are bad at platform state attestation.)
In fact I would have much preferred if the industry embraced the measured boot idea more instead of mainly pushing stricter verified boot schemes.
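The measured-boot idea boils down to the PCR extend operation: each boot stage hashes the next one into a register that can only be extended, never directly set, so the final value commits to the entire ordered boot sequence. A minimal illustrative sketch (not tied to any real event-log format):

```python
import hashlib

def pcr_extend(pcr, measured_data, hash_alg=hashlib.sha256):
    """Extend a PCR: new = H(old || H(measured_data)).

    Software can only feed digests in; it can never write a PCR value
    directly, so the register accumulates the whole measurement history.
    """
    digest = hash_alg(measured_data).digest()
    return hash_alg(pcr + digest).digest()

# Simulate a boot chain where each stage measures the next before running it.
pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
for stage in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, stage)
```

Sealing a disk key to this PCR value means any change to any stage, or even to their order, produces a different final value and the TPM refuses to unseal, which is exactly the property verified boot tries to enforce by refusing to run instead.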
AES and RSA are just algorithms, not implementations. I'd compare TPMs to HDCP, AACS, or CSS (the DVD one) instead.
Ah, the tweezers strike again, just not for Nintendo this time. Truly the most universal hardware hacking tool.
Although, designing against physical attacks is very difficult, so I guess there’s no need to imagine a good-hearted conspiracy of conscientious hardware folks.
There is a reason why a lot of systems integrate the security processor on the same piece of silicon whose state it is meant to protect.
The reason discrete TPMs exist is supposed compliance with crypto standards and physical protection against key extraction, but they sort of miss the forest for the trees. What matters to users is the protection of their data, not the TPM's secrets, and discrete TPMs aren't very good at the former.
But there’s a pretty big social harm to locking people out of their devices, like a generation of tech-illiterate kids growing up who haven’t been allowed to break their computers well enough to learn anything about them.
But on essentially all existing UEFI systems you can trivially overwrite the "db" keystore in flash and install anything you please.
Also most (all?) UEFI systems are not locked to Windows and allow customizing the keystore via the firmware console interface anyhow.
All of them.
The Secured-core machines still allow you to reset Secure Boot into setup mode, as mandated by the spec.
But did no one stop and question whether the TPM should have been on a dedicated block that couldn't be reprogrammed, rather than assuming there wouldn't be bugs in the GPIO pin muxing? Never mind all the additional complexity of assuming page-permission access to shared-purpose MMIO regions.
So, IMHO this starts as a hardware bug.
So either the pin is configurable, or you've wasted a pin that could otherwise be used for decorating the motherboard with RGB LEDs.
Also, the pin layout has to be standardized by the socket specification (eg "LGA 2011"), which may have to retain compatibility for a decade or more. This strongly favors defining reconfigurable over fixed-function pins.