I don’t see much difference between that and storing the key on a TPM. If you have one key and you lose access to that key, then you lose access to the server.
Point: you need a backup key anyway.
Just paste all of your devices' public keys into your authorized_keys file and leave a comment at the end of each line saying which device it's for. In Userify, it literally goes right into your nodes' authorized_keys file almost verbatim. (disclaimer: I work at https://Userify.com)
And then, if you leave your token or laptop at the airport or whatever, just remove that key right from your phone and it'll take effect in seconds across all the nodes/instances (if you're using Userify), or you can just write a quick for loop with an inline sed to remove it from your authorized_keys everywhere.
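For the non-Userify case, a minimal sketch of that sed loop, shown on a local file (in practice you'd wrap the sed in `for host in …; do ssh "$host" '…'; done`; the key comment `laptop-2021` and the truncated key material are made up):

```shell
# Build a demo authorized_keys with two device keys (placeholder key data)
printf '%s\n' \
  'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFakeKeyDataAAA desktop-2019' \
  'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFakeKeyDataBBB laptop-2021' \
  > authorized_keys

# Strip every line matching the lost device's comment (GNU sed in-place)
sed -i '/laptop-2021/d' authorized_keys

cat authorized_keys   # only the desktop key should remain
```

This is exactly why the per-device comment matters: it gives sed something unambiguous to match on.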
OTOH, for SSH use, if you lose the key you just create a new one; it's not like you've lost the only copy of your BitLocker key.
I don't think average Joe is going to understand these passkeys either.
With a password, you open your password manager, copy the password to the clipboard, paste it into the input field, and trust that nobody could read it from your clipboard and that the program handling the password does so correctly. If your password leaks on the way, it's leaked.
With FIDO2, the server sends a challenge and asks your HSM (or TPM, not sure what the right word is) to sign it with your private key. So the server can verify that you own the private key, but if the challenge or the response leaks, it's just this one time. Next time it will be a new challenge.
Also, for the average Joe, the result is that the "passkey" is the fingerprint or face recognition and there is no password. It feels like they have only one credential: biometrics/face recognition (or a master password, I guess?). So passkeys are superior to passwords in that sense.
Fun fact 1: some people hate passkeys because they don't want to be forced to rely on TooBigTech for them. Currently I use my Yubikeys as passkeys everywhere and it works well, so I do NOT depend on TooBigTech.
Fun fact 2: FIDO2 on current Yubikeys (and HSMs in general, I think) tends to use classical cryptography, which would be broken by quantum computers. A password used with symmetric encryption is not broken by quantum computers. So there may be a period of time where this becomes a tradeoff (you may have to decide whether the more likely attack is a quantum computer breaking your authentication or malware stealing your password)?
However, not all devices play well with it; e.g. iOS and Android don't ask 1Password for the passkey. I also couldn't get it to ask for my hardware Yubikey over NFC for passkeys, but maybe I just did something wrong.
As someone who stores my SSH keys in my TPM, and has struggled with picking the right PCR values for Secure Boot in the past, I'm interested in learning more.
This may be bash-only, but a space before the command excludes it from history too.
Personally I like this, which also reduces noise in history from duplicate lines: export HISTCONTROL=ignoreboth:erasedups
I can't find it now, but I believe someone from Tailscale commented on HN (or was it github?) on what they ran into and why the default was reverted so that things were not stored in the TPM.
EDIT: just saw the mention in the article about the BIOS updates.
https://github.com/tailscale/tailscale/issues/17622
https://news.ycombinator.com/item?id=46532666 (direct comment link, more discussion on the issue in the parent)
In theory the Linux kernel keyring would help here, either instead of a TPM or in conjunction with one.
Unfortunately, as the industry abandoned the core Unix permission system (uid/gid), all of these methods just get a devfs[null] bind mount. Only processes that also support the traditional co-hosting model, like nginx and Postgres, do.
We would need nonce keys to gain no value from kernel memory or hardware storage.
You just need to tighten your sshd config; you can even require a touch of the Yubikey via the sshd config. It has been in Debian stable since like 11, I think?
So it's super friendly to integrate and very secure, as you need to physically be at your PC, have your Yubikey, and have your exact PC. So that's a lot of factors.
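A sketch of the server-side piece (OpenSSH 8.2+, assuming the client keys were generated with `ssh-keygen -t ed25519-sk`):

```
# /etc/ssh/sshd_config
PubkeyAuthentication yes
# Enforce the security key's user-presence check (touch) server-side,
# even if a key was generated with -O no-touch-required:
PubkeyAuthOptions touch-required
```

With that, a stolen laptop plus a leaked key file still isn't enough; the attacker has to be physically touching the Yubikey at auth time.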
The problem with token-based keys, to tptacek's point, is that they can be a giant pain once you start scripting across fleets.
And keys cannot be stolen from backups.
Or stolen without your knowledge when you leave your laptop unguarded for 5 minutes.
Not every attacker has persistent undetected access. If the key can be copied then there's no opportunity for the original machine's tripwires to be triggered by its use. Every second malware runs is a risk of it being detected. Not so, or not in the same way, with a copied key.
This is really cool and goes beyond the usual steps of securing the key, by also handling "what you see is what you sign" and key-usage user confirmation at the OS level, which can be compromised much more easily (both input and output).
A laptop and a phone are both general-purpose computers with "TPM chips", so "you could implement that on Android" is as true as "you could implement that on a whitebox computer".
There was something about Macs: it took them a while to get a TPM equivalent. But I think now they have one, so Macs can do it too.
Of course a real secure attention sequence would be preferable, such as e.g. requiring a Touch ID press on macOS for keys stored in the Secure Enclave. Not sure if TPM supports something similar when a fingerprint sensor is present?
The PIN can be an arbitrary string (password).
> In addition to biometric authentication, Windows Hello supports authentication with a PIN. By default, Windows requires a PIN to consist of four digits, but can be configured to permit more complex PINs. However, a PIN is not a simpler password. While passwords are transmitted to domain controllers, PINs are not. They are tied to one device, and if compromised, only one device is affected. Backed by a Trusted Platform Module (TPM) chip, Windows uses PINs to create strong asymmetric key pairs. As such, the authentication token transmitted to the server is harder to crack. In addition, whereas weak passwords may be broken via rainbow tables, TPM causes the much-simpler Windows PINs to be resilient to brute-force attacks.[139]
https://en.wikipedia.org/wiki/Windows_10#System_security

So you see, Microsoft needs a way to describe an access code that isn't a password, because it's more secure than that, but yet it isn't exactly a number, so what do you call it? "PIN" is perhaps an unfair recycling of an in-use term, but should they coin a neologism instead? Would that be less confusing?
If your computer is compromised after you've already entered the PIN, or an app running on the computer is not sufficiently privileged to sit in between you and the TPM, then no.
That's quite good protection generally. The defense against this type of attack is to get a smartcard reader with an on-board PIN entry keypad - those do exist, but it's quite a step.
Well no thanks, that risk is much higher than what this is worth.
There's also the TPM speed issue. My computer takes ~500ms to sign with an ECC 256-bit key on the TPM, which starts to become an issue when running scripts that perform git operations serially. This is a recurring problem that people tend to blame on export controls: https://stiankri.substack.com/p/tpm-performance
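For a sense of the gap, openssl's built-in benchmark shows what the same curve does in software on the CPU (numbers vary by machine, but it's typically thousands of signs per second versus ~2/s on a slow TPM):

```shell
# Benchmark software P-256 ECDSA sign/verify for ~1 second each
openssl speed -seconds 1 ecdsap256
```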
I put my SSH keys into the Mac's TPM and now it asks for a password/Touch ID when I use them. Unfortunately I forget what commands I used.
If you are doing something illegal or controversial with the key, then yes it would be foolish to store it in the cloud.
If your main concern is it becoming compromised due to a local exploit or physical breach, then I'd argue it is a strong option.
"Not trusting a private company" does not equate to "doing something illegal or controversial", though.
Since it just uses PKCS#11, it also works with tpm_pkcs11. Source for the various bits that are bundled is here [1].
Here's an overview of how it works:
1. Application asks to sign with GPG Key "1ABD0F4F95D89E15C2F5364D2B523B4FDC488AC7"
2. GPG looks at its key database and sees GPG Key "1ABD...8AC7" is a smartcard, reaches out to Smartcard Daemon (SCD), launching if needed -- this launches gnupg-pkcs11-scd per configuration
3. gnupg-pkcs11-scd loads the SSH Agent PKCS#11 module into its shared memory and initializes it and asks it to List Objects
4. The SSH Agent PKCS#11 module connects to the SSH Agent socket provided by Keeta Agent and asks it to List Keys
5. Key list is converted from SSH Agent protocol to PKCS#11 response by SSH Agent PKCS#11 module
6. Key list is converted from PKCS#11 response to gnupg-scd response by gnupg-pkcs11-scd
7. GPG reads the response and, if the key is found, asks the SCD (gnupg-pkcs11-scd) to sign a hash of the material
8. gnupg-pkcs11-scd asks the PKCS#11 module to sign using the specified object by its Object ID
9. PKCS#11 module sends a message to Secretive over the SSH Agent socket to sign the material using a specific key (identified by its Key ID) using the requested signing algorithm and raw signing (i.e., no hashing)
10. Response makes it back through all those same layers unmodified except for wrapping
(illustrated at [2])
[0] https://github.com/KeetaNetwork/agent
[1] https://github.com/KeetaNetwork/agent/tree/main/Agent/gnupg/...
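The wiring for steps 2-3 comes down to a couple of config lines; a sketch, where the module path is a placeholder (the real path comes from wherever the bundle installs the PKCS#11 shared library):

```
# ~/.gnupg/gpg-agent.conf
# Point GPG's smartcard daemon at gnupg-pkcs11-scd instead of scdaemon
scdaemon-program /usr/local/bin/gnupg-pkcs11-scd

# ~/.gnupg/gnupg-pkcs11-scd.conf
# Register the SSH Agent PKCS#11 module as provider "p1"
providers p1
provider-p1-library /usr/local/lib/ssh-agent-pkcs11.so
```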
I'm wondering why that doesn't apply here. The TPM holds the key to the cipher that is protecting your private keys. Someone uses some kind of RCE or LPE to get privileged access to your system. Now it sits and waits for you to do something that requires access to your SSH keys. When you do, you are expecting whatever user prompts come up; the malware rides along with this expectation and gets ahold of your private SSH keys and stores them or sends them off somewhere. I'm not even positive that they need a high degree of privilege on your box: if they can manipulate your invocation of the ssh client, by modifying your PATH or adding an ssh wrapper to something already in your path, then this pattern will also work.
What am I gaining from using this method that I don't get from using a password on my ssh private key?
Further benefits are an RTC that can prevent brute force (a forced wait after a wrong password entry), or the device locking itself after too many wrong attempts.
A good MCU receives the challenge and only replies with the signature, if the password was correct. You can argue that a phone with a Titan security chip is a type of TPM too. In the end it doesn't matter. I chose the solution that works best for me, where I can either only have all keys in my smart card or an offline paper wallet too in a fireproof safe. The choice is the user's.
Technically, a private key that was imported into a PKCS#11 device (and is marked as exportable) can subsequently be re-exported (though even then, during normal operation the device itself handles the crypto), but a key generated on-device and marked as non-exportable guarantees the private key never leaves the physical device.
https://wiki.archlinux.org/title/SSH_keys#Storing_SSH_keys_o...
And even the password can be forced to be re-entered by the agent for every use, if that level of security is wanted.
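With a stock ssh-agent the closest analogue is the confirm constraint (`-c`), which gates every single use of the key behind an ssh-askpass prompt rather than a password, but the effect is similar; a sketch with a throwaway demo key (with KeepassXC the key would come from the vault instead):

```shell
# Throwaway demo key with no passphrase, purely for illustration
ssh-keygen -q -t ed25519 -f demo_key -N ''

# Start an agent and add the key with the confirm constraint:
# -c makes the agent require confirmation (via ssh-askpass) on EVERY use
eval "$(ssh-agent -s)"
ssh-add -c demo_key

# List loaded identities to confirm it's in the agent
ssh-add -l
```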
Depending on which authenticator app (or maybe applies to all?), that data either is, or can be, backed up.
A yubikey cannot be cloned.[1]
> the malware rides along this expectation and gets ahold of your private SSH keys and stores them or sends them off somewhere.
Ah, this is where your misunderstanding lies. No, the crypto operation runs ON the TPM or Yubikey. The actual secret key NEVER lives in RAM (ahem, after it was imported, if importing is how it got there).
[1] You know what I mean. Of course in principle it can be. But not like a phone where it can literally be sent via scp.
The recommended usage of a Yubikey for SSH does something similar, as otherwise your key consumes one of the limited number of slots on the device.
Yes, with a TPM and a Yubikey you have the option to store the per-key material on disk, encrypted by the TPM. But the way this is then used is that the PKCS#11 software sends that encrypted blob AND the requested operation, and gets only the output back. The CPU doesn't get the SSH private key back, just the output of the RSA operation using the key.
What am I missing?
It does mean that they can't use the key a thousand times. But once? Yeah sure.
And the best thing is that you can create several different ssh keys this way, each with a different password, if that's something you prefer. Then you need to type the password _and_ touch the yubikey.
These work flawlessly with the KeepassXC ssh-agent integration. My private keys are password protected, saved securely inside my password vault, and with my ssh config setup, I just type in the hostname and tap my Yubikey.
https://www.stavros.io/posts/u2f-fido2-with-ssh/
We've got private Git repos only accessible through ssh (and the users' shell is set to git-shell) and it's SSH only through Yubikey. The challenge to auth happens inside the Yubikey and the secret never leaves the Yubikey.
This doesn't solve all the world's problems (like hunger and war), but at least people are definitely NOT committing to the repo without physically having access to the Yubikey and pushing on it. (Now, of course, a dev's computer may be compromised and he may confirm auth on his Yubikey and push things he didn't mean to, but that's a far cry from "we stole your private SSH key after you entered your passphrase on a Friday evening and are now pushing stuff in your name to 100 of your repos over the weekend".)
"Your SSH keys" aren't really part of that threat model. "You" know the device you're connecting from (or to, though generally it's the client that's the mobile/untrusted thing). It's... yours. Or under your control.
All the stuff in the article about how the TPM contents can't be extracted is true, but missing the point. Yes, you need your own (outer) credentials to unlock the (inner) credentials, which is no more or less true than just using your own credentials in the first place via something boring like a passphrase. It's an extra layer of indirection without value if all the hardware is yours.
TPMs and secure enclaves only matter when there's a third party watching[1] who needs to know the transaction is legitimate.
[1] An employer, a bank, a cloud service provider, a mobile platform vendor, etc... This stuff has value! But not to you.
What on earth do you think I make my users present keys for???
You know all those guides saying "you should never copy an ssh private key over the network. Make a new one for each device" that every idiot dev ignored? Now I can enforce that.
Not a chance. It is my key.
So does a passphrase, though, with significantly less complexity and fragility.
Again, the linked article and responses here are making IMHO a pretty bad mistake with threat model analysis.
Which is what SSH keys are for?
The advantage of this approach is that malware can't just send off your private key file to its servers.
The use case is ssh keys! If malware can run an ssh command on the remote host, it doesn't need to steal your key, it can just install itself there. Or add its own keys to the access, etc... At best, you'd have to detect and fix that sort of thing with auditing and control, something that's isomorphic to the "third party" requirements I was mentioning.
To repeat the third time: this is all terrible threat model analysis. TPMs do not have value for individuals managing access between trusted devices. TPMs are for third-party validation.
But you just admitted that it prevents the key from being stolen, right? So the value is that the key cannot be stolen. That doesn't mean malware cannot use it, of course, just that it cannot extract it. Which is better than malware extracting it.
Keep a CA (constrained to your one identity) with a longish (90 day?) TTL on the TPM. Use it to sign short-lived (16h?) keys from your TPM, and use those as your working keys.
But, if you are making a lot of x509 authenticated calls directly, then the speed and not needing to touch the key are important. Or if you need to ssh to 10,000 hosts quickly, things like that.
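The signing flow works with stock ssh-keygen; a sketch with throwaway key names (in the real setup the CA key would live in the TPM behind a PKCS#11 provider, and `my-identity`/`me` are placeholders):

```shell
# Long-lived CA key (plain file here for illustration; would live in the TPM)
ssh-keygen -q -t ed25519 -f id_ca -N '' -C 'my-ca'

# Fresh working key, then sign it with a certificate valid for 16 hours
ssh-keygen -q -t ed25519 -f id_work -N '' -C 'work'
ssh-keygen -s id_ca -I my-identity -n me -V +16h id_work.pub

# Inspect the resulting certificate (type, principals, validity window)
ssh-keygen -L -f id_work-cert.pub
```

Servers then only need `TrustedUserCAKeys` pointing at `id_ca.pub`, and a stolen working key expires on its own within hours.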
that's not good
I saw a write up where someone successfully got sshd to use a host key from a fido2 yubikey without touch, but I can't find it...
As far as "TPM vs HSM" goes, it is soooo much simpler to make a key pair with a FIDO2 hardware key:
ssh-keygen -t ed25519-sk -O resident -O verify-required -C "your_email@example.com"
You can get them for <$30.