I love passkeys, but they're still kinda hard to use. There are several sites that won't let you enroll multiple ones, and it's easy for systems to step on each other, as in the aforementioned experience.
The problem is fallback. All my banking apps have SMS OTP fallbacks, and that's no better than having only SMS OTP. If you're building these systems, make sure you have good fallbacks. What matters in design is not so much how well it works when things go right, but how well it works when things go wrong. With security, you really cannot ignore edge cases.
The easier it is to do things, like use another channel, the harder it is to keep secure.
The easier it is to keep secure, the harder it is to use.
Apple wants you to use iCloud passkeys, Microsoft wants you to use Microsoft Account passkeys, Google wants you to use Google passkeys. Even if you have a dedicated USB device plugged in, browsers keep defaulting to the cloud accounts.
Bitwarden's approach is to simply hijack the passkey request before the browser can respond and throw itself front and center. It's a terrible hack but it works on every browser at the very least.
If these companies cared about their users more than they cared about throwing up walled gardens, they wouldn't put a USB key behind "Choose another method" -> "Dedicated device" -> "Security key" -> "Confirm" while offering one-click login with their cloud account. And they would offer a proper API for third party applications to integrate into the native passkey storage.
In practice we don't actually want the best security though. We frequently make concessions. I mean with my bank I don't want "the best" security. If I lose my credentials I don't want to go broke. If my credentials get hacked (especially if hacked by no fault of my own!) I want that money recovered. These things would not be possible with "the best" security.
In fact, under a different interpretation I would call those paths less secure. The ability to recover is a security feature just as much as it's a security risk.
Both security and privacy do not have unique all encompassing solutions. They are dependent upon the threat model.
Importantly, when designing things you have to understand modes of failure. When you design a bridge, you design it to fail in certain ways, because when/if it fails you want it to do so in the safest possible way. Why does this pattern of thinking not also apply here? It seems just as critical! In physical security you also have to design for both fail-open and fail-closed. You don't always want fail-closed; doing so gets people killed! So why is the thinking different in software?
Not to mention:
How do I login from my Linux machine if I'm only using my iCloud key?
Your logic would lock me into the Apple ecosystem forever, and that's a worse security setting than anything else we discussed. Apple decides to become evil and I'm just fucked. Or swap Apple for Microsoft, which is actively demonstrating that transition.
In terms of security, yes. But not in terms of convenience.
I've decided to stop adding new ones. I'll just OTP 2FA. It's simple, reliable, and I can keep it in Bitwarden safely.
Safari on iOS can store and use passkeys from any app that implements the right system API, including the default Apple Passwords but also Bitwarden and Chrome.
For desktop, you can either use a browser extension provided by some password managers (such as Bitwarden), or if you're on a Mac, Safari and Chrome can access passkeys from other apps similarly to on iOS (but not as many providers support this API on Mac as on iOS, and in particular Bitwarden doesn't, so you'd have to use the extension for that).
[1] https://developer.chrome.com/blog/digital-credentials-api-sh...
I've never seen a legitimate use case where I need to prove my identity to use a website anyways.
Overall it’s not terrible but I think these edge cases are going to keep biting people and need to be addressed in some way. And yes I understand that I could use a Yubikey or Bitwarden or some such but the point was that I wanted to see how this flow works for “normal” users who just use the iCloud Keychain and the experience leaves something to be desired.
> So if I initially create an account from my MacBook and the passkey gets listed as “MacBook”, I then go to log in from my iPhone and it still uses the “MacBook” passkey because of iCloud sync. But this is confusing because I cannot have an iPhone key.
Now try using a Windows or Linux computer... This is why I strongly prefer not to use macOS passkeys. How the fuck am I supposed to log in on my *nix machines if you only allow me to enroll one passkey?!
The messed-up thing is that the simplest backup option is a magic login link, which is obviously less secure. Also, you cannot sync a passkey between platforms unless you use a third-party authenticator, so you have to have a backup method of some sort even if not for recovery reasons.
My only feedback is about the Quickstart of passkeybot: "feed this example into a good LLM with these instructions". I understand the idea, but I was a bit shocked that the first time I've seen this sort of instruction is for an auth framework.
I am in the middle of writing a passkey-driven server dashboard app (native SwiftUI app, with a small server component).
In the future, I would like to use passkeys as much as possible, but they do present a bit more friction to users than Sign in with Apple. When I was initially learning them I wrote this up: https://littlegreenviper.com/series/passkeys/
[1] Spec author quote: "To be very honest here, you risk having KeePassXC blocked by relying parties." https://github.com/keepassxreboot/keepassxc/issues/10407#iss...
[2] https://www.smokingonabike.com/2025/01/04/passkey-marketing-...
[3] https://fy.blackhats.net.au/blog/2024-04-26-passkeys-a-shatt...
I personally think the ability to export+import passkeys is a good thing from a backup point of view, but he's not wrong in suggesting that companies actually using the high security features of passkeys will eventually block software implementations like these.
This isn't about vendor lock-in. Nobody is asking for KeepassXC to remove passkey support. This is about security software implementing an API and not fulfilling the expectations that come with such an implementation. To quote the comment you linked:
> That's fine. Let determined people do that, but don't make it easy for a user to be tricked into handing over all of their credentials in clear text.
> personally think the ability to export+import passkeys is a good thing from a backup point of view
It's not a "good thing," it's absolutely critical. If I can't back up my credentials to a location that I trust, then it's not an acceptable login method. What happens if my PC dies and I never got the chance to export my data? I just can't log in anywhere? KeePassXC lets me do that backup, but the spec authors think it's appropriate to ban me for using it, because it lets me manage my own data. That's bonkers.
1) that they're enforcing these specs for technical reasons, not because they want vendor lock-in
2) a result of these decisions in the long term is vendor lock-in
I think we're verrry slowly inching toward shedding all the security-nerd self-indulgences and getting to what I think is the eventual endgame, in which passkeys are just keys: ultimately a fairly user-friendly way of getting people to use a password manager without it feeling like one. All the other features seem like noise.
No, that is absolutely not the point. The points of using pub/priv keys for asymmetric auth instead of passwords (symmetric, manually generated auth) are:
- Server-side (ie, central point) hacks no longer matter an iota from a user auth pov. No more having to worry about reuse anywhere else, about going around changing passwords, nada, because the server simply doesn't have anything that can be used for auth anymore at all. No more concerns about whether they're storing it with the right hash functions or whatever, they could publish all the public keys in plain text and it'd be irrelevant. This fantastically changes the economics of attacks, since now instead of hacking one place and getting thousands/millions/hundreds of millions of credentials you'd have to hack every single separate client.
- As a practical matter, the process means eliminating the whole ancient hodgepodge of password requirements (often outright anti-security) and bad practices and manual generation work. Everything gets standardized on something that will always be random, unique, and secure.
And that should be it. That's the point and the value, always was. The only goal should be to put a nice UX and a universal standard around it. But of course, modern enshittified tech being enshittified, they had to shove in a bunch of stupid fucking bullshit garbage like what you're talking about.
As stated by the spec authors on KeePassXC's bug tracker, open source software may be modified by the user, so cannot be trusted. The passkey proposal is for all of your keys to be managed by proprietary software validated by your phone or your computer's TPM module. That means one of three big, US-based tech companies will control all of the world's login data. Those 3 companies are all currently involved in the largest fascist-taint-tongue-polishing in US history, and we want to hand them control over the world's logins. That's a much, much bigger risk than some users doing something stupid.
The spec needs to be written with the assumption that the user's private keystore may be hostile to the user's own interests, because in the real world, it is. It needs to be written to mitigate damage to the user from a hostile keystore. Instead, the spec places total trust in the keystore. This is a fatal error.
But the existence of attestation means Apple could at any time in the future make attestation on by default and suddenly our devices control our secrets more than we do.
Is that "cannot be extracted" from JS only, or is this an actual device-locked, TPM/SEP-bound key like passkeys?
If it is, it seems kind of like the buried lede to me that there is a browser API that lets any website build its own completely unstandardized quasi-passkey system and lock the key to the current device.
And likewise you as the app vendor can know the key was generated, and that it works, but you can't[1] know that it's actually locked to a device or that it's non-exportable. You could be running in a virtualized environment that logged everything.
Basically it's not really that useful. Which is sort of true for security hardware in general. It's great for the stuff the device vendors have wired up (which amounts to "secured boot", "identifying specific known devices" and "validating human user biometrics on a secured device"), but not really extensible in the way you'd want it to be.
[1] Within the bounds of this particular API, anyway. There may be some form of vendor signing you can use to e.g. verify that it was done on iOS or ChromeOS or some other fully-secured platform. I honestly don't know.
the capability is there, but it would be massively inconvenient, since it requires a lot of lockdown
might be the next generation of anti-cheats though
Just to be clear, the PKCE secret can be the same for each initiation, but in the end its goal is to ensure that the first request matches with the last one. And yes, there is "plain" PKCE method but that is just for testing. SHA256 is the default one used to obfuscate the secret.
Of course, if you trust the client (no bad browser extensions, updated browser), have good TLS settings and no MITM risk, and make sure your IDs are single-use, then it seems like that should be fine.
The `code_challenge == base64url(sha256(code_verifier))`. You share the `code_challenge` at the start of the flow.
I run an authentication server and requiring PKCE allows me to make sure that XSS protection is handled for all clients.
Quickstart: (1) Copy/paste example_http_server into your LLM of choice (use a paid/good model). (2) Prompt: "Implement the HTTP handlers here for my project, ..."
Um, no? How about you give me real instructions on how to do it? I’m not going to delegate a security-critical task to an LLM. And since I need to review it carefully myself anyway, I might as well write it all by hand, right? Like, the whole premise is I just need to implement a couple of webhooks.
This is like those "contact your system admin" error messages. I am the system admin!
The LLM is only for converting the JS-based example code into your language X and HTTP framework Y (instead of providing example code for every combination of X and Y).
The standard implementation is a single file, `http_server.ts`, which is around 200 lines of well-commented code; the important logic is only around 5 lines. The example code can be run locally with a few commands.
The repo also contains a sequence diagram [1], a description of the HTTP handlers needed [2], and a live demo [3] where you can see the request/responses.
Thanks for your feedback. I have made this clearer in the readme.
- [1] https://github.com/emadda/passkeybot/tree/master?tab=readme-...
- [2] https://github.com/emadda/passkeybot/tree/master?tab=readme-...
- [3] https://demo.enzom.dev/
I'm curious why there would be any legitimate reason for that. Security-wise it should not happen; is it just some implementations being crappy, or some bad practice like reusing the same passkey across different devices?
Example: this article.
It's a bit the same reason why, although I love the Keychain in macOS, it also makes me uncomfortable. Lose your phone and laptop in a theft or fire and you're locked out of your Apple account. Goodbye, online presence.
That is why you should ship a pristine HTML+CSS+JS environment that can use subtle web crypto. YOU show what is being signed. And then the device can sign its hash using the secure enclave.
And you CAN do attestation even on consumer devices, by using the DeviceCheck or App Attest framework (I think that's what it's called). I did it myself in our app. It doesn't show up 100% of the time, but when it does it's useful.
PS: being the web3 / blockchain geek that I am, I will tell you stuff that triggers anticryptobros on HN.
The Web3 ecosystem already has a standard called EIP712 for signing structured data. If you want to stick to standards, use that!
The secure enclaves all use P-256 (a.k.a. secp256r1) while Bitcoin / Web3 uses K-256 (secp256k1, the Koblitz curve; they distrust the NIST curves or whatever).
So that means you're going to have to use Abstract Accounts and the new precompiled smart contracts to verify P-256 signatures, which only Binance Smart Chain and a handful of other chains have deployed. Luckily BSC is the most widely used chain by volume and has plenty of money sloshing around, so you can build your trustless programs there. If you want to be totally trustless, LET THE SMART CONTRACTS GENERATE THE CHALLENGE TO BE SIGNED BY THE AUTHENTICATOR. Then have it sign the temporary K-256 public key (from the keypair) to use; as long as your session is open, you can then use that private key to sign transactions. As usual, do this for small amounts per day; transactions that move larger amounts should still require multisig keys, etc.