As someone who uses Mac, Linux, iOS, Android and Windows, I want something that lets me sync my authentication methods across all of them, and the KeePass ecosystem (even with 2-3 different apps) is the only game in town. I absolutely do not want to use a cloud-based or vendor-owned password manager, period.
Requiring user presence/verification gets in the way of the user doing what they want to do. Not requiring user verification lets an assertion be made in the background - possibly by a malicious script. Is that tradeoff worth it?
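For concreteness, here's where that tradeoff lives on the wire: WebAuthn assertions carry a flags byte with User Presence (UP) and User Verification (UV) bits, and a "background" assertion is one the RP sees with those bits unset. A minimal sketch (illustrative only - a real RP would use a WebAuthn library):

```python
# Sketch: reading the UP and UV bits from WebAuthn authenticator data.
# The flags byte sits at offset 32, right after the 32-byte RP ID hash.

def parse_flags(authenticator_data: bytes) -> dict:
    flags = authenticator_data[32]
    return {
        "user_present": bool(flags & 0x01),   # UP: someone physically interacted
        "user_verified": bool(flags & 0x04),  # UV: PIN/biometric check passed
    }

# Fake authenticator data with UP and UV both set (0x01 | 0x04 = 0x05):
auth_data = bytes(32) + bytes([0x05]) + bytes(4)
print(parse_flags(auth_data))  # {'user_present': True, 'user_verified': True}
```

An assertion produced silently by a script would arrive with UV=0 (and possibly UP=0), which is exactly what an RP that insists on verification rejects.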
Letting the user export private keys is absolutely important for backup, and transferring between devices and services. But if you can easily export a private key then cloning it becomes significantly easier. Are trivially cloned keys a risk we're willing to take?
The answers to these depend on the user, the provider, the application, and their combined threat model. Sometimes those risks are totally fine. Other times, they're totally not. The standards could open up more options and let users or sites negotiate what they can and can't do. The cost in that direction is that the overall concept becomes more complicated, and it requires both site operators and users to learn what those tradeoffs involve - with near certainty that security will be weaker as a result.
This isn't a cut-and-dried issue with clear 'right answers' and villains. Tradeoffs exist in every direction, and there just aren't any security free lunches to be had here.
A more realistic scenario is where the user has installed a malicious extension that can exfiltrate the cookies. Requiring reauthentication makes an exfiltrated cookie less valuable. While the extra auth step can be annoying, it also provides an opportunity for additional safety checks (like validating that the IP of a request matches that of the recent auth).
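A rough sketch of that safety check - bind the session to the IP seen at the most recent authentication, and reject requests whose IP no longer matches. (The names `Session` and `check_request` here are mine, purely illustrative; real deployments also have to account for legitimate IP churn on mobile networks.)

```python
# Sketch: make an exfiltrated cookie less valuable by tying it to the
# IP recorded at the last (re)authentication.

from dataclasses import dataclass

@dataclass
class Session:
    token: str
    auth_ip: str  # IP recorded when the user last (re)authenticated

def check_request(session: Session, request_ip: str) -> bool:
    """True if the request comes from the same IP as the recent auth.

    A cookie replayed from an attacker's network fails this check and
    can be made to trigger a fresh reauthentication instead.
    """
    return request_ip == session.auth_ip

s = Session(token="abc123", auth_ip="203.0.113.7")
print(check_request(s, "203.0.113.7"))   # True: same origin as the auth
print(check_request(s, "198.51.100.9"))  # False: likely exfiltrated
```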
but they want to get rid of it and use passkeys instead.
> Are trivially cloned keys a risk we're willing to take?
The point is that for my passkey stored on my device, I should be the one who gets to answer those questions.
""" I compiled the client source, but every time I try to connect to a server it kicks me out or tells me to get a 'blessed' binary. What gives?
It's possible to modify the client source to do lots of tedious tasks (like aiming, dodging, that sort of thing) for you. Since this gives you a big advantage over a mere human, netrek has a way of knowing whether you have a client that was compiled by the netrek Gods or by you. If you compiled it, netrek will assume it's a cyborg, and will kick you out if it's not cyborg hours. """
> You absolutely should be preventing users from being able to copy a private key!
Huh? This is dumb. Users should be able to do whatever they want with their private keys. Looks like the post is on point about the push to take away control from the user. This is an anti-feature that should not be sneakily accepted as a security feature.
When DRM-like stuff is shoved on the user in the name of security, it turns into the means to control the users by whoever makes those decisions for them. This should always be opposed.
Having requirements like "users should not be allowed to do X" stinks to the extreme.
> The unfortunate piece is that your product choices can have both positive and negative impacts on the ecosystem as a whole. I've already heard rumblings that KeepassXC is likely to be featured in a few industry presentations that highlight security challenges with passkey providers, the need for functional and security certification, and the lack of identifying passkey provider attestation (which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
If a passkey can have its private key exported, by anyone at all, then this property is lost. I do not want access even to my own passkey private keys!
I certainly don’t love having a few gatekeepers in charge, but the protocol (currently?) does not really support a good alternative. And doing better is hard!
But it's not by anyone at all. It's only by users that have unlocked their database. I really don't see the attack vector here.
It's not like Apple Keychain at all, because your interaction with Keychain is very different from KeePassXC's. KeePassXC makes the locked vs. unlocked state very explicit (and you're almost always auto-locking anyway), whereas Keychain is something happening in the background that sometimes prompts me for my password/fingerprint. I have no idea what its state is, and I'd be very annoyed if someone could leak all my secrets just by accessing my computer.
With KeePassXC I'm always aware if it's open or not, because I can't use it without knowing that, and I had to make a very explicit opening of it. Because it uses local files and not the cloud, it's very important to me to be able to import and export the contents. Without that ability, I will lose access to my passwords.
It doesn't mean it should be easy to do, but it's also completely unacceptable to make a requirement like "users are forbidden to access their private keys".
> by anyone at all
What do you mean by anyone at all? By the owner of the private key. Not by anyone.
If I log into my computer and turn my private key into a plaintext blob, as a file or a Python object or something on a USB stick or a QR code that I photograph, then anyone who happens to have compromised my computer at the time has my private key, too. Even if I subsequently fix the compromise, they still have my private key.
I do not want this to happen.
Otherwise, create multiple passkeys. Create a passkey in your ios keychain and in your Keepass app. This walled garden has a gate, walk through it.
A more apt analogy would be if the http server sent a 400 to all requests from browsers known to support ad blocking.
“The following list of passkey providers have not implemented User Verification in a spec-compliant manner.”
It's just that it's not an unqualified win to allow sites to block passkey apps either. If we allow that, we can get to a place where sites block apps for the wrong reason, or it becomes more expensive to develop passkey apps so there is less competition for secure passkey apps.
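For concreteness, the blocking mechanism under discussion is attestation-based: a relying party reads the authenticator's AAGUID (a 16-byte provider identifier carried in the attested credential data) and refuses providers on a blocklist. A toy sketch - both AAGUIDs below are made up, and this is not any real site's policy:

```python
# Sketch: AAGUID-based provider blocking by a relying party.

BLOCKED_AAGUIDS = {
    "11111111-2222-3333-4444-555555555555",  # hypothetical "non-compliant" provider
}

def provider_allowed(aaguid: str) -> bool:
    return aaguid.lower() not in BLOCKED_AAGUIDS

print(provider_allowed("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"))  # True
print(provider_allowed("11111111-2222-3333-4444-555555555555"))  # False
```

Note that software providers can ship an all-zero or shared AAGUID, which is part of why attestation-as-gatekeeping is contentious in the first place.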
It's not just whether it's a good idea to allow unencrypted exports. It's whether it's a good idea to give websites a say in how we manage credentials.
Based on this article, I assume the author is also raging about companies using “do not copy” physical keys, or dictating the use of a key card to enter.
Mind, I’ve no idea how well it does so. Every so often, my passkeys fail in some incomprehensible way, so I’m not very comfortable with the concept.
To be clear, the only thing KeePassXC is "out of spec" about is that where the spec says "you must not let the user do X, Y, and Z with their own data", KeePassXC will let you do those things, after a warning.
The credential is not only the user's data. The credential is an agreement for access between the user and the service provider.
The service provider has every right in the world to demand the user prove that they are securely storing the credential in a way that can't be extracted.
Wait, really? Does this work both ways? Do I get to demand that the service provider store the data it collects about me in a way that can't be extracted? Oh, apparently not[1]...
[1] https://www.technologyreview.com/2023/07/17/1076365/how-tech...
I'm so glad people never crammed that into the TOTP protocol. You have recovery codes you can save (which are arguably just as sensitive as the TOTP secret) and a lot of apps let you export the secret entirely.
I used an app on iOS that doesn't let you export them, and it took hours to migrate each entry one-by-one to my new Android device. Even with recovery codes, it was a pain to log in to each site and drill through their menus to disable and set up 2FA again. I should have been wary of that.
The credential is, in fact, only the user's data. How does it even make sense that a credential could be an agreement?
> The service provider has every right in the world to demand the user prove that they are securely storing the credential in a way that can't be extracted.
No, nobody has any right to dictate, or even know, how my device stores my data.
Their customers' accounts, on the other hand, are a different story. They should have freedom to choose. Companies that try to restrict that freedom should be punished in the market, or, in cases of monopoly, by the FTC. I suppose that doesn't mean it definitely shouldn't be an option in the spec, though.