> Reversible USB-A. This maybe wasn't the most critical feature anyone expected, but as a consequence of our new combined PCB & case construction, it was easy to make USB-A reversible. So why not!
- Are drivers for this already installed as part of desktop Ubuntu 20.10/Windows 10? Any driver installation will absolutely make this a no-go for family members.
- Is additional software required for anything non-techies might reasonably want to do with this device, including resetting it, adding an entry or checking which entries are already on the device? The ideal would probably be if the device acts like a USB stick, with entries being shown as .bin/.txt files which can be manipulated in the normal ways.
- How easy is it to create a backup? The ideal (for non-techies) would probably be something like plugging a device into a PC and simply copying files across. Ditto for duplicating to another device.
- Is there anything else which would likely stop non-techies from using this for basically everything they care about?
In general, you cannot (by design) back up these devices; if you could, that would defeat a lot of the security they provide. That means that if you lose it, you will have to find a way to get 2FA disabled for each and every account you enabled it for. Some orgs will have pretty onerous (but necessary!) processes for doing so, like having to provide government ID or physically visiting a brick-and-mortar location to prove your identity and ownership of the account.
Some sites will allow you to simultaneously enroll two devices, so you can keep one as a backup, safe somewhere (though not too safe; if you were to, say, put it in a bank safe deposit box, it'd be a pain to fetch it any time you want to add a new account). But many sites only allow a single device to be enrolled.
Some (like Yubico) let you purchase a "cloned" set of devices, where you can get two (or more) devices with the same keys on them, so you could actually put one of them in a safe deposit box as soon as it comes in the mail to act as a backup. That also solves the issue of some sites only supporting one device, as all of the devices in the set are effectively the same device. However, it doesn't appear that this is an option with the Solo keys (not certain of this; happy to be wrong about it; it's possible that you might be able to wipe the key material off new Solo keys and put identical copies of new self-generated material onto more than one key). On the flip side, if someone steals your backup key, it becomes harder to deal with the situation; with distinct keys, you can just revoke access to the stolen key. But with cloned keys, revoking access to the stolen key will also revoke the key you use daily.
I just wanted to bring this aspect up, because people unfamiliar with these devices need to understand the consequences if they lose their key; it can be a huge pain in the ass to rectify that situation. This might be an understandable turn-off for non-techies who are just looking to add a little extra security, not take on a big maintenance burden and difficult failure modes.
For WebAuthn (the actual standard for this, and what you should be rolling out if you have a greenfield authentication environment that doesn't already do U2F today), the specification explicitly says:
> Relying Parties SHOULD allow and encourage users to register multiple credentials to the same account. Relying Parties SHOULD make use of the excludeCredentials and user.id options to ensure that these different credentials are bound to different authenticators.
https://w3c.github.io/webauthn/#sctn-credential-loss-key-mob...
I'm aware of (and along with many of its other users annoyed that) AWS only permits a single authenticator. If there are other popular sites that do this, this is no worse a place than any other to say so.
FWIW I have two (or more) FIDO authenticators with Google, GitHub, GitLab, Facebook, Dropbox, Login.gov and Digidentity (the Gov.UK verify provider)
Ugh, I hate these. I want to use u2f, but I am not willing to risk being locked out of my account if I lose the key. So I only enable it if there is some other 2fa I can enable (either adding a second key or totp).
Wait, they do? How?
I would love to do this, but I can't find anything relevant on their website.
Get a new key, revoke old key, switch to new one?
Unfortunately many sites suck at this. AWS, Twilio, PayPal all suck.
I deal with this by always adding a second "cold" key when services allow multiple keys, and keeping this cold key somewhere secure as a backup spare. So if I do, say, lose my primary key, I can at least pull out the spare key to reset and de-associate the primary.
TOTP and any other sort of one-time-code authentication are just as phishable as passwords. Perhaps the biggest benefit for most people using U2F or FIDO2 is the strong resistance to phishing.
This is because of how the whole ecosystem has adopted FIDO2. When a FIDO2 key signs an assertion for a website, it includes the domain in the signature base, e.g. "example.com". The browser enforces that the request to the FIDO2 key always uses the correct name of the domain you're on.
If you accidentally go to a fake website, "exaample.com", then the key will make a signature for "exaample.com", which is invalid for "example.com". Nothing can be phished to get around that, unlike OTP codes.
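The server-side half of this check can be sketched in a few lines. This is a toy illustration, not real relying-party code: names like `origin_ok` are invented, real WebAuthn challenges are base64url-encoded, and a real server also verifies the signature and the RP ID hash. The point is only that the browser, not the page, fills in the `origin` field, so a phishing site cannot lie about it.

```python
import json

EXPECTED_ORIGIN = "https://example.com"  # the site's real origin

def origin_ok(client_data_json: bytes, expected_challenge: str) -> bool:
    # The browser (not the page's own JavaScript) writes `origin` into
    # clientDataJSON, so a phishing page at https://exaample.com
    # cannot claim to be https://example.com here.
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("challenge") == expected_challenge
            and data.get("origin") == EXPECTED_ORIGIN)
```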
Even if you have other 2FA options linked to your account, as long as you're using your FIDO2 key, you gain this benefit. Very strong benefit for both individuals and enterprises.
This actually isn't true - the result is even better.
When you visit fake.example instead of real.example there are two scenarios. For FIDO2 (like this product) in usernameless mode, just as with a modern iPhone or a fancy Android with a fingerprint reader, the authenticator knows perfectly well that you've never registered at fake.example, so you can't very well authenticate to it: you get an error.
With FIDO1 (or on sites that don't use the usernameless feature anyway) the authenticator has no idea you've never visited fake.example... but the site has to hand over an opaque ID, a large binary blob. This is (either directly or in effect) actually your private key, encrypted in AEAD mode using a symmetric key known only to your authenticator. Another ingredient to this encryption is the domain name. So either fake.example hands over a random blob, which is gibberish, or they hand over a genuine real.example blob... but they're fake.example so the decryption fails. You get an error.
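The wrapped-key mechanism described above can be sketched as follows. This is a toy stand-in, not the real construction: actual authenticators use a proper AEAD (and Yubico documents a derivation-based variant), while this sketch uses a simple encrypt-then-MAC with the domain mixed into the tag. The failure modes match the description: a gibberish blob, or a genuine blob presented for the wrong domain, both fail to decrypt.

```python
import hashlib, hmac, os

DEVICE_SECRET = os.urandom(32)  # lives only inside the authenticator

def wrap(private_key: bytes, rp_id: str) -> bytes:
    # Encrypt the per-site private key and bind the domain into the MAC,
    # so decryption only succeeds when presented by the right domain.
    nonce = os.urandom(16)
    stream = hashlib.sha256(DEVICE_SECRET + nonce).digest()
    ct = bytes(a ^ b for a, b in zip(private_key, stream))
    tag = hmac.new(DEVICE_SECRET, nonce + ct + rp_id.encode(), hashlib.sha256).digest()
    return nonce + ct + tag  # the opaque "key handle" the site stores

def unwrap(handle: bytes, rp_id: str):
    nonce, ct, tag = handle[:16], handle[16:-32], handle[-32:]
    want = hmac.new(DEVICE_SECRET, nonce + ct + rp_id.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):
        return None  # random gibberish, or a genuine blob from a different domain
    stream = hashlib.sha256(DEVICE_SECRET + nonce).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))
```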
The way I found this out was by trying it: I built a toy site with WebAuthn authentication. If I run the same code on another site I own, it gets an error in the JavaScript telling it that apparently I don't yet have a Security Key enrolled for this site, and maybe it should enroll me first. If I tell it to pretend it's a different site, it gets an error saying no, it isn't.
[ The bad guys could enroll you, but now you're really signing into their web site. Which is cool, but, doesn't actually help them phish credentials for the other site ]
Every layer buys you a little bit (or a lot) more, raising the skill level and/or cost. Fido2 is much harder to phish but of course weaknesses may be found in the future.
- Yes, you have everything you need on every major OS/browser
- These devices are zeroconf; resetting one actually kills a security feature (the key-use counter) aimed at detecting cloned devices
- The ideal backup for this is to have a separate key, both authorized. They don't need to have the same material, in fact, cloning it would be considered a weakness (how do you know someone hasn't cloned it without your knowledge?)
This in particular is important. Security is only as strong as your weakest link, so any backup methods (e.g. "forgot password" flows) might as well be your primary method, if you actually care to strongly secure things.
Adding another (or more) key gets you same-security redundancy if one fails or is lost. Nothing else will achieve this.
Degrading to "forgot password" may be entirely fine for [person's] use of a security key, but you must be explicit about that decision, or it's mostly security snake-oil.
2. No software is required - it's far easier to use than you describe. You insert it, tap the button when prompted, and that's it. The token decrypts a wrapped key (held by the remote service) using a hardware-backed key, and signs an attestation with it. This attestation is tied to the domain name and URL scheme being accessed, so it "prevents" phishing: you can't trick users into relaying useful tokens.
Note for FIDO2 there may be software to help manage more complex setups like "no username and password needed to login". If you're talking U2F (i.e. just 2FA), no software required.
3. You don't. There isn't anything to back up. You cannot export the internal key state, but services you use hold the (wrapped, encrypted and authenticated) key used for their service server-side. Your device just decrypts it, uses it, and discards it. You do need to think about backup, but you do that by enrolling 2 or more U2F/FIDO2 keys on each service you protect. That's the downside - you need to remember to enrol both keys on each service, every time you make a new account you protect with U2F.
4. Not really, beyond support at service side being limited (mostly) to big security-aware services.
While I understand why this is and which threat models this addresses, I still think that this shouldn't be an all or nothing proposition.
In the general case (meaning for people without access to corporate "secure systems", and who aren't high-profile enough to have reason to believe they may be targets themselves), I think that having "less secure" keys which can be cloned by "non-techies" might still be an improvement to the overall security of the internet.
I didn't do any research, but according to the linked Kickstarter page:
> Solo V2 greatly reduces the risk of security breaches, as over 80% of all breaches are caused by passwords compromised through phishing email attacks.
So physically stealing credentials isn't as widespread a risk as phishing, which doesn't really surprise me. Therefore I think WebAuthn with "cloneable" keys would be a net positive for "regular people".
This wouldn't preclude "techies" from using more secure, unclonable keys like the Yubikey & friends. But my grandma could also use a "less secure" one without the risk of having to go through resetting 100 different sites, and would be able to set up a new key just by having me walk her through the process of restoring a key from a backup. I'm a "techie" and even I would like such a key for use on random "must absolutely register" websites.
Of course there's the issue that if the key is lost you can't easily revoke it. But even with the proposed system of having a backup key registered or going through the recover account process, as long as you don't actively go unregister the lost key it's still registered and working. So if the authentication is based on some sort of counter, the process of effectively disabling the lost token shouldn't be any harder in this configuration.
> - Are drivers for this already installed as part of desktop Ubuntu 20.10/Windows 10? Any driver installation will absolutely make this a no-go for family members.
On Windows 10, yes. I haven't tried it on Ubuntu 20.10 yet, but I think FIDO/WebAuthN will Just Work. (PIV will likely need custom software, but if you're using PIV, you probably know what you're doing.)
> - Is additional software required for anything non-techies might reasonably want to do with this device, including resetting it
I mean, you rarely want to reset your key to start with, but at least for the yubikey, this requires external software last I checked. Maybe there's a button hidden in the chrome browser UI, but I've never found it. You need yubikey manager (which is yubikey's tool for doing various operations on the yubikey, like configuring what the touch button does, etc...).
> adding an entry
This is done entirely through your browser when setting up 2FA on a website that supports FIDO/WebAuthN, and it happens entirely transparently for the user.
> or checking which entries are already on the device?
I don't think you even can do this. I'm not entirely sure how FIDO works, but I think the key is basically derived from some kind of "master" key combined with the domain you're connecting to. So the key doesn't actually have any memory of which servers it ever connected to.
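The derived-key design the commenter is guessing at can be sketched in a couple of lines. This is an illustration of the concept only (function names are invented, and some authenticators instead wrap per-site keys server-side, as other comments in this thread describe); either way the token is stateless, which is why it has no list of sites to show you.

```python
import hashlib, hmac

def per_site_private_key(master: bytes, domain: str) -> bytes:
    # Deterministic: the same master key and domain always yield the same
    # per-site key, so the token never needs to record where it's been used.
    return hmac.new(master, domain.encode(), hashlib.sha256).digest()
```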
> - Is there anything else which would likely stop non-techies from using this for basically everything they care about?
Well, technically, I don't think there's anything. WebAuthN is rock solid and the UX is really as good as it gets, IMO. The problem is, for many people 2FA is a hassle they don't want to go through. They don't see the need for it. And TBF, for many people, they might be right: they might not need it. So why go through the hassle?
Indeed, this is one of the most elegant features of U2F - it preserves security and privacy even in relatively adversarial edge cases.
Your token has a hardware-backed long term key in it (well, one for encryption and one for authentication). When you enrol on a website, the token generates a new asymmetric keypair internally, then encrypts and authenticates it with the long lived keys. The registration bundle sent to the server is called a "key handle", but is typically just a hardware-wrapped key.
When you visit a site and log in, at the 2fa prompt the site sends the encrypted wrapped key back to the browser; the token verifies it's a valid key, decrypts it, and does a challenge-response authentication that's tied to the HTTP origin (domain and port) of the request.
What's quite nice is that (outside of a few corner cases like looking at counter values and trying to correlate), you can safely use one u2f key with multiple accounts on multiple services, and none can be linked by the u2f key. (Of course they can be linked through other means, but the token won't be that link)
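The challenge-response step above can be sketched as follows. This is a toy model with invented names: a real token signs with ECDSA over the origin hash, a counter and the challenge, whereas this sketch uses HMAC as a stand-in for the signature. The shape is the same, though: the origin is baked into the signed bytes, and the monotonic counter lets the server notice a cloned token.

```python
import hashlib, hmac, os

class Token:
    def __init__(self):
        self.secret = os.urandom(32)  # stand-in for the per-credential ECDSA private key
        self.counter = 0

    def sign(self, origin: str, challenge: bytes):
        self.counter += 1  # monotonic use counter
        base = (hashlib.sha256(origin.encode()).digest()
                + self.counter.to_bytes(4, "big")
                + challenge)
        return self.counter, hmac.new(self.secret, base, hashlib.sha256).digest()

def verify(secret: bytes, origin: str, challenge: bytes,
           counter: int, sig: bytes, last_counter: int) -> bool:
    # The server reconstructs the signed bytes for ITS origin; an assertion
    # made for a different origin will never verify here.
    base = (hashlib.sha256(origin.encode()).digest()
            + counter.to_bytes(4, "big")
            + challenge)
    good = hmac.compare_digest(sig, hmac.new(secret, base, hashlib.sha256).digest())
    return good and counter > last_counter
```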
This is bananas. We absolutely should not be recommending them to normal people until security researchers come to their senses and fix this problem.
It should work out of the box on Windows 10 and Ubuntu. Microsoft is actually one of the largest forces behind the FIDO2 spec. Windows Hello implements FIDO2 too, so you can use it even without a dongle, just with the biometric reader on your laptop.
Crypto fan or not, those devices are amazingly secure, and certainly hold billions out there. I think it is open source too.
It sounds like this is a perfect solution for people at high risk of phishing, a good solution for somewhat technical laypeople with something important to protect which supports this (like a bank account), and too much hassle/risk of losing access/lack of support for most people/use cases. Does that sound fair?
Suppose you're Twitter. If every Twitter employee has a FIDO2 device and they need to tap it to begin their work day, and to confirm any important actions like "Block YetAnotherNazi" or "Validate that this Twitter account really does represent Jim's 24 hour Celery and Dog Collar Deliveries" then instantly a bunch of your security problems disappear, and all you need are your existing procedures that stop random people walking into your offices off the street and pretending to be employees, which, I'm going to guess, is already a problem you've got at Twitter.
I can't see any reason a university wouldn't do this for its students for example. Or a hospital for its medical staff. Or a police force for... all the cops. These are very easy to use, with that one sharp edge of "What if I lose it?" which is not a problem if your organisation already has procedures to ensure only the right people get physical ID.
And when setting up any 2fa, need to have two keys and enroll them both, to give a chance of account recovery.
edit: I purchased one for my wife and only properly understood when it arrived that it would need to be used first for all my 2fa, and then when helping her get set up I would need to have mine handy as her backup too.
To make a backup, you get 2 different keys and set them both up. The second one you keep somewhere safe, for example in a safety deposit box at a bank.
A bit more convenient than having to use the YubiKey apps for TOTP and such.
The actual public key used for logging in to a specific site is completely random.
Optionally, the website can ask for "attestation", which is intended to prove that the public key is from a specific vendor/model. To make this also unlinkable, devices are supposed to share attestation keys in batches of 100k units.
I'm not so sure they cannot associate.
When you register your key on a service, the service sends an application identifier down. Your key then creates a fresh key pair and sends a handle (name for that keypair) and the public key back to the service. Later when you authenticate, the service sends down the handle and challenge. Your key uses the handle to find the right private key and sign the challenge, which is checked by the service using the public key to confirm that you do indeed possess the private key.
Since fresh keypairs (and handles) are generated each time you register a key, the service can't use the registration process to identify whether you're re-using a physical key (it should look no different from using a new physical key).
Alternatively, a service can check if your physical key contains other handles by initiating requests with them, but this will almost always be very obvious. E.g. If they suspect you are Bob, they could request a challenge with _Bob's_ handle.
But they can't just spray handles, since if they do this when you're not expecting to authenticate anything, it will be quite obvious: your key, and any intermediate applications like the browser or OS, will behave like they are authenticating. With a well implemented key, you will be safe as long as you don't touch the key on spurious authentication attempts. The service won't be able to tell whether the key doesn't have the handle (and is ignoring the challenge) or if the key does have it but you just didn't touch the key because you weren't expecting an authentication request.
This leaves the service with the final option of only doing the test while you're trying to log into what they suspect is an alternate account. That means they can only do one check and will have to be fairly certain about you being Bob lest you be tipped off. If they guess right, you won't be able to tell a difference but the service will know that you can authenticate with Bob's handle while trying to log in as Alice. If they guess wrong however, you might become suspicious since the authentication will fail and in a non-standard way (e.g. the key can indicate that it has received a foreign handle).
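The probing behaviour described above can be sketched as a toy model (class and method names invented). The key point is that "unknown handle" and "known handle but no user touch" produce identical silence, so a service spraying suspected handles learns nothing unless the user touches the key at the wrong moment.

```python
import os

class SecurityKey:
    # Toy model: the key answers only when the handle is one of its own
    # AND the user physically touches it.
    def __init__(self):
        self._creds = {}  # (rp_id, handle) -> private key

    def register(self, rp_id: str) -> bytes:
        handle = os.urandom(32)  # fresh and random: registrations are unlinkable
        self._creds[(rp_id, handle)] = os.urandom(32)
        return handle

    def authenticate(self, rp_id: str, handle: bytes, user_touched: bool):
        if (rp_id, handle) not in self._creds or not user_touched:
            return None  # unknown handle and "no touch" look identical
        return b"assertion"  # stand-in for a real signature
```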
All this means is that, if the service is trying to tie identities to "serve ads", they won't have a feasible way to do this with U2F. But, if it's a threat actor with nation state level resources and lots of time then... well... nothing consumer level will really help you (though U2F still puts up a decent fight).
Yubikey outlines their method here[1]. They generate a different keypair for each website, and have the server store an encrypted and authenticated copy of the private key, wrapped with a single on-device key. So barring a break of the underlying primitives, the server only ever holds a site-specific public key and a site-specific encrypted blob.
[1]: https://developers.yubico.com/U2F/Protocol_details/Key_gener...
But it doesn't do OpenPGP, I rely on that way too much sadly. Not just for SSH which supports fido2 now but also for file encryption and my password manager.
If they add that in the future I might jump ship.
Age isn't there. It does NOT have good (read, right now, really, any) support for hardware tokens. I'm skeptical of what I've seen. And age still punts on authentication. And PIV still doesn't have decent keys at decent sizes standardized and thus is awkward to use in practice.
I'm really not convinced, and I really want to be. I wrote a bunch of forward-looking Rust, and then permanently backburner-ed it because age/yubikey just isn't there yet.
Using FIDO2 for SSH, when you're used to the portability and versatility of OpenPGP, stinks. I can use my Yubikey perfectly to do SSH and GPG in Windows. I can forward SSH agent and GPG from Windows to Linux such that it is identical in functionality to me sitting in front of my actual Linux box with my Yubikey plugged in. I have never seen that done with PIV.
I have this extreme fear that I'm going to wind up with four solokeysv2 that just sit in a drawer.
I can't see why one would bother. OpenPGP works and is a published standard that has been around forever. Age is just pointlessly different with fewer use cases. This has an example of where age is objectively worse:
People like to dislike PGP and replace it with a myriad of different solutions. But PGP is everywhere and awesome. Its very widespread adoption is invaluable. I really don't want to see it replaced with zillions of different bespoke solutions.
The GPG toolchain is pretty great for file encryption and I use it for my password manager too (which is indeed ZX2C4 Pass - passwordstore.org ). I don't want to use a fork using 'age' because I rely on the GPG version on my mobile (using the excellent OpenKeyChain app and the passwordstore app which talks to that).
Also, I log in to SSH servers I don't have the ability to install stuff on, like the ILO on my servers. And again on mobile, there's bridges to OpenKeyChain for SSH in e.g. Termux which work great including agent forwarding. But not for PIV. So it's not a complete replacement.
Sorry but without OpenPGP it's a non-starter for me. I understand I could use Solokey if I make a lot of changes in my personal setup and make some compromises especially on mobile. But why would I? I can just continue using Yubikey :) Don't forget you're in a heavily contested market, you should be better than the competition.
I'd like to have an open-source authentication key but in the end it's a tool to me. Open-source is a 'nice to have'. It's not worth it to me to deviate too much from my existing workflow.
I am willing to use what I can understand, backup and operate, and yubikey+gpg seems to be it because of this guide.
Anything practical for what you've mentioned?
Edit: for ssh and encryption.
Would it be possible to make a custom firmware that supports RSA-4096 PIV?
I hope it doesn't take that long, but there's a history of delays, unfortunately.
"I've been working on Solo for almost 3 years now. It started back when I was in college and on a whim, ordered a run of 1000 security keys that I designed and then shipped them all to Amazon. "
Hm... not sure I can trust my keys to something developed on a whim by a college student.
TLDR: it's not as easy as you might think. It will be easier for the new one.
I should probably email them about this at this point, but I think it's weird they haven't explained the "tamper resistant" part in their marketing material in any detail.
There's still a lot of things that need to go right for the whole system to be secure, but "everything happens inside one chip, and we cover it in epoxy" seems pretty reasonable. If you can get rid of the epoxy, the only tampering I'd be worried about is removing capacitors for power supply glitching. Power analysis can still be done on an uncompromised device via the USB port (capacitors will make this harder, but may not rule it out).
To go beyond this, you'd probably need to decap the chip. I haven't seen anything about an active die shield in the documentation for this chip, but we're now well beyond the scope of epoxy tamper resistance.
Edit: No die shield, but apparently "cryptographically sensitive" signals and bits have additional out-of-band signals and bits to make shenanigans more difficult. Certainly not perfect, but "not completely terrible" seems like a fair assessment.
It's worth remembering the threat model for U2F tokens (let's set aside PIV, FIDO2, etc for the moment): if the attacker has physical possession, then they're into your account. Game over, as the authentication is just a tap of the button.
Sure, you can add a PIN via FIDO2 (then these protections make more sense), but I can't see any particular scenario where you'd be worried about this threat under normal circumstances.
U2F helps normal (and expert) users resist phishing attacks, credential relaying, and avoid keyloggers etc. It doesn't protect you against in-person physical adversaries who can steal your things, or take them against your will.
The only edge case I can see where this matters more is if a user leaves the token unattended (try not to! Put it on your keyring, though admittedly your backup token probably is at risk a little here) and an attacker can covertly extract the keys and leave it as found, such that the user is unaware. But at that point you are dealing with adversaries in the real world, and most users have already lost at that point (passwords written down, etc.)
The epoxy can be chemically dissolved, but that would deteriorate the outside of the device as well. If the epoxy isn't completely cleaned out, then refilling it with new epoxy would look messy. With great care and skill, it could be done with little damage, but it would be time consuming.
they're claiming tamper-resistant, not tamper-proof. and counting on the epoxy for that seems reasonable to me.
This is an LPC55S69. So it's open source firmware, not open source hardware.
For now, what we mean by open source hardware is on the one hand that all components are freely available (without NDAs, which nearly all secure elements entail), and on the other that the schematic of the device is open source and passes OSHWA Certification (the CERN license https://ohwr.org/project/cernohl/wikis/Documents/CERN-OHL-ve... is relevant here). This means that you can in principle build a device yourself. The certification will be done post-campaign (we want to avoid copycat products appearing before ours is available in the open market). Like we did with our three previous keys (e.g., https://certification.oshwa.org/us000155.html and https://github.com/solokeys/solo-hw).
This is probably the right tradeoff for most users. Solokeys has done a great job of providing continuous support for all of their products, and their software stack has been open source since the beginning. That (combined with the low price) makes them my first choice for a hardware security token.
It's more open source though. Yubikey open sources some components like that PGP applet but not all.
In terms of features, CTAP v2.1 (https://fidoalliance.org/specs/fido2/) is still draft only, but yes both v1 and v2 keys support hmac-secret and credential management. We could add authnSelection and authnConfig, but not clear if any browsers actually implement/use it.
The major new feature is PIV.
Also if someone hijacks your account using bruteforced recovery codes and/or email.
Also if the servers are compromised or account data leaked.
In short, it protects from some forms of phishing.
(I'm not trying to criticize FIDO2, just pointing out what to expect from it)
And hopefully recovery codes have maximum retry count?
...but less safe than an external token if someone steals your laptop with the FIDO2 key in the USB port.
Yet, these are really very minor improvements to the (sorry) state of web and desktop security.