From what I understand, this is an API to write applications against a common interface, which can run on different hardware devices. An abstraction layer for security key apps. Similar to Java Card, but in a more modern way. Is this something that would or could be compatible with Tillitis?
Buy blank cards, write your applet, test in an emulator if you want, push to card, test for real with your software that talks to the card, profit. Be aware that if your goal is to write custom cryptography implementations in Java on the Javacard, these will be prohibitively slow. No need to take my word for it, Niels Duif did exactly this: https://research.tue.nl/en/studentTheses/smart-card-implemen...
> Java Card proves to be a worthless platform for high-speed cryptography. Despite the speedups, generating a signature takes more than 28 minutes for a private key of 254 bits.
How is crypto done then? JavaCard provides APIs that do it, but these call implementations that either use coprocessors or contain optimised implementations in the mask ROM. You can't program a mask ROM without doing a production run of smartcards in the hundreds of thousands. At small scale, this simply isn't possible.
HSM vendors will often sell SDKs for custom code, which you can add to certain models. The barrier to entry is simply that you need to buy an HSM, which isn't cheap. It can be done, however, and on the plus side, in my experience with Thales HSMs this means actual C code, so performant implementations are possible.
Note that "production ready" does not equate to "follow a YouTube video and write 17 lines of TypeScript." You need to know Java, you need to know crypto, and you need a few bucks to throw at the appropriate hardware. That said, the entire US DoD runs on JavaCard, so it is as production-grade as you can get.
Or is its function something else?
Tillitis Key’s design encourages developers to experiment with new security key applications and models in a way that makes adoption easier and less risky for end-users.
It offers both security and flexibility by being end-user programmable while also preventing applications loaded onto the device from knowing each other’s secrets. During use, the firmware on Tillitis Key derives a unique key for each application it runs by measuring the application before execution. This is done by combining the application’s hash value with a unique per-device secret. Applications are loaded onto the device from the host computer during use, and are not stored persistently on the device.
A user- or host-supplied secret can also be mixed into the key derivation function, providing further protection. A sophisticated physical attacker should be assumed to have knowledge of the target application’s hash, and will likely eventually succeed in extracting the UDS from the hardware. By adding a host-supplied secret, knowledge of the application used as well as the security key’s UDS is not sufficient to produce the application secret. This makes the security impact of a lost or stolen Tillitis Key less than for conventional security keys.
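To make the measured derivation concrete, here is a minimal sketch. The function names are made up, and HMAC over BLAKE2s stands in for the firmware's actual KDF construction, which differs; only the shape of the idea (secret = KDF(UDS, measurement, optional user secret)) follows the description above.

```python
import hashlib
import hmac

def derive_app_secret(uds: bytes, app_binary: bytes, user_secret: bytes = b"") -> bytes:
    """Sketch: measure the application (hash it), then mix the measurement
    with the Unique Device Secret (UDS) and an optional user-supplied secret."""
    measurement = hashlib.blake2s(app_binary).digest()
    # HMAC used here as a stand-in KDF; the real firmware construction differs.
    return hmac.new(uds, measurement + user_secret, hashlib.blake2s).digest()

uds = b"\x00" * 32                          # per-device secret, kept in hardware
secret_a = derive_app_secret(uds, b"app A binary")
secret_b = derive_app_secret(uds, b"app B binary")
assert secret_a != secret_b                 # a different app yields an unrelated key
assert secret_a == derive_app_secret(uds, b"app A binary")  # deterministic per app
```

Because the derivation is deterministic, an unchanged application recovers the same secret on every load, while any modified (or malicious) application ends up with an unrelated key, and mixing in a user-supplied secret changes the result again.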
Device applications can be chain-loaded where the first application stage hands off its secret to the second stage. This improves user experience as it makes it possible for the application secret (and its public key) to remain the same even if a device application is updated. It also enables developers to define their own software update trust policies. A simple first-stage application might do code signing verification of the second stage, whereas a more advanced one will require m-of-n code signatures, or a Sigsum inclusion proof. Sigsum was designed with embedded use cases in mind.
Tillitis Key is and always will be open source hardware and software. Schematics, PCB design and FPGA design source as well as all software source code can be found on GitHub.
(Full disclosure: I'm Fredrik Stromberg, cofounder of Mullvad VPN and co-designer of Tillitis Key)
Also curious if there are plans to support BLS signatures natively?
I’m probably not understanding something, so I’d love an explanation (preferably one that non-cryptographers understand)
>> ... this is basically like a yubikey ...
> ... new kind of USB security key ...
The things you have listed are indeed very nice, but they are not a new kind, as they are available elsewhere.
Can you give a bit more compare and contrast to the original question?
Again, thank you.
To clarify, this secret does not affect the program's hash, right? (e.g. to prove liveness, the parameter is a nonce to be signed with a deterministic private key)
> It offers both security and flexibility by being end-user programmable while also preventing applications loaded onto the device from knowing each other’s secrets. During use firmware on Tillitis Key derives a unique key for each application it runs by measuring it before execution. This is done by combining an application’s hash value with a unique per device secret. Applications are loaded onto the device from the host computer during use, and are not stored persistently on the device.
So the idea here is:
* General purpose, reprogrammable security coprocessor
* If you save secrets with application A, then install evil application B, it can't access the secrets from A.
* And if you revert back to A, those saved secrets will still be there.
* Therefore, it's more practical to run two different applications - and safer to experiment with your own applications, because you won't lose all your website logins.
What stops app B from pretending it's app A?
It also performs a measurement of the application being loaded. The measurement, together with the Unique Device Secret (UDS), generates the primary secret from which an application can derive the keys it needs. This means that you can verify the application's integrity.
This is very close to, and inspired by, DICE: https://www.microsoft.com/en-us/research/project/dice-device...
Maybe it will be ~YubiKey plus extras?
My point is, while I don't subscribe to the extremes of pro- or anti-VPN sentiment, having a good understanding of what services like this can and cannot do, and performing rudimentary yet essential security and privacy risk assessment, is essential before trusting them with all your traffic.
We are working on this as part of the System Transparency project.
https://system-transparency.org/
Disclaimer: I work on this.
Beyond this, penetration testing reports on the Mullvad infrastructure are public.
The most revolutionary thing you are doing, in my opinion, is registration- and email-free account management while accepting various forms of payment. You are way ahead of your time! Apps and sites outside of VPN services would do well to follow your example.
Mullvad means mole (for the tunnelling more than planted spy connotations, hopefully), and the "Tillit" part of the name means trust.
They're working on some IKEA style naming, which I enjoy.
System Transparency: Mullvad's security architecture, which we'll use to eventually make our running VPN systems transparent.
Sigsum: A transparency log design with distributed trust assumptions (witness cosigning).
For non-Swedish speakers: "Glasklar" literally means "glass clear", but is best explained as the Swedish equivalent of "clear as day".
One thing I've wanted for a while is a way to properly backup a webauthn token. An approach I discussed a couple of weeks ago [1] was:
1: Generate on-hardware webauthn master key on device A.
2: Generate on-hardware key-pair on device B
3: Export B’s public key, import to A
4: On Device A: Encrypt master key with B’s public key
5: Export encrypted master key to B
6: Decrypt on B
I guess this would probably be possible with this device? Perhaps there's an even more clever way to do it.
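The six steps above can be sketched as a generic key-agreement wrap. The parameters below are a toy Diffie-Hellman group (a Mersenne prime, purely illustrative and not secure); a real device would use X25519 or another standardized curve, and the helper names are made up.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (Mersenne prime M521; demo only, NOT secure).
P = 2**521 - 1
G = 2

def keygen():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def kdf(shared: int) -> bytes:
    return hashlib.sha256(shared.to_bytes(66, "big")).digest()

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

# Steps 1-2: device A holds a master key; device B generates an on-device pair.
master_key = secrets.token_bytes(32)
b_priv, b_pub = keygen()                  # only b_pub ever leaves device B

# Steps 3-4: A imports B's public key and wraps the master key using an
# ephemeral exchange (ECIES-style: fresh key pair per wrap).
a_priv, a_pub = keygen()
wrapped = xor(master_key, kdf(pow(b_pub, a_priv, P)))

# Steps 5-6: B receives (a_pub, wrapped) and unwraps with its private key.
recovered = xor(wrapped, kdf(pow(a_pub, b_priv, P)))
assert recovered == master_key
```

The point of the ephemeral pair on A is that the master key never leaves A in the clear, and only B's private key, which never leaves B, can unwrap it.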
Yes, that'd be possible. I don't know how webauthn works, but if it relies on ECC you could probably do ECDH between all security keys you wanted to carry your master key, and then use the combined ECDH values as the master key.
Cool! It's been a while since I looked at the standards, but I think there is definitely support for ECC-based algorithms.
Looking forward to following your journey!
Not to mention that in this day and age every piece of hardware has software at its core, so open source hardware does not save you from also writing the code that runs on it. If anything developing open source firmware is actually harder, because most chip vendors expect your product to be closed source and want you to sign countless NDAs before you can access a datasheet or even just buy the chips. You are restricted to using older and/or more expensive parts whose documentation is freely available; it's the exact opposite of the software world, where the latest and greatest is one npm install away.
Which FPGA models support _attestation_ of the loaded bitstream? Do any?
> How does the user verify that the FPGA loaded the expected bitstream instead of something with a backdoor?
It's a Lattice ice40up5k, which contains a programmable and lockable NVCM memory in-package. The engineering samples we handed out today at OSFC store the FPGA configuration bitstream on a SPI flash memory though.
> A DICE chain that is not rooted in physical, immutable hardware isn't very useful.
When we start selling them we'll likely sell both security keys with pre-provisioned bitstreams in NVCM as well as unprovisioned security keys so you can provision your own.
[1]: An Autonomous, Self-Authenticating, and Self-Contained Secure Boot Process for Field-Programmable Gate Arrays, https://www.mdpi.com/2410-387X/2/3/15
I haven't seen this feature yet, but I desperately want it on every FPGA I use. NVCM eliminates most of the benefits of using an FPGA...
There are things developed by the CrypTech project I would like to try and reuse for TillitisKey.
(via https://news.ycombinator.com/item?id=32896658, but we merged that thread hither)
What's the practical application of this key?
You can read more on tillitis.se or in the comment I made below.
Tillitis Key will allow you to chain-load applications. This means that you could have a thin loader which does code signing verification of the next application stage, and hand off the secret to it. Basically it's a trust policy that defines under what circumstances the next application stage gets the secret.
Another trust policy the loader could have is requiring m-of-n code signatures, or perhaps that as well as transparency log inclusion. Check out sigsum.org.
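An m-of-n policy in such a loader might look like the following sketch. The `verify` callables stand in for real signature checks (e.g. Ed25519); HMAC is used below only so the demo is self-contained, and all names are made up.

```python
import hashlib
import hmac
from typing import Callable

def m_of_n_policy_ok(stage2: bytes, signatures: dict[str, bytes],
                     trusted: dict[str, Callable[[bytes, bytes], bool]],
                     m: int) -> bool:
    """True if at least m distinct trusted signers have a valid
    signature over the hash of the second-stage application."""
    digest = hashlib.blake2s(stage2).digest()
    valid = sum(1 for name, verify in trusted.items()
                if name in signatures and verify(digest, signatures[name]))
    return valid >= m

# Demo with HMAC as a stand-in for real public-key signatures:
keys = {"alice": b"k1", "bob": b"k2", "carol": b"k3"}
trusted = {n: (lambda d, s, k=k: hmac.compare_digest(
                   hmac.new(k, d, hashlib.blake2s).digest(), s))
           for n, k in keys.items()}
app = b"second stage binary"
digest = hashlib.blake2s(app).digest()
sigs = {n: hmac.new(k, digest, hashlib.blake2s).digest()
        for n, k in list(keys.items())[:2]}        # only alice and bob sign
assert m_of_n_policy_ok(app, sigs, trusted, m=2)
assert not m_of_n_policy_ok(app, sigs, trusted, m=3)
```

Only when the policy passes would the first stage hand its derived secret to the second stage.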
Also, the fact that it doesn't emulate a smartcard means every single piece of software supporting it would have to implement a special client, so yeah, that's a problem.
"Just" a smartcard allows, at the very least, GPG signing and an SSH agent without much fuss, and also HTTPS client-cert auth.
It's OK to say this has a serious limitation in that it can't easily support updating applications, but that hardly rules out it being useful at all.
Thanks in advance.
The key is embedded in the LUKS header of the partition.
The information about the key and the device is passed to initrd through /etc/crypttab for unlocking during boot.
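For context, such a crypttab entry might look like the fragment below; the mapping name and device path are made up for illustration, and the exact options depend on how the key is referenced.

```
# /etc/crypttab — hypothetical entry; the metadata stored in the LUKS
# header tells the unlocking tool in the initrd which key to use.
cryptroot  /dev/nvme0n1p2  none  luks
```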
I wrote a couple of posts describing how this can be sort-of-handrolled with nitrokey and gpg key for x509 cert:
https://vtimofeenko.com/posts/unlocking-luks2-with-x509-nitr...
It's almost certainly a good time to update https://lwn.net/Articles/736231/
Without that, none of the other stuff would be trustable.
- It is supported by an open-source FPGA toolchain
- It has an in-package non-volatile configuration memory (NVCM) that is lockable. This is where we'll eventually keep the FPGA configuration bitstream, including the unique per-device secret.
After some reverse-engineering work we're also able to program and lock the NVCM with open tooling, as opposed to having to use Lattice's proprietary tools.
Aren't SoloKeys [1] also open hardware and software? Or is the Tillitis key more general purpose and thus not in the same category?
If you want to power your key via NFC (tap to a phone to authenticate), you need a microcontroller that consumes very little power, powers up quickly, and can produce a signature before the FIDO protocol times out. I'm not sure this is currently possible with an FPGA, but maybe it is.