It's been almost five years since Magic Wormhole was first released, and about half a year since that popular Latacora post recommended it for transferring files and said "Someone stick a Windows installer on a Go or Rust implementation of Magic Wormhole right away". Guess what you're still not going to find a reliable Windows build (let alone a GUI) for? Yep, that's right. Despite the fact that most of these projects come from a felt need for better alternatives to PGP for the average user, very few of them have actually come up with a product that's more accessible to the average person.
I read this and then went and ported wormhole to Go: https://github.com/psanford/wormhole-william. There's no Windows installer but it's pure Go so building on Windows or cross compiling for Windows is easy. Besides Windows I also want to support iOS and Android (I have a very rough working react native frontend right now).
It's roughly the same reasoning as for your Windows GUI argument. This tool is now very suitable for people who understand what it does, but it is not yet well adjusted for users who lack that understanding.
Today - when most Magic Wormhole users can probably explain what a PAKE is - if you attack a Magic Wormhole transfer and cause errors (by guessing wrong) those users will react by increasing the length of the Wormhole code. But if we popularize it without fixing this default, do you think my sister knows to do that?
It should be trivial to increase security on failed attempts, or to use a higher default security level for a GUI frontend.
The CLI is clearly meant for somewhat technical, versatile users (I mean, it's a CLI), so I think it's normal to make some adaptations when targeting other user groups. E.g. adding explanations of some aspects, in addition to the things I already mentioned, is quite doable for a GUI.
Since PGP has almost no serious real-world adoption (search your feelings; you know it to be true), it's wide open for replacement. People should use `wormhole` for file transfer in preference to `age`-encrypted files, if the only reason they're encrypting is to get the file safely across the wires.
Totally agree there, but I'll remain skeptical until I actually see that adoption start to happen. Certainly it's not going to until there's a nice GUI. (It's kind of sad, actually. Wormhole has such a nice TUI that it would be utterly trivial to wrap in a simple Qt interface or something.)
Checks...it's not true. Maybe the original email use case never caught on, but that's not the only one. For example, PGP is a standard way to transfer Visa, MasterCard, or Diner's Club credit card transaction files. We have thousands if not tens of thousands of entities transferring PGP encrypted files every day, and we get new requests for PGP enablement on a regular basis. This is a deeply embedded business process (even embedded in many corporate financial systems like Oracle Financials), and it's not going away any time soon.
Other use cases...yeah, PGP should go away.
I also thought I'd used a GUI Dat protocol client to transfer files, but maybe it was only in the terminal.
Would you consider an Electron app adequate? ;)
I made croc because it was really hard to install wormhole on Windows (especially for my non-dev friends). Also, I wanted croc to support resuming transfers, which has been stalled in wormhole for a while now. [1]
The case where I still use PGP is receiving reports of bugs from unaffiliated researchers, and I should replace it with a form on an HTTPS web site.
There are applications where the extra time and space of something like ed448 present uncomfortable trade-offs.
File encryption is not generally one of those applications.
So I find this a little disappointing.
But I suppose that NIST PQ will finalize in the not-too-distant future and this will get replaced by something that hybridizes with a PQ scheme. (I say replaced because the expectation that a pubkey is something you can easily copy and paste doesn't really work with the PQ schemes you'd likely use with file encryption.)
What happens if auth fails part way through the file? Do you get a truncated decryption on stdout? -- or is this buffering the whole input in memory?
FWIW, if the idea there is that you'll be able to send encrypted reports to github users based on their ssh keys... that might not work so well in the long run esp for security conscious projects, since good practice would have their github ssh key living in a keyfob that won't decrypt messages for them. :)
Recipient types are the one parameterized thing in the spec, so if we need to switch to Ed448 or a PQ hybrid at some point we absolutely can, without even bumping the version.
This is problematic since a caller needs to be aware of the need to appropriately handle truncated plaintext output. The readme needs to warn about this pitfall.
Since when does anyone care about NSA's opinion (who also don't have to care about FIPS compliance)?
Yet they are still fine with AES-128, even though it is objectively a weaker link in the chain. See https://blog.cr.yp.to/20151120-batchattacks.html
1. How does age disambiguate between filenames and other key formats for the -r argument? (Those formats are also valid filenames)
2. Does the header use normal Base64 (i.e. +/) or url-safe Base64 (i.e. -_)? The specification sounds like normal Base64, but some lines of the example contain -_, others contain +.
3. What characters are allowed in the header? ASCII only? (the current key-formats are ASCII only, but an implementation is supposed to skip unknown formats)
4. Are any characters forbidden in recipient types, arguments and additional lines?
5. Which strings at the beginning of a header line have special meaning and thus are illegal for additional lines? Only `-> ` and `--- `? I assume the space is mandatory in those strings despite the spec not mentioning that for `->`?
6. CRLF normalization of the header is only mentioned in the section about ASCII-armored files. I assume it also applies to non-ASCII-armored files?
7. Is keeping the public key secret to achieve symmetric authenticated encryption an officially supported/recommended use-case?
(If the public key is public, the MACs block decryption oracles. However they don't provide any authentication, because the message isn't bound to any sender and thus an attacker can just encrypt their own message to your public key. If the receiver's public key is secret, this isn't possible and thus the current implementation provides symmetric sender authentication)
8. How does the command line tool signal failure/truncation/corruption?
1. rage tests arguments for validity as filepaths, and uses the file preferentially over treating the argument itself as a recipient format.
2. The header uses normal Base64. This was changed recently, and the examples likely need updating.
3. rage currently rejects unknown formats; I haven't implemented this part of the spec yet.
4. Based on the current contents of the age specification, it looks like limiting to standard Base64 characters is consistent.
5. Additional lines all need to be standard Base64 characters (i.e. consistent with the format of current recipient lines) if implementations are going to be able to skip unknown formats.
(Recipient lines are currently under-specified in the spec. I opened https://github.com/FiloSottile/age/issues/9 a while back for addressing this.)
6. The normalization notes are an artifact of an earlier ASCII armoring format. Now that the armor is (a strict subset of) PEM, there is no need for CRLF normalization, as the age format solely uses LF, and PEM (which can tolerate either) is only a wrapper around the age format and thus does not affect the header.
8. rage signals this via an I/O error in the library that will bubble up through std::io::copy; this amounts to truncation on a chunk boundary and a non-zero exit value.
For example, it seems that if you use scrypt then you get fully authenticated encryption: the message must have come from somebody who knows the password (either a trusted user or you chose a weak password). But if you use X25519 then the scheme used is ECIES, so no sender authentication, only IND-CCA security.
The format document says that if you want “signing“ then use minisign/signify, but I suspect most people want origin authentication. We know that it is actually quite hard to obtain public key authenticated encryption [1] from generic composition of signatures and encryption, with many subtle details. It would be better if age supported this directly for X25519 as it does for scrypt. Unfortunately, you can’t simply use a static key pair to achieve this (as in nacl’s box) as age uses a zero nonce to encrypt the file key with chacha20-poly1305 so reusing a static key will result in nonce reuse. (This seems a bit fragile).
decrypt file | tar xz
Elsewhere in these comments somebody also mentioned the case of decrypt file | sh
Presumably the whole point of implementing the STREAM online AEAD mode is to support these kinds of cases: only releasing chunks of plaintext after verification. But these use-cases are only secure in age when using the scrypt decryption option, or if you have first verified a signature over the entire age-encrypted archive (killing the streaming use-case). The reason is that the X25519 age variant provides no sender authentication at all, so an attacker doesn't need to tamper with the archive: they can just generate their own ephemeral key pair and replace the entire thing with data of their choosing. Age has no way of detecting such an attack.
You absolutely need origin/sender authentication built directly into the tool to handle these cases securely.
[1]: https://www.imperialviolet.org/2014/06/27/streamingencryptio...
That's just not good enough. It was fine in early drafts because there was hope they'd remember that "Solve all of the world's problems" was not their goal, and so SSH keys might be irrelevant in later revisions anyway. It's not fine in something intended to actually ship.
Either get somebody to put lots of work in to verify that yes, it's definitely safe to do this as SSH stands today, and contact SecSH WG or Sec Dispatch or whoever to make sure they know you're doing this now - or, as seems much more likely, rip out all the SSH key code and highlight that line about how you don't want to do key distribution in age because it's hard.
PGP is full of things its creators thought might be safe that you now have to tell people not to do because it turns out they're unsafe. This tool should not recapitulate their mistake.
I just added a pull-request to allow the recipients flag to also be specified as a https:// or file:// URL - this is mostly useful to use the GitHub <user>.keys endpoint to grab user keys eg.
./age -a -r https://github.com/<user>.keys < secret
will encrypt using <user>'s GitHub SSH public keys.

With that in mind, it's still really exciting. I can't wait until I never have to use GPG ever, ever again.
If those keystores are not being regularly updated by trusted data vendors, how am I supposed to trust GPG-signed stuff? It isn't like SHA, where I just need to compare 2 hashes.
I'd shift to command line tools if I knew that the protocol was being widely used effectively.
More on this: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html
I assume that OP's question implied that there generally are downsides to using separate tools (such as fragmentation, and then mostly UX ones: obtaining/installing them on all the machines that need them, managing keys differently, learning/using additional software, etc) when a task can be achieved with commonly available ones. But then the article criticizes GnuPG's UX, and suggests to use a bunch of different tools.
Then the article says "let's call both GnuPG and OpenPGP `PGP`", and proceeds to criticizing "PGP" standing for both GnuPG and OpenPGP.
Then it criticizes OpenPGP metadata leaks (possible attachment of a key to an identity), but suggests to use services such as Signal and WhatsApp (certain attachment of a key to an identity via a phone number, AFAIK). Or the ones using similar algorithms (I've only tried OMEMO out of those myself, which led to messages not even being shown in IM clients, apparently due to implementation inconsistencies).
Then it goes on to suggest not encrypting email. I guess it's implied that one shouldn't use email for secret data, but a much more common practice seems to be actually using it for secret (but not "life and death" kind) data, and sending plaintext passwords and such; using PGP would still be a step forward. Perhaps it's the contrast between such criticism (both here and of various other technologies) and common practices that makes me rather skeptical about the former: we can do better than X, but we're not even doing X.
WOT/PKI criticism is present there too, but the suggested software either doesn't do/need it at all, or relies on a safe channel and direct verification (which is usable with OpenPGP as well).
I'm not advocating use of OpenPGP for everything, but finding those arguments to be rather strange.
What would be the best way to encrypt something with a lot of files in it (like, say, a home directory), assuming you wanted to access it across the network on multiple devices?
Sorry if this question's annoying, it seems like something you might get a lot.
> A Swiss Army knife does a bunch of things, all of them poorly.
Counterexample: the Phillips head screwdriver in my Swiss Army knife is actually the best Phillips head I've ever found. It can easily turn without slipping a wider range of screw head diameters and depths than any other screwdriver I've used.
(Does anyone else have way more screwdrivers around than they can explain? I cannot think of any reason I would own more than two or three full sized screw drivers, and one set of small of jewelers screwdrivers...but I've got more than a dozen full sized ones and a couple sets of jewelers screwdrivers. I cannot remember buying, inheriting, finding, stealing, borrowing and not returning, or being gifted any of them--but there they are. Glitch in the matrix?)
https://www.imperialviolet.org/2014/06/27/streamingencryptio...
I also don’t understand the “anything to do with email” line. Sending my public key to a recipient on an out-of-band channel and then sending an encrypted email should be completely agnostic to the underlying encryption tools, no?
I don’t mean to sound critical - I’m very intrigued by this project and would love to have a better replacement for gpg!
It solves the problem of having to use the bloated monstrosity that gpg has become.
It's super weird. There is this use case to encrypt/decrypt a single file, but mass storage of files in a secure way and without a proprietary protocol seems impossible.
Pretty sure that this leaks a lot of metadata.
I've been waiting for a worthy replacement for "crypt" for a very long time, and gpg, while it can be coaxed into doing that with much effort, has simply become a bloated abomination at this point.
Hope this gets vetted by the crypto community and gains popularity.
And what are the expert opinions on themis: https://github.com/cossacklabs/themis ?
To give an example, I was in a work situation more than once where an external party wanted to transfer files to or from our company and I was supposed to help find a standard tool/method. The only (and I mean only) way right now is PGP, due to its ubiquity, with S/MIME on email being second. We do need great tools like this, but we need them such that if I can't use one due to license, policy, etc. issues, I can use a separate compatible tool.
So, my only suggestion to the author is to please make a fixed and versioned standard out of the scheme.
The point of `age` is that when you subtract out all these use cases from PGP and leave just the file encryption problem, PGP still sucks, and sucks way out of proportion to how complicated file encryption is.
So instead of bringing all of PGP's bloat, 1990s cryptography, and misfeatures to bear on that simple problem, we just get a simple, modern tool optimized for that one problem.
For clarity: Is this an endorsement of `age`?
I found when working with non-techies that 7-Zip is an acceptable encryption tool. It uses proper encryption, it's open source, and it's available on all platforms, with both GUI and CLI.
> The web of trust, or key distribution really
Is there anything in the tptacek suite of replacement tools for this? Like Keybase but fully open source and/or decentralized?
Luckily for me, vi defaulted to Enigma encryption back then…
The classical gpg based tools have this very same problem. The classical response is to suggest ramdisk usage, ideally for the entire OS (like a live system basically) to avoid getting artifacts onto the disk like clipboard history, cached thumbnails, or log files. pass for example uses such ramdisks. I disagree that this is a good solution though. Of course it is more thorough, but it requires additional intervention/setup, and not everyone has the needed expertise. Instead, I think the encryption tool itself should take care to only store the decrypted content in non-paged RAM, and give users read/write access through a GUI or a TUI. It should be a ready downloadable solution, similar to the TOR browser bundle. The TOR browser is also trying to not put anything onto the hard disk.
However, if your disk is exposed -- lots of other things, including shell history, swap, etc. may give you away.
The problem with stuff like "gives users read/write access" -- is that it presumes a narrow use case. What if you're encrypting digital audio? Source code? etc.
Should it also turn on your mic and try to determine if you're in a room alone? :P Demand you use an anti-tempest font?
There is only so much a tool can do. It's important that the tool does what it can within the context that it'll be used, but beyond that the best it can do is be clear about its limitations.
Good that you point out shell history. You probably mean stuff like secrets being passed to age via CLI params? That's quite dangerous even if you put a space before the command which excludes it from your shell history. Any user on your system has read-access to the CLI arguments of every other process on your system. I've filed an issue upstream about this: https://github.com/FiloSottile/age/issues/37
I've mentioned swap in my comment. It's a problem indeed. On Linux you can prevent memory regions from being swapped out via mlock, but only up to a certain limit if you are unprivileged (the limit on my machine is 64 MB, it seems). Windows seems to have a way as well. It's solvable in general, and RAM is cheap. It's OSs that have to catch up. Even with the looming swap danger, your data is still safer in RAM, as it doesn't necessarily get swapped, while if it's on your hdd/ssd, it is almost certain to actually land there as well (instead of living in the RAM's cache).
> The problem with stuff like "gives users read/write access" -- is that it presumes a narrow use case. What if you're encrypting digital audio? Source code? etc.
That's a good point. Due to the point you made above (swap, shell history, etc leaking data to disk), it would be best if specialized tools handled the file, which are vetted to not leak any data onto the disk. You could think of a model where age is a library, the tools manually vetted with that in mind. Or you could think of a model where age is embedded into a runtime and the tools are sandboxed wasm modules without access to anything but RAM. Admittedly this is a huge project and one shouldn't expect age to be such a runtime.
A good stopgap would be age enforcing best practices by checking whether the destination of the decrypted content is a ramdisk or not. I've filed an issue about this: https://github.com/FiloSottile/age/issues/36
BTW it's written Tor, not TOR
What's the actual use case, and why is it any better than plain stream encryption? If you wish to stream authenticated decrypted contents, it would mean 2 layers of chunking.
Security can only scale via one’s network, and if you don’t have any it can be hard to figure out what’s secure and what’s not!
FWIW, a little googling and you can see that filippo is pretty well known in the security/crypto community for positive contributions; the same goes for tqbf, who's all over this thread endorsing the tool.
I would also trust the thing without looking at it, but I might take a look at the code someday to see what’s going on :)
https://stackoverflow.com/questions/16056135/how-to-use-open...
With that I'm guaranteed AES, a known-good encryption algorithm. I have no idea what these guys are doing without reading through their documentation. Hopefully they didn't roll their own.
The idea that "AES is enough" is like saying you don't need better winter clothes to go skiing because you have a good helmet. There are still more things you need to get right than that! A secure block cipher mode, key management, IV generation, etc., are mandatory!