AES - Advanced Encryption Standard
CBC - Cipher Block Chaining
PKCS - Public Key Cryptography Standards
SHA - Secure Hash Algorithm
MAC - Message Authentication Code
PBKDF - Password-Based Key Derivation Function
NIST - National Institute of Standards and Technology
FIPS - Federal Information Processing Standard
KDF - Key Derivation Function
CTR - Counter Mode
RSA - Rivest Shamir Adleman (last names of each creator of the RSA algorithm)
OAEP - Optimal Asymmetric Encryption Padding
PSS - Probabilistic Signature Scheme
ECDSA - Elliptic Curve Digital Signature Algorithm
PS3 - Playstation 3?
DH - Diffie-Hellman key exchange
ECDH - Elliptic curve Diffie-Hellman key exchange
TLS - Transport Layer Security
Edit: I just double-checked the help page and saw the note about code formatting. My apologies for overlooking that!
Take AES, for example: "Advanced Encryption Standard" doesn't really mean anything. AES is a block cipher, also known as Rijndael. CTR and CBC are block cipher modes. RSA is a public-key cryptosystem, etc. The same applies to most of the things in the list.
So if someone doesn't know what these things stand for, they're going to have to go to Wikipedia to check them out anyway; the words behind the acronym are almost as confusing as the acronyms themselves.
I get a cosmetics site, apple.com and store.apple.com, the Wikipedia article about MAC addresses, and a clothing site. Message Authentication Code is nowhere to be found.
Yes, I took one of the harder ones, and AES is in the first position, but it is still nice to know the words behind an acronym.
Even for technical documents, if you expect your target audience to be familiar with the domain, nobody remembers every abbreviation every time. It also helps new readers to quickly familiarize themselves with the material (and might even help a few understand more than they would have done without expanded abbreviations).
- for the readers who already know the abbreviation, it's a waste of space and time
- for readers who don't, the words won't tell them much either: "Transport Layer Security" tells you next to nothing about what TLS really is. Besides, those who are really interested can always look it up on Wikipedia (that way, they will actually understand it).
I would add, to the people commenting here on HN: tptacek's review is tough; you do not need to lay into the author of this book any more.
Here's a readable version: https://gist.github.com/mikemaccana/10847077
When working through basic knowledge to mastery of a topic, attempting to teach someone else is an extremely effective way to organize your thoughts and learn yourself. This is why graduate students teach undergraduate students.
The author of the original book shouldn't feel shame for making the mistake of working toward mastery of the topic. But racing to publish is dangerous when the topic is as serious as (heart surgery or) cryptography. A stern warning is worth repeating.
Most of us already read the review anyway.
While the factual content of tptacek's review may be spot on, his overall tone is very negative and smacks of "only experts allowed" logic. He could easily have helped improve Kyle's book and shared these comments privately; instead he chose to lambast Kyle publicly, which doesn't really help anybody: tptacek looks like a total jerk, and Kyle now has a lot of negative attention on (this version of) his book.
This pervasive "experts only" attitude is a big part of why "secure" open source projects have a hard time getting and keeping contributors. It is par for the course for people to be super rude and negative to new participants instead of encouraging them to improve and learn. This lack of contributors then has a whole array of negative secondary effects, like fewer people reading the project's code.
If the author instead put together a book on how a layperson could perform open-heart surgery, you're damn right that actual surgeons would jump all over it.
There is some strange pervasive attitude/arrogance in tech that all it takes to be good at something is to be smart and give it a try. Why learn the theory/fundamentals when you can just start coding?
For building a web app, sure. But security is not one of those things. You actually need to learn the fundamentals and theory, and even then, need lots of experience.
Maybe the tone could have been a little softer, but this should not have been done privately. The criticism of the work needs to be just as public as the work itself, so that people who might have been misled have a chance to see why.
I really truly cannot understand the critique of an "experts only" attitude when it comes to technical books that make important recommendations for building critical systems. By all means, non-experts should experiment and build and learn. But non-experts definitely should not be giving out large quantities of advice in an authoritative tone.
It helps people who might have read the book and learned to do things the wrong way.
We can model this as "Kyle has disseminated harmful material, and tptacek is trying to contain the damage". Kyle's feelings, intentions, and hard work aren't irrelevant; but they're not what we should be focusing on.
Publishing a book like this sends a strong public signal of deep expertise.
I have not found tptacek to be overly rude or negative when offering advice to journeyman cryptologists. But a journeyperson should not necessarily be publicizing their how-to guides yet.
Academic researchers get these kinds of critiques of their publications all the time. It's extremely useful to the whole academic process, despite being infuriating and depressing. That said, most of those critiques happen before publication and in private; as a book author, that's something one can control. If I were writing a book like this, my #1 worry would be that I was making claims or errors that would be held up on HN by folks like tptacek as evidence of my incompetence. I would therefore make it my highest priority to approach the people most likely to have an opinion and get them to review my draft ahead of publication. That's what people writing serious publications with real-world consequences do. Make no mistake: crypto is in this category. It's not like writing "The 4-Hour Work Week", "Web Design for Programmers", or "JavaScript for Aspiring Ninjas".
Here, your attitude causes two problems.
First, you know and apparently like Kyle Isom, and so I presume you're also ready to tell me that he's an adult and a professional. Professionals do one of three things with criticism: ignore it, rebut it, or learn from it. My assumption has been that Kyle is choosing options (1) and (3) from that list. But here you are, inventing option (4): "get indignant about it". I wonder if you've thought about the extent to which people will attribute that response not to you, but to Isom.
Second, whatever you might think about the tone of my feedback, it's clear that Isom needs additional technical review for his book. Whipping up a totally unproductive us-versus-them narrative about "jerks" versus "open source" does the opposite: it generates drama. Even if you think my review was itself dramatic, piling more drama on doesn't make Isom's work more attractive to experts.
I'm not sure how big a deal either of these issues is, but they're a bad habit for message board denizens. The exact same thing happened to Willem when he wrote his critique of the Akamai allocator, and Hacker News had a totally unproductive drama storm for a couple hours before Akamai (a) thanked Willem and (b) acknowledged that he was absolutely correct. Read the Akamai comments on the HN thread, and apply them here, substituting "Kyle Isom" for "Akamai", and I think you'll see that they apply.
Finally, I'll admit to being personally irritated by the claim that I operate from "experts only" logic with regard to cryptography. There are, at last count, something like twelve thousand people who have reached out to us for our free crypto challenges, and thousands of those people have gone on to solve multiple sets of challenges (something like 60 people have finished the first 6). Every damn one of those people is an email exchange that Sean, Marcin, or I had to have directly, on our own time, with no compensation --- the opposite of compensation, in fact, because we donate to charity when people finish them.
There are a lot of people on the Internet to whom you could direct the "experts only elitism" criticism regarding crypto. I am not one of them.
What's more annoying about that bogus critique is how it muddles a real issue. I'd like many more people to understand crypto and, particularly, what goes wrong when it's implemented naively. But I'd like far fewer people to plow ahead and implement their own broken stuff. The track record on amateur cryptography is bad, and what developers don't like to acknowledge is that the badness that work generates is an externality to them. People have in the real world been hurt, physically, because of broken amateur crypto. It is hard for me to take the hurt feelings of developers all that seriously by comparison.
Sometimes expertise is actually required.
Exactly
Not to mention the need to filter through all the BS criticism. I've read people arguing that there was no issue in having the e in RSA (the public exponent) equal to 1. Really.
I've been throwing $20 bills at my monitor so that your book will start downloading, but it doesn't seem to be working.
But really, you should write one.
That said, your first point seemed silly. Simplified, partial, and building-block examples are used in almost all fields to facilitate teaching (including among medical students and surgeons). They are useful because they keep people moving along the process, teaching them terms, skills, and concepts they will need to get to the next step in the process.
What is your alternative method for teaching someone unfamiliar with these concepts in a way that won't just put them out to sea without a paddle?
I would also divide the book into two parts, the "easy" part and the "hard" part. The "easy" part would get readers to the point where they can safely use TLS, reliably PGP-encrypt something, hash a password, and invoke NaCl (which is part of the go.crypto package). I would probably spend a whole chapter on how to use Golang's TLS library, for instance. Most readers that are picking the book up so they can solve some business problem would probably never need to get past the "easy" part, and I would encourage them not to.
I would remove from the "hard" half of the book protocols that were insecure. An unauthenticated DH exchange is a poor basis for a cryptographic transport. Slash, cut, gone. A naive password challenge-response protocol doesn't solve anyone's business problems. Slice, snip, gone. In their place, I'd probably add more discussion of key exchange algorithms, with particular attention paid to how easy they are to get wrong.
1. Top researchers come up with algorithms and techniques
- The research corpus reviews them
2. Top programmers implements these techniques - The programmers communities review them
3. Top engineers write books to explain these techniques - which everybody else relies on in their tools
1 knows more than 2 which knows more than 3. But each group needs the two others, and the rest of the worlds needs all of them. People who write books are rarely the same people who come up with cryptographic breakthrough. Instead, they are engineers, and they can use a bit of help to get things right.Your review was harsh, because you know more. What, I think, was missing from it is a bit of "This is a great first step, let me help you make it better so we can move everybody else forward. Here are my comments."
In keeping with the philosophy of being hands-on: would the book be improved if, after introducing the code in the first section, it had the reader implement an exploit against that same code?
I do not feel remotely comfortable with the idea of writing a book containing prescriptions on how to design a cryptosystem. We're wary of doing that even for our clients, where we know all of the context and the threat model that the proposed system would face, and who will actually build it, and that we'll get paid to review the resulting implementation. I don't know how to solve that problem for strangers.
Other people do. They are much better than I am. When Trevor and Moxie write the book on why they chose the TextSecure primitives that they chose, I'll be first in line to buy.
Someone who picked up the basics from a few Wikipedia articles here, a few papers there, a couple open source projects here and there... they're smart, so they're not completely clueless about the field, but they just don't have the experience to see where they fall short, the industry know-how, and so on.
I feel like instances of this in the tech community are not too rare, and it's a consequence of the internet: anyone can publish a book and distribute it all over the world now. It's worth keeping in mind that while harm is done through the spread of false information, what's most important is to educate the authors and treat this as a teachable moment, so they can become productive experts and correct their message. Of course, it requires them to be open-minded about their shortcomings, but it can be done.
PS: I have no clue who the author of Practical Cryptography With Go is.
(I don't know the author either)
I would not write a book on structural engineering to learn the subject or become an expert. The stakes for the misinformation being spread are high.
Unfortunately, not even widely used, highly trusted implementations work right all the time. An out-of-bounds memory bug introduced by an insufficiently vetted commit opened up a serious flaw in OpenSSL. On a much, much smaller scale, I once had the misfortune of working with an old version of Microchip's PIC18 AES library, which had some serious issues that made it nonfunctional for anything more complex than the toy sample app it shipped with. But with enough scrutiny these problems are eventually found and fixed. Would a world where everyone rolled their own bespoke, ad-hoc SSL implementations be more secure? I doubt it.
In the end, I think there needs to be a cultural shift. People shouldn't be discouraged from building their own crypto for fun and learning, but they should be discouraged from deploying it for any application where real security is required - at least not before undergoing rigorous analysis. One of the first things Dan Boneh teaches in his Crypto I class is that you should think very long and hard before implementing your own cryptosystems (i.e. don't do it), because getting it right is hard, and getting it even the slightest bit wrong tends to make it useless. And when you consider that people's livelihoods (their personal information, their money) and even lives might be jeopardized, taking responsibility as an engineer becomes of paramount importance. Crypto just doesn't lend itself to a "build an MVP, get it working, move fast and break things" mindset.
Just one example I've found: https://www.nae.edu/Publications/Bridge/Terrorism/AnEngineer...
"The second relates to the design of structures. It is time for engineers and architects to get together to devise new structural forms that offer a higher degree of protection not only against terrorist attack, but also against other hazards. There is much to be learned from what happened in Nairobi and Dar es Salaam, in Oklahoma City, and at the World Trade Center. Similarly, retrofitting of existing structures needs to be studied systematically, as it can reduce, at modest or virtually no cost, the potential for damage."
For example, I'd call OpenSSL's documentation "criminally bad", to the point where it actively coaxes the user into making a mistake: there is no central collection of good practices for common use cases (e.g. sending an AES-256-CBC-encrypted block of data securely with a shared key, using public/private keys, etc.), and the docs regularly fail to mention the proper way of initializing the various data structures and modules (like RNGs) where they come up. Even finding out how to properly initialize everything to do a standard PBKDF2 password derivation is a huge chore that requires reading through tons of badly formatted documents (or copying a random piece of code from StackOverflow, which may or may not be secure). That makes even highly secure libraries a minefield where you can easily introduce huge security flaws without even knowing you did so.
And as long as the excuse for that is "well, you just need a few years of studying all the crypto background", we'll keep seeing security breaches everywhere: most developers just aren't prepared to spend that much time learning the crypto field, or to spend a lot of money on security experts. In the big picture that's a huge issue: too many devs just opt to copy random pieces of code from StackOverflow, which can have glaring security flaws or simply not be secure for the dev's use case.
Couldn't agree more. The problem is that for any bridge that gets used, every structural engineer signing off will have been educated and experienced to the extent that they are Chartered (or equivalent), the plans for the bridge have to be approved by planning authorities, and a thorough documentation and review process has been gone through before the first drop of concrete is poured. And once it's up, it's tested and inspected regularly. Since every engineer on the project has necessarily been educated, they know how hard it is and what the pitfalls are.
Bad crypto results from the fact that there is no need for any of the above requirements to be met. Any old engineer might think they can produce a good implementation, design it, put it into production and have systems secured by it, without being aware of the potential problems they're causing.
The free world of the internet is a wonderful thing, but regulation isn't entirely a bad thing either.
I disagree with you completely and absolutely. Your bridge in Boston isn't going to collapse the moment a researcher sitting in his bathtub in Tel Aviv has a eureka moment.
But if that eureka moment yields preimage attacks against a secure hash algorithm, that hash algorithm is broken for everyone, all over the world, forever (practically, as soon as the attack is public knowledge). Cryptographers have to actively seek out this information.
That's why MD5 is deprecated, even though its known weakness is weaker than the one I've just described. (It's just a chosen-prefix collision that can be done today; a preimage [chosen hash] attack still takes nearly the full search space.)
Security is applied mathematics, the way engineering is applied physics. But the laws of physics don't change on an annual basis, or the way in which they change is too low-level to apply to engineering, whereas the laws of applied mathematics do.
Systems administrators have to keep up to date on an even more active basis, in some cases needing to patch any system within 48 hours of a public disclosure.
So I simply disagree that as an engineering endeavor implementation of cryptosystems is in any way similar to any other form of engineering.
In fact, for the particular example I used in the above case (a hash), the very existence of the operation is an open problem. ("The existence of such one-way functions is still an open conjecture", Wikipedia.)
What other branch of engineering relies on laws that may well be false?
But let's not kid ourselves: reimplementing a library the size of OpenSSL in a new language, or even the same one, is not a trivial matter. We're talking about a $10m+ investment; who's going to pay?
> there is concern that the NIST curves are backdoored and should be disfavored and replaced with Curve25519 and curves of similar construction.
Of course, "there is concern" is pretty vague, but it should be made clear that such concerns are in the realm of pure speculation at this point. There is simply no known way of constructing a "backdoored" elliptic curve of prime order over a prime field (in particular, the closest thing resembling such a backdoor, namely Teske's key escrow technique based on isogenies from GHS-weak curves, cannot work over a prime field). Scientifically speaking, I don't see more reasons to believe the assertion that "NIST parameters are backdoored because they aren't rigid" than the (equally unfounded) speculation that "Curve25519 may be weak because it has small parameters/a special base field/composite order/etc.".
Moreover, to say that the NSA has backdoored the NIST curve parameters is to assume that they have known, for quite a long time now, a serious weakness affecting a significant fraction of all elliptic curves of prime order over a given base field that has so far escaped the scrutiny of all mathematicians and cryptographers not working for a TLA. Being leaps and bounds ahead of the academic community in an advanced, pure mathematical subject doesn't quite align with what we know about NSA capabilities.
Don't take this the wrong way: there are good reasons to favor Curve25519 and other implementation-friendly elliptic curves (namely, they are faster, and there are fewer ways of shooting yourself in the foot when you implement them), but "NIST curves are backdoored" is not a very serious one.
The issue with the NIST P- curves is that there's no good reason to trust them. And, for what it's worth, being ahead of academia on pure math isn't science fiction; NSA employs a lot of mathematicians. But the notion of a backdoor in the NIST curves is totally speculative.
Here's what I was trying to capture:
http://www.hyperelliptic.org/tanja/vortraege/20130531.pdf
Despite its very weird submission as a story to HN, what you've been reading is just a very long HN comment; I wrote it in a single draft and in the style I would use when writing a comment.
[1] https://groups.google.com/forum/#!msg/sci.crypt/mFMukSsORmI/...
And, if the criticisms can be addressed, in both specifics and perspective, for a future edition, they'll have a hardened book... almost sure to earn another updated expert review ("is it fixed?") at that time.
> In considering RSA, the book recommends /dev/random, despite having previously advised readers to avoid /dev/random in favor of /dev/urandom. The book was right the first time.
From "man 4 urandom":
> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver.
In fact, using /dev/urandom is one of the causes of the weak SSH keys found in this research: https://factorable.net/
So: why is /dev/urandom the correct choice over /dev/random?
Anyhow, its conclusions seem to be mistaken to me:
> It’s also a bug in the Linux kernel. But it’s also easily fixed in userland: at boot, seed urandom explicitly. Most Linux distributions have done this for a long time.
If you're an application developer (of something that runs very early in the boot process) but you're not making your own distro, and you can't trust your distro (given how many factorable keys existed, "most Linux distributions have done this" might not actually hold true, or not for a good enough percentage), you don't really have anything else you can rely on to seed /dev/urandom explicitly.
I'd think that the correct approach is to use urandom on everything but Linux (after all, as long as your application isn't a blocker for the boot of the system, it doesn't seem terrible to wait for /dev/random).
Also, blocking on a read from /dev/random is akin to failing early and explicitly (in the case where blocking is actually a problem), while reading urandom before it is initialized seems like a silent failure.
But I'm not going to write software that has to read from either device anytime soon, so don't panic if I'm mistaken :)
When did this community become more concerned with tone than correctness? The top of this thread is filled with people saying that the tone is bad, it's unproductive, it's unnecessary, etc. Yet nobody seems concerned about the published book filled with bad information that a lot of people are going to "learn" from. What gives?
There's been a pretty strong concern about tone on HN from the early days, mainly (afaict) driven by Paul Graham having an interest in and repeatedly commenting about it. It's not the only concern, but avoidance of flaming and mean-spirited comments, in addition to avoidance of vapid or dumb comments, is one of the openly and repeatedly stated design goals of the community. I.e. it should be intelligent discussion, conducted in a collegial tone.
(This is a general comment on whether tone is and/or should be important on HN, not an evaluation of tptacek's review or implication that this comment/gist in particular would fall afoul of the intended HN standards.)
The problem I have is that people are freaking out over extremely minor matter of tone while ignoring important technical problems. Worrying about tone is fine. Worrying about it to such an extreme degree in preference to other things is not.
Being right used to be the ultimate trump over social dynamics, which is what made tech a breath of fresh air to so many. Now that the field has become socially popular, it's been mired in the same vapid talking heads as everywhere else. And the people who actually know things are much quieter, as they generally have better things to do than compete for airtime.
I read Schneier's & Ferguson's Practical Cryptography years ago, the only thing I remember about it is the "don't try this at home" message.
I cannot take seriously anyone who advocated MAC-then-encrypt in 2010 (in the book Cryptography Engineering: Design Principles and Practical Applications by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno).
The school of cryptography they subscribe to seems to be "crypto is black magic; this is tried and it works and it is pretty much secure because I feel it is secure; experience is everything; proofs can have bugs too", as opposed to a more principled, analytical, methodical approach grounded in provable security.
This is especially problematic in pedagogical contexts, because the learners, by definition, do not have much experience or calibrated feelings, so they'll be lost or have to copy the design decisions of the authors without taking into account the contexts or that they might be flat wrong. That approach indeed implies the natural advice to someone who wants to learn will be "don't try it at home".
Was rather impressed...
Can I ask why? What is so dangerous with asymmetric crypto compared to symmetric crypto?
Another big reason is that public-key algorithms are parameterized to an extent symmetric systems aren't. Two random numbers is all you need to safely encrypt something with AES. Diffie-Hellman, the simplest of the public-key algorithms, needs a prime, a generator, and random private keys with particular relationships to those parameters.
Even though it's theoretical, the side effects of this fact surface from time to time as engineering issues in asymmetric crypto: all information that the attacker might need to break asymmetric crypto is more or less in the ciphertext, intuitively suggesting it's easier for asymmetric crypto to catastrophically go wrong.
It's great that one-time pad exists, but it's not really relevant in actual crypto code, right?
The only actual reason I can think of is that symmetric crypto is easier to write and understand - you just mangle and xor some text back and forth, while in asymmetric crypto, you need to understand fairly complex algebra. But again, that's not that important if you use existing primitives, right?
Not as big an issue with ECC, but RSA also has much larger block sizes, increasing the size of small payloads.
It's been my experience that Asymmetric is used for kex (key exchange) or key agreement or signing, but encryption is done using a symmetric algorithm.
Or you can browse his blog, Practical Cryptography or Cryptography Engineering may be a good start.
* This book, I am not making this up, contains the string: "We can use ASN.1 to make the format easier to parse".
Last time I had anything to do with ASN.1 was years ago, but it seemed to work well: libraries were full-featured and cross-language interop was OK. What am I missing that makes ASN.1 bad? Or is the critique of an attempt to write a custom ASN.1 serializer/parser?
Didn't you know? If a web developer out of high school cannot read a format at first glance, it's obviously over-engineered and useless and anyhow everybody should always use JSON anyways.
a) it provides readers with a laundry list of things to go study independently
b) the book author can, given time and inclination, do the same study and improve the book
I want to go through every single chapter and rewrite it to stave off the imaginary critics in my head who will undoubtedly tear it apart.
While criticism is good, the condescending way it is presented, as well as its being overly critical, are bad. Example:
"Total undue reverence for NIST and FIPS standards; for instance, the book recommends PBKDF2 over bcrypt and scrypt (amusingly: the book actually recommends against scrypt, which is too new for it) because it's standardized."
I know people love scrypt and bcrypt, and they have been proven safe so far, but there are advantages to using standardized methods. An implementation can make something less safe than the standard.
bcrypt is also approximately the same age as PBKDF2.
And, finally, standardization is a very poor substitute for security analysis. PKCS1v1.5 is also a standard. If you want to argue against bcrypt, you'll have to marshal actual arguments.
I'm arguing that there may be advantages in using standardized methods (for interoperability) and, especially, standardized implementations.
Which one is safer: using PBKDF2 from a known implementation, or the "bcrypt library" for Ruby/Node that someone just posted to GitHub? Oh, what do you mean that's not how you read secure random numbers?
Is the audience for this review intended to be cryptography experts, who would not read such a book except to praise or trash it? If that's the case, it seems rather mean-spirited. More "wow, check this loser out" than "I don't recommend this book to beginners or anyone, and here's why."