Compared to a lot of other code, where you can easily tell you're "done" because "it works", with crypto code you're only halfway there. Not only does it have to work, it also has to not break and not leak secrets. Anything can bite you, from timing attacks to bad padding handling to weak random generators, not to mention buffer overflows and logic errors (goto fail, anyone?)
I'm thinking it would be prudent to at least use separate keys for anything interfacing with non-default implementations. I can't remember the details, but wasn't there an issue where, if a (GPG? SSL?) key had ever been used for signing on a certain flawed implementation, its secrets were spilled?
What's the state of the Go SSH library? Has it been vetted by ... veterans? :)
https://github.com/golang/crypto/blob/master/curve25519/squa...
It's in assembler, with no comments. It's from Bernstein's code, via Supercop, and is not original to the Go team.
A recent effort to formally verify that code, out of Taiwan, Japan, and the Netherlands, found one bug not previously detected by testing.
http://delivery.acm.org/10.1145/2670000/2660370/p299-chen.pd...
This stuff is really hard to get right.
The reality is that most developers aren't going to be able to spend significant time following the research and figuring out how it applies to their code. In most cases your users don't understand security, you'll be under constant pressure to add features instead, and honestly, security just isn't fun enough for most people to want to spend all their time on it.
> super skeptical about "second-hand" reimplementations
I'm just curious: how are first-hand implementations different? Obviously they can end up with vulnerable code just as easily, and there's a ton of evidence that they do. Because that's the first thing I would look into.
Thanks.
I am just asking (a bit fearful, yes), not simply ranting.