It also has lots of annoying warnings that would dissuade many people from running -Weverything by default.
Of course you’ll run into dozens of instances of it with old C code… and my experience has been that some of those instances are bugs similar to this one.
    #include <stdio.h>

    int main(void)
    {
        long long unsigned llu;
        if (scanf("%llu", &llu) == EOF) {
            printf("EOF!!\n");
        }
        printf("%llu (llu)\n", llu);
        unsigned u = llu; /* narrowing: silently drops the high bits */
        printf("%u (u)\n", u);
        return 0;
    }
Here's an execution run:

    $ ./a.out
    9876543210
    9876543210 (llu)
    1286608618 (u)
I tried to compile it as C and C++, with both Clang (14.0.0) and GCC (11.3.0):

    gcc     -Wall -Wextra                # no warning
    g++     -Wall -Wextra                # no warning
    clang   -Wall -Wextra                # no warning
    clang++ -Wall -Wextra                # no warning
    clang   -Wall -Wextra -Weverything   # loss of precision warning
    clang++ -Wall -Wextra -Weverything   # loss of precision warning
However, the warning goes away if there's an explicit cast:

    unsigned u = (unsigned)llu;
Worse, I still get no warning if I do the narrowing cast and then assign the result to a wider variable:

    long long unsigned u = (unsigned)llu;
    printf("%llu (u)\n", u);
In C++ I'm warned about the old-style cast of course, but using `static_cast` makes the warning go away, and the code overflows just like before.

I don't have a good solution to this. Sometimes I do want to lose the precision, in bignum arithmetic for instance. In any case, I'm pretty sure the Keccak team's compiler did not issue any warning. Sorry if I implied otherwise.
I think it’s perfectly fine for an algorithm contest not to require a bug-free implementation… but I also feel that if we’re going to standardize on a cryptographic algorithm, throwing all the tools and formal methods we can at it during development of the reference implementation makes a lot of sense.