In the coding framework I created for students, we have a "real" typedef that is either float or double. With C99's <tgmath.h> you just write "cos" once, and it will expand to cosf or cos depending on how the typedef is set, which allows controlled experimentation on how FP precision affects performance. For submitted code, though, the grading scripts grep for "double" and turn on various extra warnings to ensure that there are no implicit conversions from double to float, in an effort to ensure that single precision is always being used (but I should probably scan the assembly too).
Steve Parker was the first person to explain to me (while he was still a student, and I was a much younger one) the sometimes surprising cost of making image sizes powers of two: power-of-two row strides map to the same cache sets, causing conflict misses. Small world.