If they were integer variables, I guess the compiler would have done that, but you can't really do that with floats, because in floating-point arithmetic i+A+A (evaluated left-to-right as (i+A)+A) is not necessarily equal to i+2*A. (Of course, in this particular example the difference doesn't matter to the programmer, but the compiler doesn't know that!)
I think there's some gcc option that enables these "dangerous" optimizations: -ffast-math, or something like that?