Most of my experience is with LLVM/Clang, so I can't say much about where gcc differs in its approach.
Modern compilers use SSA form as the basis for most common optimizations. In SSA form, every variable has exactly one definition; writes that occur on different control-flow paths are merged at join points with special phi nodes. Conversion to SSA form pretty much irrevocably destroys the original notion of variables. The standard big optimization passes further destroy any easy mapping back to the source: code is hoisted out of loops where possible, unreachable control-flow paths are removed, redundant computations (both within statements and across the entire function) are eliminated, and so on. Things get particularly tricky in the backend, where register allocation means some variables simply no longer exist anywhere (the register was needed for something else, and the value is dead, so why keep it around?).
What this means is that, when optimizing code, maintaining debugging information is very much a best-effort affair. If you disable optimization, you get something close to a literal translation of the C code into assembly. Even very basic optimizations, however, almost immediately break those guarantees. -O1 (or -Og with gcc) will generally avoid the passes that do the truly insane manipulations, but you're still liable to hit this problem.
The basic representation for debugging information on Linux and OS X is DWARF. DWARF is a nasty specification to read, and it doesn't insulate you from having to learn all of the C or C++ ABI implications. DWARF does have a facility for describing variables that don't live on the stack (location expressions can name a register just as easily as a frame offset), but in practice compilers don't seem to maintain the debugging information well enough once variables are promoted to registers.