Hmmm. I think your argument is roughly that there is a "sweet-spot" for abstraction when it comes to readability?
That is, a point below which less abstraction makes the code harder to read, and above which more abstraction also makes the code harder to read?
I will agree with this in specific cases (i.e. for any given solution, there is a point at which adding abstraction can't improve readability), but I'm not certain I agree in the general case (i.e. that using macros cannot improve readability).
I guess it also depends on what you mean by "really understand what's going on." I started out programming in C. Now in C, if you know your compiler well, you can predict fairly accurately what binary code will be generated when you compile with optimizations off. Moving to higher-level languages, you lose this ability and no longer "really understand what's going on."
For systems programming, I may still use C to get this advantage. For other problem domains, I sacrifice this knowledge because representing my problem more tersely in a higher-level language makes the code more readable and easier to understand. I will never know exactly which instructions will be executed when I write in Python.
Similarly, with sufficiently fancy macros, I may not know what Lisp code is generated, but if the macros do what they say they do, they can make my code less verbose, more understandable, and easier to maintain. There are times when really understanding what is going on trumps the terseness, and at those times I don't use macros.
Also, I love Python. It embeds well in C (which is where my original background is), it has very good portability, and it has a good set of libraries.
I also implied, but didn't say straight out, that Python has a good reason for not having macros: part of its design is to look like pseudo-code. See also Norvig's comment to the OP. Macros that operate on text rather than trees (the C preprocessor, m4, etc.) are far more error-prone, and probably a Bad Idea. Therefore, if you want your language to look like something other than a tree, you have to forsake macros that operate on code as it is written. I have seen Lisp-like macros that operate on the language's AST proposed for several languages (Python among them, I believe). They have not caught on. I have several theories as to why, but right now my preferred one is that it feels too much like you're hacking the compiler, and Compilers Are Scary.
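To give a flavor of what those AST-macro proposals tend to look like, here's a minimal sketch using only Python's standard ast module (the log() rewrite and the LogMacro name are made up for illustration, not taken from any real macro library): a transformer that rewrites the syntax tree before it is compiled.

    import ast

    # Toy "macro": rewrite every call to log(x) into print("log:", x)
    # at the AST level, before the code is compiled.
    class LogMacro(ast.NodeTransformer):
        def visit_Call(self, node):
            self.generic_visit(node)  # transform nested calls first
            if isinstance(node.func, ast.Name) and node.func.id == "log":
                return ast.Call(
                    func=ast.Name(id="print", ctx=ast.Load()),
                    args=[ast.Constant(value="log:")] + node.args,
                    keywords=[],
                )
            return node

    tree = ast.parse("log(1 + 2)")
    tree = LogMacro().visit(tree)
    ast.fix_missing_locations(tree)
    exec(compile(tree, "<macro>", "exec"))  # prints: log: 3

It works, but notice how much compiler machinery (parse, transform, fix locations, compile) you have to touch just to get a tiny rewrite; I suspect that feeling is a big part of why these proposals never stick.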