I think you might be responding to a different point than the one I made. The 1% I'm referring to is a 1% variation between subsequent runs of the same code. This is a measurement error, and it inhibits your ability to accurately compare the performance of two pieces of code that differ by a very small amount.
Now, it reads like you think I'm saying you shouldn't care about a method if it's only 1% of your runtime. I definitely don't believe that. Sure, start with the big pieces, but once you've reached the point of diminishing returns, you're often left optimizing methods that individually are very small.
It sounds like you're describing a case where some method starts off taking < 1% of your program's overall runtime and grows over time. It's true that in a full-program benchmark you might be unable to detect the difference between that method and an alternate implementation (though even that isn't guaranteed: you can often use statistics to overcome the variance in the runtime).
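To make the statistics point concrete, here's a minimal sketch (with simulated timings, not real ones) showing how repeated samples plus a Welch's t-test can reliably detect a 0.5% difference even when run-to-run noise is 1%. The means, noise level, and sample count are all illustrative assumptions.

```python
# Simulated demonstration: detecting a 0.5% performance difference
# under 1% run-to-run measurement noise, via repeated sampling.
import math
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

random.seed(42)
n = 1000
# Implementation A: mean 1.000s per run, 1% run-to-run noise.
a = [random.gauss(1.000, 0.010) for _ in range(n)]
# Implementation B: 0.5% slower, same 1% noise.
b = [random.gauss(1.005, 0.010) for _ in range(n)]

t = welch_t(b, a)
print(f"t = {t:.1f}")  # far above ~2, so the 0.5% gap is detectable
```

The individual samples overlap heavily, but averaging over many runs shrinks the standard error enough that the small systematic difference stands out.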
However, you will often still be able to use micro-benchmarks to measure the difference between implementation A and implementation B, because odds are they differ not by 1% in their own performance, but by 10% or 50% or more.
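A sketch of what that looks like in practice: benchmark the two implementations head-to-head instead of inside the whole program. The two implementations below are illustrative stand-ins, not anything from the discussion above.

```python
# Micro-benchmark sketch: comparing two implementations of the same small
# method in isolation, where their relative difference is much larger
# than the ~1% run-to-run noise of a full-program benchmark.
import timeit

def impl_a(n=1000):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def impl_b(n=1000):
    # List comprehension: same result, typically less overhead.
    return [i * i for i in range(n)]

# Take the minimum of several repeats to suppress scheduling noise.
t_a = min(timeit.repeat(impl_a, number=200, repeat=5))
t_b = min(timeit.repeat(impl_b, number=200, repeat=5))
print(f"A: {t_a:.4f}s  B: {t_b:.4f}s  ratio: {t_a / t_b:.2f}")
```

Taking the min of several repeats is a common convention for micro-benchmarks, since the fastest run is the one least disturbed by the rest of the system.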
That's why I say that Tratt's work is great, but I think the variance it describes is a modest obstacle to most application developers, even if they're very performance minded.