I think you may want to read the quote from the paper a bit more carefully. "...he will be wise to look carefully at the critical code; but only after that code has been identified"
I was told that "premature optimization is not really a thing" in response to a reply I received claiming that pImpls should be avoided at all costs.
When we analyze the performance impact of software, we don't shotgun changes across the codebase out of a generalized fear of cache misses. We examine critical code paths and make changes there based on profile feedback. That is the spirit of what Knuth is saying in this quote: look carefully at critical code, BUT ONLY AFTER that code has been identified.
A cache miss is critical when it sits on a critical path, so we write interfaces with that in mind. Compilation time matters, as does runtime performance. Either way, we identify performance bottlenecks as they come up and optimize them. Avoiding a clearer coding style, such as encapsulation, because doing so MIGHT produce faster code is counter-productive.
We can apply the Pareto Principle here: roughly 80% of the performance overhead is found in 20% of the code. The remaining 80% of the code can use pImpls, or could be rewritten in Haskell for all the difference it makes to performance. But that code still needs to be compiled, and often by clients who only have headers and library files. Subjecting them to long compiles to gain a negligible performance improvement in code that rarely gets called is a bad trade. Spend that time optimizing the parts of the code that matter, which, as Knuth says, should only be done after that code has been identified.
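For anyone unfamiliar with the trade-off being argued about, here is a minimal sketch of the pImpl idiom (names like `Widget`/`Impl` are illustrative, not from this thread). The point is that the private members live behind a forward-declared struct, so clients who only see the header don't recompile when the implementation changes; the cost is one pointer indirection per access:

```cpp
#include <memory>
#include <string>

// --- widget.h: all that clients ever see ---
class Widget {
public:
    explicit Widget(std::string name);
    ~Widget();                        // must be defined where Impl is complete
    Widget(Widget&&) noexcept;
    Widget& operator=(Widget&&) noexcept;

    const std::string& name() const;

private:
    struct Impl;                      // forward declaration only; layout hidden
    std::unique_ptr<Impl> pimpl_;     // the one extra indirection critics worry about
};

// --- widget.cpp: private details; editing this never recompiles clients ---
struct Widget::Impl {
    std::string name;
    // heavy private members and their #includes would live here, not in the header
};

Widget::Widget(std::string name)
    : pimpl_(std::make_unique<Impl>(Impl{std::move(name)})) {}

// Defaulted here, where Impl is a complete type, so unique_ptr can delete it.
Widget::~Widget() = default;
Widget::Widget(Widget&&) noexcept = default;
Widget& Widget::operator=(Widget&&) noexcept = default;

const std::string& Widget::name() const { return pimpl_->name; }
```

Whether that indirection matters is exactly the profile-first question above: on a hot path it might, in the other 80% of the code it almost certainly doesn't, while the compile-time savings for header-only clients apply everywhere.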
EDIT: the downvoting on this comment is amusing, given that "avoid pImpls" is exactly the sort of 97% cruft that Knuth was addressing.