Premature optimization is not really a thing, but foreclosing future avenues of optimization definitely can be.
Okay. I'm going to stop this thread right there and take the opportunity to provide some mentoring. I hope you accept this, as it will help in your career.
Read this paper. It is a classic.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified"
We know that cache misses are not a small inefficiency. This has been measured & observed on many real systems as a real, systemic problem. It's why data-oriented design is currently the king of the game-engine world: cache misses kill performance. It is not premature to warn against them as a general practice, because that kind of systemic de-optimization will likely impact the critical 3%.
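To make that concrete, here's a rough sketch of the kind of layout change data-oriented design pushes for. It's a toy particle example (names made up for illustration), not code from any particular engine:

    #include <cstddef>
    #include <vector>

    // "Object-oriented" layout: every Particle carries all of its fields,
    // so a loop that only needs positions still drags velocity, mass, etc.
    // through the cache with it.
    struct Particle {
        float pos[3];
        float vel[3];
        float mass;
        // ... more rarely-touched fields
    };

    void integrate_aos(std::vector<Particle>& ps, float dt) {
        for (auto& p : ps)
            for (int i = 0; i < 3; ++i)
                p.pos[i] += p.vel[i] * dt;   // cold fields share these cache lines
    }

    // Data-oriented layout: hot data packed contiguously, so the prefetcher
    // streams exactly the bytes the loop needs and little else.
    struct Particles {
        std::vector<float> pos_x, pos_y, pos_z;
        std::vector<float> vel_x, vel_y, vel_z;
    };

    void integrate_soa(Particles& ps, float dt) {
        for (std::size_t i = 0; i < ps.pos_x.size(); ++i) {
            ps.pos_x[i] += ps.vel_x[i] * dt;
            ps.pos_y[i] += ps.vel_y[i] * dt;
            ps.pos_z[i] += ps.vel_z[i] * dt;
        }
    }

Same algorithm, same work; the only difference is how the data is laid out, and that difference is exactly what a profiler reports as cache misses on the hot loop.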
I was told that "premature optimization is not really a thing" as a response to a reply I received that pImpls should be avoided at all costs.
When we analyze the performance impact of software, we don't shotgun change things because of a generalized fear of cache misses. We examine critical code paths and make changes there based on profile feedback. That is the spirit of what Knuth is saying in this quote. Look carefully at critical code, BUT ONLY AFTER that code has been identified.
A cache miss is critical when it is in a critical path. So we write interfaces with this in mind. Compilation time matters, as does runtime performance. Either way, we identify performance bottlenecks as they come up and we optimize them. Avoiding a clearer coding style, such as encapsulation, because doing so MIGHT produce faster code is counter-productive.
We can apply the Pareto Principle to understand that 80% of the performance overhead can be found in 20% of the code. The remaining 80% of the code can use pImpls or could be rewritten in Haskell for all it matters for performance. But, that code still needs to be compiled, and often by clients who only have headers and library files. Subjecting them to long compiles to gain an unimportant improvement in performance in code that only rarely gets called is a bad trade. Spend that time optimizing the parts of the code that matter, which, as Knuth says, should only be done after this code has been identified.
EDIT: the downvoting on this comment is amusing, given that "avoid pImpls" is exactly the sort of 97% cruft that Knuth was addressing.
Again, no, it isn't. You seem to be severely underestimating the systemic impact of cache misses if you are considering them a "small" impact on efficiency.
It's a well-proven, well-known problem. Ignoring it falls under Knuth's guidance of "A good programmer will not be lulled into complacency by such reasoning."
pImpls are the sort of thing you use at API boundaries to avoid leaking implementation details into user code, but that's trading efficiency & complexity for a more stable API boundary. Scattering them throughout your code base would be like compiling with -O0. It's a silly, nonsensical waste of a user's constrained resources for a slight gain in compile time, at the cost of added code complexity.
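For anyone following along, the trade being described looks roughly like this; a minimal, hypothetical Widget sketch, not anyone's real API. The public header stays stable and heavy implementation headers stay out of client builds, in exchange for an extra allocation plus a pointer hop on every call:

    // widget.h -- the only header clients include; no implementation details leak.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                      // defined in the .cpp, where Impl is complete
        void draw();
    private:
        struct Impl;                    // forward declaration only
        std::unique_ptr<Impl> impl_;    // the indirection this thread is arguing about
    };

    // widget.cpp -- clients don't recompile when anything below changes.
    #include "widget.h"
    // #include "expensive_renderer.h"  // heavy dependencies stay out of the header

    struct Widget::Impl {
        int state = 0;
        void draw() { /* ... */ }
    };

    Widget::Widget() : impl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;
    void Widget::draw() { impl_->draw(); }

At a genuine API boundary that extra hop is usually noise; sprinkled across every hot type, it's the systemic indirection being complained about above.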
Or, alternatively, using pImpls to optimize for compile time is a premature optimization. You should only optimize for compile time at most 3% of your source files, ever. The other 97% of your source files should be written for clarity & simplicity, which means no pImpls.
No one is claiming that we should ignore performance altogether. But understanding through profiling where the performance issues are, and designing toward a faster implementation from there, is more important than trying to inline definitions up front.
Some implementation details are so well understood that you really don't need a profiler to do what is probably the right thing by default.