I'm pretty curious about this: have memory access patterns and cache awareness become more of a big deal in the past decade or so, or have they only recently reached my awareness? I studied CS in the early 2000s, and while I was aware of the trade-offs between contiguous and linked data structures, it's only in the last few years that the importance of the cache (and non-"system" languages' widespread flouting of it) has really bubbled up in my consciousness.
It's quite possible that I just didn't take the right kinds of classes, or that I did but missed or failed to really internalize this, but I'm curious whether something else is going on. Has the bottleneck increasingly shifted from CPU to memory? Have we more thoroughly exhausted other avenues of optimization?
I was thinking about this while watching a talk[0] by someone from Google about data structures in the STL, which argued that the standardized interface of `std::unordered_map` more or less forces implementations to use buckets (separate chaining) rather than the more cache-friendly technique of open addressing with probing. I'm sure the people who standardized it were very competent, which makes me think that, at least at the time, buckets were considered the better implementation.
[0]: https://www.youtube.com/watch?v=fHNmRkzxHWs
Edit: Added video link.