I've worked with a lot of engineers who considered anything O(n^2) to be a red flag, and half the time actual profiling favored the naive method simply because the compiler optimized it better.
That means that if you actually care about performance, you've got to spend 30 minutes profiling for most real-world scenarios. Yeah, O(n^2) is obviously a crazy bad idea if you ever expect to scale to ten million records, but the vast majority of software being written is handling tiny 10K files, and a very large chunk of it doesn't care at all about performance because network latency eclipses any possible performance gain.
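You can see this effect in about five minutes with a microbenchmark. A sketch in Python (function names and input sizes are mine, purely illustrative): the "naive" O(n^2) duplicate check does no hashing or allocation, while the O(n) set version pays those costs up front, so on tiny inputs the winner isn't obvious until you measure.

```python
import timeit

def has_dup_naive(items):
    # O(n^2): nested scan, but allocation-free and branch-predictable
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_dup_set(items):
    # O(n): asymptotically better, but pays for hashing and building the set
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

if __name__ == "__main__":
    small = list(range(20))  # tiny input with no duplicates: worst case for both
    for fn in (has_dup_naive, has_dup_set):
        t = timeit.timeit(lambda: fn(small), number=100_000)
        print(f"{fn.__name__}: {t:.3f}s")
```

Which one wins depends on the input size, the element type, and the runtime, which is exactly the point: the asymptotic label alone doesn't tell you.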