https://github.com/emilk/ram_bench
I am not sure if any serious academic work has been built on this model, but it's a nice short hand.
Naively, to achieve optimal access time, you pack your N memory elements within a sphere of radius R; since the volume scales as R^3, you get R = O(N^(1/3)).
But for large R you start having cooling problems. If each memory element needs some power P to operate, then the total power consumption is P×N = O(R^3). Your surface area is only 4πR^2, though, so the power flow per unit area is O(R) = O(N^(1/3)). So at large radius, with limited thermal conductivity, your memory will melt (since temperature ~ (power flow)^(1/4), by the Stefan–Boltzmann law).
The threshold for stable temperatures at any scale is keeping the power flow per unit area bounded, which requires surface area proportional to N, i.e. R = O(N^(1/2)). So memory access time grows as O(N^(1/2)).
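The scaling argument above can be sketched numerically (all physical constants set to 1; the per-element power P is an illustrative assumption, not from any datasheet):

```python
PI = 3.141592653589793

def sphere_flux(n):
    """Power flow per unit area when n elements fill a ball of radius n^(1/3)."""
    r = n ** (1 / 3)            # radius needed to pack n cells: R = O(n^(1/3))
    area = 4 * PI * r ** 2      # surface available for cooling
    return n / area             # total power ~ n over area ~ n^(2/3): grows like n^(1/3)

def flat_flux(n):
    """Same, but elements spread over a surface of area ~ n (R = O(n^(1/2)))."""
    return n / n                # flux stays O(1), so temperature stays bounded

for n in (10**3, 10**6, 10**9):
    print(n, round(sphere_flux(n), 2), flat_flux(n))
```

Each factor of 1000 in N multiplies the spherical packing's surface flux by 10, while the flat layout's flux stays constant; that is the trade between O(N^(1/3)) access and not melting.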
This analysis is valid for general computing and circuits, but since computers are usually modeled as memory machines I think that's sufficient (?).
Obs: Why, then, is the human brain roughly spherical? Because we have a very effective (water-based) cooling system. Still, if it got large enough, given limited flow rates of blood and such, cooling would eventually become limiting. If you immediately thought of elephants, so did I, and this may be linked to their fantastically large ears:
https://asknature.org/strategy/large-ears-aid-cooling/
I love how everything is connected.
Obs2: Yes, this is related to the Bekenstein bound, but this one is much more practically relevant, of course: existing RAM is almost thermally limited, whereas you need black-hole densities to approach the Bekenstein bound. Accordingly, the memories we use are organized in (mostly) flat packages.
Furthermore, the introduction in the article you linked misunderstands Big-O notation so incredibly fundamentally that I don't think the author has done their background reading on machine models and Big-O notation.
Not that model specifically, but cache-oblivious data structures are specifically designed to scale well in a hierarchical cache model regardless of the cache block size. So they scale excellently from L1 cache all the way down to hard disk.
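To make "no matter the cache block size" concrete, here is the textbook cache-oblivious example, recursive matrix transpose: by always splitting the longer dimension, the recursion eventually produces sub-blocks that fit in any cache level, without ever being told a block size. (The base-case cutoff of 16 is my own choice for this sketch, not a tuned constant.)

```python
def transpose(src, dst, r0, r1, c0, c1):
    """Write the transpose of src[r0:r1][c0:c1] into dst, divide-and-conquer style."""
    if (r1 - r0) * (c1 - c0) <= 16:      # tiny block: transpose directly
        for i in range(r0, r1):
            for j in range(c0, c1):
                dst[j][i] = src[i][j]
    elif r1 - r0 >= c1 - c0:             # split the longer dimension in half
        m = (r0 + r1) // 2
        transpose(src, dst, r0, m, c0, c1)
        transpose(src, dst, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2
        transpose(src, dst, r0, r1, c0, m)
        transpose(src, dst, r0, r1, m, c1)

a = [[i * 4 + j for j in range(4)] for i in range(3)]  # 3x4 matrix
b = [[0] * 3 for _ in range(4)]                        # 4x3 destination
transpose(a, b, 0, 3, 0, 4)
print(b)
```

The same recursive-splitting idea underlies cache-oblivious search trees (van Emde Boas layout) and sorting (funnelsort).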
[1] https://algo2.iti.kit.edu/sanders/courses/algen20/vorlesung.... slide 180-207, there are some up-to-date measurements on slide 204
[2] https://nms.kcl.ac.uk/stefan.edelkamp/lectures/ae/slide/AE-A...
There are also parallel priority queues which might be useful depending on your problem, especially if it can be reformulated to operate on batches ("give me the 20 smallest items", "insert these 50 items").
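A sketch of that batch-oriented interface on top of the standard-library binary heap (a real parallel priority queue would process batches concurrently; the class name and methods here are made up for illustration):

```python
import heapq

class BatchPQ:
    """Minimal batch-oriented min-priority queue, sequential sketch."""

    def __init__(self):
        self._heap = []

    def insert_batch(self, items):
        """'insert these 50 items': one extend + heapify per batch,
        O(n + k) instead of k separate O(log n) pushes."""
        self._heap.extend(items)
        heapq.heapify(self._heap)

    def pop_smallest(self, k):
        """'give me the k smallest items', removed from the queue, in order."""
        k = min(k, len(self._heap))
        return [heapq.heappop(self._heap) for _ in range(k)]

pq = BatchPQ()
pq.insert_batch([5, 1, 9, 3, 7])
print(pq.pop_smallest(3))  # [1, 3, 5]
```

Reformulating an algorithm to consume and produce such batches is often what makes the parallel versions pay off, since it amortizes synchronization over many elements.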