For background, my main concern is with the exceptionalism: the defining (good) thing about STEM is that there's actual logic to the discipline, so if I forget an answer, I can re-derive it. It's not just a game of memorization and regurgitation.
But O(1) for hash lookup feels like the old soft-sciency "hey, you just have to memorize this". I don't have one consistent model that lets me derive bounds and that yields O(1) for hash lookup alongside O(log n) for trie lookup. The other convention of constant seek times is unrealistic [2], but it's at least consistent. This isn't.
Rather, it's supposed to be a big-O table. If these were presented as "in-practice" bounds, with their own consistent logic about which resources are scarce relative to which others, that would be a different story. Then I would have a consistent set of conventions that produce O(1); it wouldn't come off as "oh, we assume the rules don't apply here".
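To make that concrete, here's a rough operation-counting sketch (my own framing, with hypothetical cost functions, not any textbook's model): under a uniform bit-cost accounting, a lookup of a w-bit key touches all w bits whether you hash it or walk a trie, and holding n distinct keys forces w >= log2(n), so both come out O(log n):

    # Hypothetical bit-cost accounting for a lookup of a w-bit key;
    # storing n distinct keys forces key width w >= log2(n).

    def hash_lookup_cost(w):
        # Any hash that depends on the whole key must read all w bits,
        # plus one bucket probe: O(w) under bit-cost, not O(1).
        return w + 1

    def trie_lookup_cost(w):
        # A bitwise trie follows one child pointer per key bit: O(w).
        return w

    for n in (2**8, 2**16, 2**32):
        w = n.bit_length() - 1   # minimal key width for n distinct keys
        print(w, hash_lookup_cost(w), trie_lookup_cost(w))

The two costs scale together; I can't find an accounting under which the hash row gets O(1) while the trie row keeps O(log n).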
>But saying that such a thing is possible is a simplifying assumption that we make, in general, for non-numeric algorithms: arithmetic is O(1).
Where? Every treatment I've seen takes arithmetic as linear in the number of bits. [1] This is still consistent with O(n log n) for sorts, because you can assume a bound on element size and incur only constant-factor overhead.
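As a minimal sketch of what "linear in the number of bits" means here (schoolbook ripple-carry addition; the function is mine, for illustration):

    # Schoolbook addition of two equal-length bit lists (LSB first):
    # one full-adder step per bit position, so work is linear in bits.

    def add_bits(a, b):
        assert len(a) == len(b)
        out, carry = [], 0
        for x, y in zip(a, b):   # exactly one O(1) step per bit
            s = x + y + carry
            out.append(s & 1)
            carry = s >> 1
        if carry:
            out.append(carry)
        return out

    print(add_bits([1, 0, 1], [1, 1, 0]))  # 5 + 3 -> [0, 0, 0, 1] = 8

The sort arithmetic is then just O(n log n) comparisons times O(b) per comparison of b-bit elements, i.e. O(b * n log n), and fixing a bound b on element size leaves O(n log n).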
>Of course, real programs cannot do this for arbitrarily large numbers. So if you are actually concerned with arbitrarily large numbers, you must do something else. Thus, other sorts of analysis which would give you a different number for hash table complexity. Yay!
But (as above), the problem is that it's not even clear what sort of problem class we were assuming that lets you bound this particular thing but not others.
[1] https://en.wikipedia.org/wiki/Computational_complexity_of_ma...
[2] Pretty sure that in the physical universe, memory seek requires Ω(n^(1/3)) time: the bits have to be packed into a volume that grows linearly with n, the longest distance between addresses grows with the cube root of that volume, and signal travel time is linear in distance.
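Spelled out (a back-of-the-envelope derivation, assuming memory density is bounded by some constant p):

    n bits occupy volume V >= n/p
    a region of radius r has V ∝ r^3, so r = Ω(n^(1/3))
    signal travel time ∝ r, so seek time = Ω(n^(1/3))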