It is not necessarily +10000; it can also be something like x5000.
Because CPUs really, really, really like short, simple, predictable loops that traverse data in a regular pattern, and hate being interrupted by something like dereferencing a pointer or handling a page miss.
So your super complex and super clever algorithm might only look good on paper while actually hurting your TLB, prefetcher, branch predictor, instruction pipeline, memory density, etc.
So there is this fun question:
"You are generating k random 64-bit integers. Each time you generate an integer, you have to insert it into a sorted collection. You implement two versions of the algorithm: one with a singly-linked list and one with a flat array. In both cases you are not allowed to cheat by using any external storage (indexes, caches, fingers, etc.).
The question: in your estimation, with both algorithms implemented optimally, what magnitude does k need to be for the linked list to start being more efficient than the flat array?"
The fun part of this question is that the answer is: NEVER.
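A minimal sketch of the two contestants (names and structure are mine, not from the post). The linked list pays O(k) pointer dereferences per insert just to find the spot, each one a potential cache miss; the array pays O(log k) for a binary search plus an O(k) shift, but that shift is one contiguous, prefetcher-friendly move, which is why it wins at every k in practice:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct Node {
    uint64_t value;
    Node* next;
};

// Sorted insert into a singly-linked list: O(k) traversal to find the
// spot (a chain of data-dependent loads), O(1) to splice the node in.
Node* list_insert(Node* head, uint64_t value) {
    if (!head || value <= head->value) return new Node{value, head};
    Node* cur = head;
    while (cur->next && cur->next->value < value) cur = cur->next;
    cur->next = new Node{value, cur->next};
    return head;
}

// Sorted insert into a flat array: O(log k) binary search to find the
// spot, then an O(k) shift of the tail -- but the shift is a single
// contiguous memmove over cache lines the hardware streams through.
void array_insert(std::vector<uint64_t>& v, uint64_t value) {
    v.insert(std::upper_bound(v.begin(), v.end(), value), value);
}
```

Asymptotically both versions do O(k²) work over all k inserts, so big-O never tips the scales toward the list; the constant factors are pure memory behavior, and there the array is ahead from the first element on.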