> Can you give specific examples and prove that they cannot be overcome?
It's hard to prove a theoretical negative, but perhaps by comparing against the run time performance of statically typed, ahead-of-time (SAOT) compiled languages I can show what I mean.
Dynamic typing:
- Python has to do dynamic type checks before any useful program work is done. SAOT code needs no such checks at run time.
- Adding two numbers in Python means dispatching run time overloads and handling a host of other complexities. In an SAOT language it compiles to a single machine instruction with no extra work.
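To make that dispatch concrete, here's a toy sketch of roughly what CPython's addition opcode has to do before any actual adding happens. `binary_add` is a hypothetical, simplified stand-in, not CPython's real implementation (the real one also handles subclass priority, slot caching, etc.):

```python
# Hypothetical sketch of the run time work behind "a + b" in CPython.
# Simplified: the real handler also gives subclasses' __radd__ priority
# and uses C-level type slots rather than getattr.
def binary_add(a, b):
    # 1. Look up the addition handlers on each operand's *type*.
    lhs_add = getattr(type(a), "__add__", None)
    rhs_radd = getattr(type(b), "__radd__", None)

    # 2. Try the left operand's handler first...
    if lhs_add is not None:
        result = lhs_add(a, b)
        if result is not NotImplemented:
            return result

    # 3. ...then fall back to the right operand's reflected handler.
    if rhs_radd is not None:
        result = rhs_radd(b, a)
        if result is not NotImplemented:
            return result

    raise TypeError(f"unsupported operand types: {type(a)} and {type(b)}")

print(binary_add(2, 3))      # same opcode path for ints...
print(binary_add("a", "b"))  # ...and for strings
```

All of that lookup and branching happens on every single `+`, which is the work an SAOT compiler resolves once, at compile time.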
Boxing values:
- Python's value boxing means chasing pointers around the heap. An SAOT language can not only remove this cost but reduce it to raw registers loaded from the stack and prefetched in chunks, which can improve cache performance by orders of magnitude.
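You can see the boxing overhead directly in CPython: every int is a heap object carrying a reference count and type pointer, not a raw machine word. (The exact byte counts below assume a 64-bit CPython build.)

```python
import sys

# Every Python int is a heap-allocated object with a refcount and a
# type pointer. On 64-bit CPython a small int is 28 bytes; an SAOT
# language would use a single 8-byte register or stack slot.
boxed = sys.getsizeof(1)
raw = 8  # one 64-bit machine word

print(f"boxed int: {boxed} bytes, raw word: {raw} bytes")
```

And the size is only half the story: each access to a boxed value is a pointer chase into the heap, whereas stack/register values sit contiguously and prefetch cleanly.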
Determinism:
- In Python, a program's behaviour can only be determined by running it. In an SAOT language, since all the information is known at compile time, programs can be further constant-folded, loops unrolled, and/or SIMD applied.
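CPython itself illustrates the limit here: its compiler will fold an expression made entirely of constants, but the moment variables are involved it must emit the full run time sequence, because the types (and hence what `*` and `+` even mean) aren't known until the values arrive:

```python
import dis

# Constants visible at compile time get folded by CPython's optimiser:
code = compile("2 * 3 + 4", "<example>", "eval")
print(code.co_consts)  # the folded result, 10, stored directly

# The same shape over variables can't be folded; the full
# load/multiply/add bytecode sequence is emitted instead:
code2 = compile("a * b + c", "<example>", "eval")
dis.dis(code2)
```

An SAOT compiler knows the operand types up front for both cases, which is what lets it fold, unroll, and vectorise so much more aggressively.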
Run time overheads:
- Python requires an interpretive run time. SAOT does not.
In summary: Python necessarily requires extra work at run time due to dynamic behaviour. SAOT languages can eliminate this extra work.
I do understand, though, that with a JIT a lot of these costs can be reduced massively, if not eliminated, once the JIT has run through the code once. For example, here they go through the painful process of optimising Python code to find what is actually slowing things down, to the point of rewriting it in C: http://blog.kevmod.com/2020/05/python-performance-its-not-ju...
At the end they point out that PyPy gives a very impressive result that is actually faster than their C code. Of course, this benchmark is largely testing Unicode string libraries rather than the language itself, and I'd argue it's an outlier.
> How much of the literature have you read?
Literature on speeding up Python, or on high performance computing? The former, very little; the latter, quite a lot. My background is in high performance computing and embedded software.
I'm definitely interested in the subject though; do you have any good reading material?
> people said monkey-patching in Python and Ruby was a hard overhead to peak temporal performance and fundamentally added a cost that could not be removed... turns out no that cost can be completely eliminated.
This really surprised me. Completely eliminated? I'm really curious how this is possible. Do you have any links explaining this?