I always thought dynamic typing was a feature for situations where the code needs an extreme amount of flexibility to adapt to a wide variety of data, even at the expense of performance.
Not necessarily. The phrase "dynamic language" actually bundles together a lot of related but distinct ideas: some of those add performance overhead, others don't. The "dynamic" features most notable for performance overhead are tagged unions and dynamic dispatch.
Imagine a dynamically typed language which provides types like int, bool, list, functions, etc., and lets us assign (and reassign) any of those values to any variable we like:
x = 5 # An int
y = true # A bool
y = 3 # An int
z = square(x + y) # An int
It's very common to store such values as a tagged union: "union" meaning that the data could represent an int, or a bool, or whatever; and "tagged" meaning that there's some extra data which tells us which one it is. For example, we might store `5` as a pair of machine words: `1` to indicate that it's an int, and `5` to represent the data. The boolean `true` might be the pair `2` to indicate bool and `1` for the data. And so on. This tagging adds overhead. We can reduce it in some cases, e.g. using "tagged pointers", where some of the bits in a word hold the tag and the rest hold the data; but it's still overhead.
However, there's nothing fundamental about using tagged unions. We could just as easily use 'unboxed' values, i.e. store the int `5` as the machine word `5`; the bool `true` as the machine word `1`; etc. There is no overhead, no metadata, etc. Whilst this is a perfectly reasonable implementation strategy, it means that we cannot tell what type a value is intended to have: e.g. if a value is stored as the machine word `1`, we have no way of knowing if that's the boolean `true`, or the integer `1`, or the character `SOH`, or whatever. This would probably lead to very buggy programs, since we have no type checker to enforce correct usage, and (since there's literally no difference between data of different types) we can't even do runtime assertions like `assert(isInt(x))`.
Likewise, many dynamic languages use dynamic dispatch to choose which function to call based on the type of data we have. In the above example, `x + y` might run an integer-addition function when `x` and `y` are integers, a string-concatenation function when they're strings, and so on; that's what Python and JavaScript both do. Yet, again, there is no fundamental reason to do this! We can have a dynamic language which uses static dispatch everywhere: consider that in PHP, `+` is only used for numerical addition, whilst `.` is only used for string concatenation. Dynamic dispatch adds overhead, since we need to chase pointers, etc., whilst static dispatch doesn't. This is simply a question of programmer convenience: do we want every function to have a distinct name, and to write them out in full each time?
Note that both of these features, tagged unions and dynamic dispatch, can be used in static languages too. It's just that, historically, they tend to be the default (and hence unavoidable) in dynamic languages, and something we must explicitly create in static languages. Hence when we compare static languages to dynamic ones, we tend to compare statements like `x + y` in Python to `x + y` in C, which are actually quite different semantically. A fairer comparison would be `x + y` in Python versus `callWith(lookUpSymbolFromType("+", lookUpType(x)), x, y)` in C (I've made up those function names, but the point is that the same functionality is there if we want it; it will just be as slow as if we'd used Python).
Perhaps I should have written "... nontrivially statically typed."