I'm not even sure what this means anymore. I guess I'm just not sure how any language, when used correctly, could be inherently unscalable. My guess is statements like this came from a time when monoliths were the application design of choice? Now, assuming Instagram has just 1,000 photo handling servers, each one is only responsible for 95,000 photos a day.
Of course, that's not to say that Instagram doesn't have CAP issues. It does, especially in the "C" area, but again, not a problem inherent in the language.
Once you reach a certain scale, server costs can start to exceed developer costs. That's when making something performant becomes worth the engineering effort.
I never missed static typing on large code bases, but I had numerous bugs caused by Python's implicit coercions: a string iterating as characters ["h", "e", "l", "l", "o"], None evaluating as False, the string "no" as True, 0 as False, "" as False, etc.
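To make the coercions above concrete, here's a minimal sketch of the truthiness and iteration behavior that tends to hide bugs:

```python
# Strings are iterables of single-character strings, so a function
# expecting a list of words silently accepts a single word instead:
assert list("hello") == ["h", "e", "l", "l", "o"]

# Truthiness: only emptiness/zero/None are falsy, so a flag read from
# config as the string "no" still evaluates as True.
assert bool(None) is False
assert bool("no") is True   # non-empty string, regardless of content
assert bool(0) is False
assert bool("") is False
```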
A lot of this mirrors C's infuriating implicit type conversions.
If you have to turn off a crucial language feature to increase performance, I'm not sure the language can be considered "scalable".
I suppose that quote could use better wording.
Almost every language can handle typical performance requirements. But when people say slow or unscalable it's almost always in relation to other languages.
Performance has been a concern, but programmatic load balancing has been around for decades. When I worked at MSN/Linkexchange back in the late '90s, we never really worried heavily about the performance of the language we used (Perl) because we could scale out servers. Perl isn't that speedy, but it sure was easy to develop in. Before I left, we were serving a billion and a half clicks per month with 8-10 machines in a single datacenter, all in Perl.
I've had no issues in the migration to 3.
I did some building with it around 3.3, started committing to it around 3.4, and since 3.5 I build everything in it.
I don't have the performance challenges Instagram has, but my experience with application development in general is that 98% of performance challenges can be solved with (not-too) clever engineering. This applies to projects in every language.
There are a vanishingly small number of scenarios where the performance of your runtime actually dictates your performance limits.
If you're working on something and are worried about Python's performance, or which Python to use, don't. Use 3, optimize later.
For example, their codebase had ambiguity between bytestrings and unicode strings. Python 3 deliberately prevents you from mixing the two, to resolve a big footgun from Python 2.
The right fix here is to be consistent in your use of strings. Sometimes that is tricky because of how third party libraries have decided to implement their 2/3 compatibility, but it helps prevent shooting yourself in the foot with unicode bugs down the line.
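A minimal sketch of what "being consistent" looks like in Python 3: decode at the boundary once, and let the language's refusal to mix the two types catch mistakes:

```python
# In Python 3, bytes and str are distinct types that never mix implicitly.
data = b"caf\xc3\xa9"        # raw bytes, e.g. from a socket or file
text = data.decode("utf-8")  # decode explicitly at the boundary
assert text == "café"

# Mixing them raises instead of silently producing mojibake,
# which is exactly the footgun Python 2 allowed:
try:
    _ = "prefix-" + data
except TypeError:
    pass  # str + bytes is an error in Python 3
```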
Instagram did not do this. They created utility functions to force their data into the format they wanted at the point of use. In other places they used tuple() to make sure that map calls with side effects were fully iterated.
In short, they had bad Python2 code and now have bad Python3 code. Sometimes, at large scale, it's your only choice. But for smaller companies looking at this, it's a bad idea to follow suit. You're setting a precedent in your code that it's okay to make the same mistakes that Python3 tried to prevent.
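The tuple() workaround mentioned above exists because map() became lazy in Python 3, so side effects don't run until the iterator is consumed. A minimal illustration:

```python
# Python 3's map() is lazy: nothing executes until you iterate it.
logged = []
m = map(logged.append, [1, 2, 3])
assert logged == []      # side effects haven't run yet

tuple(m)                 # forcing iteration, as described above
assert logged == [1, 2, 3]
```

The cleaner fix is usually an explicit for loop, which makes the side effect obvious instead of hiding it inside a map call.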
Neither does, say, Haskell's runtime.
I've been using TypeScript a lot recently and it has had a big impact on reducing bugs and making refactoring easier so something similar for Python looks great.
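The Python analogue is the type hints added in 3.5 (PEP 484), which external checkers like mypy can verify. A minimal sketch (find_user is a made-up example function):

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    """Annotations let a checker like mypy flag call sites that
    pass the wrong type or forget to handle the None case."""
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

assert find_user(1) == "alice"
assert find_user(99) is None   # callers must handle the Optional
```

Like TypeScript, the annotations are erased at runtime; all the checking happens ahead of time in tooling.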
Description: https://us.pycon.org/2017/schedule/presentation/678/
I don't know why this stands out to me in particular, but the 10-year commitment is definitely a big decision. I suppose I've never had to make a similar decision, so perhaps this is more common than I think.
It's a good reason to be careful: avoid the latest and greatest, and go for tech that has (some) track record of being maintained and used for a few years.