So all the libraries they build up will have these holes in them that make it harder to do things like treat two isomorphic objects as the same -- something mathematicians do implicitly on a daily basis.
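To make the friction concrete, here is a minimal Lean 4 sketch. The `Iso` structure is a hypothetical stand-in for Mathlib's `Equiv`, just so the example is self-contained; the point is that propositional equality rewrites for free while an isomorphism does not.

```lean
-- A bare "isomorphism of types": mutually inverse functions.
-- (Hypothetical name; Mathlib's `Equiv` plays this role in practice.)
structure Iso (A B : Type) where
  to    : A → B
  inv   : B → A
  left  : ∀ a, inv (to a) = a
  right : ∀ b, to (inv b) = b

-- With a propositional equality, any property transports for free:
example (P : Type → Prop) (A B : Type) (h : A = B) (hA : P A) : P B :=
  h ▸ hA

-- With only an isomorphism there is no such rewrite; the analogous
-- statement is not provable in general for an arbitrary P:
--   example (P : Type → Prop) (A B : Type) (e : Iso A B) (hA : P A) : P B := ...
-- Instead, libraries supply hand-written transfer lemmas case by case
-- (e.g. "a group isomorphic to an abelian group is abelian").
```

Mathematicians silently apply the second, unprovable version every day; in the proof assistant, each such transfer has to be built by hand.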
You can probably get a long way in Lean with the soundness axiom. But what I don't know is what happens when you build up a decade of software libraries in a system that adds a lot of manual and cognitive overhead when you want to use them.
My gut instinct is that by cutting corners now, they're creating a form of technical debt that could escalate quickly and force mathematicians to reformulate their tools in a nicer way.
This actually happens continuously throughout the history of math. Sometimes it leads to errors, as with the so-called Italian school of algebraic geometry. Sometimes it just leads to pauses while we go back and figure out what the objects we're working with actually are before we can make more progress.
Take all this with a grain of salt: I haven't worked with Lean so I don't know how much this crops up in practice, and I don't know how large Lean libraries are at this point. This is all gut feeling.
But my sense is that what you really want is to get the foundations right, then build abstraction layers on top of those foundations that are nicer to use. Lean instead builds a "good enough" foundation, and historically the gap between what is actually correct and what is seen as "good enough" tends to show itself in math sooner or later. If you are working in natural language, you can forget there was ever a problem as soon as a fix is found. If you're working in software, though, you'll likely need a major rewrite or refactoring.