Here is an oversimplified model to illustrate my basic point:
A dependency introduces some constant amount of risk (d) that does not vary with the size of the dependency. Every line of code you write yourself also introduces a much smaller constant amount of risk (y).
If you introduce a separate dependency for every line of code in a 1000-line project, your risk is 1000d.
If you can pull in someone else's code for the whole thing and don't need to write any code yourself, your risk is d.
If 200 lines of your code can be replaced with an external library, your risk is d + 800y.
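The three scenarios above can be sketched in a few lines of code. The values of d and y here are hypothetical (the argument is about their ratio, not the numbers):

```python
# Toy model: each dependency adds a constant risk d; each line of code
# you write yourself adds a much smaller constant risk y.
# d and y are assumed values for illustration only.
d = 10.0   # risk per dependency
y = 0.1    # risk per line of your own code

def risk(dependencies, own_lines):
    return dependencies * d + own_lines * y

print(risk(1000, 0))   # one dependency per line: 1000d = 10000.0
print(risk(1, 0))      # one dependency covers everything: d = 10.0
print(risk(1, 800))    # one library replaces 200 lines: d + 800y = 90.0
```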
I think the real disagreement here is over the value of d. My experience leads me to put the value of d pretty high relative to y, so to me 1000d is the worst possible case. If someone sees d as equal to y, then they'd see dependencies as no problem whatsoever.
(Obviously in reality the risk of a dependency is not truly constant - it probably scales slowly with size, something like d plus 0.1y or 0.01y per line of the dependency, since a 10-line dependency is less risky than a 1000-line dependency. Hopefully my point still stands.)
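That size-dependent variant can be sketched as follows, where eps is an assumed per-line scaling factor for dependency code (all numbers hypothetical; only the ordering matters):

```python
# Refined toy model: a dependency's risk is a fixed cost d plus a small
# per-line term eps * y for each line the dependency contains.
# d, y, and eps are assumed values for illustration only.
def risk_with_size(dep_line_counts, own_lines, d=10.0, y=0.1, eps=0.01):
    dep_risk = sum(d + eps * n * y for n in dep_line_counts)
    return dep_risk + own_lines * y

# A 10-line dependency is now slightly cheaper than a 1000-line one,
# but both are dominated by the fixed cost d.
print(risk_with_size([10], 0))     # d + 0.01 * 10 * y = 10.01
print(risk_with_size([1000], 0))   # d + 0.01 * 1000 * y = 11.0
```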