Couple of notes. First, have you seen what real-world large-scale OO looks like? Dude, it ain't pretty. We're swimming in projects that are staffed at 10x or 20x the number of coders they probably really need, and a lot of time is spent in support activities.
Second, maybe the real question here isn't one of scale, it's how the model falls apart under strain. A well-encapsulated system should offer you a bit more defensive-programming protection than an FP one. But if you're using composable executables and testing at the O/S level? Meh.
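To make the encapsulation point concrete, here's a minimal, hypothetical sketch (names like `Account` are my own invention, not from any real system): the defensive check lives in one place, behind the interface, so every caller gets it for free.

```python
# Hypothetical example: an encapsulated Account guards its own invariant
# (balance never goes negative), so callers can't corrupt it directly.
class Account:
    def __init__(self, balance=0):
        self._balance = balance  # "private" by convention

    def withdraw(self, amount):
        # The defensive check is centralized behind the interface.
        if amount <= 0 or amount > self._balance:
            raise ValueError("invalid withdrawal")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance


acct = Account(100)
acct.withdraw(30)
print(acct.balance)  # 70
```

In a bare-record FP style you'd instead rely on every call site (or the type system) to enforce that invariant, which is where the "falls apart under strain" question gets interesting.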
I know why we went to OO. And let's not forget OOAD, a beautiful way of joining analysis and design of the problem with the solution itself. I'm of a mind that OOAD might live on even once the world moves to mostly FP.
I think maybe OOP was a stopgap: a place to try to build a firewall against the encroaching overstaffing associated with Mythical Man-Month management. Now that programming is a commodity, however, we're seeing diminishing returns. Don't know.
I am much less convinced of the "we need OO for large-scale projects" argument now than I was two years ago, and I expect that trend to continue. We might be able to solve the scaling problem with things like cyclomatic complexity limits on executables, or DSL standards. Not sure of the answer here, but I think we're going to find out.