You think so? I think if he thought hard takeoff was a real risk, he would be devoted to actually keeping their research "open" in the original sense of the word, as it was when Musk was in charge. On the hard takeoff theory, takeoff is more likely to occur because of "hardware overhangs": big organizations accrue lots of compute infrastructure over time, while AI capability arrives in sudden jumps, so all that infrastructure is just sitting there waiting to be eaten up by a hungry AI. Those sudden jumps are more likely if AI research is generally conducted in secret, with leaks or espionage causing knowledge to spread in staggered bursts. This is at least my understanding.
I'm not up on all the epicycles in hard takeoff theory, so I'd be happy to hear more from you or other people who have followed it into more fantastical territory.