> We can deal with the implications of dangerous AI if and when it becomes a problem.
What makes you assume that? We haven't yet been able to deal with the repercussions of globalised social media; we don't even fully understand its impacts. Nor have we dealt with the impact of climate change.
AI seems like a far more encompassing and transformative technology than social media, so what makes you assume we will be able to deal with its problems in a timely fashion when they inevitably occur? We may well not be able to, and, as usual, unintended consequences will follow.
> Fear of the unknown shouldn’t be allowed to stop scientific progress.
Scientific progress at any cost, pursued irresponsibly with regard to its major consequences, shouldn't be allowed either. It needs to be a balancing act; pushing forward without even assessing the risks is a fool's game.