There is a political-economy aspect as well as a technical aspect to this, and both present inherent issues. Even if we could address the former, say by regime change, the latter remains: the domain is technical and cognitively demanding. Practitioners will therefore generally sound sane and rational (they are smart people, but that is no guarantee of anything beyond technical ability), while non-technical policy types (like most of the remaining board members at OpenAI) are practically compelled to take policy positions based either on abstract models (which may be incorrect) or on after-the-fact reaction to observed mechanisms (which may come too late).
The thought occurs that, just as humanity is really not ready (we remain concerned) to live with WMD technologies, it is quite possible we have again stumbled on a technology that taxes our ethical, moral, educational, political, and economic understanding. We would be far less concerned if we were part of a civilization of generally thoughtful and responsible specimens, but we are not. This is a cynical appraisal of the situation, I realize, but the tl;dr is: it is a systemic problem.