Is this not something of an oxymoron? If there exists an AI that is more intelligent than humans, how could we mere mortals hope to control it? And if we constrain it so that it cannot act in ways that harm humans, can we really be said to have created superintelligence?
It seems to me that the only way to achieve superalignment is not to create superintelligence in the first place, if that is even within our control.