It is a fundamental problem. Consider the following:
- In 2-3 years, it will be cheap and powerful enough for enormous, state-sponsored agentic systems to monitor every single camera and satellite feed at once, globally. It will be the most intense state surveillance technology the world has seen. Consider that the Stasi needed hordes of informants and people in vans parked outside your house, and Patriot Act surveillance had 2000s technology.
- We already have censorship and state values baked into Chinese models (and have for a while; ask Qwen about "sensitive" issues like Taiwan).
- I think you will see more and more governments putting their finger on the scale and exerting greater control over alignment. They view it as existential, and too risky to trust Silicon Valley nerds not to screw up the technology for what they want to use it for, which is violence (war, domestic spying, and policing).
- We're in a golden age where things have not gotten too bad. But we're already seeing Palantir do this in Ukraine, trying to get AI to work for applications like drone warfare, with what they claim is mixed success.
- The technical problem of alignment is conditioned on one or more value systems (e.g., people work on conditionally aligning models to multiple value systems, inferring which applies from user behavior). That does not remove the ugliness of being forced to push the model towards value systems that are contradictory and arguably unethical.