It seems the alignment is to make you believe you are an idiot: what you said and knew has been wrong all these years, and you should trust the machine to tell you what is real.
It's hard to convince you you're wrong when the one trying is a self-affirmed idiot.
I really don't see LLMs doing benign things; it's a misinformation deluge.
Exacerbating the problem is the common idea that the AI is somehow infallible, while the human could only have pseudo-knowledge, pieced together from random cherries gathered across the internet.
LLMs have become trolls, trolling for interaction worth training on.