> I just think the language of motivation and desire makes it harder to see the risks, because it introduces irrelevant questions into the conversation like "how can machines want something?"
Yet discourse on existential AI risk is predicated on something like a "goal" (e.g. maximising paperclips). Notions like "goal" similarly make it harder to see clearly what we are actually discussing.
> the term "intelligence" seems pretty neutral
Hmm, I'm not convinced. It seems like an extremely loaded term to me.