It's way worse than that.
First, we interact with LLMs through private conversation, and we are used to having private conversations with humans we trust. Some of that trust will be transferred to LLMs.
Second, LLMs have vastly more "mental" power to build a long-term mental model of us as we interact with them. Which means they can choose their words with extreme precision to trigger an emotion or a certain reaction.
Combine the two and the potential for manipulation, suggestion, and preference alteration is through the roof.