I am a high decoupler. I generalize patterns like "analogy to self, where the self is human" to "analogy to self, where the self is some category X," because abstractions of that kind have reach beyond the confines of what I have previously seen. So when you insist on sticking with just humans, I'm not with you anymore: your models seem highly coupled. I find that a bad property, one I seek to avoid and consider incorrect.
In my model, when you talk about anthropomorphism as though it were purely a negative, I've noticed things a coupled model doesn't predict: not only can intentional "error" via anthropomorphism be correct, but your scare quotes around rational, meant to denigrate that idea, could not be more wrong. There is a hard-to-vary causal explanation of why we ought to anthropomorphize, and the mechanism it describes is intimately tied not to being irrational, but to being more rational.
I realize this sounds insane, but the math and the empirical investigation support it, which is why I think it is worth sharing with you. So I'm trying to share something I expect to be very surprising to you, perhaps to the point of seeming nonsensical.
Would you like a link to an interesting technical talk by a NIPS best-paper-award-winning researcher that delves into this subject, and whose work advanced the state of the art in both game theory and natural language applied to strategic problems in the context of chat agents? Or do you not care whether anthropomorphism, even when applied where the usual standard of analogical accuracy says it shouldn't be, might be accurate beyond the level you thought it was?
I am not trying to disagree with you. I'm trying to talk to you about something interesting.