> My argument was and is that specifically and narrowly in conversations about AI and its impacts on the livelihoods of people, calling for empathy from others is deliberate emotional manipulation with the objective of cudgeling people into compliance. It's also often a rallying cry, used to de-legitimize the positions of their counterparts as cold, uncaring, unfeeling, and devoid of humanity. Common enough ad-hominem material, though obviously fully emotionally justified.
Sorry, no. You're conflating two very different things. Saying it's unethical to entirely disregard people's livelihoods when deploying new technology doesn't even resemble an ad-hominem attack. Saying that anybody who holds an opinion that doesn't match yours is arguing in bad faith IS an ad-hominem. For example: equating advocacy for considering people's livelihoods with deliberate emotional manipulation, full stop. It's pretty ironic that you're accusing them of not respecting opposing opinions.
> The only broader comment I offered was that I have worked with a number of people who operate that way on a daily basis. I understand why. It helps them hit their professional goals, ensuring their livelihood and those of their families. Many of them see a resistance to these tools as a lack of respect for their skills, abilities, hard work, or reasonable economic interests.
I don't know anything about your daily life, but I do know that projecting the motivations of the difficult people in your life onto others you find similarly difficult is pretty intellectually lazy.
> As to personal respect - how prepared are you to see respect in someone saying they understand but disagree?
I do it all the time. I'm always the one trying to get people to see things from someone else's perspective. On this topic specifically, I strongly argue with people who think these technologies need to be locked down and left solely to huge companies in order to protect people's careers. I argue with developers who don't think they need to consider anything beyond technological advancement when deploying technology. Both opinions completely fail to acknowledge other people's needs and the consequences of protecting one's own self-interest.
> I can both understand the impact of my words on other people and refuse to shift my position because of their emotional response.
Ok, great. But unless you can show that's true of everybody else, it doesn't address what I initially said.
> Many others may not react as well as you to a compassionate, empathic, kind, caring, and understanding essay arguing that creatives feeling threatened by AI should seriously consider a change of career.
You're right. Many others might not react well to a respectfully worded essay that said those things. You'll also notice that I never, ever said that people who advocate for balancing technological advancement against the needs of the people it replaces are fundamentally unethical. At all.