I would take the view that their safety team is maybe focused on the wrong things (the former) and has been captured by extremists instead of pragmatists, but that's like just my opinion man. I'll use Anthropic and Venice until I notice less steering in my threads, personally. A GPT that constantly eggs me on isn't a thought-partner, it's a dopamine device. If I'm going to outsource my thinking to an LLM, I need something I trust won't put its own spin on things or gas me up into taking action I never intended without thinking critically first.
If they were and did, then they certainly bear responsibility for what happened.
I'm being selfish here! I am confident that no AI model will convince me to harm myself, and I don't want the models I use to be hamstrung.