Also, people have been commenting on the assumption that Google doesn't want to offend its users or non-users, but it also doesn't want to offend its own staff. If you run a porn company, you need to hire people who are okay with that from the start.
The idea that most people use any coherent ethical framework (even something as high-level and nearly content-free as Copenhagen), much less a particular coherent ethical framework, is, well, not well supported by the evidence.
> require that all negative outcomes of a thing X become yours if you interact with X. It is not sensible to interact with high negativity things unless you are single-issue.
The conclusion in the final sentence only makes sense if you use "interact" in a way that misdescribes the Copenhagen interpretation of ethics, because the original description is only correct if observation itself counts as an interaction. By the time you have noted that a thing is "high-negativity", you have already observed it and, under the Copenhagen interpretation, acquired responsibility for its continuation; you cannot avoid that by choosing not to interact once you have observed it.
I don't have any evidence, but in my personal experience it feels correct, at least on the internet.
People seem to have a "you touch it, you take responsibility for it" mindset regarding ethical issues. I think it's pretty reasonable to assume that Google execs are thinking "if anything bad happens because of AI, we'll be blamed for it".