>We're happy with this but not happy with the other?
Yes?! What's hard to understand about the difference between:
1) An application using AI to scan photographs to provide categorization benefits to the owner/operator/user
2) An application using AI to scan photographs to provide accusation and punishment to the owner/operator/user
...especially when feature #1 can be turned off, but feature #2 cannot be turned off?
iCloud mischaracterizing a baby picture as a "dog" might cause some dinner table chuckles, but it's never going to cause meaningful harm. iCloud mischaracterizing a baby picture as a child abuse image can VERY plausibly cause extremely severe harm.
As a matter of principle, my devices shouldn't be designed to act, against my will, as an active informant for the authorities against me. The point at which they do is the point at which I join the flannel-and-wooly-beard set out in the mountains, eschewing technology and living "off the grid".