Using that approach they could add multiple redundancies, and they wouldn't need to look at your stuff on the cloud at all before getting multiple positive matches. Even then, the first step is a human reviewer checking whether it's an actual match or a false positive.
Somehow this was framed as a huge invasion of privacy, with people competing over who could misunderstand the very simple premise the most.
I'm fairly sure most of the worry was that such a system could very easily be repurposed to do the same to any photo.
And people felt like their phone wasn't theirs and that it could snitch on them. We know that you truly do not own your phone, but most people do not view it that way.
Sure, it is technically better than doing that check on a server, but the general public does not currently view it that way.
Personally, I do not like the system because you would be unable to escape it if it started scanning local photos (which I feel is only a matter of time). With Google Drive and the like, you can opt out simply by not using them.
In this case, the steelman is that Apple has turned a capability barrier (if your scanning is on the cloud, you simply cannot scan local photos) into a policy barrier (now you can scan all photos, there's just a flag in the software which means you don't do so.)
This is not the case. People are guessing about how it works and getting it wrong. The device doesn’t know if there’s a match or not. The logic is not “if there’s a match, tell Apple”, the logic is “attach a safety voucher to every iCloud upload and let Apple figure it out on the server”. You can’t flip a switch and just run it against all photos on the device – the iCloud upload is a part of the design. If Apple wanted to scan all the photos on your device, they would have picked a different design for this. If Apple change their minds and want to do this in the future, they need to redesign how this works, it’s not just a policy decision.
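The threshold idea behind those safety vouchers can be illustrated with Shamir secret sharing: each upload carries a share of a device key, and the server can only recover the key (and thus see anything) once it holds at least the threshold number of shares from matching photos. This is a minimal sketch of the general technique, not Apple's actual protocol; the prime, the key value, the threshold, and the way "matching" vouchers are selected here are all illustrative assumptions (the real system gates share availability through private set intersection, which is omitted).

```python
import random

random.seed(0)         # deterministic for the demo
PRIME = 2**61 - 1      # prime field for the arithmetic (illustrative choice)

def make_shares(secret: int, threshold: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n Shamir shares; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Device side: every upload carries one share of a device key in its voucher.
device_key = 123456789  # hypothetical key, stands in for the real decryption key
vouchers = make_shares(device_key, threshold=3, n=10)

# Server side: only shares from "matching" photos are usable (via PSI in the
# real system; here we simply pick which vouchers matched).
assert reconstruct(vouchers[:2]) != device_key  # below threshold: key stays hidden
assert reconstruct(vouchers[:3]) == device_key  # at threshold: key recoverable
```

The point of the construction is that below the threshold the server learns nothing at all: any set of fewer than `threshold` shares is consistent with every possible key, so there is no partial leak to accumulate.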