"Apple" wasn't scanning your phone, neither was there a "backdoor".
If you'd had iCloud Photos upload enabled (meaning you'd already be uploading all your photos to Apple's servers, a place where they could scan ALL of your media anyway), the phone would've downloaded a set of hashes of KNOWN and HUMAN-VERIFIED photos and videos of sexual abuse material. [1]
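For the curious, here's roughly the shape of that flow, deliberately flattened. Two big things this sketch omits: the real matching ran under private set intersection (so neither the phone nor Apple learned individual results), and perceptual_hash below is a made-up stand-in for Apple's NeuralHash:

    KNOWN_HASHES: set[int] = set()  # the downloaded database of hashes of verified material

    def perceptual_hash(photo: bytes) -> int:
        return hash(photo)  # placeholder only; NeuralHash is a neural network

    def process_photo(photo: bytes, icloud_photos_enabled: bool):
        if not icloud_photos_enabled:
            return None  # no iCloud upload = no check, the photo is never even hashed
        matched = perceptual_hash(photo) in KNOWN_HASHES
        # the match result travels with the upload as an encrypted "safety voucher"
        return {"photo": photo, "matched": matched}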
After THIRTY matches of known and checked CSAM (Apple's stated initial threshold), with the hash comparison done 100% on-device and nothing readable leaving the phone below that threshold, a "reduced-quality copy" would've been sent to a human for verification. If someone had been sending you hashbombs of intentional false matches, or an innocuous pic matched because of some mathematical anomaly, the actual human would notice this instantly and no action would be taken.
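And "sent to a human only after N matches" wasn't a policy promise, it was cryptography: per Apple's spec, each photo's voucher carried a share of a per-account key under threshold secret sharing, so the server literally could not decrypt anything until it held enough shares. A toy Shamir-style version of the idea (not Apple's exact construction, which is more involved):

    import secrets

    PRIME = 2**127 - 1  # field modulus (a Mersenne prime)

    def split_secret(secret: int, threshold: int, num_shares: int):
        """Shares are points on a random degree-(threshold-1) polynomial with f(0) = secret."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):  # Horner evaluation
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0."""
        total = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    account_key = secrets.randbelow(PRIME)  # encrypts the voucher payloads
    THRESHOLD = 30                          # Apple's stated initial threshold
    shares = split_secret(account_key, THRESHOLD, 1000)  # one share per matching photo

    assert reconstruct(shares[:THRESHOLD]) == account_key      # enough matches: key recovered
    assert reconstruct(shares[:THRESHOLD - 1]) != account_key  # one short: random-looking noise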
...but I still think I was the only HNer who actually read Apple's spec instead of going with the Twitter hot-takes, so I'm tilting at windmills over here.
Yes, there is always the risk that an authoritarian government could force Apple to insert checks for stuff other than CSAM into the downloaded database. But the exact same risk exists when you upload stuff to the cloud anyway, and at an even bigger scale (see the point above: the local checks weren't enabled unless iCloud sync was).
[1] It wasn't a SHA-1-style cryptographic hash, where changing a single bit in the source would produce a completely different hash; the people designing this were actually competent and used a perceptual hash (NeuralHash) built to survive resizing and re-encoding.
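You can see the design goal with even the dumbest perceptual hash (an 8x8 "average hash" below; NeuralHash is a learned, far more robust version of the same idea):

    import hashlib

    def average_hash(pixels):
        """64-bit hash of an 8x8 grayscale image: bit i is set when pixel i is above the mean."""
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    img = [(i * 37) % 256 for i in range(64)]  # fake 8x8 image
    tweaked = img[:]
    tweaked[0] += 1  # "change a single bit in the source"

    # cryptographic hash: one changed pixel, completely different digest
    print(hashlib.sha1(bytes(img)).hexdigest())
    print(hashlib.sha1(bytes(tweaked)).hexdigest())

    # perceptual hash: one changed pixel, same hash
    print(average_hash(img) == average_hash(tweaked))  # True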