What should they do if the neuralhash matches CSAM? Should they trust that the neuralhash match actually corresponds to real CSAM and report you to the police? That's clearly wrong, since there are going to be false positives, by design.
The whole point of this is to act as a probabilistic filter, so that they only need to run the real CSAM scanner on a subset of files.
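To make the false-positive point concrete, here's a toy Python simulation of a two-stage filter; the 0.1% rate and all the names are made up for illustration, not Apple's actual numbers or API:

    import random

    random.seed(0)

    def cheap_hash_match(is_actually_csam: bool) -> bool:
        # Cheap first-stage check: fires on every true match, but also on a
        # small fraction of innocent files -- false positives by design.
        if is_actually_csam:
            return True
        return random.random() < 0.001  # made-up 0.1% false-positive rate

    def authoritative_check(is_actually_csam: bool) -> bool:
        # Expensive second stage (server-side scan / human review): ground truth.
        return is_actually_csam

    library = [False] * 1_000_000  # a million innocent photos, zero real matches
    flagged = [p for p in library if cheap_hash_match(p)]
    confirmed = [p for p in flagged if authoritative_check(p)]

    print("flagged by the cheap filter:", len(flagged))        # roughly a thousand
    print("confirmed after the second pass:", len(confirmed))  # 0

The cheap filter flags around a thousand innocent files out of a million, which is exactly why a hash match alone can't be grounds for a report: the expensive check still has to run, it just only has to run on the flagged subset.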
You can fall into two camps:
a) apple should never ever scan the private images I upload to their cloud.
b) apple can scan the images once they reach their servers.
If you pick (a), then clearly neuralhash shouldn't exist, and you can argue against it on the grounds that you want total privacy. But you have to be consistent: every other cloud service that scans images server-side should receive the same critique.
If you pick (b), then you must recognize that this additional machinery doesn't extend their reach into your private data; quite the opposite, it allows them to implement e2e encryption for 99.9% of your content. You may still argue that it's unnecessary, confusing, and spooky, and be afraid of the slippery-slope precedent it sets for other uses.
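To see why (b) can actually shrink what the server can read, here's a toy Python sketch; the "voucher" mechanism and every name in it are simplified stand-ins for illustration, not Apple's actual protocol:

    from dataclasses import dataclass
    from typing import Optional

    def encrypt(data: bytes, key: bytes) -> bytes:
        # Stand-in for real client-side encryption; the key never leaves the device.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def make_voucher(image: bytes) -> bytes:
        # Stand-in for the server-readable evidence attached only to matched files.
        return b"voucher:" + image[:16]

    @dataclass
    class Upload:
        ciphertext: bytes         # server stores this but cannot decrypt it
        voucher: Optional[bytes]  # present only when the on-device hash matched

    def prepare_upload(image: bytes, user_key: bytes, hash_matched: bool) -> Upload:
        return Upload(
            ciphertext=encrypt(image, user_key),
            voucher=make_voucher(image) if hash_matched else None,
        )

    # For the overwhelming majority of files nothing matches, so the server
    # only ever receives an opaque blob it has no way to open.
    u = prepare_upload(b"holiday photo", user_key=b"device-secret", hash_matched=False)
    print(u.voucher)  # None

In this picture the server never holds a key to the unmatched 99.9% of content, which is the trade being offered: on-device matching in exchange for server-side blindness everywhere else.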