There are trade-offs inherent to the space.
There are a handful of hashing methods described in the literature, and each has parameters that can be tuned, again trading off properties like efficiency and accuracy.
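To make this concrete, here is a minimal sketch of one such method, a difference hash (dHash), which derives bits from brightness gradients between adjacent pixels. This is an illustration of the general technique, not PhotoDNA's algorithm (which is proprietary); it assumes the image has already been decoded and downsampled to a small grayscale grid by an image library.

```python
# Sketch of a difference hash (dHash), one common perceptual hashing method.
# Assumes `pixels` is already a small grayscale 2D grid; a real pipeline
# would decode and resize the image first (e.g. to 8 rows x 9 columns
# for a 64-bit hash).

def dhash(pixels):
    """Compute a difference hash: one bit per adjacent-pixel pair,
    set when brightness increases left to right."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes -- the usual
    similarity measure for hashes like these."""
    return bin(a ^ b).count("1")
```

The tunable parameters show up even here: a larger grid gives a longer, more discriminating hash at the cost of storage and comparison time, and the Hamming-distance threshold used downstream trades false positives against false negatives.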
Then, when it comes to efficiently searching through hashes for exact and fuzzy matches, there are common algorithms and data structures used across perceptual hashing systems, each with its own trade-offs, implementation details, and tunable parameters.
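One such data structure is the BK-tree, often used to find all stored hashes within a Hamming-distance threshold without scanning the whole database. The sketch below is illustrative of the technique, not drawn from any particular system; the names are mine.

```python
# Sketch of a BK-tree over integer hashes under Hamming distance.
# Each node stores a hash and a dict mapping distance -> child subtree;
# the metric's triangle inequality lets queries prune whole subtrees.

def hamming(a, b):
    return bin(a ^ b).count("1")

class BKTree:
    def __init__(self):
        self.root = None  # (hash, {distance: child_node})

    def add(self, h):
        if self.root is None:
            self.root = (h, {})
            return
        node = self.root
        while True:
            d = hamming(h, node[0])
            if d == 0:
                return  # already stored
            child = node[1].get(d)
            if child is None:
                node[1][d] = (h, {})
                return
            node = child

    def query(self, h, threshold):
        """Return all stored hashes within `threshold` bits of `h`."""
        results = []
        stack = [self.root] if self.root else []
        while stack:
            node = stack.pop()
            d = hamming(h, node[0])
            if d <= threshold:
                results.append(node[0])
            # Only children at distances in [d - t, d + t] can contain
            # matches, by the triangle inequality.
            for dist, child in node[1].items():
                if d - threshold <= dist <= d + threshold:
                    stack.append(child)
        return results
```

The tunable parameters appear again: the query threshold directly sets how "fuzzy" a match may be, and the pruning efficiency degrades as that threshold grows, which is exactly the kind of efficiency/accuracy trade-off mentioned above.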
If there's an issue with PhotoDNA that doesn't come down to a poor implementation, there's a good chance that other systems have fallen into the same pitfalls. And if it does come down to a poor implementation, it would be prudent for operators of other systems to make sure theirs don't repeat the same mistakes.