Then that means he has it right and you might have it backwards: the problem is the massive number of failure states for which privacy is a protection.
Imagine being in a trial against Google, knowing that they have access to information on all the participants and their family and friends: internet searches, phone calls, purchases, browser history, location data, voice recordings, plus all the analytic tools to extrapolate which details could be used to manipulate people and sell an argument. How could such a trial ever be conducted fairly?
Would people still want privacy?
I believe the answer is yes, simply because there will always be the fear that things may change. It only takes one imperfect law or one imperfect law enforcer for privacy to be the thing that protects you. There will never be a point where we can give up privacy, even if everything else is perfect.
Also, arguably, if laws were perfect, then whether or not people had privacy would make no difference, so we might as well have it.
I don't see law and privacy as competing for priority; I think they're complementary. Part of striving for better law, better law-making and better enforcement is understanding that law is not always perfect, and that improving it is a gradual, iterative process. Privacy - and other legal protections - are failsafes built into the system of law to mitigate the ways in which we anticipate it may fail.
I didn't have this in mind when I used the term "failure states" in my earlier post, but the choice of language comes from the way we build safety-critical systems. The primary goal is to build a product that improves lives, but part of that process is identifying ways the product could malfunction or be misused, and applying mitigations to prevent those negative outcomes. Ideally, in a perfect product used expertly, they're just extra features that never get used.
The reason they must exist is that our analysis can't indulge the hubris of assuming perfect design and implementation 100% of the time.