Imagine being in a trial against Google, knowing that they have access to information about all the participants and their families and friends: internet searches, phone calls, purchases, browser history, location data, voice recordings, plus all the analytic tools to extrapolate which details can be used to manipulate people and sell an argument. How could such a trial ever be conducted fairly?
Would people still want privacy?
I believe that the answer is yes, simply because there will always be the fear that things may change. It takes only one imperfect law or one imperfect enforcer, and then you need privacy to protect you. There will never be a point where we can give up privacy, even if everything else is perfect.
Also, arguably, if the laws were perfect, then whether or not people have privacy would make no difference, so we might as well have it.
Privacy seems to be a mitigation strategy, an insurance for the future, as 'jib said. It may seem like it's in your best interest to pursue it. But it is also worth considering whether this isn't a coordination problem - one where the locally best choice for an individual is a very bad one when aggregated over an entire society.
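To make the "locally best, globally bad" structure concrete, here is a toy payoff sketch in Python (the payoff numbers are purely illustrative assumptions, not anything from this thread):

    # Toy model: each of n players either "defects" (takes the locally
    # best option) or cooperates. Defecting adds 2 to your own payoff
    # but costs every player 1, so it always pays off individually.
    def payoff(i_defect: bool, total_defectors: int) -> int:
        return (2 if i_defect else 0) - total_defectors

    n = 100
    print(payoff(False, 0))   # nobody defects: everyone gets 0
    print(payoff(True, 1))    # only you defect: you get +1, "smart" locally
    print(payoff(True, n))    # everyone reasons the same way: all get -98

Defecting dominates no matter what the others do, yet a society of defectors ends up far worse off than a society of cooperators - which is exactly why these problems resist purely individual solutions.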
Anyway, in the real world we'll get neither perfect privacy nor perfect transparency. The question is in which direction we should move from the status quo. It's a multidimensional problem; there are many components to privacy that could go either way. Personally, I'm in favour of everyone knowing more about everything, and of reducing both the means and the incentives to lie to one another. "Everyone" includes the government too. I want to trust them. But they do have to earn it, and trust is in short supply nowadays.
A free market would pretty much eliminate the need for laws. What's wrong to you might not be wrong to me. Therefore, you might refuse to help or trade with a person who does X, while I might have no problem with it and might even help him.
Moreover, do you have any idea what a world with total privacy would look like? Even without total privacy, how do you even decide where to start (what to make public, what to keep private)? Surely, people will live in fear and keep everything they can private. How would that even work? No Google, no Facebook, no phone directory; all authors will use pseudonyms, etc. Basically, no meaningful connections will be made between people, and the IoT will never see the light of day. We'll all wear masks. Evolution will stop; we'll all die.
> First, let's get rid of the idea that a central government has the authority to create and enforce laws. That's pretty much a recipe for disaster.
That's not a recipe for disaster. That's the universal solution to coordination problems that we keep arriving at throughout history. Creating and enforcing universal laws is the one thing government is actually really good at. The market, on the other hand, totally sucks at it. It has its strengths, though, and whatever the solution is, it will involve a mix of centralized and distributed responsibility.
I refer you to [0] for an in-depth discussion on coordination problems. It's a long read, but totally worth it.
[0] - http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
In addition, given your distrust of the government and love of markets, I'm assuming that all means of communication will be private. Private capital tends toward monopoly, so we can expect that in your utopia all means of communication are owned by a small group of people. This group of people has practical control over everyone's data.
This group of people might decide to release everyone's data in order to appease your desire for a post-privacy utopia, but almost certainly they'll keep it for their economic advantage.
The resulting system you are advocating is one in which a tiny percentage of people has near-universal control over the populace.
You can try to equalize things by creating an alternative system in which all data is public, say by having cameras that stream to some publicly accessible resource. But the streaming and the cameras will be the private property of organizations that have a strong incentive to keep you from doing this. And since the entire infrastructure is owned by such organizations, you would be in violation of your end user agreement to start such a project. It would make you a "liar" when you signed the Internet EULA, which, as you say in another comment, is the worst crime imaginable.
Second, while what you said is true, it would also mean that we wouldn't have a perfect law-making government then, wouldn't it? :)
And even if it were optional (because it wasn't necessary), I'd agree that we should still have it. But that's different from codifying its necessity in law (by the grace of the very people who want to deny it to you), and different from treating privacy as a panacea for protection against unchecked power.
I don't see law and privacy as competing for priority; I think they're complementary. Part of striving for better law, better law-making, and better enforcement is an understanding that law is not always perfect, and that improving law is a gradual, iterative process. Privacy - and other legal protections - are failsafes built into the system of law to mitigate the ways in which we anticipate it may fail.
I didn't have this in mind when I used the term "failure states" in my earlier post, but the choice of language comes from the way we build safety-critical systems. The primary goal is to build a product that improves lives, but part of that process is identifying ways the product could malfunction or be misused, and applying mitigations to prevent the negative outcomes. Ideally, in a perfect product used expertly, they're just extra features that never get used.
The reason they must exist is that our analysis can't indulge the hubris of assuming perfect design and implementation 100% of the time.