Not the OP, but it's pretty straightforward for most people (including the author of TFA). You need to identify what private information you collect.
Fair enough.
You need to decide what lawful basis you are using to collect that data. If you have no lawful basis, you have to stop collecting that data.
Right, but probably the most practically relevant basis for anything non-trivial will be legitimate interests, which of course involves balancing tests. Even today, just a week before this all comes into effect, there is little guidance about where regulators will find that balance.
If you are using the consent lawful basis, you need to obtain consent in an opt-in manner. You need to record what statement you showed the user and any consent you received.
But this is retrospective and stronger than the previous requirement. Even if you have always been transparent about your intentions and acquired genuine opt-in from willing users, you are now likely to be on the wrong side of the GDPR if you can't produce the exact wording that was on your web site or double opt-in email a decade ago. The most visible effect of the GDPR so far seems to be an endless stream of emails begging people to opt in to continue receiving things, even where people had almost certainly genuinely opted in already before.
For legitimate interest (which is essentially the same as the laws currently on the books) you need to be able to exclude someone's data from processing if they object.
Not quite. There also appears to be a balancing aspect here, though with some additional complications involving direct marketing, kids, and various other specific circumstances.
Take a common example of analytics for a web site. These may include personal data because of things like IP addresses or being tied to a specific account. Typically these have relatively low risk of harm for data subjects, but if for example a site deals with sensitive subject matter then that won't necessarily be the case either.
A business might have a demonstrable interest in retaining that data for a considerable period in order to protect itself against fraud, violation of its terms, or other obviously serious risks. Maybe the regulators will consider that those interests outweigh the risk to an individual's privacy if their IP address is retained for several years, at least in some cases. Maybe they will find differently if it's the web site for a drug treatment clinic than if it's an online gaming site.
Even if the subject matter isn't sensitive, where does the line get drawn? A business that offers a lot of free material on its site to attract interest from visitors might itself have a legitimate interest in seeing who is visiting the site and tracking conversion flows that could involve several channels over a period of months. This is arguably less important than protecting against something like fraud, but nevertheless the whole model that provides the free material may only be viable if the conversions are good enough. But equally, maybe it's not strictly necessary for the operation of the site and whatever services it offers for real money, so should the visitor's interest in not having their IP address floating around in someone's analytics database outweigh the interests of the site that is offering free content while asking little else in return?
That's just one simple, everyday example of the ambiguity involved here, and as far as I'm aware the regulator in my country has yet to offer any guidance in this area. Would any of the GDPR's defenders here like to give a black and white statement about this example and when the processing will or won't be legal under the new regulations?
The other lawful bases are very unlikely to show up in most organisations.
I would think the legal obligation basis (having to comply with some other law) is also likely to be quite common. It will immediately cover various personal data involved in identifying customers and recording their transactions for accounting purposes, for example. But again, since that will include the proof of location requirements for VAT purposes in some cases, how much evidence is a merchant required to keep to cover themselves on that front, and when does it cross into keeping too much under GDPR?
The other main problem is that if you want to use something other than the contract basis, you need to build something that allows the user to exercise their rights.
And once again, those rights are significantly stronger under the GDPR, particularly around erasure or objecting to processing. Setting up new systems that comply may not be too difficult, but what about legacy systems that were not unreasonable at the time but don't allow for isolated deletion of personal data? To my knowledge, there is still a lot of ambiguity around how far "erasure" actually goes, particularly regarding unstructured data such as emails or personal notes kept by staff while dealing with some issue, or potentially long-lived data in archives that are available but no longer in routine use. And then you get all the data that is built incrementally, from source control systems to blockchain, where by construction it may be difficult or impossible to selectively erase partial data in the middle.
Not to put too fine a point on it, personally I highly approve of this. I really couldn't care less if somebody's business model is destroyed because it is now too expensive to collect information they don't need to do the job.
But what if an online service's business model relies on processing profile data for purposes such as targeting ads to be viable, and regulators decide that a subject's right to object to that processing outweighs its necessity to the financial model?
It's easy to say a lot of people might not like being tracked, but on the other hand, if services like Google and Facebook all disappeared in the EU as a result of the GDPR, I'm not sure how popular it would be. There are two legitimate sides to this debate, and neither extreme is obviously correct.