Wasn't this also shown with anonymized taxi-cab data (released in NY?) many moons ago?
Knowing that this data is being tracked, would it not be possible to funnel people into doing searches in a way that would reveal things?
Directions to an out-of-state reproductive health clinic, combined with card data, would be all it takes to do serious harm to people in some states.
Defaults matter. A lot.
Anonymized data is not always anonymous, collected server side or otherwise.
There are many papers on the topic. One of the more popular examples is "Robust De-anonymization of Large Sparse Datasets" using the Netflix Prize Dataset.
>We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world’s largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber’s record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
https://www.cs.utexas.edu/~shmat/shmat_oak08netflix.pdf
This paper speaks about AOL in 2006, which I think you are referring to: https://digitalcommons.law.uw.edu/cgi/viewcontent.cgi?articl...
However, it should be noted that the AOL dataset contained a bunch of data that was identifiable by its nature (e.g. people searching for their own full names or addresses), and the dataset wasn't scrubbed of those searches. So the controversy wasn't just about re-identification; the release also included plenty of already-identifiable data.
>Anonymized data is not always anonymous
More important, in my opinion, is that data that is anonymous now is just one other dataset away from not being anonymous anymore.
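The "one dataset away" point is easy to demonstrate with the classic quasi-identifier join (the Sweeney-style linkage attack, which is simpler than the Netflix paper's similarity-based method). A minimal sketch with entirely made-up records and field names:

```python
# Hypothetical "anonymized" records: names removed, but quasi-identifiers
# (zip code, birth year, gender) kept because they seem harmless in isolation.
anon_records = [
    {"zip": "02139", "birth_year": 1945, "gender": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1972, "gender": "M", "diagnosis": "asthma"},
]

# A second, public dataset (e.g. a voter roll) carrying names
# alongside the same quasi-identifiers.
voter_roll = [
    {"name": "A. Example", "zip": "02139", "birth_year": 1945, "gender": "F"},
    {"name": "B. Example", "zip": "02139", "birth_year": 1972, "gender": "M"},
    {"name": "C. Example", "zip": "02139", "birth_year": 1972, "gender": "F"},
]

def link(anon, public, keys=("zip", "birth_year", "gender")):
    """Re-identify anonymized rows whose quasi-identifiers
    match exactly one record in the public dataset."""
    matches = {}
    for row in anon:
        hits = [p for p in public if all(p[k] == row[k] for k in keys)]
        if len(hits) == 1:  # unique match -> re-identified
            matches[hits[0]["name"]] = row
    return matches
```

With these toy records, both anonymized rows match exactly one voter each, so both diagnoses get a name attached. Nothing in the "anonymized" release changed; the second dataset did all the work.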
If anything, I think it's both safer and more accurate to start from the assumption that "anonymized" data can be de-anonymized and require evidence to refute that, rather than starting from the assumption that anonymization works and then trying to find a way to attack it. In practice, there's just not a good track record of anonymization being done effectively, and I think people should generally be skeptical of whether it is even possible in many cases.
The trouble is that we'd still have to take the word of the entity doing the data collection that they've done this properly, and it's clear that we can't take anyone's word for that.