(By using this, G--gle will hound you with impossible captchas and privacy-forward search engines will think that you're a bot.)
Most of these fingerprinting vectors come from the browser and the scourge of JS. I wonder about the footprint left by a user who doesn't use a browser, or instead uses some kind of parsing client that just fetches HTTP responses or data from API endpoints. Aside from the IP address, there are other ways to fingerprint a user based on network requests via various protocol leaks, many of which are presented here [1][2]. Are there any leak vectors missing from this list?
[0]: https://github.com/pyllyukko/user.js
[1]: https://www.whonix.org/wiki/Protocol-Leak-Protection_and_Fin...
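Even without a browser, the exact set and ordering of headers a bare HTTP client sends is itself a leak vector: each library ships distinctive defaults. A minimal sketch (the client names and header sets below are illustrative assumptions, not measured values):

```python
import hashlib

def header_fingerprint(headers):
    """Hash the ordered list of (name, value) request headers.

    The header set, its order, and default values (User-Agent, Accept,
    Accept-Encoding, ...) differ between HTTP clients, so this hash
    alone can often distinguish, say, curl from python-requests.
    """
    canonical = "\n".join(f"{name}: {value}" for name, value in headers)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical default header sets for two clients:
curl_headers = [("Host", "example.com"),
                ("User-Agent", "curl/8.5.0"),
                ("Accept", "*/*")]
requests_headers = [("Host", "example.com"),
                    ("User-Agent", "python-requests/2.31.0"),
                    ("Accept-Encoding", "gzip, deflate"),
                    ("Accept", "*/*"),
                    ("Connection", "keep-alive")]

print(header_fingerprint(curl_headers))
print(header_fingerprint(requests_headers))
```

A real server also sees TLS handshake details (cipher ordering, extensions -- the idea behind JA3 hashes), so even a client that copies a browser's headers can still stand out.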
A friend of mine had an alternative theory: by making it very very hard to track you, you have shown to be somewhat intelligent and you have shown that you know more about computers than the average Joe. Goolag, knowing you're a somewhat intelligent human, gives you the very hard captchas it needs to train.
If Google has a way to verify correctness of an answer, which they seemingly would have to - then would it matter? Smart person or smart bot - if they are given a tough question and answer correctly Google gains confidence in the answer _(or however it works haha)_.
Because really i don't think Google cares at all about bots. They just want data. And the captcha system is an impressive system to pull training data out of intelligent beings/code.
Surely if you want to hide you need to look like everyone else and hide in the crowd.
I don’t know how Firefox does it, but instead of trying to make the WebGL fingerprint the same for everyone every time, they could also try to make it unique for everyone every time, and it would have the same effect.
If every time you loaded a page your WebGL fingerprint differed then a website can’t use that to tell if it was the same browser that loaded the same page previously or any other page anywhere else previously.
(Assuming that the WebGL fingerprint anonymization was so good that it could indeed not be correlated between different fingerprints in any meaningful way.)
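The "unique every time" idea can be sketched by perturbing the rendered pixel buffer with fresh randomness before it is hashed, so the hash never repeats across loads (a toy model, assuming the fingerprint is a hash of canvas/WebGL pixel bytes):

```python
import hashlib
import random

def noisy_fingerprint(pixels, rng):
    """Flip the low bit of each pixel byte at random before hashing.

    If the noise is regenerated on every page load, the resulting hash
    differs too, so the fingerprint can't link two visits -- the
    "unique for everyone every time" strategy rather than
    "identical for everyone".
    """
    noisy = bytes(b ^ rng.getrandbits(1) for b in pixels)
    return hashlib.sha256(noisy).hexdigest()

# Hypothetical rendered-canvas bytes; two loads use fresh randomness.
pixels = bytes(range(256)) * 4
load1 = noisy_fingerprint(pixels, random.Random())
load2 = noisy_fingerprint(pixels, random.Random())
print(load1)
print(load2)
```

The catch, as the parent notes, is that the noise must be uncorrelated between loads; low-bit noise that preserves some higher-order statistic of the real GPU output could still be fingerprinted.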
You see, a while back we decided to allow writing CSS that changes the design of a website based on the size of its containing viewport. This is called "responsive design" and is very useful; however, it also means that websites rely on having a correct window size in order to display content correctly. We cannot be inconsistent about our lies: if we were to, say, lie about the screen resolution but still handle media queries faithfully, then not only can the fingerprinter see through our lie, it can use the fact that we lied as extra information. (Remember how DNT served as an effective tracking indicator?) So browsers would have to start, say, snapping windows to certain common viewports, or capping the number of distinct breakpoints a website's CSS is allowed to have; both of which have UX or compatibility implications.
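The "inconsistent lie" problem can be shown with a toy server-side check that cross-references the reported width against which media-query breakpoint actually fired (the breakpoint values are hypothetical; a real site would learn them from its own CSS):

```python
def lie_detector(reported_width, matched_breakpoint,
                 breakpoints=(480, 768, 1024, 1440)):
    """Return True when the reported viewport width and the media-query
    breakpoint that fired disagree.

    Media queries see the real layout width, so a browser that spoofs
    screen.width but evaluates CSS faithfully contradicts itself --
    and the contradiction is itself an identifying signal.
    """
    expected = max((bp for bp in breakpoints if bp <= reported_width),
                   default=0)
    return expected != matched_breakpoint

print(lie_detector(1920, 1440))  # consistent report -> False
print(lie_detector(1920, 768))   # spoofed width, honest CSS -> True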
1. Get a gazillion users on your site
2. Require a user account tied to a real person
3. Log IP, host, geolocation, and as many JavaScript/browser APIs as you can (there are hundreds at this point)
4. Among the fields you track, find the ones that are the most stable and unique over time
5. Assign some probabilities to these fields to eliminate false positives
6. Generate personas for users for when they are at home, at work, on their phone, etc.
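Step 4 above can be sketched as scoring each tracked field on two axes: how finely it splits the user population (uniqueness, measured as entropy) and how often it stays constant for the same user (stability). The field names and observations below are made up for illustration:

```python
from collections import Counter
from math import log2

def uniqueness(values):
    """Shannon entropy (bits) of a field's values across users --
    higher means the field splits the population more finely."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def stability(sessions):
    """Fraction of users whose field value stayed constant
    across their sessions."""
    return sum(len(set(s)) == 1 for s in sessions) / len(sessions)

# Hypothetical observations: three users, two sessions each.
timezone = ["UTC+1", "UTC+1", "UTC-5"]      # one value per user
canvas_hash = ["a91f", "03bc", "77de"]      # unique per user
tz_sessions = [["UTC+1"] * 2, ["UTC+1"] * 2, ["UTC-5"] * 2]
ip_sessions = [["1.2.3.4", "5.6.7.8"], ["9.9.9.9"] * 2,
               ["7.7.7.7", "8.8.8.8"]]

# Canvas hash is more unique than timezone; timezone is more
# stable than IP -- a good fingerprint field scores high on both.
print(uniqueness(canvas_hash), uniqueness(timezone))
print(stability(tz_sessions), stability(ip_sessions))
```

The EFF's Panopticlick paper used essentially this entropy framing to rank browser attributes by identifying power.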
That's fingerprinting, traditionally. Hence, the "Cookieless tracking" header right there on the page. If you are tying in other data, that's data aggregation for your business case and is fundamentally unrelated.
I mean, generating personas, and whatever "false positives" means here, has nothing to do with fingerprinting. If you can't differentiate one anonymous user from another, that's data too.
8. Give up and stop at 2 instead.
Not only them. It is available to the masses[1] and I am afraid GDPR has given this trend a boost.
[0]: https://coveryourtracks.eff.org/static/browser-uniqueness.pd...
[1]: https://coveryourtracks.eff.org/
[2]: https://restoreprivacy.com/browser-fingerprinting/
[3]: https://browser-fingerprint.cs.fau.de/statistics?lang=en
Since the same URL was posted as long ago as 2016 (https://news.ycombinator.com/item?id=11266172), it's not clear how current this is.
I think Privacy Possum is preventing the fingerprinting test from completing. I'll disable it for now, but I do wonder what this means for the results of the study.
I think it should eventually complete, since mine did and I use a similar constellation of privacy plugins -- I think there is a timeout that eventually occurs once the various fingerprinting methods fail.