Every single UEFI computer sold today has a unique serial number (GUID). There are MAC addresses. There are HDD serial numbers. There are zillions of unique identifiers accessible to the operating system. Various copy protection schemes use one or more of these. But what they all have in common is that they are under the control of the OS. A privacy-conscious OS can forbid access to these identifiers for userspace applications, or can fake them to something else. This is how e.g. sandbox environments like the App Store can force apps to use some kind of "advertising ID" for this stuff, and ensure that apps aren't sneakily fetching some true unique system ID.
But with the PIII serial number, userspace apps can fetch it without the OS knowing about it. And the disable bit is a one-time operation, so it is not possible to grant serial number access to some apps and not others. This leads to a situation where any arbitrary unprivileged userspace app can uniquely identify your machine, and where vendors relying on this feature might compel you to leave it enabled (e.g. DRM). Now random apps running under an untrusted user can fingerprint your machine, just because you want to watch Netflix.
And that is why this design was utterly broken and a privacy nightmare. Not because it's a unique ID. We have tons of those.
* VMs can trap CPUID, but of course VM support came later anyway.
On Intel, trapping CPUID has also been possible without VMs since Ivy Bridge. (Linux exposes it via arch_prctl(ARCH_SET_CPUID).)
But core rr functionality works just fine on Zen CPUs, after an MSR to disable some determinism-breaking speculation was discovered. Those problems were not related to CPUID.
Thought it was interesting that they did that but didn’t think much more of it. I don’t even remember what the promo was. Might have just been extended warranty or something?
from spinning up a VM
...and you can change them, even if not easily, for a VM. AFAIK the Windows licensing/activation relies on the same uniqueness.
Also, I have a Sonos system and it works great!
The stolen part I get, but did it used to be easier to counterfeit chips? There's a lot that goes into making something that looks like a PIII, and even then, I assume Intel had state-of-the-art fabs, so I'm surprised this was a concern.
The hardware scams I've heard of involve stamping better specs on something; for hard drives, a firmware hack that makes a drive report a higher capacity; and unauthorized hardware made off-hours on the same production line.
https://www.youtube.com/watch?v=wsKjX6UGYUQ
Intel was so pissed that someone else was making money on binning that they locked the multiplier on later 133/166/200 parts and on all Pentium MMX and later models, ending easy overclocking.
https://forums.anandtech.com/threads/pentium-200mhz-multipli...
For an interesting, albeit slightly unrelated read, see:
https://en.m.wikipedia.org/wiki/Chip_art
>"Prior to 1984, these doodles also served a practical purpose. If a competitor produced a similar chip, and examination showed it contained the same doodles, then this was strong evidence that the design was copied (a copyright violation) and not independently derived."
Irrelevant now with the switch to ARM, but still pretty interesting they out and out state it.
https://www.csoonline.com/article/3220476/researchers-say-no...
The ME, on the other hand, is obviously good since it "allows" you to watch 4K Netflix on your PC.
The ME has nothing to do with this, it's entirely about the GPU. 7th generation Intel GPUs and 10xx or newer nVidia GPUs support the DRM that Netflix requires, the CPU just needs to be fast enough to handle its part of the equation.
Anyway, speaking of unique identifiers in mobile devices, mobile phones have had IMEIs for ages - pre-dates Apple by a long time.
> Intel has revealed that each Pentium III chip will carry a unique serial number that can be read by the computer's software. Intel claims that the serial number will facilitate e-commerce, promote "digital content protection," prevent counterfeiting of Intel processors, and help to track stolen ones. We know users have questions about this controversial feature, so we assembled this FAQ. Q: Why are privacy experts concerned? A: Privacy experts are concerned because the CPU's electronic serial number could be used for purposes that may not be in users' best interests.
The internet was faster then too..
What a beautiful world that was
Absolute game changer for reverse engineering. It was like leveling up three levels instantly.
Now it is the norm.
RLM is a slightly more modern alternative to FlexLM, which some software has moved to over the past 10 years.
[1] https://web.archive.org/web/20010424155417/http://www.woodma...
Or home users, for whom the software doesn't expire; only updates and support do. More a terminology distinction in that way.
Today more things are subscription-based, but mostly for content that drives the software.
Seems so quaint to think of that as a privacy concern.
We overestimate the change over the course of a year, but seriously underestimate it over a couple of decades.
The primary privacy protection for communications in 1999 was legal, not technical:
1. Police needed a warrant to listen in on your communications (though if they only wanted to know who you were calling, no warrant was needed)
2. Private wiretapping would land you in jail, and required covert access to someone's house, making it riskier to pull off
3. Analog telephone systems (already out-of-date) were entirely protected by a law that made it illegal to sell consumer-grade equipment that could be easily modified to tune to 800 MHz. This is still law today, despite those frequencies no longer carrying analog calls, and is a thorn in the side of amateur radio operators.
This could be summed up as "we promise not to spy on you if you promise not to resist us if we change our mind".
An interesting expression of this idea happened with the whole NSA Clipper Chip debacle. Effectively, the US government wanted to move from unencrypted everything to key-escrow encryption, where private citizens would be technically prevented from wiretapping your phones, but law enforcement could still do so. It failed so hard that the US government just stopped regulating crypto export and the NSA retreated to slipping vulnerabilities in crypto standards (e.g. Dual_EC_DRBG, TLS Extended Random).
The actual legal protections I mentioned above melted away under the heat of the War on Terror. The US government adopted a classified interpretation of wiretapping law that boiled down to "if we aren't listening, we haven't spied on you". Effectively, the NSA would wiretap everything and store it securely, and then once they had legal justification to actually wiretap you, they'd open up what they had already recorded. In theory, this is just turning a wiretap order into one issued about 30 days ago. In practice, the "legal justification" part was someone filling out a form in XKeyscore and clicking a button, with no further verification in the vast majority of cases.
It was only after much of this leaked - twice, I might add - that people outside of encryption enthusiast communities actually started taking technical privacy protections seriously. Things like end-to-end encrypted messaging, Let's Encrypt, and efficient cipher implementations that actually made encrypting everything useful are things that people in 1999 could only dream of (except for the above-mentioned cypherpunks). On the other hand, all of this extra security is fundamentally reactionary. We would not be encrypting the whole web were it not for certain Nation-State Actors abandoning their already-flimsy legal protections and going for full-take.
Of course, none of that matters when you were just going to self-surveil and post everything you do to Facebook anyway. But that isn't really all that new. People have always been bad at keeping secrets, advertisers have always been spying on you (before the Internet), and there's hundreds of years of legal precedent concerning when, where, and how much privacy you lose when you open your mouth or go out in public. If there is a difference between the 90s and today, it's that today's technology makes you a lot more aware of when your privacy has been violated. Target may know when you're pregnant before your father does, but Facebook will brag about it to you.
https://en.m.wikipedia.org/wiki/German_tank_problem
Further, over time, many authorities have to be repeatedly reminded that a User-Agent carrying a UUID != the user themselves, and every attempt by technologists to cram more UUIDs onto more closely held devices, with ever more ubiquitous and increasingly always-on data streams, makes this threat harder and harder to play down.
You don't need to serialize and track everything. We need to stop doing it. This is also why the systemd machine-id was a step in the wrong direction.