Additionally, regardless of the OS you run, MacBooks aren't affected by the Security Level/SPI flash hacks the researchers came up with to disable Thunderbolt security.
https://christian.kellner.me/2019/07/09/bolt-0-8-with-suppor...
"THUNDERBOLT IS HOPELESSLY INSECURE AND BROKEN!!"
blah
blah
blah
blah
* except on 90% of computers shipping with Thunderbolt.
Windows PC makers were much later to TB3, and even now they only ship it on a small percentage of their computers. I'm not even sure there's a Linux system with out-of-the-box TB3 support.
> All the attacker needs is 5 minutes alone with the computer, a screwdriver, and some easily portable hardware.
Just started reading, but the comparison is already a little bizarre. It almost seems like the digital version of "This murderer is on the loose and you're in danger! He doesn't need to inject poison into your food. All he needs is just 5 minutes in front of you with a knife!"
To use your analogy, in the former case, the murderer poisoned your food at the grocer's, and you unwittingly dose yourself when you make your meal. In the latter case, the murderer spends time getting to know you and letting you trust them, and then one day, when you go to the bathroom, they come in and shoot you like Vincent Vega.
[0] https://en.wikipedia.org/wiki/USB_flash_drive#BadUSB
That being said, malicious hardware is a problem. A hacked phone charging terminal at the airport could certainly be a serious problem if there are enough vulnerabilities in the USB stack.
People frequently say this, but never really explain it. As far as I can tell, it translates to "Nobody cares about physical security" - except it's clear that people /do/. Things like Boot Guard are only really relevant to physical attacks. DMA protection in firmware is only really relevant to physical attacks. It's extremely obvious that the industry is attempting to avoid short term physical access to a device being sufficient to compromise it, and research that demonstrates that it's still possible is valuable.
That's a different kind of attack than what people usually mean by "physical access" though. The thing where they drop a bunch of malicious flash drives in the parking lot or put a malicious USB charger in an airport isn't the same thing as the attacker having unsupervised physical access to the machine, and the former is certainly worth defending against even if the latter is hopeless.
> Things like Boot Guard are only really relevant to physical attacks.
One could argue that they are also relevant to purposely locking the device owner into specific operating systems.
As an example of "physical access and you're screwed," one way to compromise a machine is to install a microphone anywhere near the machine and then wait for the user to type their passphrase. It's possible to deduce what keys are being pressed from the sounds they make and the timing, so now the attacker has your passphrase. The same can be done with covert video surveillance.
Another possibility is to measure electromagnetic emissions to much the same effect. Most computer keyboards are not exactly TEMPEST certified and even if they were, someone with physical access could make adverse modifications.
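To make the timing side channel concrete, here's a toy sketch (the digraphs and millisecond values are entirely made up, and real attacks also use per-key acoustic signatures, not timing alone): an eavesdropper with a profile of the victim's typing rhythm can narrow down what was typed from inter-keystroke intervals.

```python
# Toy illustration of keystroke timing inference.
# The profile values are hypothetical; a real attacker would build
# one from recordings of the victim typing known text.

# Assumed mean inter-key intervals (ms) for a few digraphs.
profile = {"th": 85.0, "er": 95.0, "pw": 160.0, "qz": 210.0}

def guess_digraph(observed_ms, profile):
    """Pick the digraph whose profiled interval is closest to the observation."""
    return min(profile, key=lambda d: abs(profile[d] - observed_ms))

# An eavesdropper who measured a 205 ms gap between two keystrokes
# would guess the rare, slow digraph and rule out the fast common ones.
print(guess_digraph(205.0, profile))  # prints "qz"
```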
Protecting a machine against unsophisticated attackers is pretty easy, to the point that the likes of Boot Guard are not even required, but protecting a machine against physical access by a sophisticated attacker is pretty hopeless.
An extreme example a pentester imparted to me once was, if someone could spend sufficient time alone with my laptop, they could remove my hard drive and insert it into an identical laptop with a hardware or firmware backdoor preinstalled. We were discussing nation-state adversaries, but the general principle applies.
Another example is attacks on encrypted drives (so-called "evil maid" attacks). If a computer is booted and the drive is decrypted, an attacker with physical access could open the computer, remove the RAM, and dump its contents, thereby stealing the encryption key. If the computer is powered down, it's still vulnerable to other attacks; encrypted drives necessarily have cleartext code for accepting the password and decrypting the drive. You could modify this code to log the decryption key, or broadcast it over the device's radios.
There's also the classic Windows Sticky Keys exploit, where you replace the Sticky Keys binary with a program that gives you administrator access, reboot the computer, and then activate Sticky Keys.
You could install a keystroke logger. You could install a device to record monitor output. You could log network traffic.
I've yet to find a kiosk environment that I couldn't break out of. Once I was able to break out of a scanning kiosk environment, and into a Windows desktop, by turning the quality settings all the way up and crashing the kiosk. That was one of the more difficult examples; most of the time all you need is to find a way to right-click. (I had the proper authority to investigate these kiosks.)
The point is that the list goes on.
It is true, as you say, that there has been progress in implementing mitigations, and that there are people who care deeply about these issues. A counterexample might be SIM cards, TPMs, and other HSMs. These systems are able to provide better guarantees by encapsulating their peripherals and being willing to self-destruct. But that could describe a cell phone, a tablet, or a laptop, too.
Maybe in the future this "law" won't be so hard and fast.
All of this is a bit silly though, because physical intervention implies a level of commitment that lends itself to more reliable approaches: https://xkcd.com/538/
If you follow defense in depth as a security architecture philosophy, which the industry does, then you still implement defenses against physical attacks, but you recognize that those defenses are either (1) defenses against opportunists, or (2) last ditch defenses.
But many do, and it's a difficult problem that impacts the efficiency of the business. I've had to deal with it often, and at the end of the day you need to keep important data off of mobile or other client devices, and have controlled workarounds for exceptions.
Some of the tougher compliance standards recognize this and essentially prohibit many types of remote access without the entity owning the remote computer.
Like so: https://youtu.be/BKorP55Aqvg
Device identifiers and capabilities are not bound to the security level secret values. Drop off a pre-cloned video adapter in a conference room. If a targeted computer later uses and thereby authorizes it, it's game over: the attacker may now perform DMA operations unless the system has kernel DMA (kDMA) protection enabled. That requires kDMA support in the BIOS, IOMMU hardware, and support in the operating system.
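The preconditions above can be summarized in a trivial sketch (the function and parameter names are illustrative, not any real API): the protection only holds when firmware, hardware, and OS all participate.

```python
# Sketch of the kDMA precondition chain described above.
# Names are hypothetical; this just encodes the "all three required" logic.
def dma_attack_blocked(bios_kdma: bool, iommu_present: bool, os_kdma: bool) -> bool:
    """A hot-plugged device is kept out of arbitrary memory only if the
    BIOS, the IOMMU hardware, and the operating system all support kDMA."""
    return bios_kdma and iommu_present and os_kdma

# Any missing piece leaves the pre-authorized adapter free to DMA.
print(dma_attack_blocked(True, True, False))  # prints False: OS lacks support
```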
The focus on DMA, however, misses a very important observation about security levels from the research: there is a lot of attack surface when you're able to plug in a PCI(e) device as easily as a USB disk.
Let's break down each of the "vulnerabilities".
1. "However, we have found authenticity is not verified at boot time, upon connecting the device, or at any later point." This is actually false. Either the author didn't experiment properly or is lying/purposely misleading you. The firmware IS verified at boot for Alpine Ridge and Titan Ridge (Intel's TB3 controllers). It isn't for older controllers, which do NOT support TB3. When verification fails, the controller falls back into a "safe mode" which does NOT run the firmware code for any of the ARC processors in the Ridge controller (the firmware contains compressed code for a handful of processors). I'm willing to bet the author did not manage to reverse engineer the proprietary Huffman compression the firmware uses and therefore couldn't have loaded their own firmware; if they had, it wouldn't have worked. Now, the RSA signature verification scheme used to verify the firmware does suffer from some weaknesses, but AFAIK it doesn't lead to arbitrary code execution (on any of the Ridge ARC processors). I would love to be proven wrong here with real evidence though ;)
2. Basically, the string identifiers inside the firmware aren't signed/verified. This has no security implications beyond the fact that you can spoof identifiers and make the string "pwned" appear in system details when you plug the device in and authenticate it. If you've ever developed custom USB devices, you can see how silly this is as a "vulnerability."
3. This is literally the same as #2.
4. Yes, TB2 is vulnerable to many DMA attacks as demonstrated in the past. Yes, TB3 has a TB2 compatibility mode. Yes, that means the same vulnerabilities exist in compatibility mode which is why you can disable it.
5. This one is technically true. If you open the case up and flash the SPI chip containing the TB3 firmware, you can patch the security level set in BIOS and do stuff like re-enable TB2 if the user disabled it. But if I were the attacker, I would instead look at the SPI chip right next to it containing the UEFI firmware and NVRAM variables (most of which aren't signed or encrypted in any modern PC).
6. SPI chips have interfaces for writing, erasing, and locking. If you have direct access to the chip you can abuse these pins to permanently brick the device. Here's another way: take your screwdriver and jam it into the computer.
7. Apple does not enable TB3 security features on Boot Camp. I guess this one is vaguely the only real "vulnerability", although it's well known and Apple doesn't care much about Windows security anyway (they don't enable Intel Boot Guard, BIOS Guard, TPM, or any other Intel/Microsoft security features).
Not that it matters, but my personal experience with TB3 comes from having done significant reverse engineering of the Ridge controllers for the Hackintosh community.
Boot Guard makes that impractical in most cases. The point here is that on machines that don't implement kernel DMA protection, you're able to drop the Thunderbolt config to the lowest security level and then write-protect the Thunderbolt SPI so the system firmware can't re-enable it, making it easier to perform a DMA attack over Thunderbolt and sidestep the Boot Guard protections.
This isn't a world-ending vulnerability, but it's of interest to anyone who has physical attacks as part of their threat model.
Hi, I'm the author of Thunderspy. I'll restrict myself to answering your first point.
There appears to be a misunderstanding. The first vulnerability we found is 'Inadequate firmware verification schemes'. We do not claim a general ability to run arbitrary code on the Thunderbolt controller. Rather, we found that the signature does not cover the data in the SPI flash essential for Thunderbolt security. We've released tools that allow you to modify the SPI flash contents without changing the parts of the firmware covered by the signature (see [1], exploitation scenario 3.2.1 in the report [2], and the PoC video [3] that matches the latter scenario). This is how it is possible to read and modify device strings, UUIDs, and secret values. The steps for doing specifically the latter are detailed in exploitation scenarios 3.1.1, 3.1.2, and 3.1.3. Please let me know where you got stuck.
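The shape of the flaw described above can be sketched in a few lines (this is a toy model: HMAC stands in for the real RSA scheme, and the flash layout, region names, and contents are invented for illustration). The point is that a signature over only part of the image lets an attacker rewrite the uncovered part while verification still passes.

```python
import hashlib
import hmac

KEY = b"vendor-signing-key"  # stand-in for the vendor's RSA keypair

# Hypothetical flash layout: the signature covers only the code region,
# while identifiers and security-level settings live in an unsigned region.
signed_region = b"\x90" * 64              # firmware code (covered by signature)
unsigned_region = b"SL2;uuid=aaaa-bbbb"   # identifiers/secrets (NOT covered)

def sign(code: bytes) -> bytes:
    return hmac.new(KEY, code, hashlib.sha256).digest()

def verify(code: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(code), sig)

sig = sign(signed_region)

# Attacker rewrites the unsigned region (e.g. drops the security level
# and swaps the UUID) without touching the signed code...
unsigned_region = b"SL0;uuid=cccc-dddd"

# ...and the flash image still verifies, because the signature
# never covered the security-critical data.
print(verify(signed_region, sig))  # prints True, despite the tampering
```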
[1] https://github.com/BjornRuytenberg/tcfp
[2] https://thunderspy.io/assets/reports/breaking-thunderbolt-se...
[3] https://www.youtube.com/watch?v=7uvSZA1F9os
The section "3.1.3 Cloning victim device including challenge-response keys (SL2)" does not require flashing the victim system; it only requires reading the flash from the victim device, which seems like a lesser hurdle.
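This is why a read-only flash dump is enough: in a challenge-response scheme, whoever holds the per-device secret answers challenges correctly. A minimal sketch (HMAC stands in for the actual SL2 scheme; the key and layout are invented) shows that a clone built from a dumped key is indistinguishable from the original device.

```python
import hashlib
import hmac
import os

# Toy model of SL2-style challenge-response. The only property that
# matters here is that the secret lives in readable device flash.
device_flash_key = b"per-device-secret"

def respond(key: bytes, challenge: bytes) -> bytes:
    """Compute the device's answer to a host challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Attacker dumps the victim device's SPI flash and extracts the key...
cloned_key = device_flash_key  # read straight out of flash, no write needed

# ...so the clone answers any future challenge identically to the original.
challenge = os.urandom(16)
print(respond(cloned_key, challenge) == respond(device_flash_key, challenge))
# prints True: the host cannot tell the clone apart
```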
I'm not going to feel safe charging with a public-use charger until I find some way to ensure only power, and not data, is making it to my device. Even PoE feels like it's safer than modern peripheral standards right now.
(I admit this might not be perfectly linked to the article, it's just a need I've felt for a while but I can't seem to buy a solution for.)
As another commenter mentioned, I too carry a battery for longer trips, which is useful for charging devices when physically on the move and away from outlets. The model I chose from Anker can also top up a MBP, albeit slowly.
If you ever plug in a charging cable and get the prompt, you know something is wrong.
https://www.vice.com/en_us/article/akw558/apples-t2-security...
I guess MacBook resellers sometimes get computers where the password has been set and they can't get into them. I imagine they'd be motivated to find any way they can to unlock the computers.
I thought that this had changed with USB-C?!
The website is highly self-promoting.
This is rapidly starting to become less true: full-disk encryption is everywhere, backed by hardware TPMs; the Lockdown LSM prevents root from owning the boot chain; devices with soldered RAM are functionally immune to cold boot attacks.
There are still things an attacker can do - put a hardware keylogger on the keyboard wires, a skimmer on the fingerprint reader - but that requires future input from the victim. It is feasible today to defend against a physical attacker if you have the right hardware upfront and don't use it after the attack.
Unfortunately, both for right-to-repair and actually owning the hardware you bought.
The point still stands: if the attacker has unencumbered access to your device, then indeed _further_ use of the device is inadvisable, to say the least. It doesn't matter whether you had full disk encryption. It doesn't matter whether you had Thunderbolt.
An extremely low tech solution would be to place a smallish and tactically hidden camera on the chassis, you don't even need the screwdriver for that. And it just happens all the time on ATMs and I'd bet that like on ATMs it would fool a shitton of people.
And this story is precisely about the type of attack that "requires further user input" -- what would be the point of requiring Thunderbolt at all in the first place if you already have the system in pieces?
What? FDE is all symmetric crypto, long since 256-bit, and I think all AES. AES is extremely well understood, and the threat scenario for FDE is purely cold attacks, so even side channels are irrelevant. I've never seen any feasible attack suggested even in principle, so I'm curious what you have in mind for 10-30 years out. If you're thinking "quantum computers", you've gotten confused. Against symmetric keys those provide at best a square-root speedup via Grover's algorithm, essentially halving the key size. But 128-bit is still infeasible to search, and it would be trivial to counter anyway by doubling the key length. It's only against current asymmetric cryptosystems that Shor's algorithm applies in principle (if, and it's a big if, an actual scalable general-purpose QC can ever be built).