Intel was also planning to wait at least another six months before bringing this to light, had the researchers not threatened to release the details in May.
Source in the dutch interview: https://www.nrc.nl/nieuws/2019/05/14/hackers-mikken-op-het-i...
In this case the practice of responsible disclosure has been turned on its head. There should no longer be any responsible disclosure with Intel as long as they do not commit to changing their behavior.
The way Intel has been handling these security issues, I am going to avoid buying Intel whenever possible going forward, regardless of whether they have slight performance or power gains over competitors. The only way to send a message to corporate governance in this case is to vote with my wallet.
Wtf does that mean exactly? Do the patches and microcode work or do they not? I expect the truth to come out as OSS maintainers come out of embargo and others analyze the patches. But it sure looks like VMs on your favorite cloud provider will still be vulnerable in some ways because they're not turning off HT.
Wired has many details of your Dutch link in English. https://www.wired.com/story/intel-mds-attack-speculative-exe...
Intel pressuring vendors to not recommend disabling hyper-threading? Apple has added the option to macOS, so presumably the mitigations are not completely effective: https://www.theregister.co.uk/2019/05/14/intel_hyper_threadi...
Of course, until the legally agreed date when they can dump shares so there’s no obvious proof that it’s insider trading. Isn’t that what (then) Intel CEO Brian Krzanich did after Meltdown/Spectre?
What about devices with older processors? I'm still running a Sandy Bridge rig and it works fine, except for the side channel vulnerabilities. It's probably not going to be patched. I also have a cheaper computer with a Skylake processor, which is newer yet still vulnerable!
It's only a matter of time until something really nasty comes along, making all these PCs dangerous to use. What then? Lawsuits?
My questions are only partially rhetorical.
The important thing to realize is that speculation and caching and such were invented for performance reasons, and without them, modern computers would be 10x-100x slower. There's a fundamental tradeoff where the CPU could wait for all TLB/permissions checks (increased load latency!), deterministically return data with the same latency for all loads (no caching!), never execute past a branch (no branch prediction!), etc., but it historically has done all these things because the realistic possibility of side-channel attacks never occurred to most microarchitects. Everyone considered designs correct because the architectural result obeyed the restrictions (the final architectural state contained no trace of the bad speculation). Spectre/Meltdown, which leak the speculative information via the cache side-channel, completely blindsided the architecture community; it wasn't just one incompetent company.
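To make the cache side-channel part concrete, here is a minimal sketch of my own (assuming x86-64 with GCC/Clang and the usual intrinsics) of the Flush+Reload timing primitive these attacks use as their read-out channel. It leaks nothing by itself; it only shows that a cached load is measurably faster than an uncached one, which is the whole signal:

    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

    /* Time a single load of *p in TSC cycles. */
    static uint64_t time_load(volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;                          /* the load being timed */
        return __rdtscp(&aux) - start;
    }

    int main(void)
    {
        static uint8_t probe[64];

        probe[0] = 1;                      /* touch it: now (likely) in cache */
        printf("cached:   %llu cycles\n", (unsigned long long)time_load(probe));

        _mm_clflush(probe);                /* evict the cache line */
        _mm_mfence();
        printf("uncached: %llu cycles\n", (unsigned long long)time_load(probe));
        return 0;
    }

Everything else in these attacks (the mispredicted branch, the faulting load, the leaky buffer) is just a way to get a secret-dependent address into cache state before that timing step.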
The safest bet now for the best security is probably to stick to in-order CPUs (e.g., older ARM SoCs) -- then there's still a side-channel via cache interference, but this is less bad than all the intra-core side channels.
These vulnerabilites and Meltdown allow untrusted code to speculatively access data that it shouldn't be allowed to access at all, and use that speculative access to leak data itself. Unlike Spectre this can be (and to some extent has to be) fixed at the hardware level, because the hardware itself is failing to protect sensitive data. This class of vulnerability seems to have been mostly Intel-exclusive so far (with the main exception being one unreleased ARM chip that was vulnerable to Meltdown). There's nothing inherent about modern high-performance CPUs that requires them to be designed this way.
Edit: This slipped my mind, but Foreshadow / Level 1 Terminal Fault was yet another similar Intel-only processor vulnerability that allowed speculative access to data the current process should not be able to access. It's definitely a pattern.
Also, no need for older SoCs. High end ARM chips have both in-order and out-of-order cores but the cheaper ones have in-order ones only. A Snapdragon 450 is pretty modern and doesn't speculate deeply enough to be vulnerable to Spectre.
In the x86 space, Meltdown absolutely was down to one company apparently deciding to over-optimize for performance.
I can't find it now, but I remember reading a thread from (I think) the OpenBSD devs about how the Intel MMU documentation described fairly sane behaviours and how far the reality deviated from the documentation.
I wondered aloud if it wouldn't be better for us to embrace NUMA and make the bigger caches directly addressable as working memory instead of using them as cache.
out-of-order execution != speculative execution.
It is possible to have OoO without speculative execution. On the other hand they do tend to come as a pair since they both rely on keeping multiple execution units busy; for instance, the 1993 Intel Pentium was the first superscalar x86 and the first with branch prediction, and the 1995 Pentium Pro was the first x86 with OoO (the 486 and those before were scalar, in-order CPUs).
So it would be safest to execute them on separate CPUs not sharing a common cache, e.g. pinning them to different CPU sockets on a multi-socket machine, or to different physical machines altogether.
This may still be faster than running on old ARMs.
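A minimal sketch of what that pinning looks like on Linux (it's essentially what `taskset` does; mapping CPU numbers to sockets, e.g. via `lscpu`, is left to the reader, and the program name is just illustrative):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* usage: ./pin <cpu-number> <command> [args...] */
        if (argc < 3) {
            fprintf(stderr, "usage: %s <cpu> <cmd> [args...]\n", argv[0]);
            return 1;
        }

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(atoi(argv[1]), &set);

        /* Pin this process (pid 0 = self); the exec'd command inherits it. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        execvp(argv[2], &argv[2]);
        perror("execvp");
        return 1;
    }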
I wonder if dedicated cloud boxes, where all VMs you spin on a particular physical machine belong only to your account, will become available from major cloud providers any time soon. In such a setup, you don't need all the ZombieLoad / Meltdown / Spectre mitigations if you trust (have written) all the code you're running, so you can run faster.
The "elephant in the room" with all these attacks starting from Spectre/Meltdown is that an attacker has to run code on your machine to be able to exploit them at all.
To the average user, the biggest risk of all these side-channels is JS running in the browser, and that is quite effectively prevented by careful whitelisting.
As you can probably tell, I'm really not all that concerned about these sidechannels on my own machines, because I already don't download and run random executables (the consequences of doing that are already worse than this sidechannel would allow...), nor let every site I visit run JS (not even HN, in case you're wondering --- it doesn't need JS to be usable.)
There is absolutely nothing to be done on our level about this.
I'm fairly convinced this is a systemic issue that can only be solved by almost entirely redesigning modern CPU and computer architecture.
I can draw a parallel to pretty much all Intel CPUs, which are known to contain a dedicated "mini CPU" (the Management Engine, which runs "Minix") that is an absolute "black box" and has been found vulnerable to a wide variety of attacks over the years...
Not only do we need to redesign computer and CPU architecture, but we desperately need to make that entire process and knowledge open source, available to all, and more transparent.
Today this knowledge is in the hands of a few gigantic corps who are keeping it to ensure their monopolistic position.
Here's hoping OpenRISC takes off!
What exactly is "our level"?
Or, if that is too complex, there could just be a bunch of breadboarded ICs capable of networking. There actually are real-world examples of such machines exposed via telnet, e.g. http://www.homebrewcpu.com
And all the performance beasts, while surely indispensable, could just enjoy their air-gapped solitude. Are there any massively parallel supercomputing tasks whose results couldn't be summarized and reduced to mere text, which is not too hard to move over the airgap manually?
Hitting diminishing returns on caches with an ever-increasing gate budget is, I believe, what prompted this second generation starting in the 1990s, which added speculative execution, something the algorithm with all those extra hidden registers really invites you to do.
Probably it's smart to see a computer not as a walled garden but more as a sieve.
None of that helps with your public cloud workloads.
https://aws.amazon.com/ec2/amd/
https://azure.microsoft.com/en-us/blog/announcing-the-lv2-se...
But as an Intel consumer, I am not happy. My understanding is that more stuff can be fixed in microcode, but I suppose a bug could show up which was not practically fixable. If that happened, I would certainly sue or join a class-action lawsuit. Probably the class-action route, because even if I didn't get anything, I would be just mad enough at Intel to want them to suffer.
Of course, we do have consumer protection agencies; it is possible that they would step in if Intel had sold what would effectively be a defective product.
It doesn't have anything to do with how large Intel is. They have clearly made a more aggressive hardware design which has more corner cases to break. The designs are broken, and microcode can patch some variants of these side channels, but the overhead is becoming a problem.
In this case it's not certain whether microcode can address the problem, but if it can't, disabling SMT (hyperthreading) can impose a significant cost for some workloads (well above 10% for things that haven't been specifically tuned to avoid cache misses, which is most software in my experience).
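(If you want to measure that cost on your own Linux box, recent kernels expose a runtime SMT switch in sysfs. A rough sketch, assuming the knob that was added alongside the L1TF mitigations is present on your kernel:)

    #include <stdio.h>

    int main(void)
    {
        /* Assumed path from the kernel's SMT control interface; needs root.
         * Accepted values include "on", "off", and "forceoff". */
        const char *path = "/sys/devices/system/cpu/smt/control";
        FILE *f = fopen(path, "w");

        if (!f) {                  /* not root, or a kernel without the knob */
            perror(path);
            return 1;
        }
        fputs("off", f);           /* sibling threads go offline; "on" re-enables */
        fclose(f);
        return 0;
    }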
* Core and Xeon CPUs affected, others apparently not.
* HT on or off, any kind of virtualization, and even SGX are penetrable.
* Not OS-specific, apparently.
* Sample code provided.
https://www.cyberus-technology.de/posts/2019-05-14-zombieloa...
Essentially: Intel released a microcode update which makes the `verw` instruction now magically flush MDS-affected buffers. On vulnerable CPUs, this instruction now needs to be run on kernel exit; the microcode update won’t do it automatically on `sysexit`, unfortunately.
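For the curious, the trick looks roughly like this. The sketch below is modeled on the Linux kernel's mds_clear_cpu_buffers(); the selector value shown is my placeholder (the real code passes the kernel's own __KERNEL_DS), and in practice this runs in the kernel's return-to-userspace path rather than in application code:

    #include <stdint.h>

    /* With the MDS microcode update, VERW on a valid writable data segment
     * selector also overwrites the affected fill/store/load-port buffers as
     * a side effect. Without the update, it's just the old segment check. */
    static inline void clear_cpu_buffers(void)
    {
        static const uint16_t ds = 0x18;   /* placeholder; Linux uses __KERNEL_DS */
        __asm__ volatile("verw %[ds]" : : [ds] "m"(ds) : "cc");
    }

    int main(void)
    {
        clear_cpu_buffers();   /* harmless to run; VERW doesn't fault */
        return 0;
    }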
I do agree that this won't end soon though. It appears to me that many of the methods CPUs use for better performance are fundamentally flawed in their security, and it's not like we can expect the millions of affected machines to be upgraded to mitigate this.
My biggest worry is that all currently known classical "secure" data sets, including encrypted but recorded internet communication, will become an open book a few decades from now. What insights will the powers that be choose to draw from it then, and how will that impact our future society? Food for thought.
Security _ONLY_ through obscurity is not security. Obscurity is a perfectly valid layer to add to a system to help improve your overall security.
from here: https://support.apple.com/en-us/HT210107
If you choose not to disable HT, you stay vulnerable even with updated microcode, right?
In any case, Apple's stats are much more gruesome...
How will your "userland core" switch to other userland programs safely? A pointer dereference can hit an mmap'd file, so it's actually I/O. This will cause the userland program to enter kernel mode to interact with the hardware (yes, on code as simple as blah = (this->next)... the -> is a pointer dereference potentially into mmap'd space backed by a file).
So right there, you need to switch to kernel mode to complete the file-read (across pointer-dereference). So what, you have a semaphore and linearize it to the kernel?
So now you only have one-core doing all system level functions? That's grossly inefficient. Etc. etc. I don't think your design could work.
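(To illustrate the point about a plain dereference turning into I/O, here's a tiny sketch; the file name and error handling are arbitrary:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);   /* any readable file works */
        struct stat st;

        if (fd < 0 || fstat(fd, &st) != 0 || st.st_size == 0)
            return 1;

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* This innocent-looking dereference can page-fault into the kernel,
         * which then reads the backing page from disk before the program is
         * allowed to continue. */
        printf("first byte: %c\n", p[0]);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }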
User PU would stall on the "outermost" return and wait for another dispatch by kernel PU; it would also stall during context switches.
I don't know enough about the IaaS market to know what the relative revenues of low-end compute vs. medium-to-high-end compute are for your average vendor, though. Is most of the profit in the low end?
I'm also curious on what the impact on margins would be if IaaS vendors decided to switch away from serving the low-end compute demand with "a few expensive high-power Intel cores per board, each multitasking many vCPUs", to serving the demand with "tons of cheap low-power ARM cores per board (per die?) with each core bound to one vCPU."
Sounds a lot like IBM’s Cell architecture.
Just shows the hoops and tricks needed to keep making, on paper, faster processors year on year but without node shrinks to give headroom.
14nm++++ is played out.
To avoid reintroducing these Spectre-like bugs, you'd have to conservatively design the per-thread execution to avoid those covert channels. Not only synchronously enforcing all logical ISA guarantees for paging and other exception states, but also using more heavy-handed tagging methods to partition TLB, cache, etc. for separate protection domains.
And it will be tough as no compiler supports it; moreover, C/C++ are architected from the beginning to not bother with runtime information about object types/sizes.
https://aws.amazon.com/security/security-bulletins/AWS-2019-...
I've noticed some Thinkpads with AMD CPUs but I feel like I'm on virgin ground when it comes to AMD and their integrated GPU offerings.
I believe other OEMs are developing similar offerings as well, but I can't find any quick links for newer SKUs like the Ryzen 7 3700U, which uses the improved Zen+ revision and should specifically improve battery life and heat issues.
There is a much larger market in 2019 for AMD laptops, so you should be able to find something to suit your needs.
Next year with the 7nm mobile chips will probably be much better.
https://www.huaweicentral.com/honor-magicbook-ryzen-7-versio...
The new 3700U model will probably be available on AliExpress next month or so. I would consider it, except that Linux support is unknown and it only has 8GB of RAM.
Realistically, the AMD laptops' lack of color accuracy compared to shit like the True Tone Intel Macs, and their lack of display connectivity, especially compared to a Thunderbolt-outfitted X1 Extreme that can connect to four 4K displays, is a bigger problem. Especially since external displays actually need the resolution.
Spectre, on the other hand, is harder both to fix in hardware and to attack because the victim context is itself tricked into speculatively executing code using attacker-supplied data that leaks information - it uses inherent properties of speculative execution rather than any kind of hardware bug, but it's only exploitable if there's some victim code that does exactly the right kind of processing on attacker-supplied data.
If that's the case, my CPUs are likely pinned to my VM. I could still have evil userland apps spying on my own VM, but I would not expect this to allow other VMs to spy on mine.
Not to these vulnerabilities. These are attacking memory that is "in flight" within a processor.
This is the terminology that Intel itself uses in its documentation to describe its products, though. To be fair, they say "physical processor" and "logical processor", not "core".
most of this seems to be behaving as intended, they just didn't foresee the side channels this opens up
What are they saying here?
> ...said it works “just like” in PCs
The number of mistakes in the TechCrunch article is atrocious.
Keep in mind, this article was posted at 3am pacific, 6am eastern. Assuming the author is in North America, he was probably under a deadline and rather tired.
I have found similar typos from prominent writers. Sometimes they email me back, which I appreciate.
I found one in an article by Cory Doctorow on boingboing. I checked on builtwith.com, and they use WordPress/Jetpack. Jetpack has a feature that will warn you if you try and publish something with spelling mistakes, it is just not enabled by default.
I let Mr Doctorow know all this, in a very polite manner, and he responded with "many thanks". I'm not big on celebrity worship, but it still made my day.
We merged that HN thread (https://news.ycombinator.com/item?id=19911465) into this one, via this other one (https://news.ycombinator.com/item?id=19911715).
I also note that the provided OSes are being updated with mitigations as well, so for complete mitigation of the issue you'll probably need to update your OS.
[1] https://aws.amazon.com/security/security-bulletins/AWS-2019-...
[2] https://cloud.google.com/compute/docs/security-bulletins#201...
[3] https://www.intel.com/content/www/us/en/security-center/advi...
> Chip designers are under so much pressure to deliver ever-faster CPUs that they’ll risk changing the meaning of your program, and possibly break it, in order to make it run faster.
> ...
> applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains that have now started becoming available and will continue to materialize over the next several years. For example, Intel is talking about someday producing 100-core chips; a single-threaded application can exploit at most 1/100 of such a chip’s potential throughput.
It seems the trend in programming languages is towards better concurrency support. But why don't we yet see 100-core chips? If chip makers had to forego all speculative execution and similar tricks, would that push us toward the many-core future?
https://twitter.com/IanColdwater/status/1128395135702585347?...
https://software.intel.com/security-software-guidance/softwa...
Hi. I'm trying to understand what you meant here. Is it that both "verw" and "l1d_flush followed by an lfence" are faster than the "mfence" approach you implemented in safelibc?
If so, why didn't you use these faster options yourself? My understanding was that these faster options needed to be handled at the hypervisor/kernel level, rather than in libc. If so, how is the attitude of glibc maintainers relevant?
Is that a reflection of engineering differences or a statistical byproduct of the market share of Intel CPUs?
I run AMD not because of the security implications but because I feel every dollar that goes to Intel's competition will push Intel, and thus the entire industry, forward.
So the cloud vendors are 97% minimum Intel, they're exquisitely vulnerable both technically and reputationally to these bugs, the stakes are existential for them and they have a lot of money they can throw at the problem, whereas the users of notebooks and desktops are a much more diffuse interest.
As I've mentioned many times in these discussions today, everyone had Spectre issues, and everyone but AMD has Meltdown ones. The more recent vulnerabilities are Intel only because they're using what was learned from those first two to attack Intel specific features like the SGX enclave.
> The safest workaround to prevent this extremely powerful attack is running trusted and untrusted applications on different physical machines.
Nope!
> If this is not feasible in given contexts, disabling Hyperthreading completely represents the safest mitigation.
Nope!
Shrugs?
That sounds like a contradiction --- if you can already execute code, I'd say you're quite privileged. It's unfortunate that their demo doesn't itself run in the browser using JS (I don't know if it's possible), because that's closer to what people might think of as "unprivileged".
The attacker has no control over the address from which data is leaked, therefore it is necessary to know when the victim application handles the interesting data.
This is a very important point that all the Spectre/Meltdown-originated side-channels have in common, so I think it deserves more attention: there's a huge difference between being able to read some random data (theoretically, a leak) and it being actionable (practically, to exploit it); of course as mentioned in the article there are certain data which has patterns, but things like encryption keys tend to be pretty much random --- and then there's the question of what exactly that key is protecting. Let's say you did manage to correctly read a whole TLS session key --- what are you going to do with it? How are you going to get access to the network traffic it's protecting? You have just as much chance that this same exploit will leak the bytes of that before it's encrypted, so the ability to do something "attackful" is still rather limited.
Even the data which has patterns, like the mentioned credit card numbers, still needs some other associated data (cardholder name, PIN, etc.) in order to actually be usable.
The unpredictability of what you get, and the speed at which you can read (the demo shows 31 seconds to read 12 bytes), IMHO leads to a situation where getting all the pieces to line up just right for one specific victim is a huge effort, and because it's timing-based, any small change in the environment could easily "shift the sand" and result in reading something entirely different from what you had planned with all the careful setup you did.
Using ZombieLoad as a covert channel, two VMs could communicate with each other even in scenarios where they are configured in a way that forbids direct interaction between them.
IMHO that example is stretching things a bit, because it's already possible to "signal" between VMs by using indicators as crude as CPU or disk usage --- all one VM has to do to "write" is "pulse" the CPU or disk usage in whatever pattern it wants, modulating it with the data it wants to send, and the other one can "read" just by timing how long operations take. Anyone who has ever experienced things like "this machine is more responsive now, I guess the build I was doing in the background is finished" has seen this simple side-channel in action.
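A toy sketch of that crude channel, just to show how little machinery it needs (the slot length and threshold are made up, and a real cross-VM channel would need calibration and error correction; here the sender and receiver halves are shown in one file only so it compiles):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define SLOT_NS 100000000L   /* 100 ms per bit, arbitrary */

    static long now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000L + ts.tv_nsec;
    }

    /* Sender side: burn CPU for a "1" bit, stay idle for a "0" bit. */
    static void send_bit(int bit)
    {
        long end = now_ns() + SLOT_NS;
        if (bit)
            while (now_ns() < end) { }     /* busy-loop */
        else
            usleep(SLOT_NS / 1000);
    }

    /* Receiver side (in reality a separate process or VM on the same
     * contended host): time a fixed chunk of work; it runs slower while
     * the sender is burning CPU in the same slot. */
    static int receive_bit(long threshold_ns)
    {
        volatile unsigned long x = 0;
        long start = now_ns();
        for (unsigned long i = 0; i < 50000000UL; i++)
            x += i;
        return (now_ns() - start) > threshold_ns;
    }

    int main(void)
    {
        send_bit(1);   /* in a real setup these run concurrently, not in sequence */
        printf("bit seen: %d (threshold is environment-specific)\n",
               receive_bit(SLOT_NS / 4));
        return 0;
    }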
I always interpreted "privileged" to mean "superuser". I.e. unrestricted. Or possibly the case of one user and another user. Having a program that can determine the URL you are visiting in the browser from memory when running as the same user is a different class than something that can do the same when run as any non-root user on the system. There's a reason it's common to "drop privileges" in a daemon after any initial setup that requires those privileges (such as binding to a low port).
If you're in a VM, you have no privileges over the host CPU, you can't switch to another VM or to the host itself. That's what's meant by unprivileged here.
"""> Virtualization seems to have a lot of security benefits.
You've been smoking something really mind altering, and I think you should share it.
x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit.
You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.
You've seen something on the shelf, and it has all sorts of pretty colours, and you've bought it.
That's all x86 virtualization is. """
[1] http://www.tylerkrpata.com/2007/10/theo-de-raadt-on-x86-virt...
[2] https://www.openbsd.org/faq/faq16.html
[1] probably isn't the best source out there, I was in a bit of a rush to find it but that is indeed the quote! Gotta either love or hate Theo I guess!
We will most likely see a continued divergence between "consumer silicon" which is designed for speed in a single tenant environment on your local desktop or laptop, and "cloud silicon" which is optimized to protect virtualization, be power efficient, etc. I'd predict that this will actually lead to increased efficiency and lower prices of cloud resources rather than the "death by a 1000 cuts" that you are proposing.
Even if there's no opportunity to switch away, eventually you bleed your customers dry and put them out of business. You will typically aim to price your offering at the equilibrium point where the loss of custom starts to increase faster than the gain in profit, and vice versa.
One situation in which an increase in your costs can be good, is if the same increase applies more to your competition. But, in this case multi-tenant cloud is hit harder than the competing alternative of private infrastructure.
Am I totally misunderstanding this? Someone please correct me if I'm wrong.
I don't think you need to go this far. You can probably get away with circuit-switching small blocks of hardware and fully resetting them between handovers, although you'd have to ensure sufficient randomisation / granularity to destroy side channels in the switching logic.
And as far as I am aware they are mostly Intel-only. Why? Why not AMD? Did something in Intel's design process go wrong? And yet all the cloud vendors are still buying Intel and giving very little business to AMD.
Running many instances of various untrusted code on the same client machine is "new": it came with web apps, and with mobile apps.
Until a few years ago, it was sort of a non-issue, because to exploit such a vulnerability one would need to ship a virus or a trojan, and with that approach there are many easier ways of escalating privileges.
Something like "cloud" likely existed on IBM mainframes under OS/VM [1], but System/370-compatible CPUs likely lacked all these exploitable speculative execution features.
Time sharing was very big in the 1970s, and non-OS/VM methods of sharing mainframes for batch processing were also big at times I'm less sure of.
Inviting complete randoms to routinely run untrusted code in your own security domain, as we do with browsers, that's "new". And thus the popularity of NoScript and uMatrix.
Second, as I understand it, Spectre and Meltdown really started this whole parade because prior to those vulnerabilities, speculative execution attacks were something only academics ever talked about - everyone assumed it would be too difficult to pull off in the real world. When that received wisdom was proved wrong, it probably opened the floodgates for researchers - both in terms of intellectual interest and money.
Also, re: why Intel and not AMD... I think Intel is probably a higher-dollar target due to their dominance in the server market, but also probably because they have been neglecting QC for years... see, e.g., http://danluu.com/cpu-bugs/
Things have changed a lot since then: OS kernels became faster by eliminating a lot of unnecessary (?) cross-process overhead; browser makers made a number of potentially problematic decisions ("let's allow Javascript to create CPU threads — what could possibly go wrong?"); Linux kernel developers made a few potentially problematic decisions ("let's allow unprivileged processes to invoke arbitrary BPF bytecode — that worked for Java, so what could possibly go wrong?")
A lot of small security lapses added up until it became viable to use CPU flaws to actually target ordinary users. To add insult to injury, certain corporations started spreading the myth that well-known insecure practices — such as knowingly running local software from questionable authors — are "safe enough" for the general population. The web page for this topic even talks about running untrusted Android software, as if Android had some kind of impenetrable security boundary around untrusted apps.
It's a new vulnerability class. Prior to Spectre, nobody thought that code which didn't execute (and couldn't execute) could affect architectural state in an observable way. It's hard to overstate how bizarre the vulnerabilities from the Spectre family are from a software point of view: it's leaking data from code that not only didn't execute yet, but also can never execute, and in some cases doesn't even exist! It's like receiving a packet your future self sent to the past, except that your future self had been dead for two years when he sent the packet, and for some reason he's actually a parrot.
Once a new vulnerability class is discovered, researchers will start looking for new bugs in and around that class. Which is why we have seen lately so many issues disclosed around speculative execution and data leaked through shared microarchitectural state.
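For anyone who hasn't seen it, the canonical Spectre v1 (bounds check bypass) gadget from the original paper looks roughly like this; the array names follow the paper's example, and the snippet is only the victim-side pattern, not a working exploit:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];   /* probe array: one page per possible byte value */
    size_t  array1_size = 16;

    void victim_function(size_t x)
    {
        if (x < array1_size) {
            /* If the branch predictor is trained to expect "in bounds" and an
             * attacker then supplies an out-of-bounds x, this load still runs
             * speculatively; which array2 line ends up cached encodes the value
             * of array1[x], even though the architectural result is discarded. */
            volatile uint8_t tmp = array2[array1[x] * 4096];
            (void)tmp;
        }
    }

    int main(void)
    {
        victim_function(0);   /* just so the snippet compiles and runs */
        return 0;
    }

The attacker then recovers the byte by timing loads from each array2 slot, i.e. the Flush+Reload read-out discussed elsewhere in this thread.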
AMD are not better. They’re probably worse. They’ll be looked at when the Intel tree stops bearing fruit. But finding an Intel bug is higher impact, so that’s what researchers want to look at.
This argument makes zero sense and fails real-world inspection. You see, researchers did publish the fact that Spectre affected AMD.
For this vulnerability, they were unable to reproduce the bug on AMD (their words). Which means that they tried.
The attacks are being attempted. AMD just didn't screw up as badly.
There is a home page about today's vulnerability disclosures at https://news.ycombinator.com/item?id=19911715. We're disentangling these threads so discussion can focus on what's specific about the two major discoveries. At least I think there are two.
https://www.theguardian.com/world/2014/jul/15/germany-typewr...
But assuming a typewriter has no attack vectors is just as foolish as trusting insecure networks, IMO.
https://arstechnica.com/information-technology/2015/10/how-s...
Also: detecting text through keystrokes previously discussed here https://news.ycombinator.com/item?id=7448976 (https://people.eecs.berkeley.edu/~tygar/papers/Keyboard_Acou...)
Heck while I can't find a quick source, I remember a story about how the CIA designs rooms/walls and buildings to prevent sound from predictably bouncing through rooms in ways that could be captured from afar.
Spooks are usually 10 steps ahead of public common sense in this regard.
https://www.dovermicrosystems.com/
Academics keep coming up with stuff for timing channels like partitioning, masking, and randomizing components. Personally, if not physical separation, I'd just do SMP with the secret parts on a different CPU than the untrusted parts, both memory-safe on a separation kernel to isolate them. One design used different DIMMs, too.
These are already options, as another commenter pointed out. If you need this kind of protection, it is available, at significant cost.
P.S. The Holy Church of Progress keeps flagging the heresy of I-Ching out of existence, may it prevail in its glorious ways. Curious fact: expressing your disagreement in written form takes more neurons than the flagging reflex does. Try and ye shall succeed!