Or, framed differently, if Intel or AMD announced a new gamer CPU tomorrow that was 3x faster in most games but utterly unsafe against all Meltdown/Spectre-class vulns, how fast do you think they'd sell out?
Can you elaborate on that? It sounds interesting
I was always morbidly curious about programming those, but never to the point of actually buying one, and back when we had a few of the cards in my office I always had more things to do in a day than time to do them.
Well, many people have gaming computers they won't use for anything serious, so I would buy one too. And on locked-down gaming consoles, I suppose the risk isn't too high?
[1]:https://www.club386.com/assassins-creed-shadows-drm-wants-to...
Given that we have effectively two browser platforms (Chromium and Firefox) and two operating systems to contend with (Linux and Windows), it seems entirely tractable to get the security sensitive threads scheduled to the "S cores".
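If such "S cores" existed, steering the sensitive threads onto them would look much like ordinary CPU affinity today. A minimal sketch in Python (Linux-only, via `os.sched_setaffinity`), where the assumption that the S cores are logical CPUs 0 and 1 is purely illustrative:

```python
import os

# Hypothetical: which logical CPUs are the speculation-hardened "S cores".
# Real hardware would need to advertise this; {0, 1} is just an assumption.
S_CORES = {0, 1}

def pin_to_secure_cores(cores=S_CORES):
    """Restrict the current process to the given cores, ignoring any
    it doesn't actually have, and return the resulting affinity set."""
    available = os.sched_getaffinity(0)   # CPUs we are allowed to run on now
    target = cores & available            # never request CPUs we don't have
    if target:
        os.sched_setaffinity(0, target)   # restrict scheduling to the S cores
    return os.sched_getaffinity(0)
```

A browser would do the same (per thread, via `pthread_setaffinity_np` or similar) for the threads executing untrusted JavaScript, which is a small, well-contained change for the two browser engines in question.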
So I think it's the JavaScript that should run on these hypothetical cores.
Though perhaps a few other operations might choose to use them as well.
A SINGLE thread's best-case and worst-case timing have to be the same to avoid leaking through speculation...
However, threads from completely unrelated security domains could be run instead, if ready. Most likely the 'next' thread on the same unit, with repacking of free slots deferred to the next time the scheduler runs.
++ Added ++
It might be possible to let operations that don't cross security boundaries, i.e. operations within a program's own space, run with different performance characteristics than operations that do.
An 'enhanced' level of protection might also be offered for threads running VM-like guest code (such as browsers), avoiding the more aggressive speculation operations.
Any operation resembling a segmentation fault, relative to that thread's allowed memory accesses, could result in forfeiting its timeslice. That would only leak what the thread should already know anyway: which memory it's allowed to access, not the contents of other memory segments.
This introduces HT side-channel vulnerabilities. You would have to statically partition all the caches and branch predictors.
Also, this is more or less how GPUs work. It's great for high-throughput code, terrible for latency-sensitive code.
10-20 min, depending on how many they make :)
I do realize that gamers aren't the most logical bunch, but aren't most games GPU-bound nowadays?
Which makes it kind of terrible that the kernel has these mitigations turned on by default, stealing somewhere in the neighborhood of 20-60% of performance on older gen hardware, just because the kernel has to roll with "one size fits all" defaults.
If you don't know what kernel parameters are and what they affect, it's likely safer to go with all the mitigations enabled by default :-|
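Before touching boot parameters like `mitigations=off`, it's worth checking what the running kernel actually reports for your CPU; some entries may already say "Not affected", in which case there's nothing to reclaim. A small sketch (Linux-only; the sysfs path is the kernel's standard location, the helper name is mine):

```python
import glob
import os

def read_mitigations(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return a dict mapping each vulnerability the kernel knows about to
    its reported status, e.g. 'Mitigation: ...', 'Vulnerable', or
    'Not affected'. Returns an empty dict on non-Linux systems."""
    status = {}
    for path in glob.glob(os.path.join(base, "*")):
        try:
            with open(path) as f:
                status[os.path.basename(path)] = f.read().strip()
        except OSError:
            pass  # some entries can be unreadable on certain kernels
    return status

if __name__ == "__main__":
    for name, state in sorted(read_mitigations().items()):
        print(f"{name}: {state}")
```

The same information is available from the shell with `grep . /sys/devices/system/cpu/vulnerabilities/*`.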
I think things will only shift once we have systems that ship with sandboxes that are minimally optimized and fully isolated. Until then, we're forced to assume the worst.
This should be fun, though, for someone with enough time to chase down and find the bug. Depending on the consequences of the bug and the conditions under which it hits, maybe you could even write an exploit with it (either going from JavaScript to the browser, or from user mode to the kernel) :) Though I strongly suspect that reverse engineering and weaponizing the bug without any insider knowledge would be exceedingly difficult. And anyway, there's a decent chance this issue just leads to a hang/livelock/MCE, which would make it pointless to exploit.
"Attacks only get better."
So as long as things aren't perfectly isolated from each other, there's always room for a bad actor to snoop.
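The same principle shows up in pure software: any operation whose timing depends on secret data is a side channel. A toy illustration (not the hardware case discussed above, but the same idea), contrasting an early-exit byte comparison with Python's constant-time `hmac.compare_digest`:

```python
import hmac
import timeit

def naive_equal(a, b):
    """Early-exit comparison: runtime depends on how many leading bytes
    match, which leaks information about where a guess first diverges."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"s" * 4096
wrong_at_start = b"x" + b"s" * 4095   # rejected on the first byte
wrong_at_end = b"s" * 4095 + b"x"     # rejected only on the last byte

t_start = timeit.timeit(lambda: naive_equal(secret, wrong_at_start), number=2000)
t_end = timeit.timeit(lambda: naive_equal(secret, wrong_at_end), number=2000)
# t_end is typically noticeably larger than t_start, so an attacker who can
# time comparisons can recover the secret byte by byte.

# hmac.compare_digest takes time independent of where the mismatch is:
assert not hmac.compare_digest(secret, wrong_at_end)
```

An attacker doesn't need perfect measurements, only enough samples to average out the noise, which is why "not perfectly isolated" tends to mean "eventually leakable".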