I would expect Microsoft to handle security vulnerabilities with a higher priority. Not sure why they are dropping this on the floor.
The idea would be that if you found a vulnerability in a product whose vendor was likely to pour more money into gag orders and legal threats than into fixing the vulnerability, you would publish the vulnerability encrypted in such a way that it would take several years of continuous computation to get the decryption key. Legal threats and/or general foot dragging couldn't put the cat back in the bag.
Sometimes I regret not publishing the tool.
Microsoft has the resources to fix these; I'm not sure what their excuse is (and it may be valid), but vulnerabilities like this should take highest priority.
I agree, and have to assume the time to fix goes into back-testing and checking with big vendors/users whether the fix inadvertently breaks something they were relying on. At this point, how many Windows bugs are now features set in stone, to be carried on in perpetuity because so much software has been built around the buggy behavior?
> "By manipulating a document's elements an attacker can force a dangling pointer to be reused after it has been freed. An attacker can leverage this vulnerability to execute code under the context of the current process."
The first and second sentences there feel like an 'and then a miracle happens' argument (http://star.psy.ohio-state.edu/coglab/Miracle.html). I get that, in some cases, dangling pointers might allow a bit of uploaded data to be treated like a bit of internal data. But it seems to me like a piece of extraordinarily unlikely bad luck for that to allow executing arbitrary code.
So I don't dismiss that there is a theoretical risk, but can anyone suggest how much practical risk these actually carry? In particular, is the risk of such an exploit greater than the risk of an attacker finding a new weakness? If not, then I can understand why there is no great urgency to patch these flaws.
Because so many browser fuzzing crashes are UAFs, people have put a lot of effort into developing reliable techniques for exploiting them.
See, e.g., http://www.rapid7.com/db/modules/exploit/windows/browser/ms1... for a reasonably reliable example.
Twenty years ago, maybe this argument carried the day. Don't even consider using it today. The tooling, techniques, and skills are far beyond what you could dream of if you are not in this world.
This is not quite the same thing we are talking about, but let me give you a different example. An obscure cross-site-scripting attack is no big deal, right? Well, courtesy of BeEF [1], if the XSS can be leveraged to get the victim to download a script, which is a low bar, BeEF can then be used to proxy web access through the victim's browser, allowing an attacker to lever up from "small XSS" to "crawling your intranet with the internal credentials of the compromised user".
Yow!
Do not ever count on difficulty of exploit as a defense anymore. In many cases, the reason these people aren't providing off-the-shelf exploits for this sort of thing isn't that it's too difficult to make practical; it's that in the security world it is now too trivial to be worth spelling out. Attacker capabilities (and pen-testing capabilities) have skyrocketed in the past ten years, but defenders, for the most part, are still operating like it's 1995, when the idea that a program might be used on a network was still some sort of major revelation.
(I'm on the defense side personally. It feels about like this: https://youtu.be/MPt7Kbj2YNM?t=2m11s In theory, I am powerful, in theory I control the field, in theory all the advantages should be mine, but....)
I also wasn't making any comment about 'banking on difficulty of exploit'; what I was asking for was relative risk. I think that all code is exploitable. The question I had was: is the exploitation of a particular UAF bug sufficiently easy that it outweighs the base risk of a new exploit being found? If I have finite resources, understanding where to apply them to reduce risk is important.
The other responses have answered my question in some detail.
If you can get the target to an attacker-controlled website, it shouldn't be that hard to pull off most of the time, though it's definitely not deterministic.
(Man, remind me to check that this isn't all horribly wrong after defcon...)
IE has been a complete debacle since its inception.