people were already diffing kernel commits and figuring out which ones were security fixes long before llms. if a patch lands publicly, the race has basically already started.
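that diffing workflow can be approximated with a crude keyword pass. a minimal sketch (the `SECURITY_HINTS` list and `security_score` function are hypothetical, not any real triage tool):

```python
import re

# Hypothetical heuristic: flag commits whose message or diff smells like a
# silent security fix. Real patch-triage tooling is far more sophisticated.
SECURITY_HINTS = [
    r"\boverflow\b",
    r"\buse[- ]after[- ]free\b",
    r"\bout[- ]of[- ]bounds\b",
    r"\bbounds check\b",
    r"CVE-\d{4}-\d+",
]

def security_score(patch_text: str) -> int:
    """Count how many distinct security-flavored hints a patch contains."""
    return sum(
        1 for pat in SECURITY_HINTS
        if re.search(pat, patch_text, re.IGNORECASE)
    )

# A commit that quietly fixes a memory-safety bug scores higher than a
# cosmetic one:
fix = "mm: add missing bounds check to avoid heap overflow on large reads"
docs = "docs: fix typo in README"
assert security_score(fix) > security_score(docs)
```

the point isn't that this toy works well, it's that even a dumb filter narrows thousands of commits down to a handful worth a human (or now an llm) looking at.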
also not sure shorter embargoes really help. the orgs that can patch in hours are already fine. everyone else still takes days or weeks.
if anything, cheaper exploit generation probably makes coordinated disclosure more important, not less.
With skill, and usually not consistently and systematically. With AI, anyone can do this to any software.
> not sure shorter embargoes really help
Why 90 days versus 2 years? The author is arguing the factors that set that balance have shifted, given the frequency of simultaneous discovery. The embargo window isn’t an actual window, just an illusion, if the exploit is going to be found by several people outside the embargo anyway.
> cheaper exploit generation probably makes coordinated disclosure more important
I agree. But it also makes it less viable. If script kiddies can find and exploit zero days, the capacity to co-ordinate breaks down.
There was always a guild ethic that drove white-hat culture. If the guild is broken, the ethic has nothing to stand on.
How do you know? If the people who like to crow about vulnerabilities aren't doing it, it doesn't mean that the people who are actually in a position to exploit them systematically and effectively aren't doing it.
Those embargoes have always been dangerous, because they create a false sense of security. But, as you point out...
> With AI, anyone can do this to any software.
Yep. Even if it hadn't been true before, it's clear that now you just have to assume that everybody relevant will immediately recognize the security impact of any patch that gets published. That includes both bugs fixed and bugs introduced.
... and as the AI gets better, you're going to have to assume that you don't even have to publish a patch. Or source code. Within way less time than it's going to take people to admit it and adjust, any vulnerability in any software available for inspection is going to be instant public knowledge. Or at least public among anybody who matters.
Shouldn't this naturally lead to a state where all (new) code is vulnerability-free? If AI vulnerability detection friction becomes low enough it'll become common/forced practice to pre-scan code.
We know because we can see the effect on the average rate of vulnerability discovery and exploitation, and it's definitely going up very fast. Until recently, vulnerabilities were relatively hard to find, and finding them was done by a very restricted group of people worldwide, which made them quite valuable. Not any more.
I would like to see actual evidence of this, not... vibes
I mean, this reeks of "Anyone is a Principal developer now" when the truth is there is still work to do.
I haven't seen this kind of thing before, but I get the impression, despite all the hype, that this will be a frequent phenomenon now thanks to LLMs.
Sure the mechanics haven't changed, but the scale and risk has. The number of attackers capable of taking advantage of the vulnerability has dramatically increased.
So it's not surprising Dirtyfrag was disclosed by a fix in the Linux kernel. [2]
[1] https://www.zdnet.com/article/torvalds-criticises-the-securi...
Many of today's workloads are containerized and run on roughly ephemeral nodes that can be swapped out easily; K8s version upgrades more or less force this. We tend to run more and more off-the-shelf hardware and worry less about individual node failures now.
In-memory updates are also not magic, and can be limited: they require data-structure semantics to not really change, and they can create their own class of issues/bugs, including security ones.
While I'm sure there are still use cases that dictate this type of update, the need is a lot less than it was 15 years ago, so I doubt the patent expiry will do much to the ecosystem.
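the data-structure constraint can be made concrete with a toy sketch (`OLD_FMT`/`NEW_FMT` are invented field layouts standing in for kernel structs, not anything real):

```python
import struct

# Toy illustration of why a live patch must not change in-memory layout.
# "Old" code stores a record as two 32-bit ints: (refcount, flags).
OLD_FMT = "<ii"
# A careless "patch" widens flags to 64 bits.
NEW_FMT = "<iq"

live_bytes = struct.pack(OLD_FMT, 3, 0x10)  # state already sitting in memory

# The patched code can no longer parse existing state: the sizes differ,
# and even a same-size reordering would silently swap field meanings.
try:
    struct.unpack(NEW_FMT, live_bytes)
    parse_ok = True
except struct.error:
    parse_ok = False

assert not parse_ok  # old in-memory state is unreadable under the new layout
```

real live-patching systems either forbid layout changes outright or ship migration shims for every touched struct, which is exactly the limitation being described.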