I get that companies sit on vulnerabilities, but isn't fair warning... fair?
You've got it backwards.
The vuln exists, so users are already at risk, and you don't know who else knows about it besides the people who reported it.
Disclosing as soon as known means your customers can decide for themselves what action they want to take. Maybe they wait for you, maybe they kill the service temporarily, maybe they kill it permanently. That's their choice to make.
Denying your customers information until you've had time to fix the vuln is really just about taking away their agency to protect your company's bottom line: you don't let them know they're at risk until you can say, "but we already fixed it, so you don't need to stop using us to secure yourself, just update!"
The real threat comes from the vast number of opportunistic attackers who lack the skills to discover vulnerabilities themselves but are perfectly capable of weaponizing public disclosures and proof-of-concepts. These bottom-feeders represent a much larger pool of attackers, one that only materializes after public disclosure.
Responsible disclosure gives vendors time to patch before this larger wave of attackers gets access to the vulnerability information. It's not about protecting company reputation - it's about minimizing the window of mass exploitation.
Timing the disclosure to match the fix release is actually the most practical approach for everyone involved. It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability. Providing the fix simultaneously with disclosure allows for orderly patch deployment without service interruption.
This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
> It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
You decided they are better off not having to make that choice, so you make it for them whether they like it or not.
In fact, you made the worst choice for them, because you chose that they'd remain unknowingly vulnerable, so they can't even put in temporary mitigations or extra monitoring, or know to be on the lookout for anything strange.
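To make "temporary mitigations or extra monitoring" concrete, here's a minimal hypothetical sketch (the endpoint name and log format are invented for illustration, not from any real advisory): a team that's been told "this path may be vulnerable" can at least watch for probes while waiting on the vendor's patch, but only if they're told.

```python
import re
import sys

# Hypothetical: suppose a disclosure said "endpoint /export may be
# vulnerable". A crude stopgap is to watch access logs for it while
# waiting for the vendor's fix.
SUSPECT = re.compile(r'"(GET|POST) /export\b')  # assumed vulnerable path

# Usage (illustrative): tail -f access.log | python watch.py
for line in sys.stdin:
    if SUSPECT.search(line):
        print(f"ALERT: possible probe of unpatched endpoint: {line.strip()}")
```

None of that is possible if the customer doesn't know the endpoint is at risk in the first place.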
> Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability.
Now this is an interesting part, because the first half is true depending on the service, but bad (that's a BCDR or internet outage issue waiting to happen), and the second half is just wrong (show me a company that doesn't know and accept that they have past-SLA vulns unpatched, criticals included, and I'll show you a company that's lying either to themselves or their customers).
> This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
This is not a balanced approach, this is a lowest-common-denominator approach that favors service providers over service users. You don't know if it protects someone's security needs, because people have different security needs: a journalist being targeted by a state actor can have the same iPhone as someone's retired grandma, and the same goes for an infotainment system, a home assistant, etc.
I've managed bug bounty and unpaid disclosure programs, professionally, and I know firsthand that it's the company's interests that responsible disclosure serves, first and foremost.
If they do nothing after a reasonable amount of time, escalate to regulators or change banks. Then once they announce that some processes have changed: "thanks to XXX working at YYY for helping us during this". You win, they win, clients win, everybody wins.
Unwanted public disclosure directly leads to public exploitation, there is nothing good at all about it.
For example, there is an RCE in Discord (statistically all but certain given the rendering engine, just not public yet), and it's going to be exploited only if someone shares the technical details.
If you don’t disclose it, it’s not like someone else will discover it tomorrow. It’s possible, but not more likely than it was yesterday. If you disclose it, you make sure that everybody with malicious intent knows about it.
Which is again, a problem created by the companies themselves. The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.
But instead companies refuse to tell their customers when they're at risk, and make it out to be the researchers that are endangering people, when those researchers don't wait on an arbitrary, open-ended future date.
> Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually.
Unless you know who knows what already, this is unprovable supposition (it could already be being exploited in the wild), and the argument about whether PoC code is good or bad is well-trodden and covers this question.
You are just making the argument that obscurity is security, and it's not.
Instead of just one bad actor using that vulnerability on a few select targets, your proposal will have tens of thousands of bots performing drive-by attacks on millions of victims.
With current practice, you can be as sloppy and reckless as you want, and when that recklessness creates vulnerabilities, the "responsibility" somehow gets pushed onto the person who discovers them, so you're never discouraged from being reckless.
Personally, I think we need to keep the good part of responsible disclosure, but also phase in real penalties for the parties responsible for creating vulnerabilities that are exploited.
(A separate matter is the responsibility of parties that exploit the vulnerabilities. Some of those may warrant stronger criminal-judicial or military responses than they appear to receive.)
Ideal is a societal culture of responsibility, but in the US in some ways we've been conditioning people to be antisocial for decades, including by elevating some of the most greedy and arrogant to role models.
I have a problem with this framing. Sure, some vulnerabilities are the result of recklessness, and there’s clearly a problem to be solved when it comes to companies shipping obviously shoddy code.
But many vulnerabilities happen despite great care being taken to ship quality code. It is unfortunately the nature of the beast. A sufficiently complex system will result in vulnerabilities even a careful person could not have predicted.
To me, the issue is that software now runs the world, despite these inherent limitations of human developers and the process of software development. It’s deployed in ever more critical situations, despite the industry not having well defined and enforceable standards like you’d find in some engineering disciplines.
What you’re describing is a scenario that would force developers to just stop making software, on top of putting significantly more people at risk.
I still believe the industry has a problem that needs to be solved, and it needs a broad culture shift in the dev community, but disagree that shining a bright light on every hole such that it causes massive harm to “make devs accountable” is a good or even reasonable solution.
At this point, the software development field is about operating within the system decided by those others, with the goal of personally getting money.
After you've made the CEO and board accountable, I think dev culture will adapt almost immediately.
Beware of attempts to push engineering licensing or certifications, etc. as a solution here. Based on everything we've seen in the field in recent decades, that will just be used at the corporate level as a compliance letter-but-not-spirit tool to evade responsibility (as well as a moat to upstart competitors), and a vendor market opportunity for incompetent leeches.
First you make the CEO and board accountable, then let the dev culture change, and then, once you have a culture of people taking responsibility, you'll have the foundation to add licensing (designed in good faith) as an extra check, if that looks worthwhile.
Good. I work in code security/SBOM; the amount of shit software from entities that should otherwise be creating secure software should worry you.
Businesses care very little about security and far more about pushing new features out fast. And why not? There is no real penalty for it.
I think as a field we're actually reasonably good at quantifying most of these risks and applying practices to reduce them. Once in a blue moon you do get a "didn't see that coming" case, but those cause a very minor share of the damage people suffer from software vulnerabilities. Most harm is caused by classes of vulnerabilities that are boringly pedestrian.
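As an illustration of "boringly pedestrian", here's a minimal, hypothetical sketch of the classic SQL injection pattern and its equally boring fix (table and function names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name: str):
    # Pedestrian SQL injection: attacker-controlled input is pasted
    # straight into the query string.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_fixed(name: str):
    # The boring, well-known fix: a parameterized query, so input is
    # treated as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_vulnerable("x' OR '1'='1"))  # leaks every row
print(find_user_fixed("x' OR '1'='1"))       # []
```

This class of bug has been understood for decades, which is exactly the point: the harm mostly comes from known, preventable patterns, not exotic surprises.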
One hour is absurd for another reason: what timezone are you in? And they? What country, and is it a holiday there?
You may say "but vulnerability", and yes. 100% no heel dragging.
But not all companies are staffed with 100k devs, and a few days to a week is a balance between letting every script kiddie know and the potential that it's already being exploited in the wild.
If one is going to counter unreasonable stupidity, use reasonable sensibility. One hour is the same as no warning.
If the reason for responsible disclosure is to ensure that no member of the public is harmed as a result of said disclosure, should it not be a conversation between the security researcher and the company?
The security researcher should have an approximate idea of what it would take to fix, and give a reasonable amount of time for a fix. If the fix ought to be easy, a short window should suffice, and vice versa.
And this is mostly BS too. People don't write bug-free software; they write features.
Other industries had to license professional engineers to keep this kind of crap from being a regular issue.
Maybe it's time we get professional standards if this is how we are going to behave?