Properly designed bug bounty programs are a cornerstone for any company that cares even remotely about the security of its product, period.
The idea that misaligned incentives arise because poor bug reports are free to submit is ignorant - and worse, toxic, because it sounds so true to an executive who has no actual understanding of the issue.
A quality bug report should take a reviewer no more than a minute to read and decide whether it's really a bug. If it can't be judged that quickly, it should be rejected with a request for clearer details. For example, a DOM-based XSS attack can be reported with just a target URL, and the problem is immediately clear. That would take ten seconds to analyze.
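As a sketch of why such a report is quick to read (hypothetical URL and page, with the vulnerable `innerHTML` sink simulated in Python):

```python
from urllib.parse import urlparse, unquote

# Hypothetical report: nothing but a URL, with the payload in the fragment.
report_url = "https://example.com/search#<img src=x onerror=alert(1)>"

# The vulnerable page does the equivalent of:
#   document.getElementById('results').innerHTML = location.hash
# simulated here as a plain string substitution.
fragment = unquote(urlparse(report_url).fragment)
rendered = "<div id='results'>" + fragment + "</div>"

# The attacker-controlled markup lands in the page unescaped,
# so a reviewer can see the bug from the URL alone.
print("onerror=alert(1)" in rendered)  # True
```

The point is that the entire reproduction is encoded in the URL itself - the reviewer just loads it and watches the payload fire.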
Additionally, most bugs reported to decent-sized companies come from someone who has reported to that company before. If an individual consistently reports good bugs - or consistently bad ones - it's quite easy to prioritize whose emails get read first.
http://blogs.msdn.com/b/oldnewthing/archive/2011/12/15/10247...
(I've never been on the receiving end of a security mailbox, so I have no personal testimony as to the reasonableness of this approach.)
The bugs described in this post, such as the duplicate account creation one, are not going to impact a company's top or bottom line, so it is questionable whether they warrant even a small merit award - the idea that spawned this thread.
Now that I've sufficiently established my experience, allow me to give my side:
1. You will never receive $100,000 for selling a vulnerability in PayPal. You probably couldn't even find a buyer for it on the "black market." I have explained why repeatedly on Hacker News before, so I'm just going to link this: http://breakingbits.net/2015/04/01/understanding-vulnerabili...
2. Bug bounties are not always a net positive for an organization. They are also not a cornerstone of good security posture. A foundational focus on robust software security would start with various other things, at least until the financials are worked out and there is someone knowledgeable enough to read incoming reports.
Only about 7% of reports submitted to companies through responsible disclosure programs are valid. This is especially true for paid programs, where the validity percentage often drops to 3% or 4%. Loads of people who know nothing about software security try to find bugs, desperate for the gold rush of bounties they see headlining places like HN. They submit spurious reports, and as a result the signal-to-noise ratio of responsible disclosure is fantastically bad. What this means in practice is that the average organization spends between 50 and 300 hours a month investigating incoming security reports.
You can quickly see how the cost adds up here. I'm not an executive or manager trying to cut costs - I've managed bug bounties for plenty of startups and Fortune 500 companies. I've also reported bugs that loads of people tell me I could have sold for "millions" of dollars - and received nothing for it.
I love bug bounties. I run them, I participate in them. But they can be a frivolous waste of time for development teams without a solid enough grasp of security to review incoming reports, and a waste of money in the worst case.
3. I'm sorry, but you lose credibility by claiming most security reports can be qualified in a minute or less. You can certainly throw out many in a similar time frame, perhaps five minutes, but real vulnerabilities? No.
If I report a server-side request forgery in your API that requires a very specific set of actions to occur in an obscure, undocumented application function, you will not qualify this quickly. Unless you are literally verifying a CSRF issue, it is completely unrealistic to assume this.
A race condition will not be qualified in a minute. A buffer overflow will not be qualified in a minute. Budget an hour per report, and be happy when you come across the reports that take only a few minutes. XSS and CSRF are comparatively simple to verify given a good report, yes, but most other classes aren't.
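To illustrate what a reviewer actually has to reproduce for a race condition, here is a minimal sketch of a check-then-act window - a hypothetical single-use coupon redemption, with the two concurrent "requests" simulated as threads:

```python
import threading
import time

# Hypothetical single-use coupon redemption with a check-then-act gap.
redeem_count = 0
already_used = False
barrier = threading.Barrier(2)  # both "requests" arrive at the same moment

def redeem():
    global redeem_count, already_used
    barrier.wait()
    if not already_used:          # check
        time.sleep(0.05)          # processing delay: the race window
        redeem_count += 1         # act
        already_used = True

threads = [threading.Thread(target=redeem) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both requests passed the check before either acted.
print(redeem_count)  # 2
```

In a toy like this the window is made deterministic on purpose; against a live system the reviewer has to guess at the window, time the requests, and often retry many times - which is exactly why these reports take well over a minute to qualify.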
Let's add to this the folks who can find great vulns but write bad reports. No exploit code, but they found something real? Good luck verifying it. I spoke to a fellow infosec engineer the other day, and he told me he spent an entire morning of his work day verifying a report that came in. Not patching or even triaging, mind you - verifying. Most security teams do not have the Olympic-level efficiency and skill-set diversity that Google's and Facebook's do - it is unreasonable to assume a report can, or even should, be verified quickly.
This is all to say that I believe your outlook is not consistent with reality, with all due respect. Bug bounties are not a simple decision to make. I've seen development teams swamped, overwhelmed and jaded from the reports they receive.
That was exactly what I was trying to convey: real vulnerabilities can be separated from non-issues quite quickly, because the latter mostly entail things that can be checked in a matter of minutes.
> You will never receive $100,000 for selling a vulnerability in PayPal. You probably couldn't even find a buyer for it on the "black market."
$100,000 may have been slightly inflated, but with a bit of creativity it isn't that hard to believe. Orchestrated correctly, one could walk away with a few million dollars from exploiting such a vulnerability.
> This is all to say that I believe your outlook is not consistent with reality, with all due respect. Bug bounties are not a simple decision to make. I've seen development teams swamped, overwhelmed and jaded from the reports they receive.
Or perhaps you and I have simply experienced different realities - I have not seen development teams swamped by bug bounty programs, and I have seen major security improvements come about as a direct result of one.
Of the perhaps 15-20 companies (albeit all under $1B market cap) I've spoken or worked with regarding bug bounty programs or security in general, none were receiving more than a handful of reports a week, which took up perhaps two hours of an engineer's time.
What about the non-issues that are reported with complicated conditions but don't actually work? Just because you can throw out the obviously bad items doesn't mean the rest are real.
>Orchestrated correctly one could walk away with a few million dollars from exploiting such a vulnerability.
Exploiting it is rather different from selling it, though, right? And since a vuln in a website can literally be closed immediately, and PayPal has whole divisions dedicated to preventing and undoing the damage you can do even with "account takeover", it'd be quite a risk to pay someone cash for such a vulnerability. At the first slip, the value drops to $0. Add to that all the difficulty of verifying the bug and establishing trust between both parties. Seems rather difficult.
With the state of the media in the infosec industry, having your finding widely publicized doesn't mean much, either.
I would know far more millionaire engineers/hackers if that were actually true.
> go to federal prison for 30 years
If one were talented enough to find such a vuln, it is hardly a stretch to say they would be smart enough to avoid getting caught.
... This is plainly not true. First, the ease of finding a bug in a web app varies considerably. This article, for instance, was simply about resending requests quickly. It doesn't necessarily require amazing intellect to come across such a bug. Look at famous "hackers" that dicked around with querystrings and got into all sorts of fun.
Second, even if someone is smart and figures out how to solve a certain problem to gain root, it does not mean they're clever, aware, or dedicated enough to maintain opsec. One mistake, any time, and you're toast.
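For what it's worth, the "dicking around with querystrings" class of bug mentioned above is usually an insecure direct object reference, and it really does take no brilliance to find. A toy sketch (hypothetical endpoint and data, simulated in Python):

```python
# Hypothetical endpoint that trusts the querystring's account_id outright -
# a classic insecure direct object reference (IDOR).
statements = {"1001": "alice's statement", "1002": "bob's statement"}

def get_statement(query: str, logged_in_user: str) -> str:
    params = dict(pair.split("=") for pair in query.split("&"))
    # Flaw: no check that the requested account belongs to logged_in_user.
    return statements[params["account_id"]]

# Alice is logged in, edits one number in the querystring,
# and reads Bob's data.
print(get_statement("account_id=1002", logged_in_user="alice"))
```

No exploit skill required - just changing a number in the address bar - which is the point about how easily some of these bugs fall out.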