I get into your home by bypassing (poor) security. I take pictures and make copies of anything inside. Then I publicly announce the breach and demand that you fix your security based on a deadline I made up. Then I say "trust me, bro" when I promise to never reveal the data I stole.
Nobody would find any of that moral. The analogy breaks down because your home is not a place where sensitive data about lots of people is stored. But even then, if you did the same thing in a physical place where that was the case, you'd simply be arrested, if not (accidentally) shot.
I do agree that these security researchers are ultimately doing a good thing, but they should not be this naive and aggressive about it.
I'd say you checking the front door to find it unlocked, then taking a few pictures for proof, is perfectly moral. In this case, I think most people would agree it would be a step too far to skip coming to me first and instead immediately announce to the entire neighborhood that I'm being incredibly lazy and reckless with their valuables (on top of outright lying to all of them).
> So we did what any good security researcher does: We responsibly disclosed what we found. We wrote a detailed vulnerability disclosure report. We suggested remediations. And we proactively agreed not to talk about our findings publicly before an embargo date to give them time to fix the issues. Then we sent them the report via email.
This is why the whole “I can’t believe my classmates threatened legal action” line of thinking doesn’t make sense. They weren’t acting like classmates themselves. They were acting like professionals. I imagine the embargo date wasn’t well-received.
It’s also interesting that they listed all of the steps they followed that a “good security researcher” would do. So why didn’t they start with communication first before trying to hack the system? Good security researchers do that. (Not all of the time, obviously.)
> Well, me and a few security-minded friends were drawn like moths to a flame when we heard that. Our classmates were posting quite sensitive stories on Fizz, and we wanted to make sure their information was secure.
> So one Friday night…
And this is where the “good-faith security research” line of reasoning broke down for me. Think about the wording. To my ears/eyes, those sentences above seem like a carefully crafted but still flimsy excuse. It’s like a lie that you tell yourself over and over so much that you end up believing it. It seems like the researchers just wanted to have some fun on a Friday night (like he said). (And there’s nothing wrong with that. But to characterize it as only doing “good faith security research” seems like a stretch.) I guess I’m saying that I’m just not convinced. I don’t buy it.
But I get it. Articles need to be written. Talks need to be given.
(And yes, I do believe that Fizz didn’t need to threaten legal action.)
I don't think that is true. I think it would be very unusual for an independent (not a pentester) security researcher to communicate anything before they have any findings.
> It seems like the researchers just wanted to have some fun on a Friday night (like he said). (And there’s nothing wrong with that. But to characterize it as only doing “good faith security research” seems like a stretch.)
I don't get it. Good faith research is fun. Most people don't get into the industry because they hate the work. I don't even understand what you are trying to imply was in their mind that would disqualify their actions from being in good faith.
I don’t feel a need to fully address all of your comments (because the first one was just your opinion similar to my own opinion). We can each look up stats for this.
But your second comment (also an opinion as mine was) did stick out to me due to emotional/psychological/human reasons, I guess:
> Good faith research is fun.
I was speaking about intention. I’m not convinced that “research” (whether it was good or bad faith even) was the goal here.
(FYI, I know the author of the post said this was written and talked about before. All I did was form an opinion based on his summary of the events for this specific HN post. I assumed it would have all of the salient information. But if there’s something missing, please point it out.)
It’s a cool story though.
Because it seemed very irrelevant to whether they were good-faith researchers. I don't know if I agree (critiquing classmates' designs is a quintessential classmate activity), but regardless, I don't understand how this connects to the rest of your point. Say they weren't acting as classmates. How does that change anything about whether they were acting as good-faith security researchers, which is the point under contention?
> All I did was form an opinion based on his summary of the events for this specific HN post
Just because it's an opinion doesn't mean you're not responsible for it.
I don't know what was in these people's hearts and minds. They could be secretly evil for all I know. However, I think it's morally wrong to call someone immoral without positive evidence that they were acting wrongly or had bad intentions. Yet you seem comfortable calling them immoral essentially on the sole basis that the work took place on a Friday, plus a misreading of a document they referenced (and not even the part of the document they were referencing). You allege they have an ulterior motive, but you don't even put forth what that motive might be. Respectfully, I think that's kind of a shitty thing to do. These are real people, and they deserve to be judged based on the facts and what can be drawn from the facts.
I think they should negotiate a security test beforehand. For their own sake but also to get a buy-in. And if a company categorically refuses, you can then publish that, or share that you worry about a lack of track record in known security audits. That's a professional way to hold them accountable.
Breaking into a system unannounced and then stating "do what I say... OR ELSE" is neither legal nor professional. If you're surprised that this is perceived as an attack rather than as help, I don't know what to say.
Correct. This is why I believe they (or at least some of them) weren’t actually surprised lol.
> If you can’t tell from his wisdom, it was not Cooper’s first time dealing with legal threats.
This is a quote from the post. The author acknowledged that his fellow researcher was experienced with interacting with lawyers for exactly this kind of scenario.
Red flag. Red hat?
Otoh, it sounds really different if you break into your own home.
I think part of the issue is that with everything in the cloud, your data is no longer local (like it would have been back in the day). But you (or the customer public) still have an interest in knowing whether the data is secure, an interest that is at odds with the service provider, who often has perverse incentives not to care about security.
But I don't agree with the reductive take that compromised security means companies don't care or are greedy. Companies that do care and have an army of security staff still fuck up.
The reality check is that security is incredibly complicated, expensive, and very easy to get wrong.
If anything, we software developers should do some reflection on our software stack. It's honestly quite shit if it requires daily updates and a team of security gurus to not get it wrong.
Indeed. Which is what I meant by perverse incentives. You can generally make more money by ignoring security or doing the bare minimum. Doing security right is expensive, and the consequences of doing it wrong are usually not that severe at the end of the day (for the company, anyway; the users might be screwed). All this adds up to rational actors underinvesting in security. And honestly, it is hard to blame them.