If some less ethical hackers got a hold of that data, much worse things could have happened.
* That's the biggest red flag. A company claiming 100% security obviously has very little actual security expertise.
PS: I'm a big fan of Germany's https://www.ccc.de/en/ who have pulled many such hacks against some of the biggest tech companies.
I get into your home by bypassing (poor) security. I take pictures and make copies of anything inside. Then I publicly announce the breach and demand that you fix your security based on a deadline I made up. Then I say "trust me, bro" when I promise to never reveal the data I stole.
Nobody would find any of that moral. The analogy breaks down because your home is not a place where sensitive data about lots of people is stored. But even then, if you did the same thing in a physical place where that was the case, you'd simply be arrested, if not (accidentally) shot.
I do agree that these security researchers are ultimately doing a good thing, but they should not be this naive and aggressive about it.
I'd say you checking the front door to find it unlocked, then taking a few pictures for proof is perfectly moral. In this case, I think most people would agree it is a step too far to expect you to come to me first, rather than immediately announcing to the entire neighborhood that I'm being incredibly lazy and reckless with their valuables (on top of outright lying to all of them).
> So we did what any good security researcher does: We responsibly disclosed what we found. We wrote a detailed vulnerability disclosure report. We suggested remediations. And we proactively agreed not to talk about our findings publicly before an embargo date to give them time to fix the issues. Then we sent them the report via email.
This is why the whole “I can’t believe my classmates threatened legal action” line of thinking doesn’t make sense. They weren’t acting like classmates themselves. They were acting like professionals. I imagine the embargo date wasn’t well-received.
It’s also interesting that they listed all of the steps they followed that a “good security researcher” would do. So why didn’t they start with communication first before trying to hack the system? Good security researchers do that. (Not all of the time, obviously.)
> Well, me and a few security-minded friends were drawn like moths to a flame when we heard that. Our classmates were posting quite sensitive stories on Fizz, and we wanted to make sure their information was secure.
> So one Friday night…
And this is where the “good-faith security research” line of reasoning broke down for me. Think about the wording. To my ears/eyes, those sentences above seem like a carefully crafted but still flimsy excuse. It’s like a lie that you tell yourself over and over so much that you end up believing it. It seems like the researchers just wanted to have some fun on a Friday night (like he said). (And there’s nothing wrong with that. But to characterize it as only doing “good faith security research” seems like a stretch.) I guess I’m saying that I’m just not convinced. I don’t buy it.
But I get it. Articles need to be written. Talks need to be given.
(And yes, I do believe that Fizz didn’t need to threaten legal action.)
I don't think that is true. I think it would be very unusual for an independent (not a pentester) security researcher to communicate anything before they have any findings.
> It seems like the researchers just wanted to have some fun on a Friday night (like he said). (And there’s nothing wrong with that. But to characterize it as only doing “good faith security research” seems like a stretch.)
I don't get it. Good faith research is fun. Most people don't get into the industry because they hate the work. I don't even understand what you are trying to imply was in their mind that would disqualify their actions from being in good faith.
I think they should negotiate a security test beforehand, both for their own sake and to get buy-in. And if a company categorically refuses, you can then publish that, or share your concern about its lack of a track record in known security audits. That's a professional way to hold them accountable.
Breaking into a system unannounced and then stating "do what I say... OR ELSE" is neither legal nor professional. If you're surprised that this is perceived as an attack rather than as helpful, I don't know what to say.
Otoh, it sounds really different if you break into your own home.
I think part of the issue is that with everything in the cloud, your data is no longer local (like it would have been back in the day), but you (or the customer public) still have an interest in knowing whether the data is secure — an interest that is at odds with the service provider, who often has perverse incentives not to care about security.
But I don't agree with the reductive take that compromised security means companies don't care or are greedy. Companies that do care and have an army of security staff still fuck up.
The reality check is that security is incredibly complicated, expensive, and very easy to get wrong.
If anything, we software developers should do some reflection on our software stack. It's honestly quite shit if it requires daily updates and a team of security gurus to not get it wrong.
The laws that would apply to unsolicited pentesting make it undesirable to perform it.
Thus, society as a whole is less secure because someone wants to protect companies from hacking.