They further said that if you need to access any sensitive personal data, or if you need to log in as a user in order to debug a problem, you need to have approval from your manager _before_ the access, not after.
Also, you are not allowed to access the data of anyone you know personally for any reason whatsoever. You have to find someone else to do that if it needs to be done.
Finally, they really do audit every single access of personal data. I had every reason to believe that if I accessed any data improperly, I would be fired within the week if not the day.
I don’t know how much abuse still exists despite all of the above, but I don’t think this article does a good job of explaining how seriously Facebook takes this.
But were you still able to just look at the data or log in as the user without permission? I think that's the key question.
Talk is cheap. As a user it's not good enough for me that people are being told internally not to abuse their access. Just remove the permissions from the employees and make them request the permissions for each individual case instead of trusting the employees to follow the rules.
At the time, you could start trying to log in as a user, and MULTIPLE red warnings came up that proceeding further would automatically notify your manager, log the access, and remind you of the data policies. Now, at that point I did not go further, but I did know that the content moderation and security teams had special access, so I imagine they did both: heavily warn the avg FB eng AND restrict access.
Which explicitly also includes yourself, because looking yourself up would e.g. let you see who has you blocked.
You're also fairly unlikely to access personal data by accident. You have to explicitly go look for it in the internal tooling, which has pretty good signage around interfaces that could potentially expose you to personal data by accident, so you know to be careful (I did a couple of tickets for the abuse team, and testing that stuff was riddled with interstitials asking if I was sure I wanted to access personal data). "Oops, I didn't notice" just doesn't fly.
They're also fairly good at removing the semi-legitimate reasons you'd have for accessing personal data. If you have friends or family having some sort of issue, there's a separate priority queue you can submit requests to, so they'll look into those issues for you, for example. If you need test data, there are great tools to generate test users with all sorts of weird configurations (so you don't have to rely on finding a live one that meets your criteria).
Nowadays we are seeing many systems switch to a regime where you have to get another engineer to sign off on any access to production, and your access is limited to at most 24h. This isn't merely a policy -- it is enforced by technical controls that forbid ordinary human-user access to production. I literally cannot even send an RPC to services I work with that handle private data without getting a colleague to sign off on it.
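A minimal sketch of that regime in Python (all names and the 24-hour window here are my assumptions for illustration, not any particular company's real system):

```python
import datetime as dt

class ApprovalError(Exception):
    pass

class AccessGrant:
    """Time-limited production access requiring a second engineer's sign-off.

    Hypothetical sketch: in a real system the technical controls (RPC
    gating, credential issuance) would enforce this, not an in-process check.
    """
    MAX_DURATION = dt.timedelta(hours=24)

    def __init__(self, requester, approver, reason):
        if approver == requester:
            raise ApprovalError("self-approval is not allowed")
        if not reason.strip():
            raise ApprovalError("a justification is required")
        self.requester = requester
        self.approver = approver
        self.reason = reason
        self.granted_at = dt.datetime.now(dt.timezone.utc)

    def is_valid(self, now=None):
        # Grants expire automatically; nothing persists past the window.
        now = now or dt.datetime.now(dt.timezone.utc)
        return now - self.granted_at <= self.MAX_DURATION
```

The point is that the default state is "no access": every grant carries a second name and an expiry, so there's always someone else on the hook and nothing lingers.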
These days things are mostly working the way you describe - I need to request permission to view my own service’s logs, and I’m working in backend infra not going anywhere near user data (logs are like “did we hit any hardware errors when trying to install the OS on this host?”)
(Messenger had stronger protections than OP is describing)
Yet TFA contains a quote about how abusing personal data is "against Mark's DNA". Horseshit.
Facebook is the enemy.
[0] https://www.esquire.com/uk/latest-news/a19490586/mark-zucker...
Never understood why anyone would trust this guy if that was the case. Pervs are some of the most reliably untrustworthy people on the planet.
The idea that people can just go in and access personal data at Facebook without some sort of actual pre-authorization is insane.
None of this should even be possible.
A friend of mine worked at a large bank in customer service, and this was also a big part of their training; there was even a speech trainees were given before going to their desks at the end of training. He said that, almost invariably, at least one person from every class was fired within hours for looking up the accounts of someone they knew or a celebrity.
While I don't doubt people do this for real, staging something like that might actually be pretty effective.
We should not rely on the goodwill or internal guidelines of a single company in such a sensitive topic.
From the article: "Facebook fired 52 people from 2014 to August 2015 for abusing access to user data."
> Facebook employees were granted user data access in order to “cut away the red tape that slowed down engineers,” the book says.
If we can take a step back, this is a totally reasonable policy. Unfortunately, Facebook is facing the reality of the law of large numbers: once you have 1000+ people, the chance of having a bad actor in your system is much higher than with 10 people.
Maybe this is a hot take, but I for one prefer that my company trusts me to do the right thing rather than make it hard to do my job. I'm not saying that there isn't a solution for this, but behind the "facebook corporation" there is generally just a bunch of engineers that want to do a good job at work.
Yeah it sucks, but it's part of the job. Start thinking about the people you're supposedly serving instead of yourself first. I'm pretty sure that the overwhelming majority of facebook users want to hear about tighter privacy protections at facebook, not fewer.
Yes, and banks shouldn't lock their vaults or safe deposit boxes either, and should just trust that all of their employees want to do their jobs.
> Stamos suggested tightening access to fewer than 5,000 employees and fewer than 100 for particularly sensitive information like passwords.
I'm sorry, what?
I can tell you the number of legitimate engineers that should have access to users' passwords.
It's a nice, round number.
It's zero.
Just because passwords are hashed doesn’t mean you can give access to them willy nilly and happily claim that “zero” people have password access.
Agreed.
> “Zero” implies that hashed passwords are not passwords, since otherwise you won’t get to zero.
You can get to zero:
- No humans in the serving path servers' ACLs.
- Diagnostic/recovery servers for humans that require the person to submit a justification linking to a ticket/bug/outage, wait for a second person to approve, perform high-level operations that affect sensitive data ("restore user from backup at timestamp T") rather than exposing direct access ("read from backup", "write live user"), and keep an audit record for later.
Everything is about trade-offs. This approach takes more engineering time to set up, and if not done well it can really slow down common tasks. And there are certainly reasons there might be exceptions, e.g. allowing the primary on-call unilateral access can speed recovery over waiting for a second person to be available. But zero is possible, and stories like this remind us of its value.
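The second bullet can be sketched roughly like this (hypothetical names; a toy in-memory stand-in for real backup infrastructure):

```python
import time

class RecoveryService:
    """Sketch of the 'zero human access' pattern: humans never read or
    write user records directly; they invoke coarse, audited operations.
    All names and structures here are illustrative assumptions."""

    def __init__(self, backups):
        self._backups = backups   # {(user_id, timestamp): record}
        self._live = {}           # user_id -> record
        self.audit_log = []

    def restore_user(self, user_id, ts, operator, approver, ticket):
        if approver == operator:
            raise PermissionError("second-person approval required")
        if not ticket:
            raise PermissionError("justification must link to a ticket")
        # The operator never sees the record; the service moves it.
        self._live[user_id] = self._backups[(user_id, ts)]
        self.audit_log.append({
            "op": "restore_user", "user": user_id, "ts": ts,
            "operator": operator, "approver": approver,
            "ticket": ticket, "at": time.time(),
        })
```

Note the interface only exposes "restore user at T", never "read record", so even the approved, audited path doesn't show anyone the underlying data.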
I have a hard time believing that Facebook would store user passwords without at least a salted hash, which makes them virtually unrecoverable.
Client-side hashing could solve this, but almost no one does it.
Some insecure websites (not Facebook) may not do this, and instead store your credentials in a database without encryption. It's a terrible idea, and GP's comment seemed to be referring to this when they (correctly) suggested that no one should have access to a database of plain-text passwords.
The replies mostly refer to the fact that even if the password never hits FB's database in plain text, there is still code running on authentication servers that handles that password in plain text before it's hashed. Limiting engineer access to authentication servers is a good idea, but it'd be challenging to prevent ALL engineers from having access.
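For context on why a stored hash is considered "not a password": a short sketch of server-side salted hashing using Python's standard-library scrypt (parameters are illustrative, not a recommendation for production tuning):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest). The database stores only these; turning the
    digest back into the password is computationally infeasible."""
    salt = salt or os.urandom(16)  # fresh random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

The per-user salt is what defeats precomputed lookup tables, and a memory-hard function like scrypt makes bulk brute-forcing of a leaked table expensive.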
Any employee logins are done through skeleton keys that are audited.
I know they've been locked down since I've left, but some of the tools we were allowed to just freely access at Uber were a tad scary, to say the least.
I'm sure every company with a very large userbase, such as Facebook/Microsoft/Google/etc claim they have internal protections/checks but have even more holes like this.
Every Googler gets the message that you keep your mitts off private information in logs (or get terminated) drilled into them in their first week of training. Logs access is a) restricted b) audited c) tiered and d) enforced. That was the case in 2011 when I started and it's the case now.
Not saying Google is perfect, but it's not like companies like FB didn't have a template for privacy standards that they could have followed.
All that said, but back then I just personally assumed that this is how FB was operating :-( I am hoping they've improved since.
People with direct access to production data streams mostly see encrypted data. There’s a big, annoying technical scheme in place to make sure private or sensitive data is elided whenever a message is printed in plain text to a log. It stretches from the protocol compiler all the way down to C++ stream operators.
I’m not saying nobody ever sees user data. Sometimes you need to find out why an email is crashing the mailer. But those accesses are limited in scope, auditable, and available to relatively few people.
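A toy version of that elision idea, as a Python `logging` filter that scrubs an email-shaped pattern before a message reaches any sink (the scheme described above is far more thorough and operates at the type level; this is just an illustration):

```python
import logging
import re

# Crude pattern for demonstration only; real redaction would be driven by
# field annotations, not regexes over formatted strings.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class RedactingFilter(logging.Filter):
    """Rewrite log records in place so sensitive values never hit disk."""
    def filter(self, record):
        record.msg = EMAIL_RE.sub("[REDACTED-EMAIL]", str(record.msg))
        return True  # keep the record, just scrubbed
```

Attaching this to a handler means even a careless `log.info(raw_request)` can't leak an address, which is the same "safe by default" property the stream-operator scheme buys you.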
In my opinion it isn’t really the Facebooks and Googles of the world you need to worry about, it’s the companies with massive collections of user data and without the technical chops to protect it. Like Dropbox.
The idea that:
a) User data access is not just allowed but normal (or at least that it was at one point)
b) That it's allowed at all so widely
c) That (a) and (b) are true despite repeated abuse
is absolutely insane. "Nearly every month" is insane. It should be criminal, but it isn't.
Sadly, it's all too common for engineers to have way more access than is necessary, though this seems extreme. I see no reason why any engineer, outside of extreme circumstances that should set off alarm bells, should have access to sensitive user data like passwords. It should generally not be the case that direct access of data is needed at all.
Where I used to work, user activity/transaction data sent to us was stored on a single giant NFS volume. If you were added to a certain Linux group, you got full, unaudited access to everything. Whenever someone tried to build anything that would restrict and audit access, there would be a ton of pushback from engineers and customer support, who loved being able to ssh into a machine and have full access to everything.
My advice is stub something out up front, before you go to production. You don't have time to do it right, but you do have time to establish the norm. Even if your audit trail is just a two-minute DB trigger that records that Worker Bob changed Customer Alice's password yesterday at 11, make it clear that there needs to be an articulable reason at hand for having used mechanisms that may violate users' trust.
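The "two-minute DB trigger" could look like this with SQLite (table and column names are illustrative; recording *which* employee made the change would need an application-level column, since SQLite has no notion of a session user):

```python
import sqlite3

# Minimal audit trail via a trigger: any password change is recorded
# automatically, no matter what code path performed it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, pw_hash TEXT);
CREATE TABLE audit_log (
    at      TEXT DEFAULT CURRENT_TIMESTAMP,
    user_id INTEGER,
    action  TEXT
);
CREATE TRIGGER audit_pw_change AFTER UPDATE OF pw_hash ON users
BEGIN
    INSERT INTO audit_log (user_id, action)
    VALUES (NEW.id, 'password changed');
END;
""")
conn.execute("INSERT INTO users (name, pw_hash) VALUES ('alice', 'h1')")
conn.execute("UPDATE users SET pw_hash = 'h2' WHERE name = 'alice'")
rows = conn.execute("SELECT user_id, action FROM audit_log").fetchall()
```

It's nowhere near a real audited-access system, but it establishes the norm: changes to sensitive fields leave a trail from day one.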
As a side note, he was mostly only interested in having hook-ups and orgies with Ukrainian tourist women while in Egypt, made even more bizarre when we found out he had a wife back in the United States who worked at Apple.
He was not a well-liked guy, and he was very rude to the Egyptian natives, especially the Bedouins.
By 6 degrees of Kevin Bacon there surely is a connection to one of these 16000 people in your bubble, hence the secrets are theoretically out, too. Why should they have the advantage over you and potentially blackmail you?
/s
Do similar processes and consequences apply in the worlds of your banks, credit card companies, Experian, Equifax, the NSA, the FBI, and other groups, both government and commercial?
Ideally, your data should be encrypted so that no one at Facebook has access to it. Only those people with whom you have chosen to share that data should have access, to the degree given (crypto-wise, maybe this means you decrypt and broadcast to those people, like email). Any reason a Proton-like model wouldn't work for technical reasons? Facebook could still make money off ads, but those ads would be less targeted. Good. We need less targeting.
I smell lawsuits going for the very deep pocket of Zuck . . . .
I bet this is incredibly common, and far more so at lower profile and even shadier surveillance capitalist companies.
Everyone here is unsurprised by this and at this point I expect the social networks to just abuse my user data anyway. They won't change and they will never stop this.
Who is to say that this is already happening with the other social networks that are scooping up our data but in 5 years time will only admit their actions afterwards.
Maybe they are all doing this as we type.
To the downvoters: so you think these social media companies are NOT abusing our data? There's tons of evidence of this everywhere, including this confession.
There can only be one explanation for why I'm being downvoted heavily for stating an undeniable, known fact: it is likely by people working at these companies, because they know I am right, and the point still stands regardless of any downvotes (and censoring of the truth).