The consensus against retrospective punishment is a lot weaker than people might expect, and who knows what new social crimes the future will bring.
How would you know, if people can't express anger at someone? That's not a mock question. I recently found out that colleagues who pretended to have good relationships (because, as a cultural matter, we don't talk negatively about others) had harbored long-term resentments against each other. Those resentments were quietly undermining our work until they blew up into open dysfunction, which is how I found out.
That sounds good to me. I've never had to talk about these kinds of things at work. Are there workplaces where this is unavoidable?
Otherwise, you're basically saying corporations should participate in the political process but individuals should not. And that's exactly how we got the Earth into the increasingly shitty state it is currently in.
However, I'm sure it was easier at my jobs, since they were with a retail company and an engineering firm.
I'm going to go out on a limb and predict that if a Facebook employee wants to talk about poverty in underdeveloped countries on internal social media, that's going to be ok, but if the discussion concerns people who were harmed because they followed bad medical advice that was spread by use of Facebook, all of a sudden that's an unacceptable social issue at work.
You only get one-sided discussions because going against the grain will be career suicide.
A big part of the problem was just that everyone was using Facebook for work all the time. So quite often, there would be some enormous thread arguing about whether X or Y was the right policy, was Trump violating the rules and should he be kicked off Facebook, were Facebook's anti-Trump policies violating freedom of speech, was it racist for an employee to say they supported Trump during a meeting, etc etc.
And you use the same interface for important things like announcing, "Hey, this database service team is launching a new API next week, could you provide feedback on it?" or "Hardware type X is being deprecated next quarter." So you really have to be checking Facebook-for-work consistently for professional reasons, and you have to scroll past the political debates all the time.
> was it racist for an employee to say they supported Trump during a meeting
This is sort of what I was thinking about in my original comment. I would hate to have to discuss political affiliation, or make public judgements on other people/issues.
> And you use the same interface for important things like, announcing hey this database service team is launching a new API next week,
I miss so much in my news feed already, using it for work would make me nuts.
I would say FB did the right thing here by not supporting a platform that was actively politicizing itself.
Many:
Education. Healthcare. Corrections, law, and law enforcement. Social work. Public utilities and subsidized housing. Etc.
For people who believe that everything is political, there are no projects with less moral ambiguity; the ambiguity is just more or less openly visible.
If you're writing accounting software for paper suppliers or something equally banal with few ethical implications, then sure, there's no need (and less reason) to have water cooler conversations about pro-genocide agitprop or whatever.
EDIT to add that of course not all departments at Facebook make the sort of decisions that have a marked social impact. I'm referring more to the content policy teams, the news feed algo teams, and so on.
The problem is distinguishing ethics from politics. These are very hard to disentangle, because ethical values are usually based on some political orientation. And I don't want Facebook to be making political decisions on my behalf, as a user. And I don't even want internal employee discussions to be derailed by political considerations.
So how do you distinguish ethics from politics? I don't think it's possible, unless the company defines its own ethical values, a priori, and only considers those when making decisions.
If you read the article, I think that this is precisely what Facebook's new policy is trying to do by putting a fence around "social issues."
The company's products are in no way social media platforms.
It's hard to square that with the algorithmic feed, likes, etc, which are making the world worse every single day in favor of engagement metrics. We've known for many years how destructive these are.
Facebook and Twitter could literally make the world a better place simply by disabling those kinds of features. Just remove them. It doesn't get easier than that to substantially improve the world, yet it's not being done.
Whatever internal discussions you're having, they're not working. I'd posit that they can't work, because FB's entire business model is predicated on user-hostile, polarizing behavior, whether anyone internally will admit it or not.
It frankly does not matter one bit what things are like internally when externally we can see the harm FB has caused, and there is zero evidence that harm is going to stop.
That being said, it’s a good idea to understand why the pay is so high (and it’s not because they’re nice people who only want the best for their employees):
You will be expected to leave moral qualms at the door. This an unwritten rule at many companies, but Facebook had to write it. That says something.
You will be expected to work for it. Hard. The people I know at Facebook easily put in 1.5-2x the hours I do at a FAANG-ish (late nights and weekends seem to be the norm), but get paid roughly 1.5-2x what I do. If that’s a tradeoff you’re willing to make, go for it. I however am making more money than I know what to do with, and thus value all the time I’m not working (hobbies, travel, side projects, etc) way more than the money I’d make from working during that time.
At the end of the day, you aren't going to singlehandedly destroy the fabric of society in your first year, so if you're fine making the above sacrifices for a year or two for some quick cash and then fucking off to pursue some real interests, go for it. But I sincerely warn you against sacrificing too much of your life (youth especially) and your morals for money -- it really isn't as valuable as it's cracked up to be.
It sounds like, from reading your comment a couple times, you know what is right but are tempted to ignore that and take the cash.
It's therefore hard to see how taking this offer would not be choosing to sell your ethics for money and success, given that you could likely land a well paid job anywhere.
Usually you see someone say something like this when they're presented with truly awful options. Seeing it used to refer to a $400k comp package is a bit jarring.
And if you've made it through FB's hiring process and they've given you an attractive offer, I find it hard to believe you don't have other options that don't involve a big ethical quandary, or wouldn't if you interviewed around more.
I used to loathe Facebook and like Google. These days both seem about the same. Facebook's policy of leaving people alone deeply resonates with me, even though I still dislike them intensely for what they did to WhatsApp.
And for what it is worth, Facebook unlike Google hasn't insulted me for a decade with the ads they show.
As a cog in the Facebook machine, your significance is trivial. This is true regardless of your intention.
Get over the ethical drama, I would say. Big tech is about as ethical as the banks. In other words, the companies don't care, and they probably aren't ethical anyway.
Sounds like there's nothing left to abdicate.
No normal company is going to sign _any_ contract provided by a prospective full-time employee (except perhaps if you are a sought after celebrity being hired at a VP level or above), so it would just be a waste of time and money for someone to take your advice.
Even if the hiring manager personally wanted to, there is no process for doing this. They don't have lawyers standing by to review such contracts. It would probably be hard to even find out who would have the authority to sign such a contract.
Further, retaliating against whistle-blowers is already illegal, as is ordering employees to break laws, so I don't know what additional protection you imagine you would get from such a contract.
That seems like a bigger issue. If I'm an activist and I poison the enormous dataset that's being fed to an ML model, is anyone even going to notice?
For example, the work Google does on "de-biasing AI" is all about taking ML models and warping their understanding of the world to reflect ideological priorities.
They have also suggested a "meme cache" - one of the memes shown is a Folgers coffee cup which says "Best part of waking up, Hillary lost to Trump".
Based on this classifier and hits to the meme cache, "trolls" would experience things like auto-logout and limited bandwidth.
Under "when to trigger this" they also suggest the period "Leading upto elections".
So on the one hand this document seems well-intentioned because there's some bad behavior in these groups like raiding, doxxing, racism, etc.
Rather than focusing on behaviour like doxxing and raids, the approach suggested seems to be directed at a specific group. Why? Is this the only group in the entire universe that engages in this kind of behaviour?
It also applies a broad classification, lumping in anyone who shares the same memes or vocabulary for punitive action.
Also they associate the election with this, which seems especially puzzling.
The IRA and/or its successors or friends appear to have taken the same approach as the Russian security services have with the rash of targeted murders in Europe: a "this totally isn't our doing, but anyone slightly educated on the subject will recognize our hand, because we want them to be aware that it's us and we don't actually mind people knowing" wink-wink-nudge-nudge threadbare veneer of disclaimed responsibility.
Normally, I wouldn't really care: the 2016 stuff everyone made a fuss about on social media was largely ineffective and at best served as a smokescreen to distract from their very successful actions outside social media--Buff Bernie is a lasting meme treasure and nothing more. This go 'round, however, they've apparently learned from their mistakes, and I'm seeing evidence that personal friends _are_ receiving and being influenced by their messaging.
I thankfully haven't really had to watch any family or friends succumb to the Fox News media poison, and thought my social circles were largely insulated from that sort of problem, but I was apparently quite wrong--right about _what_ wouldn't influence people, but blind to the idea that other actors would follow the same model and create content that _would_ suck in their target audience.
https://twitter.com/evelyndouek is a good source of reporting on Facebook's and other social media companies' continued lackluster attempts to stand up Potemkin independent review bodies, if you want more info on the space and can stomach more disheartening news.
So the dastardly Russkies didn't intend that Trump be elected? Someone tell Rachel Maddow! This changes everything!
American coverage on their efforts was by and large terrible, at least from major outlets. Focused analyst coverage in the space has been a lot more nuanced, but nobody's reading that without an existing personal or professional interest.
The other half of that analyst coverage is that they rapidly grew quite tired of Maddow and friends hammering on a very simple narrative that missed the point but was very effective at achieving its actual goal: keeping consumers of major media on the left-of-center end of the American political spectrum engaged in their content and bringing in continued advertiser money. That tiredness is relegated to water-cooler discussion on Twitter, however, so it's not going to shape major-outlet coverage much.
When you work at Facebook, you should know what's going on and what the company is doing and causing, and you should be trying to help fix it.
It sounds like leadership is asking employees to put their heads in the sand - shouldn't a leader propose the opposite? What happened to move fast and break things?
But this cannot be done, despite all attempts to quiet the cognitive dissonance. Every employee of an evil company is evil.
Every political message lobbied for by the employer is the employee's political statement. Any claims to the contrary reek of hypocrisy.
If you work at Facebook, your work directly or indirectly supports Facebook's political decisions. Facebook just doesn't want you to talk about it. Because Mark and the executives make the decisions, and you're just supposed to follow orders. This is how it works at many other companies. But for a long time, Facebook was able to recruit people to work there by promising that they could 'change the world' and 'make a difference.'
Side note: One of Facebook's board members apparently enjoys the company of white supremacists. https://news.ycombinator.com/item?id=24444704 Will Facebook employees be allowed to talk about that? If you work at Facebook, how do you feel about that?
Why is that "by its nature"? I don't think that's "by its nature" at all. Phone companies and ISPs facilitate communication between people, but that doesn't necessitate they take political stances on what communication to allow. Why should Facebook? If certain language should be restricted, then laws should be written restricting said language and Facebook should comply with those laws. Nothing about Facebook's nature forces them to go beyond that and act as de facto language legislators.
Some of the filtering is based on what the user wants to see; some is based on some notion of how "good" a piece of content is (scored by likes and engagement numbers); some comes from advertisers paying to have their content make it through the filter; and some is Facebook deciding what should and shouldn't be seen (mostly driven by their desire to keep you on the platform). Every single thing you see on Facebook has made it through a huge filter that ultimately decides whether it's something you should see. And the inevitable outcome of building a gigantic what-information-do-you-get-to-see machine is that there are many, many parties trying to influence the machine.
Phone lines don't have that problem.
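To make the "filter machine" point concrete, here's a toy sketch of the kind of scoring pipeline I'm describing. The fields and weights are invented for illustration and have nothing to do with Facebook's real ranking code:

    from dataclasses import dataclass

    # Toy model of the what-information-do-you-get-to-see machine:
    # every candidate post gets a score built from user preference,
    # engagement, paid promotion, and the platform's retention goals,
    # and only the top few survive the filter.

    @dataclass
    class Post:
        author: str
        text: str
        predicted_interest: float  # "what the user wants to see"
        engagement: float          # likes/comments as a proxy for "good"
        ad_spend: float            # advertisers paying to pass the filter
        retention_value: float     # keep-you-on-the-platform estimate

    def score(p):
        return (2.0 * p.predicted_interest
                + 1.5 * p.engagement
                + 1.0 * p.ad_spend
                + 1.0 * p.retention_value)

    def build_feed(candidates, k=10):
        # Everything shown has out-competed everything not shown, which
        # is exactly why so many parties try to influence the scoring.
        return sorted(candidates, key=score, reverse=True)[:k]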
Taking a stance of not controlling what communication is allowed is itself a very political stance. It just so happens that, in those cases, I believe it's also a legally mandated stance; but if it weren't legally mandated, it would absolutely be a political stance, whatever they ended up saying.
Where a private company decides to limit free speech (or not limit free speech) is, 100%, a political stance when the laws have not been written that make that decision for them.
Even if we maintain a law distinguishing companies that merely host other people's content from those that curate and publish content, it could be seen as a political decision whether a given website and company choose the publisher stance or the public-content stance.
I'm forgetting the word for publisher vs ... whatever it is where they take no responsibility for what people post on the site; but I hope my point is clear.
These are key to the usability of any social network. And they are inherently biased. Any such organization also has to take money, so ads are also key to their operations, and they have taken political stands on ads too.
The attempted comparison to utility companies is not compelling.
* arguably they do now by limiting "scam calls"
Facebook doesn't "facilitate" communication in the same way a computer "facilitates" communication. It facilitates communication in the same way that a forum or book club or group of people facilitates communication. Groups require some moderation to remain popular. Facebook is driven to moderate by the market.
The ethics of being a/the major institution of mass communication in large parts of the world may not force FB to act as language legislators, but these ethics certainly should compel them to do so.
Relevant points:
- If FB's status as a mass comms source is threatened, then the company itself is threatened. This threat can come from a lack of trust in the platform and/or from legislation that effectively legislates them out of existence (see below re free speech). This existential issue should compel them to factor language legislation into their corporate policies.
- Stockholders certainly care about FB’s status as a mass comms source even if no one else does.
- Stakeholders obviously care about this, too.
- Relying on governments to regulate mass communications is a Pandora’s box for FB since FB is an international platform.
- In the US, in order to facilitate and encourage free speech, mass comms laws are not particularly restrictive, but they are built on underlying assumptions about social-based regulation that generally hold up but seem to be completely broken with platforms like FB. If FB doesn't address this issue, then the laws that end up addressing it may end up legislating FB out of existence.
To close, whether playing the language legislator is part of FB’s nature, an emergent property, or something else, there are very real reasons that FB has policies on regulating language. Whether they do this well or not is a completely different issue, but putting the onus on government legislators to address the problem with formal laws seems, at best, overly dismissive.
[1] https://www.getrevue.co/profile/themarkup/issues/probing-fac...
Because Facebook explicitly chooses how to construct the timeline it presents to its users, and because some of that timeline contains political content.
If Facebook were a dumb first-in-first-out aggregator, it wouldn't be political.
But it's not.
They were regulated as public utilities and common carriers. Facebook is not.
Just like any corporate decision, you have a small number of people who are relevant to making the decision, and they discuss among themselves. It isn't productive to have 10,000 people who are all angry if Facebook doesn't make the decision their way, and they each spend an hour complaining about it on Facebook, while claiming that they're doing work because they're discussing a corporate decision.
Probably gotta just do what Mark thinks is right and, should that not be clear, guess what he would think is right. And suffer the consequences yourself should you guess wrong.
I use Facebook to stay in touch with old friends. If they want to share the latest Tucker video, I'm not sure why FB should block them if I don't block them.
My problem with Facebook is that it acts as a radicalization pipeline, channeling lies that are too crazy for mass-market media into the minds of people who are susceptible to believing them.
For a recent example in America, Facebook was used to spread propaganda to rural Republican residents in Oregon telling them that Democrats were coming to set wildfires in their towns. Rural Oregonians responded by setting up their own vigilante checkpoints.
It goes without saying that the claim that Democrats were coming to set rural Oregon on fire was a lie, but that lie spread like wildfire through Facebook, and it was a lie that could have very easily resulted in fatal violence.
To make matters worse, many of the too-crazy-for-mass-media ideas spread on Facebook are the product of astroturf propaganda and are not organic. The problem is not people sharing videos of Tucker Carlson, or even individuals spewing racist diatribes; the problem is well-funded right wing propaganda groups using Facebook to distribute material that encourages people to hate--or even kill--their fellow citizens.
While I agree that judges should make the ultimate decision on what is acceptable speech, the judicial system just isn't fast enough to respond to the speed of Facebook-propagated propaganda. Society needs a better solution; we can't wait for a perfect one.
Facebook is pouring gasoline on the fires of social division. This is not just unethical but is extremely dangerous to social stability and it ought to be stopped.
Most of its employees' jobs have nothing to do with politics. They imagined such relevance themselves to wring greater significance/satisfaction from the mundane daily job.
My parents are very much of the "no politics at work" generation and I really question why that cultural strain has carried itself into 2020 since it only serves company board members/executives and categorizes rank and file employees as automaton code monkeys who should "shut up and type".
Armchair thought: in this odd period of history where, ostensibly, capitalism "won" as the political system of choice and "the end of history" was declared, we have entered an alarming stage of hyper-capitalism mixed with growing discontent/civil unrest. More than ever there seems to be a breathless determination by upper-middle-class professionals to not rock the boat in any way, in the hopes that these mega-corporations will continue to prop up the stock market, pay out outrageous salaries, and keep the gravy train running. It's a kind of cognitive dissonance where we can see how much damage the big players in tech are wreaking on global society - there's ample evidence - but to recognize and face it would sully the deeply held ideal that tech is some kind of great, benevolent force in our society (more cynically: confronting it would also mean confronting the fact that we as tech workers have ethical responsibilities to society at large that we have at best ignored, at worst defied).
Practically, it's not. Yes, you can catch up on how your cousin's new baby is doing, but you can't disentangle that from the extremist propaganda, disinformation, and real harm that these platforms inflict by leveraging human psychology against us. Taking the view that ethics and work are separate silos is hopelessly naive. Almost every profession requires constant awareness and ethics in order to be a benevolent force: doctors, lawyers, builders, scuba gear manufacturers, and car designers all have a responsibility to their end users, and I can't see how tech is any different. I doubt people would react the same way if this were GM instead of Facebook and their employees were up in arms after learning the car they had been designing and building had a track record of blowing up and killing people.
On a more personal side: I honestly cannot stand when most people discuss politics in the Slack at work. The vast majority of comments are snarky, are unsupported (by data) opinions, or are caustically dismissive of opposing views. It's bad enough when people holding political views I disagree with engage in that behavior, but it's much worse when people I do otherwise agree with do. And it happens in just about equal measure, as far as I've experienced.
Work is already stressful enough without adding to it with political fights.
These issues are not part of your job description. You were hired to write Javascript, not to set corporate strategy. Sit in your seat, content yourself with the $500K/yr you're being paid by your betters, and refrain from sharing with everyone else your facile moralism.
Why would they give up control of the world by doing something silly like that? Think about how much political influence Twitter has based solely on which tweets they show the President and the corporate press. Consider how many untraced in-kind donations these companies can make by tweaking which news stories you see. The crazy thing is that while these knobs can be tweaked by humans, they're largely controlled by AI now, and no single person completely understands what's happening inside any of these systems. We're in the early stages of AI controlling the global political future, and it will tend to create whatever kind of future generates the most clicks. It's kind of like the game Universal Paperclips, except with clicks/rage/ads.
I hope you take this as kindly as I intend it, but what you're proposing is a conspiracy theory. This is a relatively nice attribute for a theory to have, because it gives you a nice heuristic for deciding whether the theory is true!
The likelihood of a conspiracy being true decreases as the number of people with knowledge of it and an incentive to report on it increases.
To take an extreme example, if the moon landing was faked, tens of thousands of people have somehow held on to that secret. Tens of thousands of people who could gain overnight notoriety by telling their story, and hundreds would have the proof required to gain even more popularity. The fact that nobody has ever broken ranks is a strong sign that the moon landing was not faked.
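To put rough numbers on the heuristic: assume each of N insiders independently leaks with some small probability p per year, so the secret survives t years with probability (1 - p)^(N*t). A quick sanity check with purely illustrative figures:

    def p_secret_kept(n_people, p_leak_per_year, years):
        # chance that no one leaks: each insider independently keeps
        # quiet with probability (1 - p) per person per year
        return (1 - p_leak_per_year) ** (n_people * years)

    # Even with a very loyal workforce (0.1% leak chance per person per
    # year), a 10,000-person secret is effectively certain to come out:
    print(p_secret_kept(10_000, 0.001, 50))  # ~2e-218, zero in practice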
"Twitter and Facebook are secretly tweaking which news stories Trump and the rest of us are seeing" isn't a conspiracy on nearly the same scale as a faked moon landing. It requires some pretty incredible things to be true though.
- Maybe every employee knows, and none of them have decided to say anything, despite the large incentives to reveal the secret and win their moment in the limelight.
- Maybe not every employee knows, just enough employees know to implement it and hide that implementation from the others. Maybe every employee on the Algorithmic News Feed team knows. I don't know how Twitter and Facebook are structured, the team probably isn't called Algorithmic News Feed, but as one of the more important systems both Facebook and Twitter must dedicate at least a hundred engineers. So, 200 people were quietly chosen for their ideological purity and ability to keep a secret from their peers. These 200 people write code in secret. Somehow they commit lies to the monorepo and apply private patches to the code before deploys. The SREs must also be in on it, because those private patches will still show up in traces and their bugs will show up as errors. All of this happens inside Facebook, a company notorious for employees who speak up and expect transparency. It also happens inside Twitter, a company with such lax controls that until just recently thousands of people could use the internal admin tool to take over any account.
I don't know, I guess it's possible? Maybe you have a better idea for how it could be happening, but it just doesn't seem very likely at all.
(FB == Fish Bowl)
Just under 10 years ago, it would've been considered very rude to push your religious or political opinions onto others, especially in a professional setting, where it would've been considered highly unprofessional. But nowadays that line doesn't seem to exist anymore.
Some people are indeed immune to COVID, most probably babies too. I've personally heard of numerous cases of people not getting the virus at all while their spouse was in intensive care or worse.
Pre-2012 Facebook was awesome. Now the feed is almost exclusively bullshit from people I don't know.
Or they could just not show posts to groups you're not in and from pages and public profiles you don't follow! Allowing something to interject into your newsfeed should be opt-in, but right now it isn't even opt-out, except for not logging on at all. It would also be cool if there were a way to opt-out from seeing shared posts selectively for people on your friends list, e.g. I want to see things that Overly Political Relative posts themselves, but not things that they share from other places.
That being said, I deactivated my Facebook account a couple years ago, so I'm no longer a user whose opinion they should theoretically care about anymore.
Giving people tools to make sub-lists of certain friends/groups/etc. in order to organize their experience better (on their terms, not at FB's whim) would be great, too.
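For what it's worth, the opt-in logic being asked for is trivial to express. Here's a sketch with invented field names, just to show the rule, not anything resembling Facebook's real data model:

    from dataclasses import dataclass, field

    # Sketch of the opt-in feed rules suggested above: show a post only
    # if its author is a friend or a followed page, and show *shared*
    # posts only from friends you've explicitly opted into.

    @dataclass
    class Viewer:
        friends: set
        follows: set
        share_optin: set = field(default_factory=set)  # per-friend opt-in

    @dataclass
    class Post:
        author: str
        is_share: bool  # re-shared from elsewhere, not original content

    def should_show(viewer, post):
        if post.author not in viewer.friends | viewer.follows:
            return False  # nothing interjected from pages you don't follow
        if post.is_share:
            return post.author in viewer.share_optin  # opt-in, not opt-out
        return True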
You're swearing off the internet entirely?
No one has to use Facebook.
I applaud the move.
[1]https://www.bloomberg.com/news/features/2020-09-17/facebook-...
I guess he realizes they don't work.
Because people don't typically use Facebook at work? It's a recreational tool. The issue is that for Facebook employees, they are working on building that recreational tool, and that hampers productivity and professionalism.
> I guess he realizes they don't work.
Or he realizes that they work for a service that's typically consumed during one's free time, and not as a direct component of one's employment.
It feels very old fashioned, but are we not getting a little burned out by a world where people openly nail gun their identity politics to the mast?
When I were a lad (way back in the nineties) I was taught it was rude to talk about politics, religion, or money. This applied to anywhere one was in polite company, not just at home, and definitely not at work.
On one side, some people have an interest in not accepting that their financial success is arbitrary and illegitimate. On the opposite side, some people feel that they have been locked out of an arbitrary wealth transfer and so they have a strong interest in not accepting that they're incompetent losers and that they deserve to be at the bottom of the food chain because they didn't time the market right (a highly speculative and irrational market too!). Or maybe they didn't pass the Facebook whiteboard test job interview questions several years back (which is also an arbitrary hiring process by many accounts)... So basically they missed out on a huge opportunity because of some fickle arbitrary reason.
I don't think blocking discourse is going to improve things. History has shown time and time again that preventing free speech stops people from finding compromises. Then the only solution left to the worsening problems is violence.
If the elites keep suppressing speech, the result will be worse than WW2 and the elites will not stand a chance because it will be fought on their own turf... The elites won't even know who their enemy is. Their own friends and family members could be against them. They won't even realize it until it's too late.
The right thing to do is to find political solutions. I personally think that UBI (Universal Basic Income) would solve most problems. It wouldn't fix the wealth gap immediately, but it would fix the mechanism which is suspected of causing arbitrary (centralizing) wealth transfer and that would at least level the playing field.
UBI is a really good compromise. If the elites are so confident in their superior abilities, surely they have nothing to lose by leveling the playing field right?
BTW, I currently earn 100% passive income so I'm actually saying this as someone who is on the winning side... I've come so close to complete failure - I leaped over the crevasse in the nick of time; the system's fickleness and arbitrariness are crystal clear to me. I'm currently standing on the winning side of a very deep precipice and I can see legions of talented people running straight into it.
“If you don't stick to your values when they're being tested, they're not values: they're hobbies.”
― Jon Stewart
(Many said something similar, but I just love Jon Stewart)
I see so much debate about what's right to do within FB, "how will people change the structure from the inside with this rule?", etc.
QUIT. Just quit. Seriously. Make it public why you quit. Quit en masse. FB is not a good company. Your talents are useful in many other places.
Yes, I'm privileged in saying this. No, I wouldn't feel comfortable quitting my job right now.
But if you believe enough that FB is an evil company--as many of us have known for 10+ years now--you should not work there.
If they are doing bad things, and they are not open to people fixing said bad things, stop helping them do bad things.
Why does quitting help?
Consider IBM. Their revenue is about 20% of its peak. It used to be seen as a monopoly power. Now we barely think about them.
I think that IBM has made itself an unattractive place for employees where it used to be seen as an extremely prestigious place to work. And I think poor quality employees, along with average to mediocre management has squandered an incredible, dominant company over the past 20 years.
Facebook will decline if all the most desirable employees just quit. It’s basically just math - those who know the most, interview best, and have the most accomplishments will be able to leave the fastest. Facebook will be left with the D team and they’ll get taken to the cleaners by competitors (as they already are with TikTok - and what’s the average age of the most engaged users of Facebook again?)
Anyway, the point is, you quitting hits a company in precisely the right place - their wallet. Employee turnover is tracked and costs companies money. Higher turnover does bring about changes.
That's fine, let them. "If I don't do it, someone else will" is a poor justification for anything.
It's a horribly dystopian opinion that disparages the moral action and inhibits good people from doing a good thing: taking a stand against unethical behavior.
I did not vote for them and I would rather have the people I did vote for (and that I can stop voting for) solving those problems.
That is just me though. I'm sure many other people would much rather have their problems solved by Facebook employees than by elected representatives. I mean, I for one also think that most people elect literally the worst people in the world and would rather have them ruled over by unelected clerks at Facebook.
The sooner this fucking election is over, the better. No more having to read about Marxism, Trump, Racists, Snowflakes and Trannies.
I downloaded nVidia Broadcast a while ago, it's really quite good.
Why does it sound good to anyone that Facebook employees should be prevented from discussing the ethical implications of the product they sell their labor to create? Facebook's complete lack of accountability - internal or governmental - has to date:
- incited a genocide [https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...]
- provided a bias toward right-wing content in an American election year (and fired the employee who blew the whistle on it) [https://www.independent.co.uk/life-style/gadgets-and-tech/ne...]
- exacerbated a global pandemic, indirectly causing 1000s of deaths, by not policing Covid misinformation [https://www.theguardian.com/technology/2020/aug/19/facebook-...]
- is arguably a contributor to the global rise in authoritarianism [https://www.theguardian.com/commentisfree/2020/feb/24/facebo...]
and that's really just the tip of the iceberg. If you buy into the notion that Mark Zuckerberg is a nice man in a hoodie trying to run a business that his employees are tearing down with some radical agenda then I'm sorry, but how naive are you? Facebook has a track record of ignoring the consequences of what happens on their platform in order to continue profiting. It's not a mistake, it's the point.
We should be cheering on tech workers challenging the ethics of the work they produce, not talking about how inconvenient it is for Facebook workers to start realizing how questionable the product they're building really is.
It's unfortunately very much in the interest of Facebook's leadership team to discourage it, however, as a clock-in, clock-out, see-and-hear-no-evil labor culture is good for the leaders' personal wealth, so ethics be damned, number go up.
Nobody is stealing anything; as the rules are set right now, influencing public opinion through media channels is not seen as "stealing". If the powers that be were to physically alter the votes and the voting process, that would be another discussion, but almost everything presented in the media is fair game.