Adam, I don't know you -- I came here from the Buzzfeed article criticizing the ethics of the study that linked to this post, but it appears we do have a friend in common.
I just have to ask - you honestly had a hypothesis that amounted to 'perhaps we can make people more depressed,' and decided to test it on a group that hadn't consented to the experiment, with no way to track its impact on their actual lives, only on the language they used in their Facebook posts? And you ask us to trust that this passed an internal review so it's ethical?
Please take a moment to step back and consider that. That appears to have been the train of thought that led to this.
That's appalling. Completely appalling. The Atlantic piece is right -- there's absolutely no way this passes APA deceptive research standards.
Beyond that, you'll never know what impact this actually had on depressed people. You can only measure what they posted to Facebook, which isn't a particularly meaningful or realistic indicator of their emotional state.
If this passed an internal review board, that's only proof that Facebook's internal review standards aren't what they need to be.
You're in a position of extraordinary power, with access to more subjects than any other field study in history, a population larger than most nations, and subject only to how you review yourselves. You could deceive yourself into believing you have informed consent because everyone clicked 'accept' on the Terms of Service years ago, but there's no way even you think that's a meaningful standard.
I trust you're a reasonable person who doesn't set out to cross ethical boundaries. But on this one, I think Facebook needs to admit it did and make some changes. This study was unethical by any reasonable standard. There's nothing wrong with admitting that and figuring out a way to do better.
There's a lot wrong with going ahead with anything like this, ever again.
If the study is unethical by any reasonable standard, does this amount to condemning the whole company, even the whole industry? Because if this is unethical, you are calling out an extremely widespread practice...
$foodcompany recently mixed some small amounts of a known poison into their product to see how their customers would react. This is nothing more than a tweak in their recipe - much larger modifications occur all the time. How can a reasonable person be upset by this?
They designed an experiment where there was a serious hypothesis that it could induce depression in the people subjected to it. That has the potential to cause real harm.
I still defend Facebook's right to do research, but they need to take more care to avoid harm.
The industry as a whole doesn't perform experiments designed to depress people. There could be other unethical experiments too, but they need to be judged on a case-by-case basis.
> Nobody's posts were "hidden," they just didn't show up on some loads of Feed.
How is hiding any different from not showing up?
> And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it.
Not what your own study claimed.
> I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused.
We are not sorry about the research, but only for "the way the paper described it".
> In hindsight, the research benefits of the paper may not have justified all of this anxiety.
In hindsight, our users are hyperventilating frogs. They should learn how to relax in the nice warm(ing) Facebook waters.
Do you understand how widespread this kind of research is? Literally everyone does this.
The act of publishing can't be the ethical breach -- so, focusing just on the research, what do you think they did wrong there?
One of the main objections I'm seeing from people (in my bubble) isn't that Facebook did this, but that Cornell, UCSF, and PNAS participated in it. Facebook can do this, and while it's unethical it's not illegal. Same goes for manipulative people in your everyday life (let me not tell you about the horrific human being of a girlfriend I once had). The point is that science and the people who purport to carry it out should be held to higher, more rigorous ethical standards. If those standards are not met, those people should be excluded from science and their findings ignored. They should not be awarded serious consideration in a journal such as PNAS. That is what is happening here as far as I can see, and while it's playing out in somewhat dramatic fashion, I think it is correct.
Also, if I may toss my personal interpretation of the research into this... ethics aside, the study is extremely weak, and I honestly don't see how it can be published in such a "good" journal. The effect size was < 0.0001. They hand-wavingly try to explain that this is still significant given the sample size. I'm personally not convinced, at all. Sounds like they needed a positive conclusion out of the study and so they came up with a reason for one. If this landed on my desk for review, I would have rejected it on that alone.
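To put a number on that (my own back-of-the-envelope sketch, not the paper's analysis): at this sample size, statistical significance tells you almost nothing, because a two-sample z statistic grows roughly as d * sqrt(n/2) for a fixed standardized effect d and n people per arm, so even a tiny effect eventually clears any p-value threshold as n grows.

    # Back-of-the-envelope only; the effect sizes here are illustrative, not the paper's.
    import math

    def z_stat(d, n_per_arm):
        # normal approximation for a two-sample comparison with equal arms
        return d * math.sqrt(n_per_arm / 2)

    for n in (1000, 100000, 344500):  # ~344,500 per arm would roughly match N = 689,003
        print(n, round(z_stat(0.02, n), 2))
    # 1000 -> 0.45 (nowhere near significant), 100000 -> 4.47, 344500 -> 8.3 (p << 0.001)

So "significant given the sample size" is close to a tautology here; the tiny effect size is the number that actually matters.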
Ignoring the classic "I'm sorry you were freaked out" non-apology here, this response completely misses the point, and tries to re-frame the concept of informed consent into an inconsequential piece of red tape that anxious people worry about.
People were made subjects of a scientific experiment without their knowledge or consent. It doesn't matter that Facebook took steps to make the changes as small as possible; it doesn't matter that the effect on the individual subjects may have been minor; it doesn't matter that the results were interesting. It was an ethical breach, and this tone-deaf response is fairly unsettling.
In general, A/B testing is studying the website. You're not testing us, you're testing the best ways to convince us to do stuff. On top of that, when users visit your website (which is presumably pitching something to them), they know they're being pitched. They know the website is going to try to convince them to do stuff--click over here, watch this video, sign up for foogadgetmcawesome. Same drill with advertising. Yeah, this commercial with the freaking Budweiser puppy made me cr--er, chop onions--in its attempt to get me to buy beer, but I know they're trying to sell me beer, and I knew as soon as the puppy hit the screen that I'd probably start sniffling.
Who decides what experiments social scientists get to run on 1/7th of the world's population?
- Optimizing ad targeting
- Maximizing click-thru
- Maximizing engagement
- Minimizing abandonment/bounce rates
The News Feed already doesn't show you all your friends' posts and hasn't for quite some time. How they choose to "curate" what they do show is going to be dictated by their incentives/needs. Getting outraged about any of this seems akin to getting pissed that the new season of your favorite TV show sucked...
Intentionally making a customer depressed is not something you fuck around with. It's incredibly dangerous, and doing it to a huge subset of your audience with no mechanisms to ensure you don't inflict real harm is utterly reckless and irresponsible.
The individual mechanisms and approach used here for the experiment are not, in and of themselves, objectionable. Many A/B tests are wholly defensible. The end goal and process are the problem here.
So all modern-day advertising, then, that attempts to create a connection with the viewer/reader (as opposed to presenting bare facts)?
I agree that there was a process problem from the point of view of a research study but had this just been an A/B test for engagement levels, we might never have known what Facebook did.
[1] http://www.facebook.com/notes/facebook/calm-down-breathe-we-...
"While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then."
One thought I've had is that the blowback against this incident is less about the research itself and how ethical it is, and more about perception of Facebook in general. My suspicion is a lot of the opposition at this point comes from long-simmering distrust of Facebook and the increasingly negative perception of its brand - this incident is merely the straw that broke the camel's back, for some.
And if the popular response to this revelation reflects people's general views on Facebook, it's not good for the company.
http://www.talyarkoni.org/blog/2014/06/28/in-defense-of-face...
> by far the most likely outcome of the backlash Facebook is currently experiencing is that, in future, its leadership will be less likely to allow its data scientists to publish their findings in the scientific literature
The sentiment is generally something like 'I use facebook because it is too inconvenient not to, but I don't like it', which is a far cry from the initial 'facebook is this cool new thing that I wish more of my friends would use instead of <insert usually shitty local social network>'.
In that light it makes sense for facebook to acquire up-and-coming businesses that compete with them, directly or indirectly, and I imagine there are quite a few people at the company who worry about this situation.
And in that light it is especially strange for facebook to release a study like this. What did they think would happen?
There have been quite a few instances over the past months (or years) that really made me wonder whether facebook's biggest problem, as a company, is that they're stuck in a bubble. A newsfeed that seems to be made for specific types of users, privacy kerfuffles, apps that don't seem to take off, and so on.
'Dogfooding' is generally a smart approach, but it doesn't seem like the optimal approach when your product relies on the whole world for its success...
And, without the benefit of hindsight, it is hard to see why people would get upset about this experiment. I reckon Facebook users are more upset by the truth, that their "personal" feelings are so influenced by trivial aspects of their environment, than by the way Facebook demonstrated it. The truth sometimes hurts, but science isn't to blame.
The issue here is that Facebook conducted behavioural experiments on participants who were not informed that they were part of a study. It is unethical. Whilst the outcomes were tame for those involved, the sheer number of people involved and Facebook's influence and presence in everyday life make it all the more alarming that they attempted it in the first place.
Although, I find that all the more concerning, since you would hope that the university in question would have more ethical clout than a corporation.
What if I'm Amazon or Yelp, and I want to choose review snippets? Is looking for emotionally charged ones and testing to see how that impacts users wrong?
What if it's more direct psychological manipulation? What if I run a productivity app, and I want to see how giving people encouraging tips, like 'Try starting the day by doing one thing really well.' impacts their item completion rate. I'm doing psychological experimentation. I'm not getting my users' permission. But I am helping them. And it's a valid question - maybe these helpful tips actually end up hurting users. I should test this behavior, not just implement it wholesale.
It seems like Facebook had a valid question, and they didn't know what the answer was. Did they go wrong when they published it in PNAS? Or was it wrong to implement the algorithm in the first place? I don't think it was.
If you're doing medical testing in order to roll something out as a pharmaceutical for prescription use or over-the-counter sales, the tests are rolled out in stages, with initial testing being done on a very small number of patients under extreme scrutiny, and even that is only done after the medication has been vetted carefully using animal models. It's extremely important to avoid harming your test subjects.
In comparison, they basically went full-steam on this experiment on hundreds of thousands of people despite the fact that emotional manipulation of this sort is EXTREMELY DANGEROUS. When I say extremely dangerous I mean potentially life-threatening.
What's been happening, over the last five years, is that American society has become more trigger-happy in deducing "accurate" moral conclusions from following online media outlets.
I outlined some of this in cjohnson.io/2014/context, although I didn't appreciate the full power of this conclusion at the time, and so the essay mostly falls short of explaining the entirety of what's happening currently.
In a nutshell: The Web has broken down barriers between contexts that used to live in harmony, ignorant of each other. Now, as the incompatibility of these contexts comes into full focus, society has no choice but to accept the fluidity of context in the information age, or tear itself apart at the seams.
All that was needed to precipitate the decline of Facebook (oh yes, Facebook is going down, short now while supplies last) was some combination of words and contexts that fully elucidate the power of online advertising / data aggregation to have real impact upon people's lives. Put in terms that the "average person" can understand, the impact of this story will be devastating. I feel so bad for the Facebook PR team -- they're simply out of their league here.
The reason this scandal will be the one we read about in the history books is because it provides the chain link between two separate, but very powerful contexts: 1, the context of Nazi-esque social experimentation, and 2, the run-of-the-mill SaaS-style marketing that has come to characterize, well, pretty much every large startup in the valley.
We've reached a point where nobody knows what is going to happen next. Best of luck, people.
That... or getting users to click more ads.
Maybe ad inventory can be optimized for people in a certain emotional state, and the user's wall can be used to induce the most profitable state of mind for fb's available ad inventory. That would be awesome in an evil villain kind of way.
If what they did requires informed consent, what about when the end goal is maximizing click through rate, e.g. by inciting an emotional response?
Let's say FB finds that certain types of posts in the feed cause a nearby ad to be clicked on more. They determine this through pervasive testing of everything that you see and do on their site. They could then adjust their algorithm to account for this behavior to increase clicks/profit.
I think the actions FB takes to monetize the user base are not only far more intrusive; FB is actively searching for and exploiting these effects for profit. If informed consent for TFA is not ridiculous, then I think we have much bigger problems on our hands. What am I missing about the informed consent issue?
"Significance:
We show, via a massive (N=689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues."
"We're conducting a scientific study on mood for the next three months. Would you mind participating? The study won't require you to do anything special or take any further action. Y/N?"
There we go kids, easy as pie; it's called consent to participate! Pretty easy to do without spilling the beans on the study.
It did pass the muster of Cornell's review board, but that was after the data was actually collected, which amounts to ex post facto approval.
Messing with people's mental health is outrageous.
This particular experiment is getting scrutiny because they published the results in a journal and it is a rather egregious example (the hypothesis, and test methodology, both demonstrate a particular brand of recklessness you don't tend to see in website A/B testing.)
EDIT: For reference, I know a lot about this subject because I worked extensively at IMVU, one of the first consumer-facing websites to aggressively A/B test almost everything. I don't have a luddite perspective on this.
I have no problem claiming that this is usually unethical.
Changing a product you provide to a customer is often fine, though there could be contract or consumer-protection law or other legalities to consider. This may be true even when your product is free ("see a lawyer"). So changing your website and trying it on a handful of your audience first is also fine. Assuming no legal issues, there is a general understanding that fixes and improvements happen, and that the client can choose not to participate at any time, so that situation is probably fine as well.
Also intent counts for a lot - "trying to find a better search tool" or other technical features are clear in what they are intending to accomplish: bugfixes and/or new features.
The problem starts when you are trying to experiment on people directly, where the entire goal of the project is to poke at people and see how they react. After seeing in WW2 where that kind of activity can end up, we decided it was a far better idea to put some precautions in place. It's annoying (and makes medical testing MUCH more expensive), but this is one of those situations where it's better to be overly cautious.
What I don't understand about FB is that - compared to experiments with SERIOUS risk such as drug testing - the experiment was rather benign. It shouldn't have been very difficult to get a proper IRB stamp of approval. The informed consent part could (I suspect) have been handled with some web page with an overview of the experiment and an opt-in button.
(and no, opt-in wouldn't have affected the test if you do your statistics correctly and are careful in your language on the opt-in page)
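(To make that concrete, here is a rough sketch of the shape such a design could take; it's hypothetical, not anything Facebook actually built. You ask for consent first, then randomize only within the group that said yes, so the treatment/control comparison stays internally valid; it just generalizes to "users who opted in" rather than to everyone.)

    # Hypothetical sketch only: consent first, then randomize within the consenting group.
    import random

    def assign_arms(users):
        consented = [u for u in users if u["opted_in"]]  # "opted_in" is a made-up flag
        random.shuffle(consented)
        mid = len(consented) // 2
        return consented[:mid], consented[mid:]          # treatment arm, control arm

    treatment, control = assign_arms([
        {"id": 1, "opted_in": True},
        {"id": 2, "opted_in": False},
        {"id": 3, "opted_in": True},
        {"id": 4, "opted_in": True},
    ])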
Failure to take these relatively easy steps is unprofessional at best, and highly suspicious at worst. Acting like human experimentation is not even worthy of such protections makes me wonder what kind of person the experimenter is (stupid? or just badly narcissistic?).
Actually doing such an experiment without the subjects' consent supplies the answer: the experimenter is both stupid and dangerously narcissistic.
The dividing line is largely: are you trying to do things to people behind their back? Or are you including them in the decision to participate (or not participate)?
For more specific details, see your local ethics committee.
http://inserbia.info/today/2014/04/gchq-nsa-infiltrate-socia...
The Intercept article concerning JTRIG actions on social networks:
https://firstlook.org/theintercept/2014/02/24/jtrig-manipula...
This study would fit pretty well in documenting possible manipulations.