That said, the current incident seems to have gone beyond the limits of that one and is a new incident. I just thought it would be fair to include their "side".
(3). We send the incorrect minor patches to the Linux community through email to seek their feedback.
(4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
------------------------
But this shows a distinct lack of understanding of the problem:
> This is not ok, it is wasting our time, and we will have to report this,
> AGAIN, to your university...
------------------------
You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:
1. The voluntary consent of the human subject is absolutely essential.
https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...
I agree this whole thing paints a really ugly picture, but it seems to validate the original concerns?
Edit: I am not defending the researchers, who may have misled the IRB, or the IRB, which likely has little understanding of what is actually happening
* Is this human research? This is not considered human research. This project studies some issues with the patching process instead of individual behaviors, and we did not collect any personal information. We send the emails to the Linux community and seek community feedback. The study does not blame any maintainers but reveals issues in the process. The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained).
Exactly this. Research involving human participants is supposed to have been approved by the University's Institutional Review Board; the kernel developers can complain to it: https://research.umn.edu/units/irb/about-us/contact-us
It would be interesting to see what these researchers told the IRB they were doing (if they bothered).
Edited to add: From the link in GP: "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained)"
Okay so this IRB needs to be educated about this. Probably someone in the kernel team should draft an open letter to them and get everyone to sign it (rather than everyone spamming the IRB contact form)
> IRB exempt was issued
If this behaviour is tolerated by the University of Minnesota (and it appears to be so) then I suppose that's another institution on my list of unreliable research institutions.
I do wonder what the legal consequences are. Would knowingly and willfully introducing bad code constitute a form of vandalism?
from Lu's list of publications at https://www-users.cs.umn.edu/~kjlu/
Seems like a conference presentation at IEEE at minimum?
Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?
From a common sense standpoint, it seems to me this is more about medical experiments. Yesterday I put some of my kids' toys away without telling them to see if they'd notice and still play with them. I don't think I need IRB approval.
It is unlikely that a business conducting A/B testing or a parent interacting with their children are receiving federal funds to support it. Therefore, their work is not subject to IRB review.
Instead, if you are a researcher funded by federal money (even if you are doing work on your own children), you have to receive IRB approval for any work involving human interaction before you start conducting it.
Potentially yes, actually.
I still think it should be possible to run some A/B tests, but a lot depends on the underlying motivation. The distance between such tests and malicious psychological manipulation can be very, very small.
Psychology and sociology are both subject to the IRB as well.
Regardless of their department, this feels like a psychology experiment.
I would argue that ordinary A/B tests, by their very nature, are not "experiments" in the sense that the restriction is intended for, so there is no reason for them to be considered unethical.
The difference between an A/B test and an actual experiment that should require the subjects' consent is that either of the test conditions, A or B, could have been implemented ordinarily as part of business as usual. In other words, neither A nor B by themselves would need a prior justification as to why they were deployed, and if the reasoning behind either of them was to be disclosed to the subjects, they would find them indistinguishable from any other business decision.
Of course, this argument would not apply if the A/B test involved any sort of artificial inconvenience (e.g. mock errors or delays) applied to either of the test conditions. I only mean A/B tests designed to compare features or behaviours which could both legitimately be considered beneficial, but the business is ignorant of which.
Assuming this isn't being asked as a rhetorical question, I think that's exactly what turned the now infamous Facebook A/B test into a perceived unethical mass manipulation of human emotions. A lot of folks are now justifiably upset and skeptical of Facebook (and big tech) as a result.
So to answer your question: yes, if that test moves into territory that would feel like manipulation once the subject is aware of it. Maybe especially so because users are conceivably making a /choice/ to use said product and may switch to an alternative (or simply divest) if trust is lost.
Researchers at a company could arguably be deemed to be engaging in unethical research and barred from contributing to the scientific community. Even doing experiments on your kids may be deemed crossing the line.
The question I have is when this applies. If you experiment on your own kids but never publish, is it okay? Does the act of attempting to publish results retroactively make an experiment unethical? I'm not certain these things have been worked out, because of how rarely people try to publish anything that wasn't part of an official experiment.
I think it shows that this type of study might well be needed, it just needs to be done better and with the consent of the maintainers.
If they do so, the maintainers become more vigilant and the experiment fails. But the key to the experiment is that maintainers are not as vigilant as they should be. It's not an attack on the maintainers, though, but on the process.
As I understand it, any "experiment" involving other people who weren't explicitly informed of the experiment beforehand needs to be a lot more carefully considered than what they did here.
> I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.
> These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.
( https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... )
How does that fit in with your explanation?
They did not say that they were hoping for feedback on their tool when they submitted the patch; they lied about their code doing something it does not do.
>How does that fit in with your explanation?
It fits the narrative of submitting hypocrite commits to the project.
> They obviously were _NOT_ created by a static analysis tool that is of
> any intelligence, as they all are the result of totally different
> patterns, and all of which are obviously not even fixing anything at
> all. So what am I supposed to think here, other than that you and your
> group are continuing to experiment on the kernel community developers by
> sending such nonsense patches?
> When submitting patches created by a tool, everyone who does so submits
> them with wording like "found by tool XXX, we are not sure if this is
> correct or not, please advise." which is NOT what you did here at all.
> You were not asking for help, you were claiming that these were
> legitimate fixes, which you KNEW to be incorrect.
Sounds like they knew exactly what they were doing.
> 1. The voluntary consent of the human subject is absolutely essential.
The Nuremberg code is explicitly about medical research, so it doesn't apply here. More generally, I think that the magnitude of the intervention is also relevant, and that an absolutist demand for informed consent in all - including the most trivial - cases is quite silly.
Now, in this specific case I would agree that wasting people's time is an intervention that's big enough to warrant some scrutiny, but the black-and-white way some people phrase this really irks me.
PS: I think people in these kinds of debate tend to talk past one another, so let me try to illustrate where I'm coming from with an experiment I came across recently:
To study how the amount of tips waiters get changes in various circumstances, some psychologists conducted an experiment where the waiter would randomly either give the guests some chocolate with the bill, or not (control condition).[0] This is, of course, perfectly innocuous, but an absolutist claim about research ethics ("You do not experiment on people without their consent.") would make research like this impossible, without any benefit.
[0] https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1559-1816...
For that matter, what's the difference between this and pentesting?
Also, IRB review is only for research funded by the federal government. If you’re testing your kid’s math abilities, you’re doing an experiment on humans, and you’re entirely responsible for determining whether this is ethical or not, and without the aid of an IRB as a second opinion.
Even then, successfully getting through the IRB process doesn’t guarantee that your study is ethical, only that it isn’t egregiously unethical. I suspect that if this researcher got IRB approval, then the IRB didn’t realize that these patches could end up in a released kernel. This would adversely affect the users of billions of Linux machines worldwide. Wasting half an hour of a reviewer’s time is not a concern by comparison.
Usually when an organization is pen-tested it consented to being pen-tested (likely even requesting it).
Here there was no contact with the Linux Foundation to gain consent for the experiment.
I wonder how many zero days have been included already, for example by nation state actors...
If I were at the receiving end, I’d think about checking each patch multiple times before accepting it.
And further, pretty much everybody knows that malicious actors, if they tried hard enough, would be able to sneak through hard-to-find vulns.
And is this anything new?
And if I hit you over the head with a hammer while you are not suspecting it, does this prove anything other than that I am a thug? Does it help you? Honestly?
>1. The voluntary consent of the human subject is absolutely essential.
Does this also apply to scraping people's data?
By this logic, e.g., resume callback studies aiming to study bias in the workforce would be impossible.
> 1. The voluntary consent of the human subject is absolutely essential.
Which is rather useless, as for many experiments to work, participants have to either be lied to, or kept in the dark as to the nature of the experiment, so whatever “consent” they give is not informed consent. They simply consent to “participate in an experiment” without being informed as to the qualities thereof so that they truly know what they are signing up for.
Of course, it's quite common in the U.S.A. to perform practice medical exams on patients who are going under narcosis for an unrelated operation, and they never consented to that, but the hospitals and physicians that partake in this are not sanctioned, as it's “tradition”.
Know well that so-called “human rights” have always been, and shall always be, a show of air that lacks substance.
Fascinating. Can you provide links?
Like somebody picking your locks and then suggesting, 'one approach to stop this would be to post a "do not pick" sign'
from https://www-users.cs.umn.edu/~kjlu/
If the original research results in a paper and IEEE conference presentation, why not? There's no professional consequences for this conduct, apparently.
And insists it was not human research. [1]
How can people like this be professors?
[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
Many of the ~200 UMN patches being reverted might have nothing to do with these researchers at all.
Hopefully someone from the university clarifies what's happening soon before the angry mob tries to eat the wrong people.