[1] https://grants.nih.gov/policy-and-compliance/policy-topics/h...
This seems like one of those situations that would usually require regular review, if only to err on the side of caution. It's worth pointing out that there are exceptions, though:
https://grants.nih.gov/sites/default/files/exempt-human-subj...
Generally those exceptions fall into "publicly observable behavior", which I guess I could see this falling into?
The way the whole thing actually happened is ethically unjustifiable, but I can see an IRB coming to an exemption decision. I would probably disagree with that decision, but I can see how it would happen.
In some narrow legalistic sense I can also see an IRB exempting it because the study had already happened and there was nothing they could do about it. It's a weird thing to do, but IRBs do weird things sometimes.
I mean, I feel like the IRB mostly deals with medical stuff. "I want to electrocute these students every week to see if it cures asthma." "No, that's too much... every other week at most." "Great, I'll charge up the electrodes."
So if a security researcher rolls in after the fact and says, "Umm, yeah, so this has to do with nerd stuff, computers and kernels, no humans, and I just want it all to be super secure so nobody gets hacked, sound good?", the answer is, "OK, sure, we don't care if no people are involved and we don't really understand that nerd stuff, but hackers bad and you're fighting hackers."
Yes, they were. What kind of argument is this? If you submit a PR to the kernel you are explicitly engaging with the maintainer(s) of that part of the kernel. That's usually not more than half a dozen people. Seems pretty specific to me.
I assure you that it falls under the IRB's purview -- I came into this thread intending to make the grandparent's comment. When using deception in a human-subjects experiment, there is an additional level of rigor: you usually need to debrief the participants about said deception, not wait for them to read about it in the press.
(And if a human is reviewing these patches, then yes, it is human subjects research.)
Yes, if, in the course of that experimentation, you also shipped potentially harmful products to buyers "to see if Amazon actually let me".
There are cases where deception (as they call it) can be approved, even by ethics boards. Based on the Verge's article, this research setup should not have been approved even then. But the topic itself seems as relevant as ever with the xz case and all.
I reported my advisor to university admin for gross safety violations, attempting to collect data on human subjects without any IRB oversight at all, falsifying data, and falsifying financial records. He brought his undergrad class into the lab one day and said we should collect data on them (low-hanging fruit!) with machinery that had only started working a few days prior. We hadn't even begun developing basic safety features for it, and we hadn't even discussed the design of experiments or requesting IRB approval. We (grad students) cornered the professor as a group and told him that was wildly unacceptable, and he tried it multiple more times before we reported him to university admin. Admin ignored it completely.

In the next year, we also reported him for falsifying data in journal papers and falsifying financial records related to research grants. And, oh yeah, assigning Chinese nationals to work on DoD-funded projects that explicitly required US citizens and lying to the DoD about it. The university completely ignored that too. And then he got tenure. I was in a top-10 US grad program. So in my experience, as long as the endowment is growing, university admin doesn't care about much else.
It's like if my wife said "I'm taking the car to get it washed" and then she actually takes the car to the junkyard and sells it. "Ha, you got fooled!". I mean, yes, obviously. She's on the inside of my trust boundary and I don't want to live a life where I'm actually operating in a way immune to this 'exploit'.
I get that others object to the human experimentation part of things and so on, but for me that could be justified with a sufficiently high bar of utility. The problem is that this research is useless.
1. Prof and students make fake identities
2. Using these identities, they submit patches with hidden vulnerabilities to Greg KH and friends
3. Some of these patches are accepted
4. They intervene at this point and reveal that the patches are malicious
5. The patches are then not merged
6. This news comes out and Greg KH applies a big negative trust score to umn.edu
7. Some other student submits a buggy patch to Greg KH
8. Greg KH assumes that it is more research like this
9. Student calls it slander
10. Greg KH institutes a policy for his tree that all umn.edu patches are auto-rejected, and begins reverting all patches previously submitted from such addresses (see the sketch after this list)
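For the curious, a revert sweep like that starts from listing a tree's past commits by author email domain. Here is a minimal sketch in Python (a hypothetical illustration, not Greg KH's actual tooling; it assumes git is installed and repo_path points at a local clone of the kernel tree), just to show the kind of query involved:

    # Hypothetical sketch, not the actual kernel workflow: list commits whose
    # author email belongs to a given domain, as a starting point for reviewing
    # or reverting past submissions from that domain.
    import subprocess

    def commits_from_domain(repo_path: str, domain: str) -> list[str]:
        """Return '<short-hash> <author-email> <subject>' lines for matching commits."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--all", "--format=%h %ae %s"],
            capture_output=True, text=True, check=True,
        ).stdout
        matches = []
        for line in log.splitlines():
            parts = line.split(" ", 2)
            # parts[1] is the author email; keep the line if it is from the domain.
            if len(parts) >= 2 and parts[1].endswith("@" + domain):
                matches.append(line)
        return matches

    if __name__ == "__main__":
        for entry in commits_from_domain(".", "umn.edu"):
            print(entry)

The actual reverts were proposed as ordinary patches on the mailing list; this only illustrates how coarse a filter an email domain is.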
To be honest, I can't imagine any other outcome. No one likes being cheated out of work they did, especially when a lot of it is volunteer work. But I was wrong to say the research was useless: it does demonstrate that identities without provenance can get malicious code into the kernel.
Perhaps what we really need is a Social Credit Score for OSS ;)
Also, banning umn.edu email addresses didn't even make sense, since the hypocrite commits were all from Gmail addresses.
The blanket ban was kicked off by another incident after the hypocrite commit incident.
I believe most people assume that the Linux kernel couldn't be compromised because there are multiple approval processes and highly professional people vetting the patches.
It seems like a big vulnerability: if a teaching assistant could do that, there is no doubt that government agencies can too.
I had to apply for exemptions often in grad school. You must do so before performing the research -- it is not ethical to wait for an outcry and then apply after the fact. Any well-run CS department trains its incoming students on IRB procedures during orientation, and Minnesota risks all of its federal funding if it continues to allow researchers to operate in this manner.
(Also "exempt" usually refers to exempt from the more rigorous level of review used for medical experiments -- you still need to articulate why your experiment is exempt to avoid people just doing whatever they want then asking for forgiveness after the fact)
This level of malfeasance strikes me as something akin to plagiarism for a professional writer.
More to the point: are they salty because the authors have arguably proved that it's entirely possible to get critical flaws into the Linux kernel with social engineering? How else is something like that meant to be tested?
If you give them a heads-up they'll pay more attention for a short duration of time.
But there are always the BSDs.
Money is money and buys time: no harm done, useful research conducted, and a whole lot of publicity gained.
That says a lot about Linux kernel safety.