> Because of this, I will now have to ban all future contributions from your University.
Understandable from gkh, but I feel sorry for any unrelated research happening at the University of Minnesota.
EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.
In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Lu[3], or from Lu himself, dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Lu[4].
Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of these people appear to be associated with Lu in any significant way.
[1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22
[2]: https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...
New plan: show up at Lu's house with a lock-picking kit while he's away at work, pick the front door and open it, but don't enter. Send him a photo: "hey, just testing, bro! Legitimate security research!"
Notify someone up the chain that you want to submit malicious patches, and ask them if they want to collaborate.
If your patches make it through, treat it as a red-team exercise: everyone who reviewed them and let them slip gets to have a nervous laugh, the commits get rejected, and everyone has learned something.
From the looks of it, they didn't do even that when the patches were heading out to stable releases?
That's just using the project, with no interest in avoiding the issues they cause.
Yeah, for one thing, to be a good analogy, rather than lock-picking without entering when he's not home and leaving a note, you'd need to be an actual service worker for a trusted home-services business and use that trust to enter when he is home, conduct sabotage, and say nothing until the sabotage is detected, traced back to you, and cited in his cancelling the contract with the firm you work for, and only then cite the "research" rationale.
Of course, if you did that you would be both unemployed and facing criminal charges in short order.
He shouldn't be surprised if it has some unexpected consequences for his own personal security, like some unknown third parties porting away his phone number(s) as a social-engineering test, pen-testing his office, or similar.
That's the university's problem to fix.
Unless both the professors and the IRB leadership get an uncomfortable lecture in the chancellor's office, nothing at all changes.
Given that the professor appears to be a frequent flyer with this, the kernel folks banning him and the university prohibiting him from using university resources for anything kernel-related seems reasonable and gets the point across.
There are a lot of people to feel bad for, but none is at the University of Minnesota. Think of the Austrians.
It's not wrong for the kernel community to decide to blanket ban contributions from the university. It obviously makes sense to ban contributions from institutions which are known to send intentionally buggy commits disguised as fixes. That doesn't mean you can't feel bad for the innocent students and professors.
All you have to do is look at the reverted patches to see that these are either mythical or at least few and far between.
This analogy is invalid, because:
1. The experiment is not on live, deployed versions of the kernel.
2. There are mechanisms in place for preventing actual merging of the faulty patches.
3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.
None of the above is true for an in-flight airliner.
However, I'm not claiming the experiment was not ethically faulty. Certainly, the U Minnesota IRB needs to issue a report and an explanation of its involvement in this matter.
The patches were merged, and the email thread discusses that the patches made it to the stable tree. Some (many?) distributions of Linux track and run from stable.
> 2. There are mechanisms in place for preventing actual merging of the faulty patches.
Those mechanisms failed.
> 3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.
Arguably. But I think this is a weak argument.
It's irrelevant whether any bugs were ultimately introduced into the kernel. The fact is the researchers deliberately abused the trust of other human beings in order to experiment on them. A ban on further contributions is a very light punishment for such behavior.
The main problem is that they have (so far) refused to explain in detail how the patches were reviewed and merged. I have not gotten any links to any LKML post, even after Kangjie Lu personally emailed me to address any concerns.
I like that. That's what makes universities interesting to me.
I don't like the standard here of penalizing or lumping everyone there together, regardless of whether they contributed in the past, contribute now, or will in the future.
I know the kernel doesn't need anyone's contributions anyhow, but as a matter of policy this seems like a bad one.
The damage is not that big. Only four committers to Linux in the last decade: two of them, the students, with malicious backdoors; the professor, not with bad code but with bad ethics; and the fourth, the assistant professor, who submitted good patches and has already left.
For example: what happens when the students graduate? Does the ban follow them to any potential employers? Or if the professor leaves for another university to continue this research?
Does the ban stay with UMN, even after everyone involved left? Or does it follow the researcher(s) to a new university, even if the new employer had no responsibility for them?
It stays with the university until the university provides a good reason to believe they should not be particularly untrusted.
It's a chain that gets really unpleasant.
If the university has a problem, then it should first look into managing this issue at its end, or force people to use personal email IDs for such purposes.
If you don't manage to reach that goal, too bad, but you can contribute on a personal capacity, and/or go work elsewhere.
It sounds like you're making impossible demands of unrelated people, while doing nothing to solve the actual problem because the perpetrators now know to just create throwaway emails when submitting patches.
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
(All of this ASSUMING that the intent was as described in the thread.)
I worked on a number of studies through undergrad and grad school, mostly involving having people test software. The work to get a study approved was easily 20 hours for a simple "we want to see how well people perform tasks in the custom software we developed; they'll come to the university and use our computers, to avoid concerns about security bugs in the software." You needed a script of everything you would say, every question you would ask, and how the data would be collected, analyzed, and stored securely. Data retention and destruction policies had to be noted. The key linking a person's name and their participant ID had to be stored separately. You had to specify how you would recruit participants, down to the exact poster or email you intended to send out. The reading level of the instructions and the aptitude of the audience were considered (so academic mumbo jumbo didn't confuse participants).
If you check the box that you'll be deceiving participants, there was another entire section to fill out detailing how they'd be deceived, why it was needed for the study, etc. Because of past unethical experiments in the academic world, there is a lot of scrutiny and you typically have to reveal the deception in a debriefing after the completion of the study.
Once a study was accepted (in practice, a multiple-month process), you could make modifications with an order of magnitude less effort. Adding questions that don't involve personal information of the participant is a quick form and an approval some number of days later.
If you remotely thought you'd need IRB approval, you started a conversation with the office and filled out some preliminary paperwork. If it didn't require approval, you'd get documentation stating such. This protects the participants, university, and professor from issues.
--
They took it really seriously. I'm familiar with one study where participants would operate a robot outside. An IRB committee member asked what would happen if a bee stung the participant. If I remember right, the resolution was that an EpiPen and someone trained in how to use it had to be present during the session.
Low trust and negative trust should be fairly obvious costs to messing with a trust model - you could easily argue this is working as intended.
The objectionable part is that the group allegedly continued after having been told to stop by the kernel developers.
They are conducting research to demonstrate that it is easy to introduce bugs in open source...
(whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)
[removed this ranting that does not apply since they are contributing a lot to the kernel in good ways too]
> They are conducting research to demonstrate that it is easy to introduce bugs in open source...
That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.
(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)
> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards
How do we know that? We know things by regularly testing them. That's literally what this research is: checking how likely it is that intentional vulnerabilities are caught during the review process.
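To make that concrete, here is a hypothetical userspace sketch (all names invented, not taken from any actual UMN patch) of the class of bug at issue: a plausible-looking "error-path fix" that actually introduces a use-after-free, exactly the kind of change a reviewer has to catch.

    /* Hypothetical illustration: a "cleanup fix" that frees a buffer on an
       error path, leaving the caller with a dangling pointer. Compile with
       e.g. clang -fsanitize=address to see the use-after-free fire. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct conn { char *buf; };

    /* Pretend transport: fails on empty payloads. */
    static int do_send(const char *buf) { return buf[0] == '\0' ? -1 : 0; }

    static int send_reply(struct conn *c)
    {
        if (do_send(c->buf) != 0) {
            free(c->buf);   /* the "fix": plug a leak on the error path... */
            return -1;      /* ...but c->buf now dangles for the caller */
        }
        return 0;
    }

    int main(void)
    {
        struct conn c = { .buf = strdup("") };
        if (send_reply(&c) != 0)
            printf("send failed, payload: %s\n", c.buf); /* use-after-free */
        free(c.buf);                                     /* and a double free */
        return 0;
    }

In isolation the freed-on-error hunk looks like a legitimate leak fix; a reviewer only catches it by knowing that callers still use (and later free) the buffer. That review-robustness property is precisely what the study was probing.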
The important thing is not to hunt for motives but to identify and quarantine the saboteurs to prevent further sabotage. Complaining to the University's research ethics board might help, because, regardless of intent, sabotage is still sabotage, and that is unethical.
"Dear GK-H: I would like to have my students test the security of the kernel development process. Here is my first stab at a protocol, can we work on this?"
and
"We're going to see if we can introduce bugs into the Linux kernel, and probably tell them afterwards"
is the difference between white-hat and black-hat.
Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.
To address your first criticism: benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm. Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.
This is a ridiculous conclusion. I do agree with the kernel maintainers here, but there is no way to conclude that the researchers in question "hate open source", and certainly not that such an attitude is shared by the university at large.
I fixed my sentence.
I still think that these professors, whether genuinely or through a lack of willingness, do not understand the mechanism by which free software achieves its greater quality compared to proprietary software (which is a fact).
They just remind me of the good old days of FUD against open source from Microsoft and its minions...
I'll just leave my comment as it is. The university administration still bears responsibility for the fact that IRB review was waived.
That's not true at all. There are many internet-critical projects with tons of holes that are not found for decades, because nobody except the core team ever looks at the code. You have to actually write tests, do fuzzing, and run static/memory analysis, etc. to find bugs/security holes. Most open source projects don't even have tests.
Assuming people are always looking for bugs in FOSS projects is like assuming people are always looking for code violations in skyscrapers, just because a lot of people walk around them.
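For what it's worth, the bar for "actually looking" is low in tooling terms. Here is a minimal sketch of the kind of fuzz testing meant above, assuming clang with libFuzzer is available; the target function parse_header is invented for the example.

    /* Minimal libFuzzer harness sketch.
       Build (assumes clang with libFuzzer): clang -g -fsanitize=fuzzer,address fuzz.c */
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical code under test: parse a [type][len][body...] header. */
    static int parse_header(const uint8_t *buf, size_t len)
    {
        if (len < 2)
            return -1;                 /* need at least type and length bytes */
        size_t body_len = buf[1];
        if (body_len > len - 2)
            return -1;                 /* the bounds check the fuzzer hammers on */
        return buf[0];                 /* header type of a well-formed input */
    }

    /* libFuzzer calls this repeatedly with coverage-guided mutated inputs;
       ASan turns any out-of-bounds access into a crash report. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_header(data, size);
        return 0;
    }

Point being: bugs get found when someone actually runs this sort of harness against the code, not merely because the code is public.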
Which is why there have never been multi-year critical security vulnerabilities in FOSS software.... right?
Sarcasm aside, because of how FOSS software is packaged on Linux we've seen critical security bugs introduced by package maintainers into software that didn't have them!
A maintainer package is just one more piece of open source software (thus also in need of reviews and audits)... which is why some people prefer upstream-source-based distros, such as Gentoo, Arch when you use git-based AUR packages, or LFS for the hardcore fans.