Well, I posted this as an RFC for other runc maintainers and contributors; I didn't expect it to get posted to Hacker News. I don't particularly mind hearing outsiders' opinions, but it's very easy for things to get sidetracked or spammy when people with no stake in the game start leaving comments. My goal with the comment about "don't be spammy" was exactly that -- you're free to leave a comment, just think about whether it's adding to the conversation or just looks like spam.
> Specifically popular FOSS projects that are not backed by a company looking to sell AI. Do any of them have a positive Policy, or positions that you want to include?
I haven't taken a very deep look, but from what I've seen, the most common setups are "blanket ban" and "blanket approval". After thinking about this for a few days, I'm starting to lean more towards:
1. LLM use must be marked as such (upfront) so maintainers know what they are dealing with, and possibly to (de)prioritise it if they wish.
2. Users are expected to have verified (in the case of code contributions) that their code is reasonable and that they understand what it does, and/or (in the case of PRs) that the description is actually accurate.
Though if we end up with such a policy we will need to add AGENTS.md files to try to force this to happen, and we will probably need to have very harsh punishments for people who try to skirt the requirements.

> Lobste.rs github disallows AI contribution for an entirely different reason I haven't seen covered in your GH thread yet
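To give a rough idea of the kind of AGENTS.md enforcement I mean -- this is purely a hypothetical sketch of what such a file could say, not an actual file from the runc repository:

```markdown
<!-- AGENTS.md -- hypothetical sketch, not an actual runc policy file -->

Rules for any LLM/agent preparing a contribution to this repository:

1. All LLM-generated or LLM-assisted changes MUST be disclosed upfront
   in the PR description (for example with an "Assisted-by:" trailer),
   so maintainers know what they are dealing with.
2. The human submitter MUST have read and understood every line of the
   change and be able to answer review questions about it.
3. PR descriptions and commit messages MUST accurately describe the
   change; do not submit generated text the submitter has not verified.
```

The idea being that agents which actually honour AGENTS.md would self-disclose, while anyone stripping the disclosure is knowingly skirting the policy.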
AFAICS, it's because of copyright concerns? I did mention it in my initial comment, but far too much of our industry is turning a blind eye to that issue, so focusing on it is just going to lead to drawn-out arguments with people cosplaying (badly) as lawyers. I think that even absent the obvious copyright issues, it is not possible to honestly sign the Developer Certificate of Origin[1] (a requirement to contribute to most Linux Foundation projects), so AI PRs should probably be rejected on that basis alone.
But again, everyone wants to discuss the utility of AI, so I thought that was the simplest thing to start the discussion with. Also, the recent court decisions in the Meta and Anthropic cases[2] (while not acting as precedent) are a bit disheartening for those of us with the view that LLMs are obviously industrial-grade copyright infringement machines.
[1]: https://developercertificate.org/

[2]: https://observer.com/2025/06/meta-anthropic-fair-use-wins-ai...