At the moment, the only legitimate uses of `pull_request_target` are for things like labeling and auto-commenting on third-party PRs. But there's no reason for these actions to have default write access to the repository; GitHub could and should grant fine-grained or (even better) single-use tokens that enable exactly those operations.
(This is why zizmor blanket-flags all use of `pull_request_target` and other dangerous triggers[1]).
The best move would be for GitHub to add a setting that allows automation to run on PRs that don't merge cleanly, off by default and really only intended for use with linters. Until that happens, though, `pull_request_target` is the only game in town for getting around that limitation, much to the sadness of me and other SecDevOps engineers.
NOTE: with these external tools you absolutely cannot do the merge manually in GitHub unless you want to break the entire thing. It's a whole heap of not fun.
However, it's worth noting that you don't (necessarily) need `pull_request_target` for the OIDC credential in a private repo: all first-party PRs will get it with the `pull_request` event. You can configure the subject for that credential with whatever components you want to make it deterministic.
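For illustration (workflow name and repo are placeholders), a first-party PR workflow can mint the OIDC credential on the plain `pull_request` event, and the cloud side then matches the deterministic `sub` claim:

```yaml
# Sketch: OIDC on the plain pull_request event; no pull_request_target needed.
name: deploy-preview
on: pull_request
permissions:
  id-token: write   # request the OIDC credential
  contents: read
jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The token's `sub` defaults to something like
      #   repo:someorg/somerepo:pull_request
      # and can be pinned to other claim components per-repo via
      # GitHub's OIDC subject-claim customization REST API.
```

The cloud provider's trust policy then matches on that `sub` value, so no long-lived secret ever lives in the workflow.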
> This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does. This prevents execution of unsafe code from the head of the pull request that could alter your repository or steal any secrets you use in your workflow.
Which is comical given how easily secrets were exfiltrated.
GitHub has written a series of blog posts[1] over the years about "pwn requests," which do a great job of explaining the problem. But the misleading documentation persists, and has led to a lot of user confusion where maintainers mistakenly believe that any use of `pull_request_target` is somehow more secure than `pull_request`, when the exact opposite is true.
[1]: https://securitylab.github.com/resources/github-actions-prev...
Remember the Python packages that got pwned with a malicious branch name containing shellshock-like code? Yeah, that incident.
I blogged about all the vulnerable variables at the time, and how the attack works from a pentesting perspective [1].
[1] https://cookie.engineer/weblog/articles/malware-insights-git...
In the end, nixpkgs wants to be Wikipedia-easy for any rando to modify, and fears that any attempt at security will make volunteers run screaming, because it is primarily focused on being a hobby distro.
That's just fine, but people need to know this, and stop using and promoting Nix in security critical applications.
An OS that will protect anything of value must have strict two-party hardware signing requirements on all changes, with a decentralized trust model that places no trust in any single computer or person.
Shameless plug, that is why we built Stagex. https://stagex.tools https://codeberg.org/stagex/stagex/ (Don't worry, not selling anything, it is and will always be 100% free to the public)
As mentioned in the RFC discussion, the major blocker here is the lack of any ability for contributors to sign from mobile devices. Building tooling for mobile devices is way out of scope for nixpkgs, and would be a large time sink for very little gain over what we have now. Further, while I sign my commits because I believe it slightly increases their provenance, there is nothing preventing me from pushing an unsigned commit, or a commit with an untrusted key, and that is, in my opinion, fine.

For a project like Stagex (and as a casual cybersecurity enthusiast and researcher, I thoroughly appreciate the security work you all do), this layer of security is important, as it's clearly part of the project's security posture; nixpkgs takes a different view of trustworthiness. While I disagree with your conclusion that this sort of security measure would "make volunteers run screaming", I would be interested in seeing statistics on how widely these mechanisms are already used in nixpkgs. Nixpkgs is also definitely not focused on being a hobby distro, considering it's in use at many major companies around the world (just look at NixCon 2025's sponsor list).
To be clear, this isn't to say that all security measures are worthless. Enabling more usage of security features is a good thing, and it's something I know folks are looking into (but I'm not going to speak for them), so this may change in the future. However, I do agree with the consensus that for nixpkgs, mandatory commit signing would be very bad overall for the ecosystem, despite its advantages. Also, I didn't see anything in your PR about "independent signed reproducible builds"; for a project the size of nixpkgs, that would be a massive infrastructure undertaking for a third party. NixOS is very close to being fully reproducible (https://reproducible.nixos.org/), but we're not there yet.
In conclusion, while I agree that signing commits would be a good improvement, the downsides for nixpkgs are significant enough that I don't believe it would be a good move. It's definitely something to keep thinking about as nixpkgs and Nix continue to refine their security practices, though. I would also love some more information about how Stagex does two-party hardware signing, as that sounds interesting as well. Thank you so much!
Edit: Also, I want to be very clear: I am not saying you're entirely wrong, or trying to disparage the very interesting and productive work that Stagex is doing. However, there were some (what I felt were) misconceptions I wanted to clear up.
> in nixpkgs that would have allowed us to pwn pretty much the entire nix ecosystem and inject malicious code into nixpkg
OP provided a mechanism to stymie the attack. The counter from your position needs to be how the Nix project otherwise solves this problem, not "this isn't the right approach" for hand-wavy reasons. Given the reasoning stated, OP has convinced me that Nix isn't actually serious about security: this should be treated as an absolutely critical vulnerability, with several hardening layers wrapped around it to prevent such techniques.
Do you mean a significant number of nixpkgs contributors make nixpkgs PRs from their phones... via the github web editor?
That seems weird to me at face value... editing code is hard enough on a phone, but this is also for a linux distro (definitely not a mobile os today), not a web app or something else you could even preview on your phone.
Edit: Per https://docs.github.com/en/authentication/managing-commit-si... the web editor can/does sign commits...
Lack of supply chain integrity controls as a means to reduce contribution friction to maximize the number of packages contributed is a perfectly valid strategy for a workstation distribution targeted at hobby developers.
Volunteers can do what they want, so that RFC convinced me stagex needed to exist for high security use cases, as Nix was explicitly not interested in those.
This is all fine. The reason I speak in a tone of frustration whenever Nix comes up is that, as a security auditor, I regularly see Nix used to protect billions of dollars in value, or human lives. Sysadmins uneducated in supply chain integrity just assume Nix does the security basics and has some sort of web-of-trust solution, as even OG distros like Debian do, but that is just not the case.
Nix maintainers did not ask to be responsible for human lives and billions in value, but they are, and people will target them over it. I am afraid this is going to get people hurt.
https://github.com/jlopp/physical-bitcoin-attacks
Nix choosing low supply chain security to maximize the total number of packages endangers themselves and others every time someone ignorantly deploys nix for high value applications.
If nix chooses to maintain their status quo of no commit signing, no review signing, no developer key pinning, and no independent reproducible build signing, they need to LOUDLY warn people seeking to build high risk systems about these choices.
Even those basic supply chain controls, which we use in stagex, are nowhere near enough, but they are the bare minimum for any distro seeking to be used in production.
Bearer tokens should be replaced with schemes based on signing, and the private keys should never be directly exposed (if they are, there's no difference between them and a bearer token). Signing agents do exactly that. GitHub's API is HTTP-based, but mutual TLS authentication with a signing agent should be sufficient.
It's not used by anyone because nobody actually gives a shit about security, the entire industry is basically a grift.
But then in a pull request, the CI/CD pipeline actually runs untrusted code.
Getting this distinction correct 100% of the time in your mental model is pretty hard.
For the base case, where you maybe run a test suite and a linter, it's not too bad. But then you run into edge cases where you have to integrate with your own infrastructure (for end-to-end tests, for checking whether contributors have submitted CLAs, or anything else that requires a bit more privilege), and then it's very easy to get bitten.
To make things worse, GitHub has made certain operations on PRs (like auto-labeling and leaving automatic comments) completely impossible unless the extremely dangerous version (`pull_request_target`) is used. So this is a case of incentive-driven insecurity: people want to perform reasonable operations on third-party PRs, but the only mechanism GitHub Actions offers is a foot-cannon.
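If `pull_request_target` genuinely can't be avoided (auto-labeling being the canonical case), the blast radius can at least be confined by requesting only the permission needed and never checking out or executing PR head code in that job. A sketch, using the real `actions/labeler` action:

```yaml
# Sketch: pull_request_target confined to API-only operations.
name: label-prs
on: pull_request_target
permissions:
  pull-requests: write   # the only grant; default token scopes are dropped
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # Labeling happens via the API; no step checks out or runs PR code.
      - uses: actions/labeler@v5
```

The danger returns the moment an `actions/checkout` of the PR head, or any execution of PR-controlled content, is added to a job like this.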
I don't believe this is fair. "Don't run untrusted code" is what it comes down to. Don't trust test suites or scripts in the incoming branch, etc.
That `pull_request_target` workflows are (still) privileged by default is nuts, and indeed a footgun, but there's no need for "almost impossible" hysteria.
>> It is not possible for xargs to be used securely
However, the security issue that warning relates to is not the one applicable here. The one here can be avoided by passing `--` at the end of the command.
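A minimal sketch of the attack and the `--` fix (the filename is attacker-chosen, e.g. coming from a PR diff):

```shell
#!/bin/sh
set -eu
# A filename beginning with "-" is parsed as an option by the invoked
# tool unless "--" marks the end of options.
dir=$(mktemp -d)
cd "$dir"
printf 'hello\n' > '-r.txt'            # hostile, attacker-chosen filename

# Safe: "--" forces cat to treat "-r.txt" as an operand, not an option.
printf '%s\n' '-r.txt' | xargs cat --  # prints "hello"
```

Without the `--`, the invoked command would receive `-r.txt` as its first argument and parse it as a flag, which for many tools (`rm`, `chmod -R`, `grep`) changes behavior drastically.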
> but it gets worse. since the workflow was checking out our PR code, we could replace the OWNERS file with a symbolic link to ANY file on the runner. like, say, the github actions credentials file
So git allows committing symlinks, which means the issue above could affect almost any workflow that checks out PR code.
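A quick sketch of how trivially this commits (the target path is purely illustrative):

```shell
#!/bin/sh
set -eu
# git records a symlink as a mode-120000 tree entry whose blob is the
# target path, so a PR can ship an "OWNERS" file that is really a link
# to any path on the runner. "/home/runner/credentials" is hypothetical.
repo=$(mktemp -d)
cd "$repo"
git init -q
ln -s /home/runner/credentials OWNERS
git add OWNERS
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add OWNERS'
git ls-tree HEAD OWNERS        # shows mode 120000 (symlink)
git cat-file -p HEAD:OWNERS    # prints the link target path
```

Any workflow step that later reads `OWNERS` on the runner follows the link and reads whatever it points at.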
The more I rely on, the more problems I’ll inevitably have to deal with.
I’m not thinking about anything particularly complex—just using things like VSCode, Emacs, Nix, Vim, Firefox, JavaScript, Node, and their endless plugins and dependencies already feels like a tangled mess.
Embarrassingly, this has been pushing me toward using paper and the simplest, dumbest tech possible—no extensions, no plugins—just to feel some sense of control or security. I know it’s not entirely rational, but I can’t shake this growing disillusionment with modern technology. There’s only so much complexity I can tolerate anymore.
I guess one could automate finding obvious exploits via LLMs and if the LLM finds something abort the update.
The right solution is to use Coq and just formally verify everything in your organization, which incidentally means throwing away 99.999% of software ever written.
This is also why signing code commits isn't a solution, only a way to trace things back when something fucks up.
Eh... that is taken out of context quite a bit; the sentence does continue. Just do `cat "$HOME/changed_files" | xargs -r editorconfig-checker --` and this specific problem is fixed.
If you have to opt in to safe usage at every turn, then it's an unsafe way of doing things.
(This is traditionally a non-issue, since the whole point is to execute code. So this isn't xargs' fault so much as it's the undying problem of tools being reused across privilege contexts.)
For sure, this can get tricky, but I'm not really aware of an alternative. :/ Since the calling convention is just an array of strings, there is no generic way to handle this without knowing what program you are calling and how it parses its command line. This is not specific to xargs...
Well, I guess FFI would be a way, but it seems like a major PITA to have to figure out how to call a Go function from a bash shell just to "call" a program.
They were able to dump arbitrary files to the logs, but the secrets were automatically obfuscated with *** in the logs. How could they exfiltrate the token?
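One well-known answer (the value below is a stand-in, not from the incident): the log filter masks only the literal secret string, so any reversible transformation of it slips past unredacted and can be decoded offline.

```shell
#!/bin/sh
set -eu
# "hunter2" stands in for a real token. Masking matches exact
# occurrences of the secret, so transformed output is not redacted.
SECRET='hunter2'
printf '%s' "$SECRET" | base64           # aHVudGVyMg== -- not matched by the mask
printf '%s\n' "$SECRET" | sed 's/./& /g' # spaced-out characters, also unmatched
```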
But I have been seeing docs indicating those projects are looking to move to git; we'll see if it really happens. In OpenBSD's case it seems it will be based on got(1).