One thing I'm less happy about is how these sorts of projects always tend to build up a whole parallel universe, dragging along a suite of dependencies and related projects (Cosign, Rekor, Fulcio, etc.).
I understand why we might want to fill gaps in existing open source tools, but it makes adopting these platforms a massive migration effort, where I need to go to several projects' documentation to learn how everything works. Naming-wise, I would also much prefer boring, descriptive names over the modern fancy project names.
[0]: https://security.googleblog.com/2022/04/improving-software-s...
[1]: https://github.blog/2022-04-07-slsa-3-compliance-with-github...
https://docs.microsoft.com/en-us/archive/blogs/ieinternals/c...
> the signature blocks themselves can contain data. This data isn’t validated by the hash verification process, and while it isn’t code per-se, an executable with such data could examine itself, find the data, and make use of it
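A toy sketch of the quoted idea (Python standing in for a real signed executable; the `MAGIC` marker and helpers are invented for illustration, not part of any real signature format): data appended inside the signature region survives the hash check, and the program can locate it in its own bytes.

```python
import os
import tempfile

MAGIC = b"--UNSIGNED-DATA--"  # hypothetical marker, not any real format

def embed(path: str, payload: bytes) -> None:
    """Append a marker plus payload to an existing file.

    In the Authenticode case this region would sit inside the
    signature block, which is excluded from the hashed range, so
    the file's hash verification still passes.
    """
    with open(path, "ab") as f:
        f.write(MAGIC + payload)

def extract(path: str) -> bytes:
    """The program examines its own bytes and pulls out the payload."""
    with open(path, "rb") as f:
        data = f.read()
    idx = data.rfind(MAGIC)
    return data[idx + len(MAGIC):] if idx != -1 else b""

if __name__ == "__main__":
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.write(b"pretend-this-is-a-signed-exe")
    tmp.close()
    embed(tmp.name, b"config=evil")
    print(extract(tmp.name))  # b'config=evil'
    os.unlink(tmp.name)
```

This is how some installers smuggle per-download configuration (tracking IDs, bundled-offer flags) into an executable without re-signing it.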
I suppose the people you trust to audit some code will likely not be the same people you trust to do build verification for you, but it might be nice to manage those trust relationships in a single UI/config.
To be honest, crev is elegant, but I find manual code review like this pretty ineffective at stopping attacks.
Is sigstore relevant only for signing Linux distributions, or do you see it being relevant for language specific package managers, like rubygems/npm/pip/...?
Short of those two, it just becomes a way for app stores to maintain walled gardens, or a means of replacing open source GPG package signing with a centralized web of trust. I guess the cosign part provides some decentralization, like GPG? I'm not bashing it; it can help with supply chain attacks, but without those two items I predict adoption woes and heavy use by malicious actors. Is Firefox signed by Mozilla legit, or is Firefox signed by Mozilla Corporation legit?
Given the work they are (ironically) doing on open source supply chain security[0], it would be embarrassing if they didn't end up implementing something similar for apps in the Windows Store.
> 2) It doesn't mean much without developer ID verification and financial cost
Even without verifying an ID, tools will be able to accumulate trust in long-standing identities, and flag when you are installing a package made by an identity that no one has ever heard of (which could be a sure sign of a typosquatting attack[1], for example).
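A sketch of what such a flag could look like (the `FIRST_SEEN` index, identity names, and the one-year threshold are all invented for illustration; a real tool would query a transparency log or registry):

```python
from datetime import date
from typing import Optional

# Hypothetical local view of "when did we first see this identity sign anything?"
FIRST_SEEN = {
    "alice-dev": date(2015, 3, 1),
    "bob-crypto": date(2018, 7, 12),
}

def trust_warning(signer: str, today: date,
                  min_age_days: int = 365) -> Optional[str]:
    """Warn if a signing identity is unknown or very new."""
    first = FIRST_SEEN.get(signer)
    if first is None:
        return f"identity '{signer}' has never been seen before"
    age = (today - first).days
    if age < min_age_days:
        return f"identity '{signer}' is only {age} days old"
    return None  # long-standing identity, no warning

print(trust_warning("alice-dev", date(2022, 5, 1)))      # None
print(trust_warning("requsts-maint", date(2022, 5, 1)))  # never-seen warning
```

A typosquatted package ("requsts" for "requests") would almost always be signed by an identity with no history, which is exactly what the never-seen branch catches.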
You're right, though, that in some reductionist sense, "all we're doing" is moving the trust problem from binaries to (source code to reviews/audits to) pseudonymous digital identities. Closing the gap between those identities and the legal system is a cultural/political question that needs to be thought about separately, but I do think that having a decentralised web-of-trust system would greatly increase the cost for attackers and make attacks significantly less frequent.
[0] https://news.ycombinator.com/item?id=27930594
[1] https://www.theregister.com/2017/08/02/typosquatting_npm/
My point being that sandboxing, etc. would not have helped you at all.
In other words, kudos?
(I am pretty bad at finding things after a long day's work of development)