One thing I'm less happy about is how these sorts of projects tend to build up a whole parallel universe, dragging along a whole suite of dependencies and related projects (Cosign, Rekor, Fulcio, etc.)
I understand why we might want to fill gaps in existing open source tools, but it makes adopting these platforms a massive migration effort, where I need to go to several projects' documentation to learn how everything works. Naming-wise, I would also much prefer boring, descriptive names over the modern fancy project names.
[0]: https://security.googleblog.com/2022/04/improving-software-s...
[1]: https://github.blog/2022-04-07-slsa-3-compliance-with-github...
https://docs.microsoft.com/en-us/archive/blogs/ieinternals/c...
> the signature blocks themselves can contain data. This data isn’t validated by the hash verification process, and while it isn’t code per-se, an executable with such data could examine itself, find the data, and make use of it
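The principle is easy to demonstrate with a toy sketch (Python, and deliberately not real Authenticode/PE parsing; the `MAGIC` marker and the JSON payloads are made up for illustration): if the hash only covers the bytes before a trailer region, the trailer can change freely while the "signed" hash stays identical.

```python
import hashlib

MAGIC = b"CFG!"  # made-up marker; Authenticode instead skips the certificate table

def signed_hash(image: bytes) -> str:
    """Hash only the bytes before the unauthenticated trailer,
    mimicking how a signature can exclude part of the file."""
    idx = image.rfind(MAGIC)
    body = image if idx == -1 else image[:idx]
    return hashlib.sha256(body).hexdigest()

def read_trailer(image: bytes) -> bytes:
    """What the executable itself could 'examine itself' to find."""
    idx = image.rfind(MAGIC)
    return b"" if idx == -1 else image[idx + len(MAGIC):]

base = b"\x7fELF...program bytes..."
a = base + MAGIC + b'{"endpoint": "eu"}'
b = base + MAGIC + b'{"endpoint": "us"}'

assert signed_hash(a) == signed_hash(b)    # verification sees no difference
assert read_trailer(a) != read_trailer(b)  # but the embedded payload differs
```

The same file-hash check passes for both variants, which is exactly why unauthenticated regions in signed binaries are a smuggling channel.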
I suppose the people you trust to audit some code will likely not be the same people you trust to do build verification for you, but it might be nice to manage those trust relationships in a single UI/config.
To be honest, crev is pretty elegant, but I find manual code review like this to be pretty ineffective at stopping attacks.
I think there should also be a culture of ensuring that a new patch release of some software passes the acceptance tests of the previous patch release (without changing or removing the tests).
A similar check that new code still passes the previous release's linting rules should also help (especially if those rules are designed to prevent Unicode homoglyph attacks), and a check for new uses of dangerous APIs like filesystem or network access would assist reviewers too.
Of course there is almost unlimited potential for underhanded code, if an attacker is skilled and patient enough to carefully introduce subtle bugs over time, but I think that a meaningful number of attacks could be avoided with these measures in place.
An ecosystem of useful bots would seem like a natural addition to it.
Is sigstore relevant only for signing Linux distributions, or do you see it being relevant for language-specific package managers, like rubygems/npm/pip/...?
Short of those two, it just becomes a way to maintain walled gardens by app stores, or a means of replacing open source GPG package signing with a centralized web of trust. I guess the cosign part means some decentralization, like GPG? I am not bashing it; it can help with supply chain attacks, but without those two items I predict adoption woes and heavy use by malicious actors. Is Firefox signed by Mozilla legit, or is Firefox signed by Mozilla Corporation legit?
Given the work they are (ironically) doing on open source supply chain security[0], it would be embarrassing if they didn't end up implementing something similar for apps in the Windows Store.
> 2) It doesn't mean much without developer ID verification and financial cost
Even without verifying an ID, tools will be able to accumulate trust in long-standing identities, and flag when you are installing a package made by an identity that no one has ever heard of (which could be a telltale sign of a typosquatting attack[1], for example).
You're right, though, that in some reductionist sense, "all we're doing" is moving the trust problem from binaries to (source code to reviews/audits to) pseudonymous digital identities. Closing the gap between those identities and the legal system is a cultural/political question that needs to be thought about separately, but I do think that having a decentralised web-of-trust system would greatly increase the cost for attackers and make attacks significantly less frequent.
[0] https://news.ycombinator.com/item?id=27930594
[1] https://www.theregister.com/2017/08/02/typosquatting_npm/
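A minimal sketch of the kind of typosquatting flag described above (the `POPULAR` set is a made-up stand-in for real popularity/trust data a registry would have):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical stand-in for "identities everyone has heard of".
POPULAR = {"requests", "numpy", "django"}

def typosquat_warning(name: str):
    """Return the popular names this unknown package is suspiciously
    close to, or None if the name looks fine."""
    if name in POPULAR:
        return None
    close = [p for p in POPULAR if levenshtein(name, p) <= 2]
    return close or None

assert typosquat_warning("requests") is None          # known-good name
assert typosquat_warning("reqeusts") == ["requests"]  # transposition squat
```

A production check would combine this with download counts, publisher age, and Unicode-confusable normalization, but edit distance alone already catches the classic `reqeusts`-style attacks from the article above.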
My point being that sandboxing, etc. would not have helped you at all.
If you are relying on detecting behaviour, then you have to run it.
NotPetya did nothing abnormal until it was triggered by the response to a normal network call. The first opportunity to block it would be when it was triggered.
So you could not have blocked the install by this method.
You can detect likely malicious behavior and contain those systems, which would have helped.
In other words, kudos?
(I am pretty bad at finding things after a long days work of development)