Why cooldowns? Most npm (or PyPI) compromises are taken down within hours. A cooldown simply means: ignore any package version released fewer than N days ago (1 day can work, 3 days is reasonable, 7 days is a bit of overkill but works too).
How to set them up?
- use the latest pnpm, which added a 1-day cooldown by default: https://pnpm.io/supply-chain-security
- or, if you want a one-click fix, use https://depsguard.com (a CLI that adds cooldowns plus other recommended settings to npm, pnpm, yarn, bun, uv, and Dependabot; I'm the maintainer)
- or use https://cooldowns.dev, which is more focused on, well, cooldowns, and also ships a script to help set it up locally
All are open source / free.
If you know how to edit your ~/.npmrc etc, you don't really need any of them, but if you have a loved one who just needs a one click fix, these can likely save them from the next attack.
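For reference, the manual route can be as small as one line. This is a sketch, not depsguard's actual output: npm's `before` option is real, but the cutoff date below is a placeholder you would have to refresh periodically (which is exactly the chore the tools above automate).

```ini
# ~/.npmrc — npm will only resolve package versions published before
# this date; bumping it daily gives a rolling ~3-day cooldown
before=2025-06-01T00:00:00.000Z
```

pnpm users get this natively instead: recent pnpm versions support a `minimumReleaseAge` setting (in minutes, so 4320 for a 3-day cooldown) without any date-juggling.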
Caveat: if you need to patch a new critical CVE, you have to bypass the cooldown, but each of these tools has a way to do so. In the past few weeks, while I don't have hard numbers, it seems more risk has come from software supply chain attacks (malicious versions pushed) than from new zero-day CVEs (even in the age of Mythos-driven vulnerability discovery).
The only issue I see is responding to vulnerabilities, where you want to upgrade immediately. But I think in that case it's fine to require the developer to be explicit about the new version they want.
I added a “how to bypass if you have to patch a zero day CVE” section to depsguard for all supported package managers.
> Disclaimer: I maintain depsguard
Edit: added it back, inline.
Teams should be able to say "at least N developers have to agree to a release before it happens." This should be a policy they can control and lock down with a non developer account.
I think npm could have its own cooldown and automated security scanning. Socket.dev and StepSecurity both close a gap here by spending tokens to scan new popular packages. Whether they do it for marketing or out of the goodness of their hearts is irrelevant. They don't charge for this service, and it's something I'd expect Microsoft (which owns GitHub, which owns npm) to do.
If every other week I noticed the FDA recalling a popular brand that would have taken over my brain and transmitted my bank password and SSN to a stranger, I might prefer drinking week-old milk.
Edit: not dismissing your analogy, it’s pretty much it.
Maven Central has existed for decades, and the number of incidents of people stealing namespaces is minimal.
One can't simply publish a package under the groupId "com.ycombinator" without some way of verifying that they own the domain ycombinator.com. And once a package is published, it is 100% immutable, even if it contains malicious code. Of course, such a library gets flagged everywhere as vulnerable.
It baffles me that NPM for so long couldn't replicate the same guardrails as Maven Central.
With many other languages, you get a lot of functionality out of the box. Certainly there have been bugs and security issues, but they're a drop in the bucket compared to what you see in the JS ecosystem. With other languages, you have a much smaller external dependency graph, and the core functionality comes from a trusted source.
Attackers go where the victims are. Frontend is a monoculture with the vast majority using NPM; backend, less so. This isn't an excuse for NPM, but another strike against it.
You could also argue that the attacks make a deeper point about frontend vs backend devs, but I won't go there.
NPM's Achilles' heel is the pre/postinstall step, which can run arbitrary commands and shell scripts without the user having any way to intervene.
Dependencies must be run in isolated chroot sandboxes or, better, inside containers. That is the only way to mitigate this problem, as the filesystem of the operating system must be separated from the filesystem of the development workflow.
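A cheaper partial mitigation, short of the sandboxing described above, is to stop lifecycle scripts from running at all. npm (and, as far as I know, pnpm and yarn) honor a single config flag for this:

```ini
# .npmrc — never run preinstall/postinstall scripts automatically;
# malicious install hooks become inert files instead of executed code
ignore-scripts=true
```

The trade-off is that packages which genuinely need a build step (native modules using node-gyp, for example) must then be rebuilt explicitly, so this is containment, not a free lunch.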
On top of that, most host-based firewalls are per-binary rather than per-command-line. That leads to warnings and rules keyed on, e.g., "python" or "nodejs" getting network access allowlisted, instead of, say, "nodejs myworm.js". So firewalls in general are pretty useless against this type of malware.
Your mismatch is that you're thinking in policies here, not assessments. Nothing in my normal Go workflow will ask me whether I want to "curl: download whatever from the internet" when I run go build.
Though I agree about the difference in workflow, there is not a single mechanism in Go that catches this. go.mod files can simply be patched by the worm, and/or hidden behind a /v123 folder or whatever to play shenanigans with API differences.
Examples that come to mind: webview/webview, webkit, cilium/ebpf and most other CGo projects that I have seen.
Something fascinating about the design and architecture of programming languages and their surrounding ecosystems is the enormous leverage that they provide to the "core team":
For every 1 core language developer[1]...
... there may be 1,000 popular package developers...
... for which there may be 1,000,000 developers writing software...
... for over 1,000,000,000 users.
This means that for every corner that is cut at the top of that pyramid, the harms are massively magnified at the lower tiers. A security vulnerability in a "top one thousand" package like log4j can cause billions of dollars in economic damage, man-centuries of remediation effort, etc.
However, bizarrely, the funding at the top two levels is essentially a pittance! Most such projects are charities, begging for spare change with hat in hand on a street corner. Some of the most-used libraries are volunteer efforts! cough-OpenSSL-cough.
The result is that the people most empowered to fix the issues are the least funded to do so.
This is why NPM, Crates.io, etc... flatly refuse to do even the most basic security checks like adding namespaces and verifying the identity of major publishers like Google, Microsoft, and the like.
That's a non-zero amount of effort, and no matter how trivial to implement technically and how cheap to police, it would likely blow their tiny budget of unreliable donations.
The exceptions to this rule are package-managers with robust financial backing, such as NuGet, which gets reliable funding from Microsoft and supports their internal (for-profit!) workflows almost as much as it does external "free" users.
"Free and open" is wonderful and all, but you get what you pay for.
[1] Most of us can name them off the top of our heads: Guido van Rossum, Larry Wall, Kernighan & Ritchie, etc.
This is definitely going to affect any packages that need to link to native code and/or compile shims, but these are very few.
https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2024...
[0]: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
> residents of the Node.js ecosystem stood unified in their belief that the malicious remote-code execution was a completely unpredictable tragedy
Does anyone believe that claim? There have been so many counterexamples.
It's a great dig at the ecosystem's failings, but only entertainment. Perhaps a prompt for marketers to present their wares? Kind of like the maintainer of depsguard, who removed, re-added, and then re-removed that admission from their post? At the time of this writing they have the top post.
This mitigates things to a great extent.
I do not know who thought that having your dependencies depend on the internet, with a zillion users doing stuff to each package, was a good idea for enterprise environments...
It is crazy how much can be endangered this way.
- RubyGems: https://www.sonatype.com/blog/anatomy-of-the-rubygems-rest-c...
- PyPI: literally the latest attack included publishing malicious packages on PyPI
- XZ Utils, part of nearly every Linux distribution, nearly had code merged to backdoor SSH: https://www.akamai.com/blog/security-research/critical-linux...

It's just easy pickings to blame npm specifically. Yes, it shares some of the blame, but no package manager is immune from attack, and certainly not when the attackers exploited being able to extract secrets from a developer's environment variables or files. It seems more like developers should be managing their secrets better.
I also find using the meme that this title snowclones to be in bad taste.
A different order of magnitude of effort was spent on the XZ attack.
In fact, pip is arguably more dangerous than npm because it lacks a lockfile. uv fixes that, but adoption is proceeding at a snail's pace.
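For what it's worth, a minimal locked workflow with uv looks roughly like this (a sketch based on uv's documented CLI; the project name is made up):

```shell
# Create a project, add a dependency, and pin the full resolution in uv.lock
uv init demo && cd demo
uv add requests       # resolves the dependency tree and writes uv.lock
uv sync --frozen      # installs exactly what uv.lock pins, with no re-resolution
```

Committing uv.lock means every machine installs byte-identical versions, which is exactly the property pip's plain requirements.txt workflow doesn't give you.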
In the JS world there is plenty of competition among package managers: pnpm, yarn, and bun are all viable alternatives to npm the package manager.
Public registries for languages, though, tend to coalesce around one service. Nobody wants to publish their library to four different registries.
https://pip.pypa.io/en/stable/cli/pip_lock/
But who cares about pip, uv is here.
The other one a few days ago was also good: https://nesbitt.io/2026/02/03/incident-report-cve-2024-yikes...
But young blood mocked having to wait for manual human review, safe GPG signatures, cooldown periods, and weeks in a "testing" stage before being considered "stable".
And now most companies' data is leaked and out in the wild, and hackers and ransomware are thriving...
It's crazy when you think about it: after so many years of software-crafting experience, "modern, safe" languages like Go and Rust, typing, and so on, you would expect most software stacks to be pretty solid and safe compared to 15 years ago.