1. Does not default to running post-install scripts (must manually approve each)
2. Lets you set a min age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up compromised releases (config sketch below).
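For reference, both knobs live in pnpm's config; a minimal sketch, assuming pnpm 10+ (the package name and values here are illustrative):

# pnpm-workspace.yaml
# allow-list the few packages whose install scripts you actually trust
onlyBuiltDependencies:
  - esbuild
# refuse versions published less than ~4 days ago (value is in minutes)
minimumReleaseAge: 5760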
NPM is too insecure for production CLI usage.
And of course make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if they got the publish keys, they couldn't publish from local. (Granted, GHA fans can use OIDC Trusted Publishers as well, but tokens done well are just as secure.)
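For what it's worth, classic npm automation tokens can be IP-bound from the CLI; a sketch (the CIDR is illustrative, and per-package scoping needs granular access tokens created in the npm web UI instead):

npm token create --cidr=203.0.113.0/24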
It’s clear from the structure and commit history they’ve been working their asses off to make it better, but when you’re standing at the bottom of a well of suck it takes that much work just to see daylight.
The last time I chimed in on this I hypothesized that there must have been a change in management on the npm team but someone countered that several of the maintainers were the originals. So I’m not sure what sort of Come to Jesus they had to realize their giant pile of sins needed some redemption but they’re trying. There’s just too much stupid there to make it easy.
I’m pretty sure it still cannot detect premature EOF during the file transfer. It keeps the incomplete file in the cache where the sha hash fails until you wipe your entire cache. Which means people with shit internet connections and large projects basically waste hours several times a week doing updates that fail.
I've by now grown to like HashiCorp Vault's/OpenBao's dynamic secret management for this. It's a bit complicated to understand and get working at first, but it's powerful:
You mirror/model the lifetime of a secret user as a lease. For example, a Nomad allocation or Kubernetes pod gets a lease when it is started, and the lease gets revoked immediately after it is stopped. We're kinda discussing if we could have this in CI as well: create a lease for a build, destroy the lease once the build is over. Leases also support TTLs, TTL refreshes, and enforced max TTLs.
With that in place, you can tie dynamically issued secrets to this lease, and the secrets are revoked as soon as the lease is terminated or expires. This has confused developers with questionable practices a lot: you can print database credentials in your production job and paste them into a local database client, but as soon as you deploy a new version, those credentials are gone. It also gives you automated, forced database credential rotation for free through the max_ttl, including a full audit log of all credential accesses and refreshes.
I know that would be a lot of infrastructure for a FOSS project by Bob from Novi Zagreb. But with some plugin work, for a company, it should be possible to hide long-term access credentials in Vault and supply CI builds with nothing but enforced, short-lived tokens.
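Roughly, the flow from the Vault CLI looks like this; a sketch with an illustrative role name, not a drop-in setup:

# issues short-lived DB credentials tied to a fresh lease
vault read database/creds/ci-build
# renew within max_ttl while the job is still running
vault lease renew database/creds/ci-build/<lease_id>
# kill the credentials the moment the build/alloc is done
vault lease revoke database/creds/ci-build/<lease_id>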
As much as I hate running after these attacks, they are spurring interesting security discussions at work, which can create actual security -- not just checkbox-theatre.
> Does not default to running post-install scripts (must manually approve each)
To get equivalent protection, use `--only-binary=:all:` when running `pip install` (or `uv pip install`). This prevents installing source distributions entirely, using exclusively pre-built wheels. (Note that this may limit version availability or even make your installation impossible.) Python source packages are built by following instructions provided with the package (specifying a build system, which may then in turn be configured in an idiosyncratic way; the default, Setuptools, is configured using a Python script). As such, they effectively run a post-install script.
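Concretely, that's just (the requirements file is illustrative):

pip install --only-binary=:all: -r requirements.txt
# or persistently, via pip's standard env-var mapping:
export PIP_ONLY_BINARY=:all: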
(For PAPER, long-term I intend to design a radically different UI, where you can choose a named "source" for each package or use the default; and sources are described in config files that explain the entire strategy for whether to use source packages, which indexes to check etc.)
> Lets you set a min age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up.
Pip does not support this; with uv, use `--exclude-newer`. It appears to require a timestamp, so if you always want things at least X days old you'll have to recalculate it each time.
That said, I hard pin all our dependencies and get dependabot alerts and then look into updates manually. Not sure if I'm a rube or if that's good practice.
Unfortunately you need to `npm login` with username and password in order to publish the very first version of a package to set up OIDC.
> The new versions of these packages published to the NPM registry falsely purported to introduce the Bun runtime, adding the script `preinstall: node setup_bun.js` along with an obfuscated `bun_environment.js` file.
Full source bootstrapped NPM with manually reviewed dependencies is the only reasonably secure way to use NodeJS right now.
Funny that this is getting downvoted, but it installs dependencies super fast, and has the same approval feature as pnpm, all in a simple binary.
NPM was never "too insecure" and remains not "too insecure" today.
This is not an issue with npm, JavaScript, NodeJS, the NodeJS foundation or anything else but the consumers of these libraries pulling in code from 3rd parties and pushing it to production environments without a single review. How this still flies today, and has since the inception of public "easy to publish" repositories, remains a mystery to me.
If you're maintaining a platform like Zapier, which gets hacked because none of your software engineers actually review the code that ends up in your production environment (yes, that includes 3rd party dependencies, no matter where they come from), I'm not sure you even have any business writing software.
The internet has been a hostile place for so long that most of us "web masters" are used to it by now. Yet it seems developers of all ages fall into the "what's the worst that can happen?" trap when pulling in either one dependency with 10K LoC without any review, or 1000s of dependencies with 10 lines each.
Until you fix your processes and workflows, this will continue to happen, even if you use pnpm. You NEED to be responsible for the code you ship, regardless of who wrote it.
I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. It's harder to undo a compromised package already baked into thousands of lock files than to manually patch an already-exploited vulnerability in your dependencies.
[0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
A cooldown is a good idea, though.
Indeed there are people doing that, and communities where there's a consensus that such an approach makes sense, or at least is not frowned upon. (Hi, Gophers)
We had so many distinct packages on my last project that I had to massively upgrade a tool a coworker started to track the dependency tree so people stopped being afraid of the release process.
I could not think of any way to make lock files not be the absolute worst thing about our entire dev and release process, so the handful of deployables had a lockfile each that was only utilized to do hotfix releases without changing the dep tree out from underneath us. Artifactory helps only a little here.
Also, some software is always buggy, every version a mixed bag of new features, bugs and regressions. It could be due to the complexity of the problem the software is trying to solve, or because it's just not written well.
(This may end up not being true, in which case a lot of people are paying security vendors a lot of money to essentially regurgitate vulnerability feeds at them.)
Until no-one does, for a week. To stretch the original metaphor, instead of an overgrazed pasture, we grow a communally untended thicket which may or may not have snakes when we finally enter.
I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse, users, then it will do nothing.
That would be a level of mass participation yet unseen by mankind (in anything, much less something as subjective as software development). I think we're fine.
And in the cases where you have vulnerable dependencies, you'd force update them before the cooldown period had expired, while leaving everything else you can in place.
uv lock --exclude-newer $(date --iso -d "24 hours ago")
uv is considering a native relative date.

npm-check-updates supports a cooldown: https://www.npmjs.com/package/npm-check-updates#cooldown

In one command:

npx npm-check-updates -c 7

> Note that previous stable versions will not be suggested. The package will be completely ignored if its latest published version is within the cooldown period.
Seems like a big drawback to this approach.
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
We've rotated keys and passwords, unpublished all affected packages and have pushed new versions, so make sure you're on the latest version of our SDKs.
We're still figuring out how this key got compromised, and we'll follow up with a post-mortem. We'll update status.posthog.com with more updates as well.
Sure, it might be a little bit of noise, but if you get a notice @ 3am of an unexpected publishing, you can jump on unpublishing it.
Probably even safer to not have been on the latest version in the first place.
Or safer again not to use software this vulnerable.
Nearly all software you use is susceptible to vulnerabilities, whether it's malicious or enterprise taking away your rights. It's in bad taste to make a comment about "not using software this vulnerable" when the issue was widespread in the ecosystem and the vendor is already being transparent about it. The alternative is you shame them into not sharing this information, and we're all worse for it.
A short time ago, I started a frontend in Astro for a SaaS startup I'm building with a friend. Astro is beautiful. But it's built on Node. And every time I update the versions of my dependencies, I feel terrified I am bringing something onto my server I don't know about.
I just keep reading more and more stories about dangerous npm packages, and get this sense that npm has absolutely no safety at all.
This is gonna ruffle some feathers, but it's only a matter of time until it happens in the Rust ecosystem, which loves to depend on a billion subpackages, and it won't be the fault of the language itself.
The more I think about it, the more I believe that C, C++ or Odin's decision not to have a convenient package manager that fosters a cambrian explosion of dependencies is a very good idea security-wise. Ambivalent about Go: they have a semblance of packaging system, but nothing so reckless as allowing third-party tarballs uploaded in the cloud to effectively run code on the dev's machine.
Like another commenter said, I do think it's partially just because dependency management is so easy in Rust compared to e.g. C or C++, but I also suspect that it has to do with the size of the standard library. Rust and JS are both famous for having minimal standard libraries, and what do you know, they tend to have crazy-deep dependency graphs. On the other hand, Python is famous for being "batteries included", and if you look at Python project dependency graphs, they're much less crazy than JS or Rust. E.g. even a higher-level framework like FastAPI, that itself depends on lower-level frameworks, has only a dozen or so dependencies. A Python app that I maintain for work, which has over 20 top-level dependencies, only expands to ~100 once those 20 are fully resolved. I really think a lot of it comes down to the standard library backstopping the most common things that everybody needs.
So maybe it would improve the situation to just expand the standard library a bit? Maybe this would be hiding the problem more than solving it, since all that code would still have to be maintained and would still be vulnerable to getting pwned, but other languages manage somehow.
The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project in this way would likely be noticed pretty quickly. But compromising something widely-used but boring? I feel like attackers would get away with that for a period of time that could be weeks.
This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?
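One rough option for Go specifically: the public Go module proxy reports when each version was published, so a pre-update check can compare that timestamp against a cutoff. A sketch with an illustrative module and version:

# the .info endpoint returns JSON including the version's publish time
curl -s https://proxy.golang.org/github.com/some/dep/@v/v1.2.3.info | jq -r .Time
# compare against e.g. $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ) before upgrading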
That and the package runtime runs with all the same privileges and capabilities as the thing you're building, which is pretty insane when you think about it. Why should npm know anything outside of the project root even exists, or be given the full set of environment variables without so much as a deny list, let alone an allow list? Of course if such restrictions are available, why limit them to npm?
The real problem is that the security model hasn't moved substantially since 1970. We already have all the tools to make things better, but they're still unportable and cumbersome to use, so hardly anything does.
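Even without new tooling, you can approximate the env-var allow-list today; a blunt sketch (paths illustrative):

# start from an empty environment and grant only what the install needs
env -i PATH=/usr/bin:/bin HOME=/tmp/npm-home npm ci --ignore-scripts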
When I download a C project, I know that it only depends on my system libraries - which I trust because I trust my distro. Rust seems to expect me to take a leap in the dark, trusting hundreds of packagers and their developers. That might be fine if you're already familiar with the Rust ecosystem, but for someone who just wants to try out a new program - it's intimidating.
Any time I ever did the equivalent in the NPM/node world, it was basically unusable or completely impractical.
In .NET you can cover a lot of use cases simply using Microsoft libraries and even a lot of OSS not directly a part of Microsoft org maintained by Microsoft employees.
But realistically, I think the sum total of compromises via package managers attacks is much smaller than the sum total of compromises caused by people rolling their own libraries in C and C++.
It's hard to separate from C/C++'s lack of memory safety, which causes a lot of attacks, but the fact that code reuse is harder is a real source of vulnerabilities.
Maybe if you're Firefox/Chromium, and you have a huge team and invest massive efforts to be safe, you're better off with the low-dependency model. But for the median project? Rolling your own is much more dangerous than NPM/Cargo.
> The more I think about it, the more I believe that C, C++ or Odin's decision not to have a convenient package manager that fosters a cambrian explosion of dependencies to be a very good idea security-wise.
There was no decision in the case of C/C++; package managers were just not a thing languages had at the time, so the language itself (especially C) isn't written in a way that accommodates one nicely.
> Ambivalent about Go: they have a semblance of packaging system, but nothing so reckless as allowing third-party tarballs uploaded in the cloud to effectively run code on the dev's machine.
Any code you download and compile is running code on the dev machine, and Go does have tools to run code in the compile process too.
I do however like the by-default namespacing by domain: there is no central repository to compromise, and forks of any defunct libs are easier to manage.
- Packages are always namespaced, so typosquatting is harder
- Registries like Sonatype require you to validate your domain
- Versions are usually locked by default
My professional life has been tied to JVM languages, though, so I might be a bit biased.
I get that there are some issues with the model, especially when it comes to eviction, but it has been "good enough" for me.
Curious on what other people think about it.
1) No one forces you to use dependencies with large number of transitive dependencies. For example, feel free to use `ureq` instead of `reqwest` pulling the async kitchen sink with it. If you see an unnecessary dependency, you could also ask maintainers to potentially remove it.
2) Are you sure that your project is as simple as you think?
3) What matters is not number of dependencies, but number of groups who maintain them.
On the last point: if your dependency tree has 20 dependencies maintained by the Rust lang team (such as `serde` or `libc`), your supply chain risks are not multiplied by 20; they stay at roughly one, almost the same as using just `std`.
The safest code is the code that is not run. There is no lack of attacks targeting C/C++ code, and Odin is just a hobby language for now.
It's also very confusing (and I think those attack vectors benefit exactly from that), since you can have a dependency while the dep itself depends on a different version of another dep.
Building a basic CapacitorJS / Svelte app, as an example, results in many deps.
It might be a newbie question, but is there any solution or workflow where you don't end up in this dependency hell?
Things like cargo-vet help as does enforcing non-token auth, scanning and required cooldown periods.
The alternative that C/C++/Java end up with is that each and every project brings in its own Util, StringUtil, Helper or whatever class that acts as a "de facto" standard library. I personally had the misfortune of having to deal with MySQL [1], Commons [2], Spring [3] and indirectly also ATG's [4] variants. One particularly unpleasant project I came across utilized all four of them, on top of the project's own "Utils" class that got copy-and-pasted from the last project and extended for this project's needs.
And of course each of these Utils classes has its own semantics, its own methods, its own edge cases and, for the "organically grown" domestic class that barely had tests, its own bugs.
So it's either a billion "small gear" packages with dependency hell and supply chain issues, or it's an amalgamation of many many different "big gear" libraries that make updating them truly a hell on its own.
[1] https://jar-download.com/artifacts/mysql/mysql-connector-jav...
[2] https://commons.apache.org/proper/commons-lang/apidocs/org/a...
[3] https://docs.spring.io/spring-framework/docs/current/javadoc...
[4] https://docs.oracle.com/cd/E55783_02/Platform.11-2/apidoc/at...
If your Rust software observes a big enough chunk of the computer fever dream you are likely to end up with 2-3 digit amount of Rust dependencies, but they are probably all going to be high profile ones (tokio, anyhow, reqwest, the hyper crates, ...), instead of niche ones that never make it into any operating system.
This is not a silver bullet of course, but there seems to be an inverse correlation between "is part of any operating system dependency tree" and "gets compromised in an npm-like incident".
What would actually stop this is writing compilers and build systems in a way that isolates builds from one another. It's kind of stupid: all a compiler really needs is an input file, a list of dependencies, and an output file, yet they all make it easy to root around, replicate and exfiltrate. A build can be both convenient and not suffer from this style of attack.
Must read: https://wiki.alopex.li/LetsBeRealAboutDependencies
TL;DR: ditch crates.io and copy Go, with decentralized packages fetched directly from source repositories and an extended standard library.
Centralized package managers only add a layer of obfuscation that attackers can use to their advantage.
On the other hand, C/C++-style dependency management is even worse than Rust's, both in terms of development velocity and of dependencies that never get updated.
My real worry, for myself re the parent comment, is: it's just a web frontend. There are a million other ways to develop it. The sober, cold risk assessment is: should we, or should we have, and should anyone else, choose something npm-based for new development?
I.e. not a question about potential risk for other technologies, but a question about risk and impact for this specific technology.
Every time I fire up "cmake" I chant a little spell that protects me from the goblins that live on the other side of FetchContent to promise to the Gods of the Repo that I will, eventually, review everything to make sure I'm not shipping poop nuggets .. just as soon as I get the build done, tested .. and shipped, of course .. but I never, ever do.
(These are arguably good things in other contexts.)
I installed the package, obviously I intend to run it. How does getting pwned once I run it manually differ from getting pwned once I install it? I’m still getting pwned
I understand that there's been some course correction recently (zero dependency and minimal dependency libs), but there are still many devs who think that the only answer to their problem is another package, or that they have to split a perfectly fine package into five more. You don't find this pattern of behavior outside of Node.
Totally 100% agree, though tools like cargo tree make it more of a tractable problem, and running vendored dependencies is first class at least.
The one I am genuinely most concerned about is Golang. The way dependencies are handled leaves much to be desired; I'm really surprised that there haven't been issues, honestly.
If this were true, wouldn't there have been at least one Maven attack by now, considering the number of NPM attacks that we've seen?
And I am genuinely thinking to myself, is this making using npm a risk?
Of course using more JSR packages does start to add more reason to prefer Deno to Node. Also, there are still some packages that are deno.land/x/ only (sort of the first version of JSR, but no npm cross-compatibility) worth checking out. For instance, I've been impressed with Lume [1], a thoughtful SSG that's sort of the opposite of Astro in that it iterates at a slow, measured pace, and doesn't try to be a kitchen sink but more of workbench with a lot of tools easy to find. It's deno.land/x/ only for now for reasons I don't entirely agree with but I can't deny that JSR can be quite a step up in publishing complexity for not exactly obvious gain.
[0] https://jsr.io/
* Many dependencies, so many that you don't know (and stop caring) what is being used.
* Automatic and regular updates, new patch versions for minor changes, and a generally accepted best practice of staying up to date on the latest versions of things, due to trauma from old security breaches or big migrations after not updating for a while.
* No review, trust based self-publishing of packages and instant availability
* Opaque pre/post-install scripts
The fix is both cultural and technological:
* Stop releasing for every fart; once a week is enough, the only exception being critical security fixes.
* Stop updating immediately whenever there's an update; once a week is enough.
* Review your updates
* Pay for a package repository that actually reviews changes before making them widely available. Actually I think the organization behind NPM should set that up; there are trillion-dollar companies using the Node ecosystem who would be willing and able to pay for some security guarantees.
But in reality it has nothing to do with Node/JS. It's just the most-used ecosystem. So I really don't understand the argument for not using Node. Just be mindful of your dependencies and avoid updating every day.
I know this is a controversial approach, but it still works well in our case.
"require": { "php": ">=8.0",
"ext-mbstring": "*",
"bcosca/fatfree-core": "3.9.1",
"phpmailer/phpmailer": "6.9.3",
"ruler/ruler": "0.4.0",
"matomo/device-detector": "6.4.7" }
1. https://github.com/tirrenotechnologies/tirreno

Does it require discipline and a project not run by developers who just learned to program? You betcha.
I also switched to Phoenix, using JS only when absolutely necessary. Would do the same with Laravel at work if switching to SSR were feasible...
I do not trust the whole js ecosystem anymore.
An ecosystem, if it insists on slapping on a package manager (see also: Rust, Go), should always properly evaluate the resulting risks and put proper safeguards in place, or you're going to end up with a massive supply-chain headache.
The ones that most people use and some people complain about, and the ones that nobody uses and people keep advocating for.
It also ignores the central question of whether NPM is more vulnerable to these attacks than other package managers, and should therefore be considered an unreasonable security risk.
They are built for programmers, not users. They are designed to allow any random untrusted person to push packages with no oversight whatsoever. You just make an account and push stuff. I have no doubt you can even buy accounts if you're malicious enough.
Users are much better served by the Linux distribution model which has proper maintainers. They take responsibility for the packages they maintain. They go so far as to meet each other in person so they can establish decentralized root of trust via PGP.
Working with the distributions is hard though. Forming relationships with people. Participating in a community. Establishing trust. Working together. Following packaging rules. Integrating with a greater dynamic ecosystem instead of shipping everything as a bloated container whose only purpose is to statically link dynamic libraries. Developers don't want to do any of that.
Too bad. They should have to. Because the npm clusterfuck is what you get when you start using software shipped by totally untrusted randoms nobody cares to know about much less verify.
Using npm is equivalent to installing stuff from the Arch User Repository while deliberately ignoring all the warnings. Malware's been found there as well, to the surprise of absolutely no one.
When it comes to frontend, well I don't have answers yet.
I think we have given the Typescript / Javascript communities enough time. These sort of problems will continue to happen regardless of the runtime.
Adding one more library increases the risk of a supply-chain attack like this.
As long as you're using npm or any npm-compatible runtime, it remains an unsolved, recurring issue in the npm ecosystem.
There are some things that kind of suck (working with time - will be fixed by the Temporal API eventually), but you can get a lot done without needing lots of dependencies.
Serious answer: no.
I think I'm going to just use a static site generator, maybe add some WASM modules built with a language that has a sane package manager and enjoy my life instead of getting involved with this cluster of a show.
You need standalone dependencies, like Tailwind offers with its standalone CLI. Predators go where their prey is. NPM is a monoculture. It's like running Windows in the '90s; you're just asking for viruses. But 90% of frontend teams will still use NPM because they can't figure anything else out.
And maybe don't update your dependencies very often.
Take Ruby: even before a certain corporation effectively took over RubyCentral and rubygems.org, almost two years ago they added a 100,000-download limit. That is, after that threshold was passed, the original author was deprived of the ability to remove the project again, unless the author resigns from rubygems.org. Which I promptly did. I could not accept any corporation trying to force me into maintaining old projects. (I tend to remove old projects quickly; the licence allows people to fork them, so they can maintain them if they want to, but my name cannot be associated with outdated projects I already abandoned once newer releases were available. The new corporate overlords running rubygems.org, who keep on lying about how "they serve the community", refused to accept this explanation, so my time at rubygems.org came to a natural end. Of course this year it would be even easier, since they changed the rules to satisfy their new corporate overlords anyway: https://blog.rubygems.org/2025/07/08/policies-live.html)
EDIT: Coffee hasn't kicked in yet, that was harsher than I intended. For what it's worth, it's not specifically/solely NPM/Node's fault; it's more a convergence of the above and the ecosystem/users just as much as any of the Node/NPM devs/maintainers, combined with it having such a large attack cross-section. Even if it had a reputation for being bulletproof and secure as fuck, there's still such a large userbase with huge potential if exploited that it'd almost inevitably be compromised from time to time regardless.
While I feel we could use a whole lot less JavaScript on the web (client and server side both), without a competitor or something, its sheer size ensures any such exploit/issue gets amplified 1000x versus nearly any other project, save for maybe major OSes and browsers themselves.
I'm sure the list of available attacks are somewhat different, but you can get pwned in all of these ecosystems.
Please, no.
It is an absolutely terrible ecosystem. The layer cake of dependencies is just insane.
This is something you also need to do with package managers in other languages, mind you.
I'm getting tired of the anti-Node.js narrative that keeps going around as if other package repos aren't the same or worse.
I know it's not foolproof, but I can't believe how often people run code they haven't read where it can make a huge mess, steal secrets, etc. I'll probably get owned someday, I'm sure, but this feels like a bare minimum.
How does this work? Every single npm package has tons of dependency tree nodes
Containers do not contain.
Most attacks on popular packages last at most a few months before detection.
Currently reverse engineering the malicious payload and will share our findings within the next few hours.
The website is a mess (broken links, broken UI elements, no about section).
There is no history on webarchive. There is no information outside of this website, and their "customers" are crypto exchanges and some Japanese payment provider.
This seems a bit fishy to me - or am I too paranoid?
Why does this community in particular still insist on preemptively updating all deps, always, on running complicated extra hooks as part of package installation, and on pretending this is all good engineering practice? ("Look, we have so many things and are so busy, thus it must be good")
Why is a certain kind of mindset typical of this community?
Why did the Node creator abandon his creation years ago?
Why, oh why?
Your attempt to make it personal does not compute.
I didn't hate JS & npm with a passion before. I do now, and it hasn't even been two years.
https://about.gitlab.com/blog/gitlab-discovers-widespread-np...
Hey PostHog! What version do we need to avoid?
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
If you make sure you're on the latest version you should be good.
> "'No Way to Prevent This,' Says Only Nation Where This Regularly Happens" is the recurring headline of articles published by the American news satire organization The Onion after mass shootings in the United States.
Source: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
Well before the npm attacks were a thing, we within the Rust project discussed a lot using wasm sandboxing for build-time code execution (and also precompiled wasm for procedural macros, but that's its own thing). However, the way build scripts are used in the Rust ecosystem makes it quite difficult to enforce a sandbox while also enabling packages to build foreign code (C, C++, invoking make, cmake, etc.). The sandbox could still expose methods to e.g. "run the C compiler" to the build scripts, but once that's done they have arbitrary access to a very non-trivial piece of code running in a privileged environment.
Whereas for JavaScript, rarely does a package invoke anything but other JavaScript code at build time. Introduce a stringent sandbox for that code (kinda Deno-style, perhaps?) and a large majority of packages are suddenly safe by default.
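For a taste of what that could look like, Deno's existing permission flags already work this way (the host and path here are illustrative):

# network and filesystem access are denied unless explicitly granted
deno run --allow-net=registry.example.com --allow-read=./src build.ts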
Another example: all Debian packages are published to unstable, but cannot enter testing for at least 2-10 days, and also have to meet a slew of conditions, including that they can be and are built for all supported architectures, and that they don't cause themselves or anything else to become uninstallable. This allows for the most egregious bugs to be spotted before anyone not directly developing Debian starts using it.
This does not prevent said package from shipping with malware built in, but it does prevent arbitrary shell execution on install and therefore automated worm-like propagation.
https://bootstrappable.org/
https://reproducible-builds.org/
https://github.com/crev-dev
As a SW developer, you may be able to limit the damage from these attacks by using a MAC (like SELinux or Tomoyo) to ensure that your node app cannot read secrets it is not intended to read or make connections it should not make, and to log attempts to do those things.
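Short of writing SELinux policy, systemd can approximate this per invocation; a sketch using standard unit properties (the blocked path is illustrative):

systemd-run --user --pipe \
  -p ProtectHome=read-only \
  -p PrivateTmp=yes \
  -p InaccessiblePaths=/home/me/.aws \
  node app.js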
You could also reduce your use of external packages, until slowly, over time, you have very few external dependencies.
AsyncAPI is used as the example in the post. It says the GitHub repo was not affected, but NPM was.
What I don't understand from the article is how this happened. Were the credentials for each project leaked? Given the wide range of packages, was it a hack on npm? Or...?
> it modifies package.json based on the current environment's npm configuration, injects [malicious] setup_bun.js and bun_environment.js, repacks the component, and executes npm publish using stolen tokens, thereby achieving worm-like propagation.
This is the second time an attack like this has happened; others may be familiar with the context already and share fewer details and explanations than usual.
Previous discussions: https://news.ycombinator.com/item?id=45260741
Yes, if you depend on an infected package, sure. But then I'd expect not just a list, but a graph outlining which package infected which other package. Overall I don't understand this at all.
Discussion on HN last time: https://news.ycombinator.com/item?id=45326754
1. Don't
Or copy that repo’s markdown into an llm and ask it to map to the pip ecosystem
Add

ignore-scripts=true

to your .npmrc

Can't they just jam the malware into the package itself? It runs with the same permissions on my machine (in unit tests, node servers, etc).
In fact, do this for all risky tools[2]
1 - https://github.com/ashishb/dotfiles/blob/067de6f90c72f0cf849...
2 - https://ashishb.net/programming/run-tools-inside-docker/
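A minimal version of that pattern, if you just want installs off your host (the image tag is illustrative):

docker run --rm -v "$PWD":/app -w /app node:22 npm ci --ignore-scripts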
The solutions that are effective also involve actually doing work, as developers, library authors, and package managers. But no, we want as much "convenience" as possible, so the issues continue.
Developers and package authors should use a lockfile, pin their dependencies, be frugal about adding dependencies, and put any dependencies they do add through a basic inspection at least, checking what dependencies they also use, their code and tests quality, etc.
Package managers should enforce namespacing for ALL packages, should improve their publishing security, and should probably have an opt-in verified program for the most important packages.
Doing these will go a long way to ameliorate these supply chain attacks.
Last time, my perception was also that publishing security is a weak point. If at least heavily used packages were forced to go through manual security steps for publishing, it would help quite a bit, as long as the measures are safe.
The e18e community are reducing dependencies in popular libraries and building tools to prevent and reduce the impact of such attacks. Join if you want to help out! https://e18e.dev/
Just this morning, after trying to make the case over the past year, we had a change landed to remove more than a dozen dependencies from typescript-eslint! https://bsky.app/profile/benmccann.com/post/3m6fcjax7ec2h
Yay!
>Discord
...ew.
This is probably a common problem. Has anyone gotten verdaccio to enforce cool-down policies?
I also waste a ton of time because post-install scripts are disabled. Being able to cut them off from network access, and just run a local server with 2-4 week cool-down would help me sleep better at night + simplify the hell out of my build.
It's much easier to demonstrate a problem (twice!) than to convince a herd that there is a problem.
I hope that other languages with similar package manager (looking at you, cargo) take note.
com.foo.bar
That would require domain verification, but it would add significant developer friction.
Also mandatory Dune reference:
"Bless the maker and his water"
I think hijacked NPM packages are just the tip of the iceberg.
12 years ago NPM went down due to a very similar issue, and getting stuff working again wasn't easy.
Someone got on GitHub and said something along the lines of "Get it together or get forked. "
I asked my manager if this was real or just a random guy talking. My manager, correctly said, it's just talk.
But it's different now. NPM is owned by Microsoft. One of the world's biggest companies should be able to sort things out.
Ohh well. Can't fix it.
My devs don't have access to production keys at all (and would never need them).
I guess it's not going to happen...
Also, package managers should not run scripts.
In this Turing-equivalent world, you can only know what actually executes (e.g. eval, fetch) by actually executing all code in the package and then seeing what functions got executed. Then the problem is the same as virus analysis: the virus can be written to only act under certain conditions, and it will probe (e.g. look at interpreter fingerprints, get the time of day, try to look at innocuous places in the filesystem or network, measure network connection times, etc.) so that it can determine it is in a VM being scanned, and go dormant for that time.
So the only thing that actually works is if node and other JS evaluators have a perfect sandbox, where nothing in a module is allowed (no network, no filesystem) except to explicit locations declared in the module's manifest, and this is perfectly tracked by the language, so if the module hands back a function for some other code to run, that function doesn't inherit the other code's network/fs access permissions. This means that, if a location is not declared, the code can't get to it at scanning time nor install time nor any time in the future.
This still leaves open the door for things like a module defining GetGoogleAnalyticsURL(params) that occasionally returns "https://badsite.com/copyandredirect?ga=...", to get some other module to eventually make a credential-exfiltrating network call, even if it's banned from making it directly or indirectly...
Also, detecting obfuscated code sounds like an interesting and challenging task.
- https://socket.dev/blog/introducing-socket-firewall
- https://github.com/lirantal/npq
- https://bun.com/docs/pm/security-scanner-api
source: https://github.com/bodadotsh/npm-security-best-practices?tab...
https://gist.github.com/considine/2098a0426b212f27feb6fb3b4d...
It checks yarn.lock for any of the above. It maybe needs a tweak or two, but you should be able to run it from a directory with yarn.lock.
That leads me to another point. Devs have to take responsibility for their code/projects. Everyone wants to blame npm or something else but, as software developers, you have to take responsibility for the systems you build. This means, among many other things, vetting code your code depends on and protecting the system from randomly updating itself with code you haven't even heard about.
The ones that got my attention are the @ensdomains/* packages, which are the legit packages and are probably in every Ethereum/EVM/blockchain-related app for the resolution of decentralized domain names.
A quick search shows the Ledger hardware wallet uses those libs too [0]
So I guess they weren't just after API keys.
- [0] https://github.com/search?q=org%3ALedgerHQ%20%40ensdomain&ty...
([0-9a-z]{18})
So GitHub has some tools available to mitigate some of the problems tied to it. Probably not perfect for all use cases. But considering the current scale, it doesn't seem to have much effect, as enough publishers seem not to care.
I think npm should force higher standards on popular packages.
I would love to see the equivalent of a linux distro, a curated set of packages and package versions that are known to be compatible and safe. If someone offered this as a paid product businesses would pay for it.
That's a wake up call to harden your operations. NPM Tokens, AWS/GCP/Azure credentials have no reason to be available in environments where packages may be installed. The same goes for sensitive environment variables.
At minimum, whatever you are working on should be built in Docker. The package installation then happens during the image build step. Yes, it's easy to break out of the isolation environment, but I am betting this malware does not.
NPM tokens should live in some configuration/secret-management solution, not in your home directory. Devs have no business holding NPM tokens. The same goes for sensitive environment variables; they have no business existing on dev laptops or even in the pipeline build steps (where package installation should happen).
AWS etc credentials / tokens are harder to secure since there are legit reasons for existing in dev laptops.
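The build-inside-Docker part can be as small as this sketch (tags/paths illustrative); note that no host secrets are in scope during the install step:

FROM node:22-slim
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
COPY . .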
And if it's a "professional" setting, the company could hire a part-time developer for writing the sandbox.
If I want more stability for my OS I can choose Debian-stable rather than Ubuntu-nightly.
But for npm, there doesn't seem to be the same choice available. Either I sign up to the fire-hose or I don't.
I can choose to only upgrade once a month, but there's a chance I'm still getting a package that dropped 5 minutes before.
minimumReleaseAge: 43200

If you must use npm, containerize/VM it? Treat it as if you're observing malware.
minimumReleaseAge strikes a good balance between protecting yourself against emerging threats like Shai-Hulud and keeping your dependencies up-to-date.
Because you asked: you can get another layer of protection through Socket Firewall Free (sfw), which prevents dependencies known to be malicious from being installed. Socket typically identifies malware very soon after it is published. Disclaimer: I'm the lead dev on the project, so obviously biased; YMMV.
https://github.blog/security/supply-chain-security/our-plan-...
I'm guessing no one yet wants to spend the money it takes for centralized, trusted testing where the test harnesses employ sandboxing and default-deny installs, Deterministic Simulated Testing (DST), or other techniques. And the sheer scale of NPM package modifications per week makes human in the loop-based defense daunting, to the point that only a small "gold standard" subset of packages that has a more reasonable volume of changes might be the only palatable alternative.
What are the thoughts of those deep inside the intersection of NPM and cybersecurity?
Really looking forward to a deeper post-mortem on this.
Pnpm also blocks preinstall scripts by default.
There's nothing wrong with pinning dependencies and only updating when you know for sure they're fixing a zero-day (as it will be public at that point).
This is why it's so important to get to know what you're actually building instead of just "vibing" all the time. Before all the AI slop of this decade we just called it being responsible.
"Our security engineering team is investigating the matter and thus far has concluded that while some public Postman NPM packages were infected, (1) Postman as an app is not compromised, and (2) our production cloud services are also not compromised."
It’s basically a lightweight CLI tool you can run directly inside any local project:
npx sha1-hulud-scanner
Repo is here:
https://github.com/developerjhp/sha1-hulud-scanner

It's not meant to be a full security product, just a simple "first-pass" detector that helps catch unexpected checksum strings or injected artifacts before they slip into CI. Feedback and contributions are welcome!
Like the previous variant, it has credential harvesting, self-replication and GitHub-public-repository-based exfiltration.
Double base64 encoded credentials being exposed using GitHub repositories: https://github.com/search?q=%22Sha1-Hulud%3A%20The%20Second%...
Would be good to see projects (like those recently affected) nudging devs to do this via install instructions.
Going forward, use WASM if you really want to make an SPA (and think about that choice), where the source language is not something that ties into the JS dependency ecosystem. Ban it and burn it with fire for anything on the backend, for christ.
npm audit
and npm audit fix

Or if you want to know the version of a package you have installed: npm ls some-pkg

The answer is really: don't.
NPM and the JS eco-system has really gone down a path of zero security and they're paying the price for it.
If you really need libraries from NPM and whatnot, vendorize them so you're relying on known-safe files and don't arbitrarily update them without re-verification.
We really don't need more package managers other than the ones provided by your operating system, but I dunno, maybe it's just me.
Including headers isn't remotely "simple". There's so many considerations in linking, .SO version compatibility, architecture and instruction set issues, building against multiple versions on the same system. Or if you want to feel frustrated in a single word: GDAL (IYKYK)
And that's only where #include is even applicable. That is not gonna fly for any interpreted language - JS in this case, but also python, ruby, php.
Just `git clone git@github.com:openai/codex.git`, `cd codex-rs`, `cargo build --release` (If you have many cores and not much RAM, use `-j n`, where n is 1 to 4 to decrease RAM requirements)
SHA1-Hulud the Second Coming – Postman, Zapier, PostHog All Compromised via NPM
https://www.aikido.dev/blog/shai-hulud-strikes-again-hitting...
instead we've got this absolute mess of bloated, over-engineered junk code and ridiculously complicated module systems.
if you run `npm i ramda` it will set this to "ramda": "^0.32.0" (as of this comment).
that ^ means: install any newer minor or patch version.
so when a package is released with malware, they bump the version to 0.32.1 and everyone just installs it on the next npm i.
pinning your deps ("ramda": "0.32.0") completely removes the risk, assuming the version you listed is not infected.
the trade-off is you don't get new features/patches without manually changing the version.
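If you want exact pins to be the default rather than something you remember to type, npm has a switch for that:

npm config set save-exact true
# or per project, in .npmrc:
# save-exact=true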
I see that as a desirable feature. I don’t want new functionality suddenly popping into my codebase without one of my team intending it.
- A new version of this dependency is published
- A CI somewhere for another NPM package pulls this new dependency version into a build, which triggers propagation by publishing a new, modified version of that package?
- And so on...
Am I getting this right?
in case folks find it helpful: https://github.com/kevinslin/safe-npm
Implementing everything yourself probably won't cut it.
Copying a dependency into your code base and maintaining it yourself probably won't yield much better results.
However, if a dependency were part of version control, devs could at least do a code review before installing an update.
That wouldn't help with new dependencies that come in with issues right from the start, but it could help prevent new malware from slipping in later.
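npm can already produce that kind of review diff between two published versions, which helps even without vendoring; a sketch (the package and versions are illustrative):

npm diff --diff=ramda@0.32.0 --diff=ramda@0.32.1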
A setup like that could benefit from a crowd-sourced review process, similar to Wikipedia.
I think, Nimble, the package manager of Nim, uses a decentralised registry approach based on Git repos. Something like that could be a good start.
typo after the listed affected packages
My ultra hot take: there are only¹ two² programming ecosystems suitable for serious³ work:
- .net (either run on CLR or compile as an AOT standalone binary)
- jvm
The reason why is that they have a vast and vetted std lib. A good standard lib is a bigger boost than any other syntactic niceties.
__
1. I don't want other programming languages to die, so I am happy if you disagree with me. Other valid objection: some problems are better served by niche languages. Still, both .net and java support a plethora of niche languages.
2. Shades of gr[e|a]y, some languages are more complete out of the box than others.
3. cf «pick boring tools»

I don't ask you to judge if you like it; I'm just saying that you can totally make a professional WebUI within the dotnet stdlib.
Don't take my word for it; take a dive. You wouldn't be the first to have adjusted their view.
For example, this section is just about the built-in web framework asp.net: https://learn.microsoft.com/en-us/aspnet/core
______
1. This might be a poor example, as .net has NodaTime and the jvm has Joda-Time as 3rd-party libs, for if one has really strict needs. Still, the built-in DateTime constructs offer way more than what Python had to offer.
2. Don't get me started on the ORM side of things. I know, you don't have to use one, but if you do, it had better do a great job. And I wouldn't bat an eye if the ORM were not in the standard library, but boy was I disappointed in Python's ecosystem. EF Core comes batteries-included and is so much better, it isn't fun anymore.
That seems a bit silly. Even on the beefy boi I used to work on a 10MB hiccup in deployable size would have been sufficient to make me look.
I released one of the packages I work on last night, so of course this drew my eye. I assume checking that the unpacked size hasn't gotten ridiculous confirms that your code is not infected, yeah? And it looks like it's past time for me to set up a separate account for release management.
At some point, someone has to pay for an organisation whose job will be to review the contents of all of these modules.
Maybe one could split the ecosystem into "validated" and "non-validated" stacks? Much like we have stable and dev branches?
The people validating would of course attach their own identity, to build trust. And so companies (as legal persons) should do that.
As it arguably would have reduced impact
(I'm one of the Renovate maintainers and have recently pushed for this to be more of a widely used feature)
Shinhan is a large Korean bank, and this admin-area geo JSON util seems to be embedded in many Korean gov services.
shinhan-limit-scrap
korea-administrative-area-geo-json-util
The number and range of affected devices may be reduced with any number of package manager level workarounds, but NOT the impact of attacks once any succeeds. For this, you NEED the above.
All libraries should strive to depend only on it.