Which is basically phishing:
> The meeting link itself directed to a spoofed Zoom meeting that was hosted on the threat actor's infrastructure, zoom[.]uswe05[.]us.
> Once in the "meeting," the fake video call facilitated a ruse that gave the impression to the end user that they were experiencing audio issues.
> The recovered web page provided two sets of commands to be run for "troubleshooting": one for macOS systems, and one for Windows systems. Embedded within the string of commands was a single command that initiated the infection chain.
...
It seems the correct muscle-memory response to train into people is: if a meeting link someone sent you doesn't work, create a new meeting yourself and send them your link.
(And of course never download and execute anything, and never paste scripts into a terminal; though it seems even veteran maintainers do this, etc...)
see Infection Chain here https://cloud.google.com/blog/topics/threat-intelligence/unc...
textarea at the bottom of this comment: https://github.com/axios/axios/issues/10636#issuecomment-418...
Arrgh. You're looking at the closest thing to a root cause and you're just hand-waving over it. The culture of "just paste this script" is the problem here. People trained not to do this (or, like me, old enough to be horrified by it and refuse on principle) aren't vulnerable. But you just... give up on that and instead view this as a problem with "muscle memory" about chat etiquette?
Good grief, folks. At best that's security theater.
FWIW, there's also a root-er cause about where this culture came from. And that's 100% down to Apple Computer's congenital hatred of open source and refusal to provide or even bless a secure package management system for their OS. People do this because there's no feasible alternative on a mac, and people love macs more than they love security it seems.
This and every other recent supply chain attack was completely preventable.
So much so I am very comfortable victim blaming at this point.
This is absolutely on the Axios team.
Go set up smartcards for signing git pushes/commits, publish those keys widely, and mandate signed merge commits so nothing lands on main without two maintainer sigs and there's no single point of failure.
It seems the Axios team was largely practicing what you're preaching. To the extent they aren't: it still wouldn't have prevented this compromise.
One must sign commits -universally- and -also- sign reviews/merges (multi-party) and then -also- do multi party signing on releases. Doing only one step of basic supply chain security unfortunately buys you about as much defense as locking only a single door.
I do, however, assign significant blame to the NPM team for repeatedly refusing optional package-signing support, which would let the server and client refuse packages from signing-enabled projects unless they're signed by a quorum of pinned keys. But even aside from that, if packages were signed manually, canary tools could have detected this immediately.
If I understand it correctly, your suggestions wouldn’t have prevented it, which is evidence that this is not as trivially fixable as you believe it is.
Operate under the assumption all accounts will be taken over because centralized corporate auth systems are fundamentally vulnerable.
This is how you actually fix it:
1. Every commit must be signed by a maintainer key listed in the MAINTAINERS file or similar
2. Every review/merge must be signed by a -second- maintainer key
3. Every artifact must be built deterministically and be signed by multiple maintainers.
4. Have only one online npm publish key maintained in a deterministic and remotely attestable enclave that validates multiple valid maintainer signatures
5. Automatically sound the alarm if an NPM release is pushed any other way, and automatically revoke it.
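Step 1 above is mechanically checkable. Here's a minimal sketch of what that check could look like (the key IDs and commit hashes are made up for illustration; the input mimics `git log --format="%H %GK"` output):

```javascript
// Allowlist of maintainer signing-key IDs, as in the MAINTAINERS file (step 1).
// These fingerprints are hypothetical.
const MAINTAINER_KEYS = new Set([
  "A1B2C3D4E5F60718", // alice
  "0011223344556677", // bob
]);

// Given `git log --format="%H %GK"` output (one "<hash> <key-id or none>"
// per line), return the hashes of commits not signed by a maintainer key.
function unsignedOrUntrusted(gitLogOutput) {
  return gitLogOutput
    .trim()
    .split("\n")
    .map((line) => line.split(/\s+/))
    .filter(([, keyId]) => !MAINTAINER_KEYS.has(keyId))
    .map(([hash]) => hash);
}

// Example: the second commit is signed by an unknown key, the third is unsigned.
const log = [
  "deadbeef01 A1B2C3D4E5F60718",
  "deadbeef02 FFFFFFFFFFFFFFFF",
  "deadbeef03 none",
].join("\n");

console.log(unsignedOrUntrusted(log)); // flags deadbeef02 and deadbeef03
```

A CI gate or the registry-side enclave in step 4 could run this kind of check and refuse anything that comes back non-empty.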
Yet most developers I work with just use it reflexively. This seems like one of the biggest issues with the npm ecosystem - the complete lack of motivation to write even trivial things yourself.
Then you would have created just an axios clone. AKA re-inventing the wheel. The issue isn't the library itself, but rather the fact that it's popular and provided a large enough attack surface.
You can actually just clone the axios package and use it as is from your private repo and you would not have been affected.
The wheel is the native fetch API, nobody needs to reinvent it.
All you'd do in that scenario is make your own hubcap to put on top.
I've used "XHR" via fetch extensively; it has handled everything in my day-to-day business for years with minimal boilerplate.
(The only exception known to me being upload progress/status indication.)
The multiple supply chain attacks against NPM packages would, of course, be solved if we simply stop using third-party libraries.
const x = await fetch(...); await x.json();
"intercept" code that runs before every request?
const withAuth = (url, options = {}) => fetch(url, { ...options, /* add auth headers etc. here */ });
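If you want axios-style interceptors rather than a one-off wrapper, a small chain of request-mutating functions over plain fetch gets you there. A minimal sketch (every name here is illustrative, not from any library):

```javascript
// Registered functions that run before every request.
const requestInterceptors = [];

function addRequestInterceptor(fn) {
  requestInterceptors.push(fn);
}

// Fold the request through every interceptor; each one returns the
// (possibly modified) { url, options } pair.
function applyInterceptors(req) {
  let r = req;
  for (const fn of requestInterceptors) r = fn(r);
  return r;
}

async function interceptedFetch(url, options = {}) {
  const req = applyInterceptors({ url, options });
  return fetch(req.url, req.options);
}

// Usage: attach an auth header to every outgoing request.
addRequestInterceptor(({ url, options }) => ({
  url,
  options: {
    ...options,
    headers: { ...options.headers, Authorization: "Bearer <token>" },
  },
}));
```

Response interceptors work the same way: fold the `Response` through a second chain after `fetch` resolves.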
The next incarnation of this, I worry, is that the malware hibernates somehow (e.g., if (Date.now() < 1776188434046) { process.exit(); }) to maximize the damage.
I mean the compromised machine registers itself with the command-and-control server and occasionally checks for workloads.
The attacker then decides their next actions: depending on the machine they compromised, they'll either try to spread (like this time) and mount a broad attack, or go more in-depth and try to exfiltrate data / spread internally if e.g. a build node has been compromised.
I feel like npm specifically needs to up their game on static analysis of malicious code embedded in public projects.
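As a toy illustration of what publish-time static analysis could look like: scan submitted source for patterns common to recent npm malware. The pattern list here is made up for illustration, not a real scanner, and real malware obfuscates past naive regexes; it just shows the shape of the idea.

```javascript
// Illustrative red-flag patterns, not a production ruleset.
const SUSPICIOUS_PATTERNS = [
  /eval\s*\(\s*atob\s*\(/,         // eval over a base64-decoded payload
  /child_process[\s\S]*?(curl|wget)/, // shelling out to download something
  /Date\.now\(\)\s*[<>]\s*\d{13}/,  // time-gated ("hibernating") payload
];

// Return the source text of every pattern that matches.
function scanSource(source) {
  return SUSPICIOUS_PATTERNS
    .filter((re) => re.test(source))
    .map((re) => re.source);
}

// The hibernation example from upthread trips the time-gate heuristic:
const sample = "if (Date.now() < 1776188434046) { process.exit(); }";
console.log(scanSource(sample));
```

Anything flagged would go to human review rather than being auto-rejected, since all of these patterns have legitimate uses too.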
Before each action you need to enter your 2fa code.
I got so frustrated with npm at the end of last year that I wrote a whole guide covering that issue: https://npmdigest.com/guides/npm-trusted-publishing
Still needs to be published first, but looks like it automates all the annoying UI things you mentioned.
Edit: wait, did the attacker intercept the totp code as it was entered? Trying to make sense of the thread
(As we’ve seen from every GPG topology outside of the kinds of small trusted rings used by Linux distros and similar, there’s no obvious, trustworthy, scalable way to do decentralized key distribution.)
the implicit trust we have in maintainers is easily faked as we see
Interesting it got caught when it did.
Given that "extreme vigilance" at the primitive "don't install unknown things on your machine" level is unattainable, can there really be effective project-level solutions?
Mandatory involvement of more people, in the hope that not everyone installs random stuff, at least not at the same time? (Though you might not even have more people...)
Seems to me that one drastic tactic NPM could employ to prevent attacks like this is to use hardware security. NPM could procure and configure laptops with identity rooted in the laptop TPM instead of 2FA. Configure the NPM servers so that for certain repos only updates signed with the private key in the laptop TPM can be pushed to NPM. Each high profile repo would have certain laptops that can upload for that repo. Set up the laptop with a minimal version of Linux with just the command line tools to upload to NPM, not even a browser or desktop environment. Give those laptops to maintainers of high profile repos for free to use for updates.
Then at update time, the maintainer just transfers the code from their dev machine to the secure laptop via USB drive or CD and pushes to NPM from the special laptop.
Adding postinstall should require approval from NPM. NPM clients should not install freshly published packages. NPM packages should be scanned after publishing. High profile packages should verify upstream git hash signature. NPM install should run in sandbox and detect any attempt to install outside project directory.
But npm, being part of a multi-trillion-dollar company, cannot be bothered to fix any of these. Instead they push for tighter integration with GitHub, with UX that sucks.
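Of the mitigations listed above, one is available to every npm user today without waiting for the registry: disabling install-time lifecycle scripts. This is a real npm setting (it will break packages that legitimately need a postinstall build step, so you'd re-run audited builds explicitly):

```shell
# Disable install-time lifecycle scripts (preinstall/postinstall) globally:
npm config set ignore-scripts true

# Or per-project, via the project's .npmrc:
echo "ignore-scripts=true" >> .npmrc
```

With this set, a freshly compromised package can still ship malicious runtime code, but it can't execute anything on your machine at `npm install` time.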
That would be a beautiful example of the cobra effect: what about updates that fix vulnerabilities? Are you going to force users to wait a couple of days or a week before they can get malware removed?
The problem would be new versions that fix security issues though, and because this is all open source as soon as you publish the fix everyone knows the vulnerability. You wouldn’t want everyone to stay on the insecure version with a basically public vulnerability for a week.
Point 4 from https://npmdigest.com/guides/npm-trusted-publishing#ux-probl...
(I wrote that guide page for myself because I always get annoyed when dealing with npm OIDC)
NPM rejected PRs to support optional signing multiple times more than a decade ago now, and this choice has not aged well.
Anyone that cannot take 5 minutes to set up commit signing with a $40 usb smartcard to prevent impersonation has absolutely no business writing widely depended upon FOSS software.
Normalized negligence is still negligence.
Just sign commits and reviews. It is so easy to stop these attacks that not doing so is like a doctor that refuses to wash their hands between patients.
If you are not going to wash your hands do not be a doctor.
If you are not going to sign your code do not be a FOSS maintainer.
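For reference, the basic setup being described here really is a few-minute job once the key lives on the smartcard (the key ID below is a placeholder; this assumes GPG-backed signing):

```shell
# One-time setup: sign every commit with the smartcard-backed key.
git config --global user.signingkey <KEYID>
git config --global commit.gpgsign true

# Verify the signature on a commit before trusting or merging it:
git verify-commit HEAD
```

The harder part, as noted upthread, is distributing and pinning the public keys so that verification actually means something.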
Even if they did sign the code, what's stopping them from slipping some crypto link in? And do they also need to check all the transitive dependencies in their code?
If maintainers really cannot afford that, they should flag it as a major big bold print supply chain risk on the readme: "We cannot afford 4 yubikeys for our maintainers and thus all code is signed with software keys in virtual machines as a best effort defense. Donate to our fund [here] to raise $500 for dedicated release hardware"
Friends and I have gotten 100s of yubikeys and nitrokeys donated to FOSS maintainers, but FOSS maintainers have to be willing to say they would use them and signal that they need them.
Honestly though, anyone that cannot afford $40 I expect is at high risk of being bribed or having to give up contributing to take on more work, so we should significantly fund any project signaling that much desperation.
No. As a user of your package, I want assurance that the package you publish does what it says it does and does not contain malware. This is different from the package having been published by you. I want protection against you going rogue, not only from you being impersonated. 2FA on your side does not protect me against you going rogue. A comaintainer does.
So the correct quote would be: Anyone that cannot find a comaintainer to review all the code and to prevent deliberate sabotage has absolutely no business writing widely depended upon FOSS software.