> npm encourages the use of semver, or semantic
> versioning. With semver, dependencies are not locked to
> a certain version by default. For any dependency of a
> package, the dependency author can push a new version of
> the package.
I don't see how this has anything to do with semver. Semver doesn't say anything about not locking dependencies to a certain version (i.e., locking to a specific version is totally legal), nor does it have anything to do with allowing package authors to push new versions of their packages (I'm not even sure how to parse that sentence; should it be impossible to ever push new versions of a package? EDIT: maybe it's suggesting there should be a central review process, like the iOS App Store?). In fact, the semver spec doesn't even advocate automatically upgrading when new patch versions are released:
"As a responsible developer you will, of course, want to verify that any package upgrades function as advertised. The real world is a messy place; there’s nothing we can do about that but be vigilant."
As a library maintainer, I can say that patches breaking the library is something that happens (not often, but still); testing can eliminate a lot of bugs, but not all of them.
That's not NPM's fault; it's the fault of the way you deploy code. Even if you locked down a version, git is mutable, so someone could change their code. That's why I "rsync" the code to production, so I know it's the same as development.
It would be nice if there were a tool that allowed developers to mark a new release as safe. Every package would have its social safety score, and you could decide whether you want to investigate a release further.
Being a programmer you may, of course, try to automate that verification process with something like greenkeeper.io. That opens up its own kind of exploit opportunities.
I mean really the idea is just that if someone got somebody else's password, they could use it to trick other people into installing a program. Even email has this problem. So really the only thing NPM could be accused of here is not doing more to make publishing secure (like using two-factor authentication).
Having said this, we'd like to make exploits such as those discussed in #319816 as difficult as possible. We're exploring support for new authentication strategies such as two-factor authentication, SAML, and asymmetric-key-based authentication (some of these features are already available in our Enterprise product, but haven't made it to the public registry yet). npm's official response has more details on this subject:
http://blog.npmjs.org/post/141702881055/package-install-scri...
Linux package managers are a different story of course.
An easy way to undo a publish would also be useful.
Any application package manager with a lockfile-based-workflow (like Bundler, Cocoapods, Cargo, etc.) would at least have this mitigation as a default part of the workflow.
A way to protect yourself 100% against this problem is to define your dependency as a link to a specific commit or tarball.
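A minimal sketch of what that looks like in package.json (the commit hash and tarball URL here are placeholders, not real releases):

```json
{
  "dependencies": {
    "left-pad": "git+https://github.com/example/left-pad.git#a1b2c3d",
    "some-module": "https://example.com/some-module-1.0.0.tgz"
  }
}
```

A commit hash is content-addressed, so unlike a git branch or tag it can't silently change underneath you.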
The npm team is working on 2FA (https://twitter.com/seldo/status/713623991349411840), which should be an adequate solution to this issue.
1) No install scripts. Fetching a dependency in NPM will execute arbitrary code. Fetching a dependency in Maven doesn't execute any code from the dependency. Obviously when I run my project, I'm expecting to call code in that dependency, so this is a mitigation not a complete fix. But that does lead on to the next point.
Corollary: You have to change the code in my project to spread the worm, not just add a new dependency, otherwise your worm code won't get executed. This is probably a bit more tricky to get right.
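To make the install-script point concrete: any npm package can declare hooks like these in its package.json, and `npm install` runs them on the installing machine (a contrived example; the echo stands in for arbitrary code):

```json
{
  "name": "innocuous-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "echo arbitrary code runs here",
    "postinstall": "node ./setup.js"
  }
}
```

Maven, by contrast, treats a fetched artifact as an inert file until your own build actually references it.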
2a) Code deployed internally from CI servers, not local machines. It's got to be code-reviewed before it gets pushed to my employer's package repository.
2b) Code needs to be signed before being uploaded to Maven Central. I'm not going to start typing my GPG key into random unexpected prompts.
Malicious code is still a possibility, but the scope for a worm is much less.
A better fix to this issue is to require publishers to enter a two-factor token, to email them to confirm publishing, or the like.
Yeah, it makes everyone a bit uneasy with how much trust is involved in the ecosystem. Is there a better solution?
Actually, I'm surprised that npm runs scripts at all. All I want is to download some JS files, so why are there any scripts? I guess it's needed for native compilation, but that's a lazy solution; there could be better ones.
I understand the danger inherent in this system, and actually do keep an eye on dependencies I require. All that said, it's certainly a lot easier to have npm install handle fetching and building native libraries than it is to figure out a way to manually get those libraries attached to the node package (wait, did I install that in /opt, /usr/local, etc etc).
Ultimately, I'm downloading code someone else wrote and executing it. Yes, post- and pre-install hooks are low-hanging fruit for malicious exploitation, but so is installing any large library; you can just as easily put Bad Code in a library you distribute for any other language and wait for someone to run it. The difference here is that there's an exploit possible at install time, rather than runtime.
In some cases it's even accepted practice to put the actual username/password in the clear in a dotfile, which means anyone who can even read a file from the user's home directory can gain persistent access to push packages as them...
It would be nice if npm didn’t run arbitrary install scripts by default…
To me, the fact that this "vulnerability" requires explicit user action, akin to deliberately downloading and running malware, says that it's really a property of all software ecosystems in which people can publish and disseminate freely.
In that respect, it's nice to see a "this is as intended" response instead of the typical direction of coming up with a set of more draconian policies and processes merely to protect users from themselves.
But given what "security research" these days seems to involve, I can almost imagine in the future: "Vulnerability #1048576 - computer allows users to perform potentially malicious actions."
Once it reaches a package like left-pad that is used by a ton of libraries, it will instantly infect hundreds of thousands of developers.
1) Pin your packages to a specific version. If you aren't doing this already then you are in for a world of hurt when someone who doesn't know what they are doing releases a breaking package change on a minor version number.
2) Shrinkwrap your packages. Once again, if you aren't already doing this, then your npm install will probably break about once every three months when someone pushes a bad package to NPM.
3) Publish your NPM packages from one Vagrant development environment and run the code that installs from NPM in a different one. If you have one shared environment then you are going to have other issues, of which the small chance of an NPM worm is probably the least of your worries.
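Point 1 can even be checked mechanically. Here is a minimal sketch (a hypothetical audit helper, not an npm feature) that lists every dependency whose range is not pinned to an exact version:

```javascript
// Flag dependency ranges that can float to a new release: anything
// starting with ^, ~, a comparator, or a wildcard, plus "latest".
function findLooseDeps(deps) {
  return Object.entries(deps || {})
    .filter(([, range]) => /^[\^~><*x]/.test(range) || range === 'latest')
    .map(([name]) => name);
}

// Example: lodash floats on a caret range, express is pinned exactly.
const pkg = { dependencies: { lodash: '^4.17.0', express: '4.13.4' } };
console.log(findLooseDeps(pkg.dependencies)); // → [ 'lodash' ]
```

Pairing a check like this with `npm shrinkwrap` also locks the transitive tree, not just your direct dependencies.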
I mean, if I distributed a library through some other package manager system, like a .jar file or some code that you install via homebrew, pip, or ./configure.sh && make, I can embed malicious code in the source somewhere. Maybe not all automated package managers are quite as vulnerable to install hooks, but all open source code is vulnerable to trust attacks, at runtime if nowhere else. I ultimately trust the process that gives me nginx enough to let it serve up my code, hoping there's not a backdoor somewhere that is shoving environment variables (and therefore API keys) out the window to a hacker.
You can't assume people are going to review every line of source before they link against a library. You can't assume people aren't going to click that link that looks like a download link on a sourceforge page but is, in fact, a crapware link. People make mistakes all the time.
So, yeah, there's probably room to make npm a little more robust and difficult to specifically target as a vector. But thousands of developers are still going to be writing sass, and using node-sass to build that, which needs to download, compile and execute a binary on the devbox. Making the installation process of libsass take an extra step or two is great and all (and annoying, and probably likely to degrade windows node development most of all, since windows libraries are harder to put in a "standard" place if you're a non-windows dev writing a node library), but people are still going to be running libsass binaries on their local machine without auditing it, trusting that the developers there have good opsec and review everything well.
On the other hand, all this publicity means someone's bound to actually try and build stuff that exploits trust here, either wormlike or just executing an rm -rf in an install hook. So, my trust levels are lowered and my productivity impaired because I'll be auditing more closely all the updates to existing plugins I'm using. Win?
The OSS world has grown precipitously in the era of GitHub/npm/etc, and the trust model hasn't caught up. It's not tenable to maintain a GPG keychain for a nested tree of 100 dependencies. Neither is it advisable to keep deferring this problem. We need to come up with a solution for tracking reputation and trustworthy dependencies at this new scale. It's not simply a problem that package repositories like npm can solve for us -- the scope of this problem is human, and an ideal solution will work for both users and developers, and apply to source distributions and multiple package repositories. One of the few silver linings of the events of the last week is that more people are aware of and pondering these issues. I hope we'll see some more discussion and experimentation in this space!
An easy thing you could do right now is to put an attestation directory right into your git repo. Then write up your comments (maybe in a file format similar to what you're doing with signet) and do a signed commit into that directory.
http://www.dwheeler.com/essays/scm-security.html
Dig into archive.org for Shapiro's OpenCM while you're at it as it had a lot of nice properties. Aegis seemed to as well. Pulling good traits from Wheeler's survey into modern ones would be a good idea. Also, one can re-develop OpenCM, Aegis, etc to have modern features like plugins for common languages/apps or DVCS capabilities.
SCM security techniques date back to 80's-early 90's. No excuse for today's solutions to still lack the basics.
Most of the programming language package repositories (e.g. npm, rubygems, PyPI, NuGet) have this kind of installation process and limited/no checks for malicious content.
Also as there's no consistent use of package signing by the developer (it's either unsupported or not very used) there is also a risk of the repository itself being compromised.
I did a talk last year for OWASP AppSecEU that covers this kind of thing. https://www.youtube.com/watch?v=Wn190b4EJWk
https://caremad.io/2013/07/packaging-signing-not-holy-grail/
For the problem this blog post talks about, I personally think that keybase is the right solution. You can tie a key to a github repository amongst others and then validate that the package you're installing came from the person who put the code on github in the first place...
http://www.dwheeler.com/essays/scm-security.html
He has lots of nice links, too. Hope you can factor some of it into your talks to get it to mainstream audience. I've saved the vid to check it out later. Will be interesting to see an experienced perspective with the modern tooling.
However, I admit I've always thought updating Shapiro's OpenCM or Aegis to a distributed style with plugins for modern tooling gets us 90+% of the way. Without the problems of 90+% of build and package mgmt systems. ;)
● Automatically expire login tokens
● Require 2 factor auth for publish operations
● Help users be logged out during install operations
vjeux mentioned a few others on HN a few days back[2]:
● pre-install/post-install scripts should require user to accept or refuse.
● make shrinkwrap by default (and fix all the issues with it) so that running npm install doesn't use different versions when used over time.
● make updating a version an explicit decision via npm upgrade
[1] https://www.kb.cert.org/CERT_WEB/services/vul-notes.nsf/6eac... [2] https://news.ycombinator.com/item?id=11341145
In the meantime, users may want to consider one of the following:
npm config set ignore-scripts true
npm logout
> As a user who owns modules you should not stay logged into npm. (Easy enough: npm logout and npm login)
> Use npm shrinkwrap to lock down your dependencies
> Use npm install someModule --ignore-scripts
I would add: toss a glance at the libraries you import every once in a while, just to make sure they look sane. https://blogs.msdn.microsoft.com/oldnewthing/20060508-22/?p=...
In production, you should review the packages in your dependency tree and ensure that the exact version you reviewed is what you deploy. To that end, you should shrinkwrap your dependencies. Vendoring works well too. Shameless plug: for additional strictness in your shrinkwrap, you can use https://github.com/chromakode/exactly to store content hashes.
Social engineering is not accepted in many security bounties. Just saying...
Even if every new version of the whole app is tested heavily before production, you lose the inherent stability of shipping the same code that users have proven stable over time.
Others have said it is important to use new versions of dependencies to get the bug fixes but I don't see that as a good trade-off.
That mechanism could easily be used to achieve the same goal, even if there was no explicit "post-script" mechanism.
The modern hipster-language equivalent would probably be to make the package manager depend on the presence of Docker/rkt/systemd, and use it to pull down a dev-env container and build the native bindings in that.
For example "lowdash" instead of lodash.
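Typosquats like that are close enough to the real name that a simple edit-distance check would catch many of them. A toy sketch (hypothetical tooling, not an npm feature):

```javascript
// Flag any dependency name within edit distance 1 of a well-known
// package name, using the classic Levenshtein DP table.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const popular = ['lodash', 'express', 'request'];
function suspicious(name) {
  return popular.some((p) => p !== name && editDistance(p, name) === 1);
}

console.log(suspicious('lowdash')); // → true
console.log(suspicious('lodash'));  // → false
```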
https://github.com/mishoo/UglifyJS2/issues/936#issuecomment-...
https://github.com/samccone/The-cost-of-transpiling-es2015-i...
Or perhaps it was a security experiment to see how long it would take someone to notice.
You're a doer - if you want to see something done about it at Facebook no one is stopping you from forking NPM or contributing code to it.