For me personally, this is the biggest surprise and takeaway here. By simply having a key inside package.json's dependencies reference an existing NPM package, the NPM website links it up and counts it as a dependency, regardless of the actual value that the package references (which can be a URL to an entirely different package!). I think this puts an additional strain on an already fragile dependency ecosystem, and is quite avoidable with some checks and a little bit of UI work on NPM's side.
We could do a full write-up on npm's quirks and how one could take advantage of them to hide intent.
Consider the following from the post's package.json:
"axios": "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz"
Here it's clear that the package links to something in a weird, non-standard way. A manual review would tell you that this is not axios. The package.json format lets you link to things that aren't even on npm [1]. You could update this to something like:
"axios": "git://cdnnpmjs.com/axios"
And it becomes less obvious that this is not the package you intended. But at least in this case, it's clear that you're hitting a git repository somewhere. What if we update it to the following? "axios": "axiosjs/latest"
This would pull the package from GitHub, from the org named "axiosjs" and the project named "latest". This is much less clear, and it's part of the package.json spec [2]. Couple this with the fact that the npm website tells you the project depends on Axios, and I doubt many people would ever notice.
[1] https://docs.npmjs.com/cli/v10/configuring-npm/package-json#...
[2] https://docs.npmjs.com/cli/v10/configuring-npm/package-json#...
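Putting the three escalating forms side by side in one hypothetical manifest (the first value is the real one from the post; the lodash and express entries are made-up stand-ins for the git-URL and GitHub-shorthand tricks):

```json
{
  "dependencies": {
    "axios": "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz",
    "lodash": "git://cdnnpmjs.com/lodash",
    "express": "expressjs-mirror/latest"
  }
}
```

npm resolves each value (tarball URL, git URL, GitHub "org/repo" shorthand) at install time, while the npm website's dependency list is keyed on the name on the left.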
https://www.npmjs.com/package/sournoise?activeTab=dependenci...
If it would show axios and link to the package provided in package.json, that at least would be better.
But here they actually link to the wrong package.
Agreed, the website UX is confusing and could be better, but in general package metadata is just whatever the publisher put there, and it's up to you to verify it if you care about veracity.
confusing is one thing, but there's a screaming security chasm around that innocent little UX problem.
MS bought npmjs and now it's LARPing as some serious ecosystem (by showing how many unresolved security notices installed packages have) while they cannot be arsed to correctly show what's actually in the metadata?
From https://docs.tea.xyz/tea/i-want-to.../faqs: "tea is a decentralized protocol secured by reputation and incentives. tea enhances the sustainability and integrity of the software supply chain by allowing open-source developers to capture the value they create in a trustless manner."
But... then why would I use their code if whatever value it creates is captured by them, the developers, leaving me no better off than where I started? That's like paying your employees the full additional value they produce instead of market wages: you'd literally have no reason to hire them, since their work would be exactly profit-neutral.
I wouldn't be surprised if principles in this case leave us with thousands of spam packages degrading the node ecosystem forever. It'd be exactly what I expect. So I guess I should thank the principle of consistency.
The unpublish document describes the options that users of NPM have to remove packages themselves. It was created after some situation where someone unpublished an important package.
A whole different set of terms governs which packages NPM itself can remove. This definitely includes these packages, either as "abusive" or as "name squatting".
Not only that, but NPM's TOS makes it very clear that you have no recourse if they decide to remove your package for any reason.
Never let your principles get in the way of doing what is right.
- Isaac Asimov

For example, this[1] account mentioned in the article has 1781 packages of gibberish.
Also, the whole reporting process is onerous: there's a large form. Some gatekeeping on reports is fine, but there should be a way to report a publisher's entire profile.
hmm, inspiring thoughts. An answer to "AI is going to replace software developers in the next 10 years" is to create 23487623856285628346 spam packages containing pure garbage code. Humans will avoid them; LLMs will hallucinate wildly.
I don't know how good the filters are though, since they're mostly powered by LLMs...
In your example that's just a pollution of the training set by spam, but that's not that much of an issue in practice, as AI has been better than humans at classifying spam for over a decade now.
If I agree with your definition of hallucinations in the context of LLMs... Then isn't your second paragraph literally just a way to artificially increase the likelihood of them occurring?
You seem to differentiate between a hallucination caused by poisoning the dataset vs a hallucination caused by correct data, but can you honestly make such a distinction considering just how much data goes into these models?
Frankly, hallucination as used with LLMs today is not even really a technical term at all. It literally just means "this particular randomly sampled stream of language produced sentences that communicate falsehoods".
There's a strong argument to be made that the word is actually dangerously misleading by implying that there's some difference between the functioning of a model while producing a hallucinatory sample vs when producing a non-hallucinatory sample. There's not. LLMs produce streams of language sampled from a probability distribution. As an unexpected side effect of producing coherent language these streams will often contain factual statements. Other times the stream contains statements that are untrue. "Hallucination" doesn't really exist as an identifiable concept within the architecture of the LLM, it's just a somewhat subjective judgement by humans of the language stream.
So much mangling of meaning.
Like the “AI” that detects spam is way different than LLMs.
1. a cryptocurrency scheme for funding OSS development[1] is incentivizing spammers to try and monetize NPM spam
2. it's easy to spoof your dependencies with package.json[2]
"dependencies": {
"axios": "https://registry.npmjs.org/@putrifransiska/kwonthol36/-/kwonthol36-1.1.4.tgz"
}
[1]: https://tea.xyz/blog/the-tea-protocol-tokenomics

For example, take mongoose:
"resolved": "https://registry.npmjs.org/mongoose/-/mongoose-8.4.4.tgz",
"integrity": "sha512-Nya808odIJoHP4JuJKbWA2eIaerXieu59kE8pQlvJpUBoSKWUyhLji0g1WMVaYXWmzPYXP2Jd6XdR4KJE8RELw==",
so long as the integrity check passes for the resolved url, npm will happily install it. What you'd want is something like:

    for d in dependencies_from_package_json()
        get_package(d)
        if hash_package(d) != package_lock_hash(d)
            error()
        end
    end
And not: use_package_lock_and_ignore_package_json_lol_fuck_you_haha_kthxbye()
I also discovered that npm doesn't actually verify what's in node_modules when using "npm install". I found this out a while ago after I had some corrupted files due to a flaky internet connection. Hugely confusing. There also doesn't seem to be a straightforward way to check for this (as near as I could tell in a few minutes). But luckily "npm audit" will warn us about 30 "high severity" ReDoS "high impact" "vulnerabilities" that can never realistically be triggered and aren't really "vulnerabilities" in the first place, let alone "high impact" ones.
You just demonstrated the uglier, package-manager-dependent alternative: overrides (npm) / resolutions (yarn). Because for whatever reason they couldn't play nice with each other.
npmjs.com seems to be interpreting the field incorrectly, but 1) AIUI that does not affect actual npm usage, and 2) if you rely on that website for supply-chain-security input, I have a bridge to sell you... Basically all the manifest metadata is taken as-is, and if the facts are important they should be separately verified out-of-band. Publishers can arbitrarily assign unassociated authors, repo URLs, and so on.
https://docs.npmjs.com/cli/v9/configuring-npm/package-json#o...
https://classic.yarnpkg.com/lang/en/docs/selective-version-r...
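For reference, the two fields being compared, assuming you want to force a single version of a transitive dependency (the version number is illustrative):

```json
{
  "overrides": {
    "axios": "1.7.2"
  },
  "resolutions": {
    "axios": "1.7.2"
  }
}
```

npm (8.3+) reads "overrides", classic Yarn reads "resolutions", and each ignores the other's field, which is exactly the "couldn't play nice with each other" complaint.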
But following the links was fun and educational:
"The end goal here [of the Tea protocol] is the creation of a robust economy around open source software that accurately and proportionately rewards developers based on the value of their work through complex web3 mechanisms, programmable incentives, and decentralized governance."
Which lead to:
"The term cobra effect was coined by economist Horst Siebert based on an anecdotal occurrence in India during British rule. The British government, concerned about the number of venomous cobras in Delhi, offered a bounty for every dead cobra. Initially, this was a successful strategy; large numbers of snakes were killed for the reward. Eventually, however, people began to breed cobras for the income. When the government became aware of this, the reward program was scrapped. When cobra breeders set their snakes free, the wild cobra population further increased."
Which lead to:
"Goodhart's law is an adage often stated as, 'When a measure becomes a target, it ceases to be a good measure.'"
I reported some of them as spam, but there were hundreds of them. I couldn't figure out why somebody would waste the time to do that, but now it makes sense.
Package managers often come with rating systems. npmjs has weekly downloads, pull requests, and other popularity scores.
I am a layman in AI, but why would anyone think this would affect anything, AI included? Why would anyone train on a no-name package that no one uses?
Spam packages can end up with higher-than-zero stats, but that also makes them vulnerable to a sweeping removal of all potential spam packages, since they are connected, etc. etc.
Any credible company will not use a no-name spam package without verifying its contents. That, at least, is what happened at every company I have worked for.
…almost certainly for the same reason that any “train AI using only good data, reduce hallucinations!” suggestion is in the “daydream” rather than “great idea” category.
Creating high quality filtered datasets is enormously more time consuming and expensive than just dumping in everything you can get your hands on.
It seems obvious to ignore packages that are obviously unused spam, but tl;dr: no idiot is going to be pouring spam into npm unless there's some kind of benefit from it: people accidentally using it, mixing it into the dependency tree of legit packages, etc.
It's more likely that the successful folk doing this aren't being caught, and the ones being caught are "me too" idiots. Or, the spam is working and people are actually (for whatever incomprehensible reason) using at least some of the packages.
TL;DR: if dependency auditing and supply chain attacks were trivial to solve, they wouldn't be a problem.
…but based on the fact that we continue endlessly to see these issues, you can assume it's probably more difficult to solve than it trivially appears.
Luckily, nobody thinks that tea ranking matters, except for the spammers themselves.
They are no doubt attempting to poke at other, more established metrics as well. This could eventually fool an AI, or even humans.
Not that I disagree, but in the same line of thinking: why would anyone train an LLM on some random blog written in broken English? Why would you train an LLM on the absolute dumpster fire that is Reddit comments? And why are my GitHub repos with half-finished projects and horribly insecure coding practices being used as input to Copilot? Yet here we are, with LLMs writing broken, insecure code (just like a real person) and telling people to eat rocks.
The real fun would happen if the next incentive is to publish a package and get Github stars for that repo :-)
Completely terrifying to me.
Maybe the next step is to sell the control of all these packages to a rogue entity to be used for a supply chain attack?
A secured registry is long overdue, where every release gets an audit report verifying the code and authorship of the new release. It won't be nearly as fast as regular NPM package development, but that's a good thing: this is intended for LTS versions used in long-term software. It'd be a path to monetization as well, since the entities using a service like this are enterprise software shops, and both the author(s) of the package and the party producing the audit report would get a share.
Microsoft did exactly that (since they own both NPM and Github) by allowing you to verify the provenance of NPM packages built using Github Actions [1]. It's not required for all packages though. They've also started requiring all "high impact" packages to use two factor authentication [2].
[1] https://github.blog/security/supply-chain-security/introduci...
[2] https://github.blog/changelog/2022-11-01-high-impact-package...
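For reference, the relevant CLI surface (both commands exist in recent npm versions; the exact minimum versions below are from memory and worth double-checking):

```
# In CI, attach a provenance attestation when publishing (npm 9.5+):
npm publish --provenance

# As a consumer, verify registry signatures and provenance attestations
# for everything installed in the current project (npm 8.15+):
npm audit signatures
```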
Actually no, I just wonder why no one takes seriously these types of risks.
Supply chain attacks are a thing nowadays, but no one really cares; 6 months ago we had the xz attack, but basically no one remembers it today.
The pulling in of unexpected dependent packages is a real issue though. How do other ecosystems deal with it? NPM is really missing some level of trust beyond just using "brand name" packages.
My general judgement is usually based on how often it's worked on and how many downloads it has, but gut feel isn't really enough, is it?