That said, this is an important question. We, particularly those of us who work on critical infrastructure or software, should be asking ourselves this regularly to help prevent this kind of attack.
Note that it's also easy (and similarly catastrophic) to swing too far the other way and approach all unknowns with automatic paranoia. We live in a world where we have to trust strangers every day, and if we lose that option completely then our civilization grinds to a halt.
But vigilance is warranted. I applaud the engineers who followed their instincts and dug into this. They did us all a huge service!
EDIT: wording, spelling
The current situation is ridiculous - if I pull in a compression library from npm, cargo or Python, why can that package interact with my network, make syscalls (as me) and read and write files on my computer? Leftpad shouldn’t be able to install crypto ransomware on my computer.
To solve that, package managers should include capability based security. I want to say “use this package from cargo, but refuse to compile or link into my binary any function which makes any syscall except for read and write. No open - if I want to compress or decompress a file, I’ll open the file myself and pass it in.” No messing with my filesystem. No network access. No raw asm, no trusted build scripts and no exec. What I allow is all you get.
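The "I'll open the file myself and pass it in" pattern already works at the API level, even without any enforcement: a library that accepts open file objects (or raw bytes) instead of paths never needs filesystem access of its own. A minimal sketch using Python's standard `lzma` module (the `compress_stream` helper is hypothetical, written for illustration):

```python
import io
import lzma

def compress_stream(src, dst, chunk_size=64 * 1024):
    # The caller opens src and dst; this function only read()s and
    # write()s the objects it was handed, so it needs no ability to
    # open files, touch the filesystem, or reach the network.
    comp = lzma.LZMACompressor()
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(comp.compress(chunk))
    dst.write(comp.flush())

# Usage: the only "capability" handed to the library is the open file object.
src = io.BytesIO(b"hello hello hello")
dst = io.BytesIO()
compress_stream(src, dst)
assert lzma.decompress(dst.getvalue()) == b"hello hello hello"
```

The open file object is itself a capability in miniature: holding it grants read/write on that one resource and nothing else.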
The capability should be transitive. All dependencies of the package should be brought in under the same restriction.
In dynamic languages like (server side) JavaScript, I think this would have to be handled at runtime. We could add a capability parameter to all functions which issue syscalls (or do anything else that’s security sensitive). When the program starts, it gets an “everything” capability. That capability can be cloned and reduced to just the capabilities needed (think OpenBSD's pledge). If I want to talk to redis using a 3rd party library, I pass the redis package a capability which only allows it to open network connections. And only to this specific host on this specific port.
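A toy sketch of the clone-and-reduce idea (all names here are hypothetical, and a real system would need language-level enforcement so libraries can't just bypass the object):

```python
import socket

class NetCapability:
    """Toy capability object: grants connect() only to an allow-list."""
    def __init__(self, allowed):          # allowed: set of (host, port) pairs
        self.allowed = frozenset(allowed)

    def restrict(self, allowed):
        # Attenuation: a child capability may only narrow, never widen.
        child = frozenset(allowed)
        if not child <= self.allowed:
            raise PermissionError("cannot widen a capability")
        return NetCapability(child)

    def connect(self, host, port):
        # Check the allow-list before any socket is ever created.
        if (host, port) not in self.allowed:
            raise PermissionError(f"connect to {host}:{port} not granted")
        return socket.create_connection((host, port))

# Program start: a capability covering everything this program needs...
root = NetCapability({("redis.example.com", 6379), ("db.example.com", 5432)})
# ...attenuated before being handed to the third-party redis client:
redis_cap = root.restrict({("redis.example.com", 6379)})
```

A compromised redis library holding only `redis_cap` can neither widen it back out nor connect anywhere but that one host and port.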
It wouldn’t stop all security problems. It might not even stop this one. But it would dramatically reduce the attack surface of badly behaving libraries.
It hijacks a process that has network access at runtime, not at build time.
The build-time hack grabs files from the repo and inspects build parameters, which looks benign in itself (every build script checks whether you are running on platform X, etc.).
Another way of thinking about the problem is that right now every line of code within a process runs with the same permissions. If we could restrict what 3rd party libraries can do - via checks either at build time or runtime - then supply chain attacks like this would be much harder to pull off.
So part of the problem is the IFUNC mechanism, which has its valid uses but can be easily misused for all sorts of attacks.
The rule we need is "If I pull in library X with some capability set, then X can't do anything not explicitly allowed by the passed set of capabilities". The problem in C is that there is currently no straightforward way to firewall off different parts of a Linux process from each other. And dynamic linking on Linux is done by gluing together compiled artifacts - with no way to check or understand what assembly instructions any of those parts contain.
I see two ways to solve this generally:
- Statically - i.e. at compile time, the compiler annotates every function with the set of permissions it (recursively) requires. The program fails to compile if a function is called which requires permissions that the caller does not pass it. In Rust, for example, I could imagine cargo enforcing this for Rust programs. But I think it would require some changes to the C language itself if we want to add capabilities there. Maybe some compiler extensions would be enough - but probably not, given a C program could obfuscate which functions call which other functions.
- Dynamically. In this case, every Linux system call is replaced with a new version which takes a capability object as a parameter. When the program starts, it is given a capability by the OS, and it can then derive child capabilities to pass to different libraries. I could imagine this working in Python or JavaScript. But for this to work in C, we need to stop libraries from just scanning the process's memory and stealing capabilities from elsewhere in the program.
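The static variant can be approximated even in a dynamic language. Here is a toy checker (names and the `SENSITIVE` mapping are invented for illustration) that walks a module's syntax tree with Python's `ast` module and flags calls to security-sensitive functions not covered by the granted capability set:

```python
import ast

# Toy mapping from sensitive call names to the capability they require.
SENSITIVE = {"open": "fs", "exec": "exec", "eval": "exec", "connect": "net"}

def check_capabilities(source, granted):
    """Return (name, needed_capability) for every call the grant doesn't cover."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else \
                   fn.attr if isinstance(fn, ast.Attribute) else None
            needed = SENSITIVE.get(name)
            if needed and needed not in granted:
                violations.append((name, needed))
    return violations

lib_source = """
def decompress(data):
    return open('/etc/passwd').read()  # sneaky filesystem access
"""
# Granting only the "net" capability flags the open() call:
print(check_capabilities(lib_source, granted={"net"}))  # → [('open', 'fs')]
```

This also shows why the obfuscation objection bites: a real library could hide the call behind `getattr`, aliasing, or a C extension, which is exactly why a watertight version needs language or OS support rather than a lint pass.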
We should also be asking ourselves if we are working on critical infrastructure. Lasse Collin probably did not consider liblzma being loaded by sshd when vetting the new maintainer. Did the xz project ever agree to this responsibility?
We should also be asking ourselves whether each dependency of critical infrastructure is worth the risk. sshd linking libsystemd just to write a few bytes into an open fd is absurd. libsystemd pulling in liblzma because hey, it also does compressed logging, is absurd. Yet this kind of absurd dependency bloat is everywhere.
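The "few bytes into an open fd" point is literal: the sd_notify readiness protocol is just a datagram sent to the unix socket named in the `NOTIFY_SOCKET` environment variable, which a daemon can do without linking libsystemd at all. A dependency-free sketch (in Python here for brevity; the C equivalent is similarly short):

```python
import os
import socket

def sd_notify(message=b"READY=1"):
    """Minimal stand-in for libsystemd's sd_notify(): send one datagram
    to the unix socket systemd names in $NOTIFY_SOCKET."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under systemd; notification is a no-op
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(message, addr)
    return True
```

Had sshd patches used something like this instead of linking the whole library, liblzma would never have been in its address space.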
Enough to be cautious, enough to think about how to catch bad actors, not so much as to close yourself off and become a paranoid hermit.
North America is only about 5% of the world's population. [1] (We can assume that malicious actors are in North America, too, but this helps to adjust our perspective.)
The percentage of maliciousness on the Internet is much higher.
[1] See continental subregions: https://en.wikipedia.org/wiki/List_of_continents_and_contine...
> The percentage of maliciousness on the Internet is much higher.
A baseless assumption.
I've been evil, been wonderful, and indifferent at different stages in life.
I have known those who have done similar for money, fame, and boredom.
I think that, given a backstory, incentive, opportunity, and resources, it would be possible for most people to flip from "wouldn't" to "enlisted".
Leverage has been shown to be the biggest lever when it comes to compliance.