The neat thing about this particular problem is that you can do some really coarse things and get some immediate benefit. Capabilities in their original form, and perhaps their truest form, carry down the call stack, so that in the most sophisticated implementations code can say things like "restrict everything I call so that it can only append to files in this specific subtree". But you could do something coarser with libraries and just say things like "These libraries can not access the network", and get big wins from some simple assertions. If you're a library for turning jpegs into their pixels, you don't need network access, and with only a bit more work, you shouldn't even need filesystem access (get passed files if you need them, but no ability to spontaneously create files).
This would not be a complete solution, or perhaps even an adequate solution, but it would be a great bang-for-the-buck solution, and a great way to step into that world and immediately get benefits without requiring the entire world to immediately rewrite everything to use granular capabilities everywhere.
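A minimal sketch of what that jpeg-library contract could look like (Java-flavored; the JpegDecoder name and API are hypothetical): the decoder only ever sees bytes it was handed, so its signature alone rules out network access and spontaneous file creation.

    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical capability-friendly decoder API: no path or URL
    // parameters, so the library has nothing of its own to open.
    // The caller opens the file and passes the stream in.
    public interface JpegDecoder {
        int[] decodeToPixels(InputStream jpegBytes) throws IOException;
    }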
It is insane to me that in 2025 there is no easy way for me to run a program that, say, "can't touch the filesystem or network". As you say, even a few simple, very coarse-grained categories of capabilities would be sufficient for 95% of cases.
https://www.freedesktop.org/software/systemd/man/latest/syst...
https://www.freedesktop.org/software/systemd/man/latest/syst...
Sure, the command line gets a bit verbose, but that's nothing an alias or small wrapper couldn't solve.
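For example, a tiny wrapper along these lines (an untested sketch; the -p options are real systemd.exec directives, though user-manager sandboxing may additionally need PrivateUsers=yes on some setups):

    # ~/.bashrc: run a command with no network and a read-only $HOME
    sandboxed() {
        systemd-run --user --pipe \
            -p PrivateNetwork=yes -p ProtectHome=read-only "$@"
    }

    sandboxed ./convert-images photo.jpg   # network calls inside now fail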
The big problem is that modern operating systems have a huge surface area and applications tend to expect all sorts of things, so figuring out what you need to allow is often non-trivial.
Well, OpenBSD has pledge(2) and unveil(2), both of which are very easy to use.
https://man.openbsd.org/pledge.2 https://man.openbsd.org/unveil.2
Like, I get it for a few things. But it is a short path from there to wanting access to files I created with other applications. Pictures and videos are the obvious files that I likely want to access from applications that didn't create them.
Similarly, it is a short hop from "I don't want any other application to be able to control my browser" to "except for this accessibility application, and this password management application, and ..." As we push more and more to happen in the browser, it should be little surprise that we need more and more things to interact with said browser.
But to answer your question: there are, e.g., tons of programming packages in any language that I want purely for their computational abilities, and I know this for certain when using them. In fact, for the vast majority of GUI programs I use, or programming packages I use, I know exactly what kind of permissions they need, and yet I cannot easily restrict them in those ways.
Why does “ping” need to have file system access?
Something dangerous like ffmpeg would be better if the codecs were running without access to files or the network, although you'd need a not fully sandboxed process to load the media in the first place.
Many things do need file access, but could work well with an already opened fd, rather than having to open things themselves (although forcing that results in terrible UX).
Of course, filesystem access gets tricky because of dynamic loading, but let's pretend that away for now.
Maybe if rsync were better designed, exploits could be better contained; alas, there was a recent whoopsiedoodle (an error, as Dijkstra would call it), and rsync can read from and write to a lot of files, do internet things, and execute whatever programs. A great gift to attackers.
It may help if the tool does one thing and one thing well (e.g. the unix model, as opposed to the "I can't believe it's not bloat!"™ model common elsewhere) as then you can restrict, say, ping to only what it needs to do, and if some dark patterner wants to shove ads, metrics, and tracking into ls(1) how about a big fat Greek "no" for those network requests. It may also help if the tool is designed (like, say, OpenSSH) to be well partitioned, and not (like, say, rsync) to need the entire unix kitchen.
Image libraries have had quite a few CVEs, or whoopsiedoodles, over the years, so there are good arguments to be made for not allowing those portions of the code access to the network and filesystem. Or how about a big heap of slow and expensive formal verification… what's that, someone with crap security stole all your market share? Oh, well. Maybe some other decade.
A non-zero number of people feel that "active content", e.g. the modern web, is one of the worst security missteps made in the last few decades. At least Flash was gotten rid of. So many CVEs.
P.S. Web browsers have always sucked at text editing, so this was typed up in vi, yielding a file for w3m to read. No, w3m can't do much of anything besides access the internet and a few narrow bits of the filesystem. So, for me, web browsers are very much in the "don't want to access the filesystem" category. I can also see arguments for them not having (direct) access to the network, to avoid mixing the "parse the bodge that is HTML and pray there are no exploits" code with the "has access to the network" bits of the code, but I've been too lazy to write that as a replacement for w3m.
Deno took a step in a good direction, but it was an opportunity to go much further and meaningfully address the problem, so I was a bit disappointed that it just controlled restrictions at the process level.
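For context, the process-level controls it did ship look like this (the flags are real Deno permissions; the directory and host names here are invented):

    # Grant read access to one directory and network access to one host;
    # everything else is denied by the runtime.
    deno run --allow-read=./data --allow-net=api.example.com main.ts

The grants apply to the whole process, though: every dependency pulled in by main.ts gets the same permissions, which is exactly the per-library attenuation that's missing.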
https://en.m.wikipedia.org/wiki/Security-Enhanced_Linux
To enable this at the programming level would require an enforcement mechanism at the level of a language VM or OS. It would require more overhead to enforce at that level, but the safety benefits within a language may be worth it.
Qubes OS has existed for more than 10 years already. It's my daily driver; I can't recommend it enough.
Or even, for that matter, some sane capabilities for browser extensions.
"These libraries can not access the network." No. "These libraries have not been given access to the network (and by default none are given access)."
From an implementation perspective, this is just passing in access rights as "local" resources instead of using "global" resources. For instance, it is self-evident that other code cannot use your B-Tree local variable if you did not pass a reference to it to any called functions (assuming no arbitrary pointer casts). You just do the same with these "resources". It is just passing things to functions instead of relying on globals. The only difficulty is making these actions/resources "passable", which is trivial at the language level, and "fine-grained/divisible", to avoid over-granting.
Capabilities taken literally are more of a network thing (they're how you prove you have access to a computer that doesn't trust you). Within a language, you don't need the capabilities themselves.
Imagine these changes to a language, making it "capability safe":
- There is a `Network` object, and it's the only way to access the network. Likewise, there's a `Filesystem` object, and it's the only way to access the file system.
- Code cannot construct a `Network` or `Filesystem` object. Like, you just can't, there's no constructor.
- The `main()` function is passed a `Network` object and a `Filesystem` object.
Consider the consequences of this. The log4j vulnerability involved a logging library doing network access. In a capability safe language, this could only happen if you passed a `Network` object to the logger, either in its constructor or in one of its methods. So the vulnerability is still possible. But! Now you can't be surprised that your logger was accessing the network because you gave it the network. And a logger asking for network access is sufficiently sketchy that log4j almost certainly would have made network access optional for the few users that wanted that feature, which would have prevented the vulnerability for everyone else!
I talked about there being a single monolithic `Filesystem` object. That is what would be passed into `main()`, but you should also be able to make finer-grained capabilities out of it. For example, you should be able to use the `Filesystem` object to construct an object that has read/write access to a directory and its subdirectories. So if a function takes one of these as an argument, you know it's not mucking around outside of the directory you gave it.
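A rough Java rendering of that design (class names hypothetical, and Java itself can't enforce this, since ambient APIs like java.net stay reachable, so treat it as a sketch of the shape rather than an enforcement mechanism):

    import java.nio.file.Path;

    // Hypothetical capability objects: no public constructor, so the only
    // way to get one is to be handed one by more-trusted code (here, main()
    // plays the role of the trusted bootstrap the runtime would provide).
    final class Network {
        private Network() {}
        static Network bootstrap() { return new Network(); }
    }

    final class Filesystem {
        private final Path scope;
        private Filesystem(Path scope) { this.scope = scope; }
        static Filesystem bootstrap() { return new Filesystem(Path.of("/")); }

        // Attenuation: mint a strictly weaker capability for one subtree.
        Filesystem scopedTo(Path dir) {
            Path sub = scope.resolve(dir).normalize();
            if (!sub.startsWith(scope)) throw new IllegalArgumentException("escapes scope");
            return new Filesystem(sub);
        }
        Path scope() { return scope; }
    }

    public class App {
        // The "capability-safe main()": everything dangerous arrives as an argument.
        static void run(Network net, Filesystem fs) {
            Filesystem logDir = fs.scopedTo(Path.of("var/log/myapp"));
            // A logger handed only logDir (and no Network) visibly cannot phone home.
            System.out.println("logger confined to " + logDir.scope());
        }

        public static void main(String[] args) {
            run(Network.bootstrap(), Filesystem.bootstrap());
        }
    }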
Capability safety within a language is stronger than the sort of capability safety you can get at the process level or at the level of multiple machines. But we want those too!
Theseus OS is a research project that created a single-address-space kernel that loads dynamic libraries written in safe-only Rust as "untrusted entities", compiled with a trusted compiler that forbids unsafe. If they have no access to unsafe, and you're not giving them functions to link to that would hand them these a-capability-if-you-squint objects, they're supposedly sandboxed.
Tomcat also gave different privilege levels to different parts of the classpath.
A nice idea, but it didn't seem to work in practice.
Yeah, this is not just an effect type system, it's a static effect type system.
You may be thinking of the term in a different context. In this context, they are a general security concept and definitely apply to more than the network, including languages:
https://en.wikipedia.org/wiki/Capability-based_security
http://habitatchronicles.com/2017/05/what-are-capabilities/ (this is a great article)
etc...
Why not? Sometimes I want that. Just today I was looking at some code and realized that it was being used by groups that shouldn't be using it. The code needs to be there and is great for the intended users, but someone was going around our architecture and we didn't notice.
This of course also depends on some related static guarantees that a function can't access ambient capabilities not passed directly in its arguments, but this is a much simpler static guarantee than effect typing and doesn't require specific analysis over normal parameter types.
So while I agree that language integration is really useful, I think you can get a lot out of appropriate support in the runtime, most notably the library loader.
As for transitive deps, I have some hopes for more "distro"-like models arising. In particular, I see clear parallels between how traditional Linux distros work and how large FAANG-style monorepos work. So maybe we will see more of these sorts of platforms/collections of curated libraries that avoid deep transitive dep trees by hoisting all the deps into one cohesive global dependency set that has at least some more eyes on it.
> 1. Performance. [...] Unix-esque shared memory, though much faster, is far too difficult to use reliably for untrusted components.
> 2. Expressivity. [...] without descending into the horrors that RPC (Remote Procedure Call) tends to descend to.
The author is asking how we can do better and get this popular and used all over, not what the current state of the art is (when that SOTA is a notoriously large and complex program that your average programmer has never looked inside of).
https://chromium.googlesource.com/chromium/src/+/master/mojo...
In the past I used the Java SecurityManager to do a PoC PDF renderer that sandboxed the Apache PDFbox library. I lowered the privilege of the PDF rendering component and tested it with a PDF that exploited an old XXE vulnerability in the library. It worked! Unfortunately, figuring out how to do that wasn't easy and the Java maintainers got tired of maintaining something that wasn't widely used, so they ripped it out.
There are some conceptual and social difficulties with this kind of security, at some point I should write about them.
1. People get distracted by the idea of pure capability systems. See the thread above. Pure caps don't work; they've been tried, and they're an academic concept that nothing real uses but which sucks up energy and mental bandwidth.
2. People get distracted by speculation attacks. In-process sandboxing can't stop sandboxed code from reading the address space, so this is often taken as meaning everything has to be multi-process (like Mojo). But that's not the case. Speculation attacks are read-only attacks. To do anything useful you have to be able to exfiltrate data. A lot of libraries, if properly restricted, have no way to exfiltrate any data they speculatively access. So in-process sandboxing can still be highly useful even in the presence of Spectre. At the same time, some libraries DO need to be put into a separate address space, and it may even change depending on how a library is used. Mojo is neat partly because it abstracts location, allowing components to run either inproc or outproc and the calling code doesn't know.
3. People get distracted by language neutrality. Once you add IPC you need RPC, and once you add RPC it's easy to end up making something that's fully language neutral so it turns into a fairly complex IDL driven system. The SecurityManager was nice because it didn't have this problem.
4. Kernel sandboxing is an absolute shitshow on every OS. Either the APIs are atrocious and impossible to figure out, or they're undocumented, or both, and none of them provide sandboxing at the right level e.g. you can't allow a process to make HTTP requests to specific hosts because the kernel doesn't know anything about HTTP.
Sandboxing libraries effectively is still, IMHO, an open research problem. The usability problems need someone to crack them, and the solution will probably be language/runtime-specific when they do, even if libraries like Mojo are sitting there under the hood.
I’m one of those “distracted” people who has never gone far enough with a capabilities-based language to figure out why they don’t work.
I read a lot about E and tinkered with Pony, and it seems that the biggest barrier is bootstrapping/network effects (e.g. lack of available libraries and expertise) rather than something intrinsic to a capability-oriented security model.
In contrast, I only just learned about Java's SecurityManager from an earlier comment on this post. Reading the documentation at https://docs.oracle.com/javase/8/docs/api/java/lang/Security... leaves me with an impression of more complexity for integrators (as you noted) and a need for vigilance at system boundaries by runtime implementors. Building on reference capabilities seems much, much simpler, but maybe that's just my lack of experience?
> There are some conceptual and social difficulties with this kind of security, at some point I should write about them.
I guess my comment is a long way of saying: please do!
The problem with a pure caps system is that basically all software expects to have some ambient permissions. Changing that assumption not only breaks every API of every library, but reduces usability by a lot too. For instance, it's commonplace for libraries to support tweaks to their behavior via environment variables or (in Java) system properties, but those could also contain secrets, so that has to be a permission. In a pure capability design there's no way to grant a library the permissions it needs statically; instead, a higher-privileged component has to either do the read itself and pass the answer to the lower-privileged part, or wrap its power into a capability object that exposes only what's needed. But then who writes that object? It's boilerplate, so it'd make sense for it to come from the library itself, but, oh, that doesn't work, because it's that code that's enforcing the security boundary. The user of the library has to write the code (or copy/paste and review it).
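As a sketch of that "wrap its power into a capability object" path (the Settings interface and all names here are invented), the higher-privileged component ends up writing something like this instead of letting the library call System.getenv() ambiently:

    import java.util.Map;
    import java.util.Optional;

    // Hypothetical narrow capability: read-only access to whitelisted
    // settings, instead of ambient System.getenv() over everything
    // (secrets included).
    interface Settings {
        Optional<String> get(String key);
    }

    class Host {
        // Written by the higher-privileged component, not by the library:
        // it exposes exactly the one variable the library legitimately needs.
        static Settings forLoggingLib() {
            Map<String, String> allowed = Map.of(
                    "MYLIB_LOG_LEVEL",
                    System.getenv().getOrDefault("MYLIB_LOG_LEVEL", "info"));
            return key -> Optional.ofNullable(allowed.get(key));
        }
    }

And as noted above, this boilerplate has to live with the library's user, because the code enforcing the boundary can't come from the thing being confined.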
The SecurityManager design had some flaws but was basically a correct design for sandboxing most libraries (those that can't exfiltrate data obtained via side channel attacks):
1. You could assign libraries static sets of permissions, or not.
2. You could build strong capabilities using a mix of taking away a library's reflection permissions and AccessController.doPrivileged, which terminated the intersection stack walk.
For instance, imagine a library that reads a config file (e.g. for logging configuration). There are two ways to handle this. You could change the library API so it requires a file capability. Java lets you do that: File[Channel] objects are capability objects. But this is tedious for the developer. Now, to use the library, they have to write boilerplate to open the config file themselves so they can pass the opened capability in, which in turn means that basics like the location of the file can't be encapsulated in the library itself.
Alternatively, you can just ship the library with a permissions file written in a simple DSL that has a line like:
grant my-lib open ~/.config/my-lib/logging.json
Now the library API doesn't have to change, the developer doesn't have to open the file for the library, and the permissions can evolve alongside the evolution of the logging feature itself. The library user only has to review the permissions file when it changes, if they care about sandboxing, or ignore it if they don't. This is much more usable!

Capabilities in this model are still useful, but are the preserve of some special cases. For instance, complex permission checks can be done once and then encapsulated into a capability object (this is how file capabilities work: all the authorization logic is on the open path, and calling read or write is a much cheaper check). And of course APIs can be extended smoothly to support this. Any time you have an API like:
    void setConfigFilePath(String path);

you add another method:

    void setConfigFile(File file);

and make the first version delegate to the second.
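Spelled out as a compilable sketch (the LoggingConfig class is invented for illustration):

    import java.io.File;

    class LoggingConfig {
        private File configFile;

        // Old, permission-hungry entry point: the library resolves the
        // path itself, so it needs file permissions of its own.
        void setConfigFilePath(String path) {
            setConfigFile(new File(path)); // delegate to the capability-taking overload
        }

        // New entry point: the caller hands the file in, capability-style.
        void setConfigFile(File file) {
            this.configFile = file;
        }
    }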
Now the library can have its permissions removed and be given a file capability instead, if that makes more sense.

This vulnerability seemed so apparent, though. I used to object to all the dependencies we were adding, with no idea how they were implemented, who implemented them, or why. I always got pushback that I was holding the team back from moving quickly and using the latest frameworks. It is important to consider that many of these security issues derive from the methods new developers are taught: strap together a bunch of libraries that you only use 5 to 10% of, resulting in a massive application with tons of unknown threats (and poor performance, by the way). I mostly blame the largest tech companies, like Google and Meta, that turn out large frameworks that are overkill for 90% of the development out there. New developers hold these companies in extremely high regard and consider their technology to be state of the art in every sense. Yes, it was state of the art for solving the problems Google and Meta needed solved, but the adoption of these technologies by small startups and other companies has made the dependency explosion problem endemic.
The worst violators of this principle that I've noticed are web development frameworks like Rails, React, etc. Further, it is ironic that these platforms, the web platform in particular, are promoted as more secure relative to the old ActiveX model of running binaries directly in the browser. However, I would rather run a trusted binary with 5k lines of code that my team has fully vetted in a browser or app than 1200 libraries and millions of lines of code accomplishing the same task.
Perhaps this is a good use for AI: scanning the source code of library dependencies for actual or potential security threats. Another solution would be to break libraries out into smaller components that perform specific functional tasks; these would be easier to validate and would also result in smaller applications. Obviously there is no easy solution, and running a binary in your browser is not a great solution either. However, we as developers need to weigh the trade-off between the danger of, say, running a compact native app and the "safety" of using jack-of-all-trades frameworks that include millions of lines of code.
Longer term, I think our hopes have to rest on either cheaper MMU transitions between processes or huge numbers of cheap cores.
And for both of those, we'll need fast message passing. Either good frameworks for shared-memory-based message passing using today's tech, ones that guide people away from TOCTOU attacks; or hardware support for message passing, for example some sort of MMU ownership transfer / write-once memory sealing / read-once-into-local-memory / pass-ownership-of-cacheline feature that makes it easier to implement securely. Or, for the lots-of-cheap-cores scenario, some kind of hardware messaging primitive between cores.
They say systems research is dead; I think it's just lacking funding, because all of the above sounds very much like a revival of things that were being researched earlier, with Barrelfish etc. The real challenge is changing the mainstream hardware. I find it hard to believe AMD isn't devoting a 5-person team to this sort of stuff, in the hope of discovering a "small feature" that could change the world of software. Or one of the RISC-V shops, though they have enough challenges ahead of them already.
The (vaporware) Mill CPU design has "portals" that are like fat function calls with byte-granularity MMU capability protection to limit mutual exposure between untrusting bits of code on opposite sides of the portal. Think of them as cheap function-call-like syscalls into your kernel, but also usable for microkernel boundaries and pure userspace components.
https://www.youtube.com/watch?v=5osiYZV8n3U
Of course, we can't have nice things and are stuck with x86/ARM/RISC-V, where it seems nothing better than CHERI is realistic and such security boundaries will suffer relatively enormous TLB-switching overheads.
Maybe the Rust Foundation should take over more of those fundamental crates when maintainers are not willing or able to continue working on them. A similar problem happened with the zip crate, which was transferred rather quickly to a new maintainer, and most people still use a very old pre-transfer version.
The idea is that if everyone does the legwork of checking their dependencies, you can trust your dependencies because they checked theirs.
It is still trust, but it replaces the implicit "hey, are you sure you checked your dependencies, and didn't just npm install a library some kid from Alaska created, who pulled in his dependency from a kid in Montenegro?"
Including random libraries just because we can and they had enough stars on GitHub was already a bad idea; nowadays it is becoming an offense, and rightly so.
For example, by trusting several big players such as the Rust Foundation, Servo, Mozilla, Google, Facebook, etc. (developers will decide for themselves whom exactly they trust), who would manually review the dependencies they use, we would be able to cover the most important parts of the ecosystem, and developers would be able to review the more minor dependencies in their projects themselves. cargo-crev + cargo-audit come somewhat close to what is needed; everything else is a matter of adoption.
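For reference, the workflow those tools give you today is roughly this (from memory, so double-check the docs):

    cargo install cargo-audit
    cargo audit        # check Cargo.lock against the RustSec advisory database

    cargo install cargo-crev
    cargo crev verify  # compare your dependency tree against reviews from your web of trust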
Capabilities and other automated tools can help immensely with manual reviews, but can not replace them completely, especially in compiled languages like Rust.
> It’s easy for me to forget that the trust I place in those 20 direct dependencies is extended unchanged to the 161 indirect dependencies.
Unfortunately, when people mention such numbers they commonly do not account for the difference between "number of libraries in the dependency tree" and "number of groups responsible for the dependency tree". In practice, the latter number is often significantly smaller than the former. In other words, many projects (at least in the Rust world) lean towards the "micro-crate" approach, meaning that one group may be responsible for 20 crates in the dependency tree, which does not make the security risks 20x bigger.
We take open source dependencies and:
- Patch vulnerabilities, including in transitive dependencies, even for old or EoL versions that teams may be on
- Continuously vet newly published versions for malware, and don't bring them into the registry if any is found
- Inline dependencies that don't need to be separate packages (e.g. folding trim-newlines, a 1-line NPM package, into a parent package) to simplify dependency trees
This is then available to developers as a one line change to switch to in e.g. package.json. Once you switch, you no longer need to manually review packages or do any of this scanning/upgrading/vulnerability management work, since you can trust and set policies on the registry.
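Concretely, the switch can be as small as the registry override npm already supports, e.g. an .npmrc line like this (the URL is a placeholder, not our actual endpoint):

    # .npmrc: point npm at the curated registry instead of the public one
    registry=https://registry.example-curated.dev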
We're in the very early days and working with a few future-minded developers to get feedback on the design. If you're interested, I'd love to share more! Please email me at neil@syntra.io
If I may suggest something: I think you should consider opening up some parts of your product, e.g. you could publish your package reviews with an N-month delay and accept public reviews from the community with some vetting process on top.
For example, by adding the aes-gcm crate to your dependencies you pull in a whopping 19 dependencies in total, but 13 of those are part of the RustCrypto org and the remaining 5 are fairly "foundational" (libc, getrandom, cfg-if, typenum, subtle), meaning that the security implications of such a "big" dependency tree are not as bad as they seem at first glance.
Does that disqualify it as a potential path to a solution? How fine-grained would these components realistically need to be?
There is an assumption that your blog and your multi-billion-dollar SaaS should involve transferable skills. It's like expecting the person designing a shack and the person designing the next Fort Knox to use the same plans, materials, and people. Either you get extremely overbuilt shacks, with vault doors, separate HVAC, and OSHA regulations, that take decades to build and cost you several billion dollars, or a Fort Knox where anyone can kick down the vault doors and steal the money.
If your blog or your 72h hackathon game takes on 3000 dependencies, and maybe one of them is malicious (which is low probability), who cares?
If your multi-million-dollar SaaS has 3000 dependencies, yeah, it's time to slim it down. Granted, no one wants to do this, because it costs money and takes time away from shipping another feature.