I'd like to see a formal container security grade that works like:
1) Curate a list of all known (container) exploits
2) Run each exploit in environments of increasing security like permissions-based, jail, Docker and emulator
3) The percentage of prevented exploits would be the score from 0-100%
Under this scheme, I'd expect naive attempts at containerization with permissions and jails to score around 0%, while Docker might be above 50% and Microsandbox could potentially reach 100%. This might satisfy some of our intuition around questions like "why not just use a jail?". Also, the containers could run on a site on the open web as honeypots, with cash or crypto prizes for pwning them, to "prove" which containers achieve 100%.
We might also need to redefine what "secure" means, since exploits like Rowhammer and Spectre may make nearly all conventional and cloud computing insecure. Or maybe it's a moving target, like how 64 bit encryption might have once been considered secure but now we need 128 bit or higher.
Edit: the motivation behind this would be to find a container that's 100% secure without emulation, for performance and cost-savings benefits, as well as gaining insights into how to secure operating systems by containerizing their various services.
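The scoring in step 3 is simple enough to sketch. Here's a minimal outline in Python; the exploit names and pass/fail values below are purely illustrative, not real measurements:

```python
def security_score(results):
    """Compute the 0-100% grade from step 3.

    results: dict mapping exploit name -> True if the environment
    prevented (contained) the exploit, False if it escaped.
    """
    if not results:
        raise ValueError("no exploits were run")
    prevented = sum(1 for contained in results.values() if contained)
    return 100.0 * prevented / len(results)

# Hypothetical outcomes for two environments (illustrative only).
jail_results   = {"dirty_pipe": False, "runc_escape": False, "waitid": False, "spectre": False}
docker_results = {"dirty_pipe": True,  "runc_escape": False, "waitid": True,  "spectre": False}

print(security_score(jail_results))    # 0.0
print(security_score(docker_results))  # 50.0
```

The hard part, of course, is not this arithmetic but curating the exploit corpus and deciding what counts as "prevented" for each one.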
I think it's generally understood that any sort of kernel LPE can potentially (and therefore is generally considered to) lead to breaking all security boundaries on the local machine, since the kernel contains no internal security boundaries. That includes containers, but also everything else, such as user separation, hardware virtualization controlled by the local kernel, and kernel-private secrets.
The only way to make Linux containers a meaningful sandbox is to drastically restrict the syscall API surface available to the sandboxee, which quickly reduces its value. It's no longer a "generic platform that you can throw any workload onto" but instead a bespoke thing that needs to be tuned and reconfigured for every usecase.
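To give a sense of what "drastically restrict the syscall API surface" looks like in practice, here's a sketch of a Docker seccomp profile that denies everything except a tiny allowlist. This is illustrative only; any real workload needs far more syscalls than this, which is exactly the per-use-case tuning burden described above:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group",
                "brk", "mmap", "munmap", "rt_sigreturn", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Applied via `docker run --security-opt seccomp=profile.json ...`, every syscall outside the allowlist returns an error to the sandboxee.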
This is why you need virtualization. Until we have a properly hardened and memory safe OS, it's the only way. And if we do build such an OS it's unclear to me whether it will be faster than running MicroVMs on a Linux host.
For example there is Kata containers
This can be used with regular `podman` by just changing the container runtime, so there's not even any need for extra tooling
In theory you could shove the container runtime into something like k8s
Depends I guess as Android has had quite a bit of success with seccomp-bpf & Android-specific flavour of SELinux [0]
> Until we have a properly hardened and memory safe OS ... faster than running MicroVMs on a Linux host.
Andy Tanenbaum might say microkernels would do just as well.
The only meaningful difference is that Linux containers target partitioning Linux kernel services, which is a shared-by-default/default-allow environment that was never designed for, and has never achieved, meaningful security. The number of vulnerabilities resulting from "whoopsie, we forgot to partition shared service 123" would be hilarious if it were not a complete lapse of security engineering in a product people are convinced is adequate for security-critical applications.
Present a vulnerability assessment demonstrating that a team of 10 with 3 years' time (~$10-30M, comparable to many commercially motivated single-victim attacks these days) can find no vulnerabilities in your deployment, or a formal proof of security and correctness. Otherwise we should stick with the default assumption that software is easily hacked, instead of the extraordinary claim that demands extraordinary evidence.
Basically I'd love to see a giant ablation
Edit: when I say anything, I'm not talking user programs. I mean as in, before even the first instruction of the firmware -- before even the virtual disk file is zeroed out, in cases where it needs to be. You literally can't pause the VM during this interval because the window hasn't even popped up yet, and even when it has, you still can't for a while because it literally hasn't started running anything. So the kernel and even firmware initialization slowness are entirely irrelevant to my question.
Why is that?
You also need to start OS services, configure filesystems, prepare caches, configure networking, and so on. If you're not booting UKIs or similar tools, you'll also be loading a bootloader, then loading an initramfs into memory, then loading the main OS and starting the services you actually need, with each step requiring certain daemons and hardware probes to work correctly.
There are tools to fix this problem. Amazon's Firecracker can start a Linux VM in a time similar to that of a container (milliseconds) by basically storing the initialized state of the VM and loading that into memory instead of actually performing a real boot. https://firecracker-microvm.github.io/
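Concretely, resuming from a stored snapshot is a single request to Firecracker's API socket (`PUT /snapshot/load`). This is a sketch based on the snapshot documentation; the paths are illustrative, and field names have varied between versions (older releases used `mem_file_path` instead of `mem_backend`):

```json
{
  "snapshot_path": "./vmstate_file",
  "mem_backend": {
    "backend_type": "File",
    "backend_path": "./mem_file"
  },
  "resume_vm": true
}
```

The guest resumes from its saved memory and vCPU state rather than going through firmware, bootloader, and kernel init again, which is how the millisecond-range "boot" times are achieved.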
On Windows, I think it depends on the hypervisor you use. Hyper-V has a pretty slow UEFI environment, its hard disk access always seems rather slow to me, and most Linux distros don't seem to package dedicated minimal kernels for it.
I'm saying it takes a long time for it to even execute a single instruction, in the BIOS itself. Even for the window to pop up, before you can even pause the VM (because it hasn't even started yet). What you're describing comes after all that, which I already understand and am not asking about.
In practice virtual machines are trying to emulate a lot of stuff that isn’t really needed but they’re doing it for compatibility.
If one builds a hypervisor which is optimized for startup speed and doesn’t need to support generalized legacy software then you can:
> Unlike traditional VMs that might take several seconds to start, Firecracker VMs can boot up in as little as 125ms.
Windows probably has an equivalent.
I'm using Hyper-V and I can connect through XRDP to a GUI Ubuntu 22 in 10 seconds, and I can SSH into an Ubuntu 22 server in 3 seconds after start.
If I let a VM use most of my hardware, it takes a few seconds from start to login prompt, which is the same time it takes for my Arch desktop to boot from pressing the button to seeing the login prompt.
That's not what I'm asking.
I'm the creator of microsandbox. If there is anything you need to know about the project, let me know.
This project is meant to make creating microvms from your machine as easy as using Docker containers.
Ask me anything.
Error: Sandbox is not started. Call start() first
Is there a suggested way of keeping a sandbox around for longer? The documented code pattern is this:
    async def main():
        async with PythonSandbox.create(name="my-sandbox") as sb:
            exec = await sb.run("print('Hello, World!')")
            print(await exec.output())
Due to the way my code works, I want to instantiate the sandbox once for a specific class and then have multiple calls to it from class methods, which isn't a clean fit for that "async with" pattern. Any recommendations?
There is an example of that here:
https://github.com/microsandbox/microsandbox/blob/0c13fc27ab...
https://docs.python.org/3/library/contextlib.html#contextlib...
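One common way to decouple an async context manager from a single `async with` block is `contextlib.AsyncExitStack` (the second link above). A sketch of the wrapper-class pattern follows; `FakeSandbox` is a self-contained stand-in for `PythonSandbox.create(...)` so the example runs on its own, but the stack usage would be identical with the real SDK:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager


@asynccontextmanager
async def FakeSandbox(name):
    # Stand-in for PythonSandbox.create(name=...): yields a "sandbox"
    # whose run() just echoes the code back.
    class SB:
        async def run(self, code):
            return f"ran: {code}"
    yield SB()


class SandboxHost:
    """Owns a sandbox for its whole lifetime instead of one `async with` block."""

    async def start(self):
        self._stack = AsyncExitStack()
        # Enter the context manager now; its teardown runs later in close().
        self.sb = await self._stack.enter_async_context(FakeSandbox(name="my-sandbox"))

    async def run(self, code):
        return await self.sb.run(code)

    async def close(self):
        await self._stack.aclose()  # triggers the sandbox's normal cleanup


async def main():
    host = SandboxHost()
    await host.start()
    print(await host.run("print('a')"))  # ran: print('a')
    print(await host.run("print('b')"))  # ran: print('b')
    await host.close()

asyncio.run(main())
```

With the real SDK you'd swap `FakeSandbox(name=...)` for `PythonSandbox.create(name=...)` and call `start()`/`close()` from your class's setup and teardown.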
Question: How does networking work? Can I restrict/limit microvms so that they can only access public IP addresses? (or in other words... making sure the microvms can't access any local network IP addresses)
https://github.com/microsandbox/microsandbox/blob/0c13fc27ab...
How is it so fast? Is it making any trade offs vs a traditional VM? Is there potential the VM isolation is compromised?
Can I run a GUI inside of it?
Do you think of this as a new Vagrant?
How do I get data in/out?
It is a lightweight VM and uses the same technology as Firecracker
> Can I run a GUI inside of it?
It is planned but not yet implemented. But it is absolutely possible.
> Do you think of this as a new Vagrant?
I would think of it as Docker for VMs instead. In a similar way, it focuses on DevOps-type use cases like deploying apps, etc.
> How do I get data in/out?
There is an SDK and a server that help with that, and file streaming is planned. But right now, you can execute commands in the VM and get the result back via the server
1. each one should have its own network config, eg so i can use wireguard or a vpn
2. gui pass-through to the host, eg wayland, for trusted tools, eg firefox, zoom or citrix
3. needs to be lightweight. eg gnome-boxes is dead simple to setup and run and it works, but the resource usage was noticeably higher than native
4. optional - more security is better (ie, i might run semi-untrusted software in one of them, eg from a github repo or npm), but i'm not expecting miracles and accept that escape is possible
5. optional - sharing disk with the host via COW would be nice, so i'd only need to install the env-specific packages, not the full OS
i'm currently working on a podman solution, and i believe that it will work (but rebuilding seems to hammer the network - i'm hoping i can tweak the layers to reduce this). does microsandbox offer any advantages for this use case?
This is possible right now but the networking is not where I want it to be yet. It uses libkrun's default TSI impl; it's performant and simplifies setup, but can be inflexible. I plan to implement an alternative user-space networking stack soon.
> 2. gui pass-through to the host, eg wayland, for trusted tools, eg firefox, zoom or citrix
We don't have GUI passthrough. VNC?
> 3. needs to be lightweight. eg gnome-boxes is dead simple to setup and run and it works, but the resource usage was noticeably higher than native
It is lightweight in the sense that it is not a full VM
> 4. optional - more security is better (ie, i might run semi-untrusted software in one of them, eg from a github repo or npm), but i'm not expecting miracles and accept that escape is possible
The security guarantees are similar to what typical VMs support. It is hardware-virtualized so I would say you should be fine.
> 5. optional - sharing disk with the host via COW would be nice, so i'd only need to install the env-specific packages, not the full OS
Yeah. It uses virtio-fs and has overlayfs on top of that for COW.
edit: A fleshed out contributors guide to add support for a new language would help. https://github.com/microsandbox/microsandbox/blob/main/CONTR...
More important is making sandboxing really accessible to AI devs with `msb server`.
Congratulations on the launch!
That said, hosting microVMs requires dedicated hardware or VMs with nested virt support. Containers don't have that problem.
Networking continues to be a pain but I'm open to suggestions.
Then the installation instructions include piping a remote script directly to Bash ... Oh irony ...
That said, the concept itself is intriguing.
I wonder how this compares to Orbstack's [0] tech stack on macOS, specifically the "Linux machines" [1] feature. Seems like Orb might reuse a single VM?
---
Firecracker is no different btw, and E2B uses that for agentic AI workloads. Anyway, I don't have any major plans except to fix some issues with the filesystem rn.
If you run `mount`, you immediately see what I mean. Stuff that should be hidden is now in plain sight, and it destroys the usefulness of simple system commands. And worse, the user can fiddle with the data structures. It's like giving the user peek and poke commands. The idea of containers is nice, but they are a hack until kernels are re-architected.
`findmnt --real`

It's part of util-linux, so it is generally available wherever you have a shell. The legacy tools you have in mind aren't ever going to be changed as you would wish, for reasons.

Microsandbox does not offer a cloud solution. It is self-hosted, designed to do what E2B does: to make it easier to work with microVM-based sandboxes on your local machine, whether that is Linux, macOS or Windows (planned), and to seamlessly transition to prod.
> Do you also use Firecracker under the hood?
It uses libkrun.
I welcome alternatives. It's been tough wrestling with Firecracker and OCI images. Kata container is also tough.
We're building an IoT Cloud Platform, Fostrom[1] where we're using Javy to power our Actions infrastructure. But instead of compiling each Action's JS code to a Javy WASM module, I figured out a simpler way by creating a single WASM module with our wrapper code (which contains some further isolation and helpful functions), and we provide the user code as an input while executing the single pre-compiled WASM module.
That is an ideal use case
> Are there better alternatives?
Created microsandbox because I didn't find any
https://learn.microsoft.com/en-us/windows/security/applicati...
gVisor has performance problems, though. Their data shows 1/3rd the throughput vs. docker runtime for concurrent network calls--if that's an issue for your use-case.
What I like about containers is how quickly I can run something, e.g. `docker run --rm ...`, without having to specify disk size, number of CPU cores, etc. I can then diff the state of the container against the image (and other things) to see what some program did while it ran.
So I basically want the same but instead with small vms to have better sandboxing. Sometimes I also use bwrap but it's not really intended to be used on the command line like that.
https://learn.microsoft.com/en-us/windows/security/applicati...
Can we build our own python sandbox using the sandboxfile spec? This is if I want to add my own packages. Would this be just having my own requirements file here - https://github.com/microsandbox/microsandbox/blob/main/MSB_V...
I want to run sandboxes based on Docker images that have Nix pre-installed. (Once the VM boots, apply the project-specific Flake, and then run Docker Compose for databases and other supporting services.) In theory, an easy-to-use, fully isolated dev environment that matches how I normally develop, except inside of a VM.
Overlays are always tough because Docker doesn't like you writing to the filesystem in the first place. The weapon of first resort is deflection: tell them not to do it.
I had to put up with an old Docker version that leaked overlay data for quite a while before we moved off-prem.
Cloud Hypervisor and Firecracker both have an excellent reputation for ultra-lightweight VMs. Both are usable in the very popular Kata Containers project (as well as the other upstart VMMs Dragonball & StratoVirt). In use by, for example, CNCF Confidential Containers: https://github.com/kata-containers/kata-containers/blob/main... https://confidentialcontainers.org/
There are also smaller efforts such as firecracker-containerd and Virtink, both of which bring OCI-powered microVMs into a Docker-like position (easy to slot into Kubernetes), via Firecracker and Cloud Hypervisor respectively. https://github.com/smartxworks/virtink https://github.com/firecracker-microvm/firecracker-container...
Poking around under the hood, microsandbox appears to use krun. There is krunvm for OCI support (includes MacOS/arm64 support!). https://github.com/containers/krunvm https://github.com/slp/krun
The orientation as a safe sandbox for AI / MCP tools is a very nicely packaged-looking experience, and very well marketed. Congratulations! I'm still not sure why this warrants being its own project.
However, by looking at it and playing with a few simple examples, I think this is the one that looks the closest so far.
Definitely interested to see the FS support, and also some instruction on how to customize the images to e.g. pre-install common Python packages or Rust crates. As an example, I tried to use the MCP with some very typical use cases for code execution that OpenAI/Anthropic models would generate for data analysis, and they almost always include using numpy or an Excel library, so you very quickly hit a wall here without the ability to include libraries.
That said I don't think either KataContainer or Cloud Hypervisor has first-class support for macOS.