For the way most people use them in deployments of web apps, containers are too heavy. The heavier your containers are, the more difficult everything else becomes. They take longer to build, they need more resources to run, they are more expensive to store, etc. This has significant repercussions for the developers who have to deal with containers, IOW most backend developers.
Our view is that by restricting the scope of virtualization to something more specific to web app backends, we can build a tool that is much better for the job than containers. With WASM and WASI becoming stable, this is more possible than ever.
We’re very excited to share our ideas with you and would love to get your thoughts on this!
[1]: https://github.com/deislabs/wagi/blob/main/docs/writing_modu...
Windows, Linux, Mac, BeOS, Solaris, whatever else: they have different binary formats largely because they didn't want to allow binaries written for other systems to run. (There's more to it than this, but not much more.) Docker just brings back what was taken away by abstracting those decisions out of the execution path and standardizing on Linux. The stuff that was taken away? Was it taken away for a good reason? Not at all; it was incompatibility solely for incompatibility's sake.
Why can't we just revisit that decision to introduce incompatibility on top of identical hardware? I feel like that is the correct way forward, rather than WASI. WASI will just replace Docker as a runtime for these things, and it will incur a performance penalty well beyond what Docker itself experiences, because it lives on top of the OS rather than using virtualization or isolation to go around the OS, in a manner of speaking.
Everyone these days seems to want necessarily worse performance at every opportunity. It is amazing what even a single core on my laptop can do in one second, but it takes a handful of seconds to launch any graphical application on my computer... apparently things are still too fast for the people stewarding this stuff.
But the reality is that containers and VMs work well enough that I think it's going to be a long, long time before the huge backlog of working software we have today gets pushed aside.
You're probably right when it comes to scale though. Portability is more important for apps than services.
That said, it would be cool to be able to develop a Rust service locally on my x86 machine then deploy a working/signed artifact directly to ARM servers.
I'm curious what shuttle does once your app is deployed? Is a container cloned with the app injected, then connected to a load balancer and left running? In Darklang we just keep the AST in the DB and fetch it each request, but at some point we'll want to do better.
We've got containers running for each instance of the runtime a user needs. And for now we just keep them running as long as the project is up. But that's very inefficient, and we're definitely looking for a better way to do it.
Might need to check again! :)
This sounds like J2EE application servers for WASM instead of Java byte code.
The problem addressed in the post is more about getting around Rust's slow build times than phasing out containers. There's a lot of buzzwords (WASM, Rust, Cloud, etc.) but at the end of the day nothing that isn't easy to do with a modern "boring" stack like .NET.
but that wouldn't bring in VC money lol
For a lot of uses you can go even smaller than Alpine. Static executables for example don't require standard Linux tools to operate. I expect WASI would be similar. Just drop the runtime and your Wasm files into a scratch container.
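To illustrate, a scratch container along those lines might look like the sketch below. The file names and layout here are assumptions, not anything from the post:

```dockerfile
# Start from an empty image: no shell, no libc, no coreutils.
FROM scratch

# Copy in a statically linked Wasm runtime and the module to run.
# Binary and module names are hypothetical for this sketch.
COPY wasmtime /wasmtime
COPY app.wasm /app.wasm

ENTRYPOINT ["/wasmtime", "/app.wasm"]
```

The resulting image is essentially just the two files, so it builds, ships, and stores at a fraction of the size of a typical distro-based image.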
> The heavier your containers are, the more difficult everything else becomes. They take longer to build, they need more resources to run, they are more expensive to store, etc.
> At shuttle we're convinced that a lot of the pains experienced by software engineers in the post-Docker world can be traced back to that very simple statement: containers are often too heavy for the job.
Why not use small lightweight containers? You can just have a layer with the WASM runtime.
As far as "faster, more responsive, and stable" goes - WASM currently is slower and less stable, and the container world has good options for providing improved sandboxing/isolation (like gVisor, Firecracker, etc).
In the example, is the .await in get_article run in a tokio executor inside wasm, or did you write a way to poll the future across the wasm/host boundary? I'm curious if you could do something like tokio::spawn or tokio::join from within the wasm code.
We initially thought about running tokio in wasm (they've added support for WASI recently as well IIRC), but elected against it because we found it just moved the compile time problem from one compilation target to another.
So in the end we decided to go with the second option. When the code in the example `.await`s, control can go back to the runtime. But inside wasm this is not tokio, this is our own shim to the executor running on the outside.
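As a rough illustration of that second option, here's a minimal single-future executor in plain Rust (no tokio). This is my own sketch, not shuttle's actual shim: in a real setup the wasm module would expose a poll entry point and the host-side executor would do the driving, but the polling loop is the same idea.

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Trivial waker: in a real wasm shim, waking would notify the
// host-side executor to poll the guest again.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// Drive a future to completion on the current thread. Each time the
// future returns Pending at an `.await` point, control comes back
// here - which is the spot where a shim could hand off to the host.
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = std::pin::pin!(fut);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

fn main() {
    let value = block_on(async { 40 + 2 });
    println!("{value}"); // prints 42
}
```

The point is that `.await` compiles down to a `poll` call that can return `Pending`, and whoever holds the polling loop decides what happens next - so the loop can live on the host side of the wasm boundary without the guest needing its own full executor.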
What I mean is, what are the first principles behind it? What makes it different from other sandbox models?
* Using URL origins
* Not allowing any sort of host access
"you know how good software performs really well?"
"yeah"
"let's make our thing not like that, and use JavaScript."
"great idea."