Changing the base layer itself at a minimum requires people to upgrade, and now you don't have that advantage of the preexisting audience anymore. If your improvement requires a breaking change, then you're in a real pickle. So people will stack on more and more until it becomes more or less unbearable.
However, a minor improvement or bugfix should be done in the component that is responsible for it. Creating another layer instead of fixing the problem is just creating more problems.
Common Lisp is my go-to example of this. It's possible to extend the language to support entirely new paradigms without breaking it, thanks to the power of the macro system.
You can't take an AWS VM and fire it up on DigitalOcean without a trial by fire. VMs should be portable, but they're not, because cloud providers don't want them to be.
For a while everyone was locked into AWS, but now that there are other options, some companies want to hedge their bets, or even run parts of their workload locally.
Since the VM space is all proprietary now, people are moving one level up and building a consistent environment above it. I fully expect Docker to become proprietary in a couple of years, and the cycle will begin again.
I think there are interesting ideas about environments in other operating systems, but I'm not seeing how you'd make things better by flattening containers into the OS, per se.
Now of course the correct way to solve this would be (a) to make both programs use the same version of python-somelib, (b) make upgrades happen atomically, (c) for Python modules to actually have some API backwards compatibility. But in the absence of doing the right thing you can use a container to effectively static-link everything instead. And worry about the security/bloat/management another time.
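As a concrete sketch of the container-as-static-linking idea (all names here are hypothetical; "somelib" stands in for the conflicting Python library), each app shipping its own copy of a dependency makes the version conflict disappear:

```shell
# Two apps each bundle their own copy of a "library", which is the moral
# equivalent of two container images each shipping their own python-somelib.
root=$(mktemp -d)
mkdir -p "$root/app_a" "$root/app_b"

# "v1" of the library, as app A expects it.
echo 'greet() { echo "v1 API"; }' > "$root/app_a/somelib.sh"
# "v2" breaks the interface; app B depends on the new behavior.
echo 'greet() { echo "v2 API: $1"; }' > "$root/app_b/somelib.sh"

# Each app sources only its own copy, in its own subshell, so there is
# no shared, system-wide version to fight over.
a_out=$( . "$root/app_a/somelib.sh" && greet )
b_out=$( . "$root/app_b/somelib.sh" && greet "app B" )

echo "$a_out"   # v1 API
echo "$b_out"   # v2 API: app B
```

The trade is disk and memory for isolation, which is exactly the security/bloat/management bill that gets deferred.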
The point isn't really to "run more Docker". It's to eliminate as much operating system overhead as possible, so that nearly every CPU cycle and byte of memory usage is dedicated to your containers.
I might give it a crack based on that. If you are hell-bent on running Docker, then an equivalent of the Ubuntu minimal install seems a good way to start.
If this idea has similar pragmatic advantages over the would-be best solution, then I'm game.
Yep, when you want a distributed filesystem, building it on top of a good, already-working local filesystem is a pretty robust and cheap approach.
>smell like a system on top of a system to fix something that could be fixed in the system.
The container layer is basically a distributed OS. Making a distributed OS by "fixing it in the system" is pretty much a non-starter... or a huge academic project.
It's about kicking the static-vs-dynamic can up the stack: now you have a hodgepodge of dynamic bindings that behaves like something static, as long as you don't look behind the curtain. Oh, and don't mind the turtles...
More like a virus.
$ cgcreate -g memory,cpu:groupname/foo
$ cgexec -g memory,cpu:groupname/foo bash
https://wiki.archlinux.org/index.php/cgroups
It's the bare-basic mechanism that libvirt, Docker, et al. are built on anyway. So if you want to run just one process per "container", it seems rather logical to keep it simple and use the cgroups commands directly. (Similarly on Windows, just using Sandboxie is simple. Or do it like Android: execute every app/process as a different user.)
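(Note that `cgcreate` and `cgexec` come from the libcgroup tools, which usually need installing separately. The underlying primitive is always present, though; every process on a modern Linux system already runs in some cgroup, visible via procfs:)

```shell
# Each line is hierarchy-ID:controller-list:cgroup-path; on a pure
# cgroup-v2 system this collapses to a single "0::/..." line.
cat /proc/self/cgroup
```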
[1] https://www.freedesktop.org/software/systemd/man/systemd.exe...
* http://jdebp.eu./FGA/linux-control-groups-are-not-jobs.html
Genuinely curious, how is it "infinitely better"? I'm considering switching away from Docker on my production boxes to something else, but rkt is mainly what I've heard about when researching alternatives.
How and why?
You're being condescending. I like Docker and if anything you make me not want to try systemd-nspawn with this attitude.
I think the advent of Docker Swarm probably put a crimp on development and use of Rancher (the app). To me the way forward is Docker's own clustering tools, and the ease of standing up a cluster of Atom machines at www.packet.net, where they install RancherOS as an option, is very attractive.
Like, for example, containerize Skype so that it can't read my home directory. Or confine Firefox to just `~/.mozilla` and `~/downloads`.
For binary blobs I don't trust that much, I'd really value this.
For FLOSS stuff, it still provides protection from bugs.
I want to isolate data, not libraries. Libraries are there to be shared.
It needs to build on top of what we have, otherwise adoption will never take place.
how many layers of abstraction are necessary, and why?
Obviously virtualizing serves a valuable purpose.
Making development more accessible is great. But simplistic dev services like this mean reliance on others' infrastructure, and being bound to the cloud.
Doesn't seem forward-thinking.
Can you imagine if Google had decided to run their search app on Microsoft servers?
I still strongly dislike "containers". It's not worth the complexity or instability. Two thumbs way down!
Does it though? I use CoreOS without containers (for the nice auto-updates/reboots), and it works really well with just systemd services. I'm aware the branding sells it this way (esp. the marketing rebrand as Container Linux or whatever), but does it run any containers as part of the base system? I've found CoreOS with containers not very reliable, and CoreOS without containers extremely reliable.
Since I use Go on servers which has pretty much zero dependencies, what I'd really like to see is the operating system reduced to a reliable set of device drivers (apologies to Andreessen), cloud config, auto-update and a process supervisor. That's it.
Even CoreOS comes with far too much junk - lots of binary utils that I don't need, and I'd prefer a much simpler supervisor than systemd. Nothing else required, not even containers - I can isolate on the server level instead when I need multiple instances, virtual servers are cheap.
CoreOS is the closest I've seen to this goal, the containers stuff I just ignored after a few tests with docker because unless you are running hundreds of processes, the tradeoff is not worth it IMO. Docker (the company) is not trustworthy enough to own this ecosystem, and Docker (the technology) is simply not reliable enough.
The OS for servers (and maybe even desktops) should be an essential but reliable component at the bottom of the chain, instead of constantly bundling more stuff and trying to move up the stack. Unfortunately there's no money in that.
https://en.wikipedia.org/wiki/Cgroups
https://en.wikipedia.org/wiki/FreeBSD_jail
https://en.wikipedia.org/wiki/Solaris_Containers
(more generally: https://en.wikipedia.org/wiki/Security-Enhanced_Linux , https://en.wikipedia.org/wiki/Sandbox_(computer_security) )
I sat down one day to try to write down what would make Linux containers/orchestration usable and good, and realized after about 20 minutes that I was describing FreeBSD jails almost to a T. The sample configuration format I theorized is very close to the real one.
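For anyone who hasn't seen it, the real thing is a small declarative config. A minimal /etc/jail.conf entry looks roughly like this (the jail name, hostname, path, and address below are made up for illustration):

```
www {
    host.hostname = "www.example.org";
    path = "/usr/jail/www";
    ip4.addr = 192.0.2.10;
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```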
However, I think there's good reason for actual deployment of containerized systems to remain niche, as it was until the VCs started dumping hundreds of millions into the current Docker hype cycle and the big non-Amazons jumped on board as a mechanism to try to get an advantage over AWS.
What people really want are true VMs nearly as lightweight and efficient as containerized systems. In fact, I think many people wrongly believe that's what containerized systems are.
We have a server that receives the logs from our kubernetes cluster via fluentd and parses/transforms them before shipping them out to a hosted log search backend thingy. This host has 5 Docker containers running fluentd receivers.
This works OK most of the time, but in some cases, particularly when log volume is high and/or a bug causes excessive writes to stdout/stderr (the container does have the appropriate log-driver size setting configured at the Docker level), the container will cease to function. It cannot be accessed or controlled; docker-swarm will try, but it cannot manipulate it. You can force-kill the container in Docker, but then you can't bring the service/container back up, because something doesn't get cleaned up right in Docker's insides. You have to restart the Docker daemon and then restart all of the containers with docker-swarm to get back to a good state. Due to https://github.com/moby/moby/issues/8795 , you also must manually run `conntrack -F` after restarting the Docker daemon (something that took substantial debugging/troubleshooting time to figure out).
We've had this happen on that server 3 times over the last month. That's ONE example. There are many more!
Containers are a VC-fueled fad. There are huge labor/complexity costs associated and relatively small gains. You're entering a sub-world with a bunch of layers to reimplement things for a containerized world, whereas the standard solutions have existed and worked well for many years, and the only reason not to use them is that the container platform doesn't accommodate them.
And what's the benefit? You get to run every application as a 120MB Docker image? You get to pay for space in a Docker Registry? Ostensibly you can fit a lot more applications onto a single machine (and correspondingly cut the ridiculous cloud costs that many companies pay because it's too hard to hire a couple of hardware jockeys or rent a server from a local colo), but you can also do this just fine without Docker.
Google is pushing containers hard because it's part of their strategy to challenge Amazon Cloud, not because it benefits the consumer.
- LinuxKit seems designed to be a piece that can be used to build a Linux distro but isn't a full distro out of the box like RancherOS
- As far as I know LinuxKit is still based on Alpine whereas RancherOS is custom and doesn't have much of a host filesystem
- LinuxKit is based on containerd and RancherOS is still based on Docker (though this is likely to change soon)
We're definitely interested in collaborating with LinuxKit since we do have similar goals. It's probably a good idea for us to write a more detailed blog post comparing the two since we've been getting this question pretty often lately.
Anyway, it seems its design has inspired some people recently.
I haven't used RancherOS but CoreOS works mostly-fine. However, I would avoid using these things altogether because containerization sucks.
What people want is a mainframe where a lump of code is guaranteed to run the same every time, regardless of the machine state, and if something goes wrong with the underlying layer, it self-heals (or automatically punts to a new machine, state intact).
What we have currently is a guarantee that the container will run, and that, once running, it will be terminated at an unknown time.
Mix in distributed systems (hard), atomic state handling (also hard), and scheduling (can be hard), and it's not all that fun to be productive for anything other than a basic website.
> it seemed logical and also it would really be bad if somebody did docker rm -f $(docker ps -qa) and deleted the entire OS
or are you asking why anyone would want a 'docker-os', which has everything but the docker daemon as a container?
I saw that line, but since if you want to run docker at scale you'd have each execution node under the tight control of a scheduler, it seemed like a small edge case.
As I said, for the ignorant such as myself: why would I choose this over CoreOS?
I guess this is mostly so you don't accidentally delete OS-containers (like ntpd) when trying to delete all your containers.
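Right, and even without a docker-os you can get most of that protection by convention. A sketch (the "os-" naming prefix and the container names are hypothetical) of excluding system containers from a bulk delete:

```shell
# Stand-in for the output of `docker ps --format '{{.Names}}'` on a host
# where OS services run as containers too.
all_containers="os-ntpd
os-syslog
webapp
worker"

# Filter out the os-prefixed system containers before feeding the rest
# to `docker rm -f`; the OS pieces survive the purge.
user_containers=$(printf '%s\n' "$all_containers" | grep -v '^os-')

echo "$user_containers"   # webapp and worker only
```

With real Docker you could express the same idea with labels instead of name prefixes, e.g. `docker ps -q --filter 'label=managed-by=me'` (that label key/value is, again, made up).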