I run many websites/applications that need isolation from each other on a single server, but I just use the pretty-standard OpenVZ containers to deal with that (yes, I know I could use KVM servers instead, but I haven't run into any issues with VZ so far).
What's the difference between Docker and normal virtualization technology (OpenVZ/KVM)? Are there any good examples of when and where to use Docker over something like OpenVZ?
Docker is essentially like OpenVZ. It became popular because it put the equivalent of OpenVZ's Application Templates feature front and center, and made it much more user friendly.
So instead of following this guide: https://openvz.org/Application_Templates
Docker users write a Dockerfile, which in a simple case might be:
FROM nginx
COPY index.html /usr/share/nginx/html
So there's no fussing with finding a VE somewhere, downloading it, customizing it, installing stuff manually, then stopping the container and tarring it up; Docker does all of that for you when you run `docker build`. Then you can push your nice website container to the public registry, ssh to your machine, and pull it from the registry. Of course you can have your own private registry (we do), so you can have proprietary Docker containers that run your apps/sites.
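With made-up image and registry names, the whole flow looks roughly like this (registry.example.com is a hypothetical private registry):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t mysite .

# Tag it for a private registry and push it there
docker tag mysite registry.example.com/mysite:1.0
docker push registry.example.com/mysite:1.0

# On the target machine: pull the image and run it
docker pull registry.example.com/mysite:1.0
docker run -d -p 80:80 registry.example.com/mysite:1.0
```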
From my perspective, the answer to your question would be: Always prefer Docker over OpenVZ, they are the same technology but Docker is easier to use.
But I've never really invested in OpenVZ so maybe there's some feature that Docker doesn't have.
They are built around the same set of kernel features and as far as I can see offer exactly the same abstractions.
For example, Phusion Baseimage is a Docker base image that's similar to an OpenVZ container in the sense that it emulates a full running Ubuntu environment. It has its uses, but the Docker community prefers containers that encapsulate a single application with no extra processes.
Docker does that, too. Actually, I'm running docker containers as a fast and easy replacement for VirtualBox VMs.
No security patching story at your workplace? No problem, containers don't have one either! If someone has shipped a container that embedded a vulnerable library, you better hope you can get a hold of them for a rebuild or you have to pull apart the image yourself. It's the static linking of the 21st century!
Doesn't Docker also help cause problems like ssh private key reuse? I'm sure there are mitigations, but it's sad to need ways to prevent an activity that the software makes easy to do.
I had the very same feeling. Containers are very useful, but the Docker suite of tools just doesn't have a very good security story.
docker is a glorified chroot and cgroup wrapper.
There is also a library of prebuilt docker images (think of it as a tar of a chroot) and a library of automated build instructions.
The library is the most compelling part of docker. Everything else is basically a question of preference.
You will hear a lot about build once, deploy anywhere. whilst true in theory, your mileage will vary.
what docker is currently good for:
o micro-services that talk on a messaging queue
o supporting a dev environment
o build system hosts
However, if you wish to assign IP addresses to each service, docker is not really mature enough for that. Yes, it's possible, but not very nice. You're better off looking at KVM or VMware.
There is also no easy hot migration, so there is no real solution for HA clustering of non-HA images. (Once again possible, but not without lots of heavy lifting; VMware provides it with a couple of clicks.)
Basically docker is an attempt at creating a traditional unix mainframe system (not that this was the intention): a large lump of processors and storage that is controlled by a single CPU scheduler.
However, true HA clustering isn't easy. Fleet et al force the application to deal with hardware failures, whereas VMware and KVM handle it in the hypervisor.
docker is a process container not a system container.
> docker is a glorified chroot and cgroup wrapper.
that is fairly immaterial; suffice it to say that the underlying Linux kernel tech has lately matured enough to enable a tool like docker. I've built many containers and I never thought about them in terms of the underlying tech.
> There is also a library of prebuilt docker images (think of it as a tar of a chroot)
yes
> and a library of automated build instructions
more accurate to say there is a well-defined DSL for defining containers.
> You will hear a lot about build once, deploy anywhere. whilst true in theory, your mileage will vary.
have to agree, this is oversold as most of the config lives in attached volumes and needs to be managed outside of the container.
> However if you wish to assign ip addresses to each service, docker is not really mature enough for that. Yes its possible, but not very nice. You're better off looking at KVM or vmware.
Have to disagree here, primarily because each service should live in its own container; docker is a process container, not a system container. Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker.
> There is also no easy hot migration. So there is no real solution for HA clustering of non-HA images. (once again possible, but not without lots of lifting, Vmware provides it with a couple of clicks.)
None is required. Containers are ephemeral and generally don't need to be migrated, they are simply destroyed and started where needed. Requiring 'hot migration' in the docker universe generally means you are doing it wrong. Not to say that there is no place for that.
As a final note, all my docker hosts are kvm vm's.
> docker is a process container not a system container.
Valid. However, the difference between Docker images and OpenVZ images is the inclusion of an init system.
> Have to disagree here, primarily because each service should live in each own container, docker is a process container, not a system container. Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker.
I understand your point, but I much prefer each service having an IP that is registered in DNS. This means that I can hit service.datacenter.company.com and get a valid service (using well-tested DNS load balancing and health checks to remove or re-order individual nodes).
It's wonderfully transparent and doesn't require special custom service discovery in both the client and the service. Because, like etcd, DNS has the concept of scope, you can find local instances trivially. Using DHCP you can simply connect to servicename and let dhcpd set your search scope for you.
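That DNS-based pattern can be sketched in a few lines of Python; it resolves localhost here only so the snippet runs anywhere, where a real deployment would resolve a name like service.datacenter.company.com:

```python
import random
import socket

def resolve_service(name, port):
    """Resolve a service name via plain DNS and pick one address.

    DNS round-robin plus health-checked zone updates give you load
    balancing with no custom discovery protocol in the client.
    """
    infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    addresses = [info[4][0] for info in infos]
    return random.choice(addresses)

# In production this would be e.g. "service.datacenter.company.com";
# localhost stands in here so the example runs anywhere.
print(resolve_service("localhost", 80))
```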
> None is required. Containers are ephemeral and generally don't need to be migrated, they are simply destroyed and started where needed. Requiring 'hot migration' in the docker universe generally means you are doing it wrong. Not to say that there is no place for that.
Here I have to disagree with you. For front-end type applications, ones that hold no state, you are correct.
However, for anything that requires shared state or data, it's a bad thing. Take your standard database cluster ([no]SQL or whatever) of 5 machines. You are running at 80% capacity, and one of your hosts is starting to get overloaded. You can kill a node and start up a warm node on a fresh machine.
But now you are running at 100% capacity, and you need to spend some bandwidth bringing up a node to get back to 80%. Running extra machines just to allow CPU load balancing aggrieves me.
I'm not advocating writing apps that cannot be restarted gracefully. I'm also not arguing against ephemeral containers; it's more a case of easy load balancing and disaster migration. Hot migration means that software is genuinely decoupled from the hardware.
Care to elaborate on this? Do you use the linking system described here? https://docs.docker.com/userguide/dockerlinks/
I mean, your various containers still communicate over IP, right? Just a private IP network within the host, rather than outside?
(Obviously I've never used Docker.)
I don't understand. If you can't assign IPs to each service (or it's difficult/unreliable to do so) how can processes talk to each other and the outside world?
You should have no way to log in to the docker container.
Docker's networking does not allow that IP to be visible.
This is the core problem that Docker solves, and in such a way that developers can do most of the dependency wrangling for me. I don't even mind Java anymore, because the CLASSPATHs can be figured out once, documented in the Dockerfile in a repeatable, programmatic fashion, and then ignored.
In my opinion the rest of it is gravy. Nice tasty gravy, but I don't care so much about the rest at the moment.
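As a hypothetical sketch of the CLASSPATH point, a Dockerfile for a Java service might record that knowledge once (image name and paths here are made up):

```dockerfile
FROM java:8
# The CLASSPATH is figured out once and recorded here, instead of
# living in someone's shell history. Paths below are hypothetical.
ENV CLASSPATH=/app/lib/service.jar:/app/lib/deps/*
COPY lib/ /app/lib/
CMD ["java", "com.example.Main"]
```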
Edit: As danesparz points out, nobody has mentioned immutable architecture. This is what we do at Clarify.io. See also: https://news.ycombinator.com/item?id=9845255
I don't really see the point of lightweight virtualization. It provides an illusion of isolation which will likely come crashing down at some probably very inconvenient point (e.g. when you discover a bug caused by a different version of glibc or a different kernel).
Packer is not quite an apt comparison, but it would be a better one than Vagrant.
The advantage is you do the steps that could possibly fail at build time. The downside is you need to learn to get away from doing runtime configuration for upgrades.
http://michaeldehaan.net/post/118717252307/immutable-infrast...
I wrote Ansible, and I wouldn't even want to use it in a Docker context to build or deploy VMs if I could just write a Dockerfile - assuming, probably, that I don't need to template anything in it. I would still use Ansible to set up my "under cloud" as it were, and I might possibly use it to control upgrades (container version swapping) - until software grows to control this better (it's getting there).
However, if you were developing in an environment that also wanted to target containers, using a playbook might be a good way to have something portable between a Docker file and Vagrant if a simple shell script and the Vagrant shell provisioner wouldn't do.
I'd venture in many cases it would.
I don't care about the isolation for isolation sake, I care about it for the artifact sake.
Agree, and I'd add that Python's paths (sys.path/PYTHONPATH, as well as Java CLASSPATHs) have been a problem for me on occasion too, which means Docker would probably help with all these types of path issues.
Also, how do you manage your containers in production?
I'm not terribly fond of using environment variables for configuration, personally. That method requires either a startup shim or development work to make your program aware of the variables, and your container manager has to have access to all configuration values for the services it starts up.
Lots of people write their own scripts to do this; I wrote Tiller (http://github.com/markround/tiller) to standardise this usage pattern. While I have written a plugin to support storing values in a Zookeeper cluster, I tend to prefer to keep things in YAML files inside the container - at least, for small environments that don't change often. It just reduces a dependency on an external service, but I can see how useful it would be in larger, dynamic environments.
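That "env vars override files shipped in the container" pattern can be sketched in a few lines (a minimal stand-in for what a tool like Tiller does; JSON is used instead of YAML only to stay within the standard library, and the APP_ prefix is an arbitrary choice):

```python
import json
import os

def load_config(path, env_prefix="APP_"):
    """Defaults from a file baked into the image, overridable via env vars.

    Environment variables win, so the container manager can reconfigure
    the same image per environment without a rebuild.
    """
    with open(path) as f:
        config = json.load(f)
    for key in config:
        env_key = env_prefix + key.upper()  # e.g. db_host -> APP_DB_HOST
        if env_key in os.environ:
            config[key] = os.environ[env_key]
    return config
```

The YAML-in-the-container approach is the degenerate case of this: no env vars set, so the file wins everywhere.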
Management: We create AMIs using Packer. Packer runs a provisioning tool which downloads the container and the configuration and sets up the process monitoring. It then builds a machine image, and then we launch new servers.
But it's not, at least in my experience; not to mention that as of now, anything running Docker in production (probably a bad idea) is wide open to the OpenSSL security flaw in versions 1.0.1 and 1.0.2, despite the knowledge of this issue being out there for at least a few days.
Docker's currently "open" issue on github: https://github.com/docker/compose/issues/1601
Other references: https://mta.openssl.org/pipermail/openssl-announce/2015-July... http://blog.valbonne-consulting.com/2015/04/14/as-a-goat-im-...
That's a rather embittered perspective, ironic considering how new Linux is in the grand scheme of things. A more germane perspective is that Docker is a new tool which acknowledges that UX matters even for system tools.
Well, that's what I personally hoped. Then you run into problems, distro specific problems, and find yourself unable to deal with it without actually becoming great at linux under a deadline. Docker can actually introduce tremendous complexity at both the Linux and application level because you have to understand how an image was prepared in order to use it, configure it, etc. (Of course, a big part of the problem is that there's no way that I know of to interactively examine the filesystem of an image without actually running the image, and accessing the FS from the tools that the image itself runs. This has to be some sort of enormous oversight either on my part or on Docker's).
The number of tools which one would suggest you use along with Docker are a reflection of this, and are additional layers to try to provide further movement of host complexity up into a software controllable level (Consul, etcd, yada yada).
The whole ecosystem plays well with "cloud" hosts, because their systems people have taken the appropriate steps in creating that host architecture and complexity (which is not gone) for you.
As someone else stated well, it is the modern static linking. I have no idea why people would ever have done "build, test, build, deploy" - that sort of insanity should have been obviously wrong. However, "build, test, deploy" does not depend on the static-ness of everything related to the build, but on compatibility of environment between "test" and "deploy". Those who didn't invest enough time in keeping these environments in sync have, I think, found a way to wipe the slate clean and use this to catch up to that requirement.
(1) You're exporting a container, not an image, so if you wanted to export your image, deploy it to a container first. Run echo or some other noop if you need to.
(2) This is similar to how git operates. You wouldn't want to examine your git commits interactively (assuming that means the ability to change them in place); well, if you did, git has --amend, but no such thing exists in Docker.
An image with a given id is supposed to be permanent and unchanging, containers change and can be re-committed, but images don't change. They just have children.
It can get hairy when you reach the image layer limit, because using up the last allowed image layer means you can't deploy to a container anymore. So, how do you export the image? 'docker save' -- but 'docker save' exports the image and all of its parent layers separately, so you need to flatten it yourself.
I once wrote this horrible script[1] whose only purpose was unrolling this mess, since the latest image had the important state that I wanted in it, but I needed the whole image -- so, untar them all in reverse order and then you have the latest everything in a single directory that represents your image filesystem.
The horror of this script leads me to believe this is an oversight as well, but a wise docker guru probably once said "your first mistake was keeping any state in your container at all."
[1]: https://raw.githubusercontent.com/yebyen/urbinit/del/doit.sh
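Conceptually, what such a script does is apply each layer's tarball in order, letting later layers win. Modeled with dicts instead of tarballs, the flattening looks like this (whiteout handling is simplified to deletion markers):

```python
def flatten_layers(layers):
    """Collapse an ordered list of image layers into one filesystem view.

    Each layer maps path -> content; a content of None marks a file
    deleted in that layer (a 'whiteout' in aufs/overlayfs terms).
    """
    merged = {}
    for layer in layers:  # oldest (base image) first
        for path, content in layer.items():
            if content is None:
                merged.pop(path, None)  # file deleted by this layer
            else:
                merged[path] = content  # file added or replaced
    return merged

base = {"/etc/os-release": "ubuntu", "/tmp/build.log": "..."}
app = {"/app/server": "binary", "/tmp/build.log": None}  # whiteout
print(flatten_layers([base, app]))
```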
On one hand, this tends to offer a slightly stronger assurance against Linux-level security faults while also enabling the use of non-Linux stacks (such as BSD or Solaris or - God forbid - Windows, along with just-enough-OS (or no OS whatsoever)). Proper virtualization like this offers another layer of security, and it's generally perceived to be a stronger one.
On the other hand, the security benefits provided on an OS level (since now even an OS-level compromise won't affect the security of the whole system, at least not immediately) are now shunted over to the hypervisor. Additionally, the fuller virtualization incurs a slight performance penalty in some cases, and certainly includes the overhead of running the VM.
On the third hand, bare-metal hypervisors tend to be very similar to microkernels in terms of technical simplicity and compactness, thus gaining many of the inherent security/auditing advantages of a microkernel over a monolithic kernel. Additionally, in many (arguably most) environments, the slight degradation of performance (which isn't even guaranteed, mind you) is often much more tolerable than the risk of an OS-level bug compromising whole hosts, even if the risk of hypervisor-level bugs still exists.
The management tools are fairly decent, and the answer to "which CVEs are we vulnerable to in our production environment" or "where are we still using Java 6" shouldn't be more than a keypress away.
Neither deb/rpm nor containers are an excuse for not using configuration management tools however. Don't believe anyone who says so.
You can tear down the host server, then recreate it with not much more than a `git clone` and `docker run`.
2. Precise test environment. I can mirror my entire production environment onto my laptop. No internet connection required! You can be on a train, on a plane, on the beach, in a log cabin in the woods, and have a complete testing environment available.
Docker is not a security technology. You still need to run each service on a separate host kernel, if you want them to be properly isolated.
This is a simple bind-mount and isn't special at all.
mount("/foo", "/container/foo", NULL, MS_BIND, NULL);
Also, virtual machines have had things like 9p that allow the same thing.

Docker gives you the ability to version your architecture and 'roll back' to a previous version of a container.
You shut down a VM and instruct the hypervisor system to take a "snapshot" which locks the original VHD file and creates a new one. When writes happen, they're performed on the new VHD, and reads have to use both the main and the snapshot VHD. And you can create a chain of snapshots, each pointing to the previous snapshot, for versioning. Or you can have several VM snapshots use the same master VHD, like for CI or data deduplication.
To roll back, it's usually as simple as shutting down the VM and removing the snapshot file.
The way VMs handle this doesn't carry the same semantics as the way you can with Docker. There's a finer-grain composability with Docker that is much more awkward with VMs.
Docker may not be as great a virtualization tool as VMs -- security concerns, complexity, etc. -- but it is a much better package management tool.
You get most of the benefits of immutable builds anyhow by having scripts which can reliably set up servers from scratch on the fly.
- Docker is nothing new - it's a packaging of pre-existing technologies (cgroups, namespaces, AUFS) into a single place
- Docker has traction, ecosystem, community and support from big vendors
- Docker is _very_ fast and lightweight compared to VMs in terms of provisioning, memory usage, cpu usage and disk space
- Docker abstracts applications, not machines, which is good enough for many purposes
Some of these make a big difference in some contexts. I went to a talk where someone argued that Docker was 'just a packaging tool'. A sound argument, but packaging is a big deal!
Another common saw is "I can do this with a VM". Well, yes you can, but try spinning up 100 vms in a minute and see how your MacBook Air performs.
The downside of docker is that it takes longer to set up in the first place, it is harder to keep secure, and it runs slower than a traditionally deployed application; compared to VM-deployed applications, though, the performance is usually better.
OpenVZ, LXC, Solaris Zones and BSD jails, on the other hand, mainly run a complete OS, and the focus is quite different from packaging your applications and deployments.
You can also have a look at this blog post, which explains the differences in more detail: http://blog.risingstack.com/operating-system-containers-vs-a...
Add in image registries and a decent CLI and the developer ergonomics are outstanding.
Technologies only attract buzz when they're accessible to mainstream developers on a mainstream platform. The web didn't matter until it was on Windows. Virtualization was irrelevant until it reached x86, containerization was irrelevant until it reached Linux.
Disclaimer: I work for a company, Pivotal, which has a more-than-passing interest in containers. I did a presentation on the history which you might find interesting: http://livestre.am/54NLn
The potential minimalism of a container is also an important concept to mention, with fast startup times and fewer services that could potentially be vulnerable.
Application runtime dependencies are a common source of communication breakdowns between development and infrastructure teams. Making the application container a maintained build file on the project improves this communication.
docker provides:
* a standard format for building container images (the Dockerfile)
* a series of optimizations for working with images and containers (immutable architecture etc).
* a community of pre-built images
* a marketplace of hosting providers
All at the cost of being Linux-only, which is OK for many shops.
Some people have mentioned security...patching in particular. Containers won't help if you don't have patching down. At the very least it lets you patch in the lab and easily promote the entire application into production.
I think that the security arguments are a canard. By making it easier and faster to deploy you should be able to patch app dependencies as well. I, for one, would automate the download and install of the latest version of all libs in a container as part of the app build process. Hell, build them all from source.
IT departments need to be able to easily move applications around instead of the crazy build docs that have been thrown over the wall for years.
It would be great if the docker tool had something like a `docker serve` command to start its own local registry.
For this use case, switching to Go was a great solution for me: building the binary is all you need.
As for Docker being helpful for development: definitely yes. I switched to postgres, elasticsearch and redis containers instead of installing them on my computer; it's easy to flush and restart, and having different versions of services is also more manageable.
What differentiates Docker is not virtualization, so much as package management. Docker is a package management tool that happens to allow you to execute the content of the package with some sort of isolation.
Further, when you look at it from that angle, you start seeing its flaws as well as its potential. It's no accident that Rocket and the Open Container Project are arising to standardize the container format. Other, less-well-known efforts include being able to distribute the container images themselves in a p2p distribution system, such as IPFS.
I ran through the same thing too. I used to work for Opscode. I joined them because I like the idea of "infrastructure-as-code." I remember when Docker came around, I was scratching my head. There was a part of me that thought it has something, and another part that was thinking, why would anyone want to use this? Wouldn't this set us back to the time when infrastructure is not code? I couldn't put my finger on it. And what's really funny is that the "container" metaphor explains this well -- and I had spent time reading up on the history of physical, intermodal containers and how they changed our global economy to boot. The primary point of intermodal containers isn't that it isolates goods from one merchant from another; it is that there is a standard size for containers that can be stacked in predictable ways, and moved from ship to train to truck quickly and efficiently. You are no longer loading and unloading pallets and individual goods; you are moving containers around. Package management. A lot of logistics companies at the time didn't get this either.
Most of the literature out there explains Docker as virtualization, or some confused mish-mash of "lightweight virtualization", or "being able to the move containers from one machine to the other." They pretty much circle around the central point of package management without nailing that jelly to the wall.
I guess most people don't need to do this...
I have a lot of interest in seeing our current devops tooling converging with AI/MI. I don't see that happening without something like Docker.
http://docs.vagrantup.com/v2/provisioning/docker.html
Docker is about running isolated environments in reproducible ways. I get a container working just so on my desktop, ship it to an internal registry, where it gets pulled to run on dev and qa. It works identically to how it works on my desktop, then I ship it to production. One image that works the same on all environments. That is what docker was for, developer productivity.
Docker took the LXC OS container template as a base, modified the container OS init to run a single app, built the OS file system with layers of aufs or overlayfs, and disabled storage persistence. And this is the app container.
This is an opinionated use case of containers that adds significant complexity; it is more a way to deploy app instances in a PaaS-centric scenario.
A lot of the confusion around containers comes from the absence of informed discussion of the merits and demerits of this approach, and of the understanding that you have easy-to-use OS containers like LXC, which are perfectly usable by end users the way VMs are, and then app containers that do a few more things on top of this.
You don't need to adopt Docker to get the benefits of containers, you adopt Docker to get the benefits of docker and often this distinction is not made.
A lot of users whose first introduction to containers is Docker tend to conflate Docker with containers, and thanks to some 'inaccurate' messaging from the Docker ecosystem think LXC is 'low level' or 'difficult' to use. Why would anyone try LXC if they think it's low level or difficult to use? But those who do will be pleasantly surprised at how simple and straightforward it is.
For those who want to understand containers, without too much fuss, we have tried to provide a short overview in a single page in the link below.
https://www.flockport.com/containers-minus-the-hype
Disclosure - I run flockport.com, which provides an app store based on LXC containers and tons of tutorials and guides on containers, which can hopefully promote more informed discussion.
But that was not the point. The point is you have always had perfectly usable end-user containers from the LXC project, even before Docker. Then Docker, a VC-funded company, based itself on LXC and marketed itself aggressively, and suddenly a lot of users think LXC is 'low level' or 'difficult to use'? This messaging is coming from the Docker ecosystem, and the result is the user confusion we see on most container threads here.
Informed discussion means people know what OS containers are, what value they deliver, and what Docker adds on top of OS containers so there is less confusion and FUD, and users can make informed decisions without a monoculture being pushed by aggressive marketing.
But that discussion cannot happen if you are in a hurry to 'own' the container story and cannot acknowledge clearly alternatives exists and what value you are adding exactly on top. I see people struggling with single app containers, layers and lack of storage persistence when they are simply looking to run a container as a lightweight VM.
The 'open container movement' is yet one more attempt to 'own' the container technology and perpetuate the conflation of Docker with containers. How can an 'open container movement' exclude the LXC project, which is responsible for the development of much of the container technology available today? It should ideally be called 'Open App Container', because there is a huge difference between app containers and OS containers. OS containers provide capabilities and deployment flexibility that app containers simply cannot give, because they are a restrictive use case of OS containers. Container technology as a whole cannot be reduced to a single PaaS-centric use case.
I think that's the best way I can summarise what Docker _is_.
Where I've worked in the past, setting up a new development or production environment has been difficult and relied on half-documented steps, semi-maintained shell scripts, and so on. With a simple setup of a Dockerfile and a Makefile, projects can be booted by installing one program (Docker) and running "make".
You could do that with other tools as well, but Docker, and even moreso the emerging "standards" for container specification, seems like an excellent starting point.
https://github.com/mine-cetinkaya-rundel/useR-2015/blob/mast...
https://en.wikipedia.org/wiki/Operating-system-level_virtual...
Edit: formatting.
I don't think your statement is true at this point in time. OpenVZ is used by a ton of companies in the hosting industry and by large companies such as Groupon and smaller ones like TravisCI [1]. I wouldn't claim that Docker has wider adoption than OpenVZ at this point in time. Maybe in five years it will. OpenVZ and commercial VZ have been doing full OS containers since the early 2000s, and it has the production track record to do very well in many server applications. I wouldn't hesitate to use it over Docker in production for my future projects.
[1]: http://changelog.travis-ci.com/post/45177235333/builds-now-r...
[1] http://blog.travis-ci.com/2014-12-17-faster-builds-with-cont...
https://engineering.groupon.com/2014/misc/dotci-and-docker/ http://www.meetup.com/Docker-Chicago/events/220936626/
Source: I work at 600 W Chicago, the Groupon World HQ, where they frequently host Docker meetups on the 6th floor.
I think it was here [1], but deleted now.
[1] http://ibuildthecloud.tumblr.com/post/63895248725/docker-is-...
TL;DR: It's better for deploying applications and running them than using home-made scripts.
Technologically, both OpenVZ and Docker are similar, i.e. they are containers -- isolated userspace instances, relying on Linux kernel features such as namespaces. [Shameless plug: most of the namespaces functionality is there because of OpenVZ engineers' work on upstreaming]. Both Docker and OpenVZ have tools to set up and run containers. This is where the similarities end.
The differences are:
1 system containers vs application containers
OpenVZ containers are very much like VMs, except for the fact they are not VMs but containers, i.e. all containers on a host are running on top of one single kernel. Each OpenVZ container has everything (init, sshd, syslogd etc.) except the kernel (which is shared).
Docker containers are application containers, meaning Docker only runs a single app inside (i.e. a web server, a SQL server etc).
2 Custom kernel vs vanilla kernel
OpenVZ currently comes with its own kernel. 10 years ago there were very few container features in the upstream kernel, so OpenVZ had to provide its own kernel, patched for container support. That support includes namespaces, resource management mechanisms (CPU scheduler, I/O scheduler, User Beancounters, two-level disk quota etc.), virtualization of /proc and /sys, and live migration. Over ten years of work by OpenVZ kernel devs and other interested parties (such as Google and IBM), a lot of this functionality has made it into the upstream Linux kernel. That opened a way for other container orchestration tools to exist -- including Docker, LXC, LXD, CoreOS etc. While there are many small things missing, the last big thing -- checkpointing and live migration -- was also recently implemented in upstream, see the CRIU project (a subproject of OpenVZ, so another shameless plug -- it is OpenVZ who brought live migration to Docker). Still, OpenVZ comes with its own custom kernel, partly to retain backward compatibility, partly because some features are still missing from the upstream kernel. Nowadays that kernel is optional but still highly recommended.
Docker, on the other hand, runs on top of a recent upstream kernel, i.e. it does not need a custom kernel.
3 Scope
Docker has a broader scope than that of OpenVZ. OpenVZ just provides you with a way to run secure, isolated containers, manage those, tinker with resources, live migrate, snapshot, etc. But most of OpenVZ stuff is in the kernel.
Docker has some other things in store, such as Docker Hub -- a global repository of Docker images, Docker Swarm -- a clustering mechanism to work with a pool of Docker servers, etc.
4 Commercial stuff
OpenVZ is the base for a commercial solution called Virtuozzo, which is not available for free but adds some more features, such as a cluster filesystem for containers, rebootless kernel upgrades, more/better tools, better container density etc. With Docker there's no such thing. I am not saying it's good or bad, just stating the difference.
This is probably it. Now, it's not that OpenVZ and Docker are opposed to each other, in fact we work together on a few things:
1. OpenVZ developers are authors of CRIU, P.Haul, and CRIU integration code in Docker's libcontainer. This is the software that enables checkpoint/restore support for Docker.
2. Docker containers can run inside OpenVZ containers (https://openvz.org/Docker_inside_CT)
3. OpenVZ devs are authors of libct, a C library to manage containers, a proposed replacement or addition to Docker's libcontainer. When using libct, you can use enhanced OpenVZ kernel for Docker containers.
There's more to come, stay tuned.