MIT 6.S094 has a Dockerfile[^1] that contains all the software required for taking part in the class. This is a huge boon for getting stuck into the class and its coursework.
IMHO, every Dockerfile has left-pad written all over it.
Reproducibility is all about the starting point. Computers are deterministic, so the main exception is a computation that needs high entropy from some random source: if on the next run there isn't enough entropy, your experiment may fail. But that's really, really a corner case. A Docker image keeps the state of the starting point (base system, packages, bashrc history, etc.) under version control. It's as if someone gave you a copy of their VirtualBox image.
So how do we lock down?
1) When you start with a Dockerfile, specify the version of the packages you are installing
2) When you want to reproduce, you can rebuild an image with that Dockerfile.
3) But most people are just going to use your image which is always the same now or next year. Building image != launching a container using an image.
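A minimal sketch of step 1, pinning versions in a Dockerfile (the package names and version strings below are illustrative, not taken from the class):

```dockerfile
# Pin the base image to an exact tag (or, even better, a digest)
FROM ubuntu:16.04

# Pin package versions so a rebuild next year pulls the same bits
# (versions here are illustrative placeholders)
RUN apt-get update && apt-get install -y \
    python3=3.5.1-3 \
    python3-pip=8.1.1-2 \
 && rm -rf /var/lib/apt/lists/*
```

Note that even pinned apt versions can disappear from the mirrors over time, which is exactly why point 3 matters: distributing the built image sidesteps the rebuild entirely.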
There's been some individual effort to integrate Docker with Condor (after all, both are just processes running on some host machine).
* You can be sure that what you're running locally is exactly what you'll be running on the server
* Your deployment experience will be the same regardless of which tech stack you're using for the web application
* There are many places you can deploy Docker containers (Google GCE, Amazon ECS, Amazon EB, etc.)
* A web application is often composed of several services (e.g. the web app, a database, Redis, etc.), and Docker Compose makes it easy to fire all of those up in development: if a new developer joins, they only need to install Docker rather than web app framework + database + Redis
* Docker sets you up quite well to grow into a more complex deployment (e.g. using Kubernetes)

Running Docker in production takes a huge amount of effort to get right and is not easily done.
But apart from local development, I'd say that depends on your needs. If you want more ease-of-use, and you run a single-server hosting environment with multiple projects, it may be easier to keep doing that without adding Docker. But if you want increased security and better isolation between your projects, Docker is likely a better solution.
In any case, I would strongly recommend that you familiarize yourself with Docker, at least locally. After a while, you can decide if you want to take the leap and use it on your server as well.
Probably some bad setup on our part, but we've been running in production with Kubernetes and have hit none of those problems.
We're still using Compose to bootstrap the database, caching, etc.
Anyway, Kubernetes is not so important for small deployments, but what I've found really helpful is CoreOS: an auto-updating base OS that gets out of the way and (more importantly) ships a combination of Linux kernel + Docker that usually works really well.
At least in my mind, it's much simpler to say "OK, I installed these packages, let me add that to Ansible" than it is to get a production-ready Docker setup going.
Running it in a simple production setup is just a matter of writing a systemd/init.d job which starts the container. No container management daemon or orchestration framework involved.
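A sketch of such a systemd unit (the service name, container name, and image are hypothetical):

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=My app container
After=docker.service
Requires=docker.service

[Service]
# The leading "-" tells systemd to ignore failure if no old container exists
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp myimage:1.0
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` is the whole "orchestration" story for a single host.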
In a nutshell, this is why I'm now hooked on docker. I can reproducibly build things on my macbook without tearing up the system packages, and I can deploy them to my small datacenter without thinking twice.
I'd suggest you at least try it out.
Virtual machines will also work.
Docker serves as lightweight virtualization that will provide the same experience, assuming you are willing to keep the kernel and Docker versions "in sync" between prod and local.
"Add -f to get rid of all unused images (ones with no containers running them)."
But the option is actually `-a` -- `-f` just skips the prompt.
docker rmi -af
I'm a bit confused by the backticks as I use them all the time scripting, but also in Markdown.
# stop and delete all containers
docker rm -f $(docker ps -a -q)
# delete all images
docker rmi -f $(docker images -q)
If you are gonna add a nuclear button, do it with a big red alert and give the option to whitelist some containers.
I suppose you could argue it might be nice to be able to do something like `docker container prune startsWith*` or something similar. But on the other hand, that functionality is already available -- just use `docker rm` with xargs or something.
For example I want to delete all old and all untagged versions of an image. I want to delete all stopped containers that use a specific image, or that were created more than two weeks ago. I want to delete all images starting with test.
Nuke everything? Not so much, and to be honest that would be the easiest case anyway, even with xargs and docker rm.
Also, when run in bash (but also supported by other shells), it supports regular expressions and extended pattern matching, letting you, for example, specify not which files you want deleted, but which you want to keep.
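The "delete all images starting with test" case can be done today with a small pipeline. Here's a sketch: the `images` variable below stands in for output in the shape of `docker images --format '{{.Repository}} {{.ID}}'`, with made-up repository names and IDs:

```shell
# Sample data standing in for `docker images --format '{{.Repository}} {{.ID}}'`
images='test-web aaa111
test-db bbb222
prod-api ccc333'

# Select only the IDs of images whose repository starts with "test"
targets=$(printf '%s\n' "$images" | awk '$1 ~ /^test/ {print $2}')
printf '%s\n' "$targets"

# Against a real daemon you'd then feed those IDs to docker rmi:
#   docker images --format '{{.Repository}} {{.ID}}' \
#     | awk '$1 ~ /^test/ {print $2}' | xargs docker rmi
```

Swapping the awk condition gets you the other filters (e.g. match `<none>` in the repository column for untagged images).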
- Exposing the secrets on an (HTTP) server that the Dockerfile can fetch them from
- What we use: Create a one time use secret that is destroyed after the image is built and before it is pushed.
This approach has sparked my interest, could you post an example of any open source docker-compose file and/or associated scripts that would do this?
Would have seemed more intuitive to me.