If it hurts, do it more often.

Note: I only use Docker on Linux (Ubuntu), so I cannot speak for running Docker on other operating systems.

I started using Docker recently, because a 64-bit box became readily available, and the overall experience has been pleasant. This post consists of two parts: a quick-start guide that gets you bootstrapped in a few minutes, and a short essay covering some of the concepts Docker is built on.

§Quick start

§Dockerfile: the first script to get started

This file, which defines the environment you want to work in, should be able to recreate that environment from scratch on any PC with Docker installed. A sample Dockerfile that sets up a bare Ubuntu environment:

# Dockerfile
FROM ubuntu

# make sure the package repository is up to date
RUN echo "deb http://archive.ubuntu.com/ubuntu trusty main universe" > /etc/apt/sources.list
RUN apt-get update

# some useful packages
RUN apt-get install -y clang haskell-platform make wget unzip valgrind

Then build an image from this Dockerfile, giving it a tag name so that we can easily refer to it later:

docker build -t <tag> .
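
For instance, assuming the Dockerfile above sits in the current directory and we pick the (arbitrary) tag name dev-env:

docker build -t dev-env .
docker images   # the freshly built image should now be listed under dev-env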

§Get our guest running

docker run --rm -i -t -u $(id -u) -v <src>:<dest> <tag> /bin/bash
  • --rm: Remove the container once the command finishes, so the container is transient. I recommend keeping this option on every time we use the shell: it is fine to try things out interactively, but if a step turns out to be useful it belongs in the Dockerfile rather than being repeated by hand.

  • -i: Keep stdin open so we can interact with the shell running in the container. Without it, stdin is closed and the shell exits immediately.

  • -t: Allocate a pseudo-TTY so that we get a proper terminal and can see the output.

  • -v: Mount a host directory (<src>) into the container (<dest>) so that data on the host is accessible inside the container.

  • -u: Run as the given user ID so that files created in the container are owned by the host user, which is essential for sharing files between host and container. A concrete invocation is sketched right after this list.
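
As a concrete invocation (the tag dev-env and the mount paths are placeholders for illustration), sharing the current directory as /src inside the container and running as the host user:

docker run --rm -i -t -u $(id -u) -v $(pwd):/src dev-env /bin/bash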

§What is it for?

Docker can be used as a lightweight VM managed by scripts (the Dockerfile). It is mostly useful to power users; a full-fledged VM (e.g. VirtualBox) is still preferable for casual users. If you use Linux as your development OS, I am sure you will find it useful.

§image vs container

  • An image is not writable, and is meant to provide a stable starting point. (Think of an immutable variable in a programming language.)

  • A container is a ‘live’ instance of an image, meaning that it is writable. It can work as a complete VM if one is determined to configure it that way. The semantics of the following commands mimic the way we use a PC: modifications made inside the container persist between runs. I personally tend not to rely on this at all; purity rules for me:

      docker start <container>
      docker stop <container>
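
If you do want the persistent workflow (the container name scratchpad is just a placeholder), a named container keeps its filesystem changes between runs:

docker run -i -t --name scratchpad dev-env /bin/bash   # make some changes, then exit
docker start -a -i scratchpad                          # reattach; the changes are still there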
    

§Conclusion: why docker succeeds

Being able to reproduce an environment is not unique to Docker: there is npm for Node and Bundler for Ruby. However, neither npm nor Bundler carries the encapsulation concept all the way, so they still suffer from OS-level limitations, e.g. the OS version being too old. Docker, on the other hand, solves the problem at the root: from the user's perspective, the OS itself is virtualized.

With Docker we can produce whatever environment we have in mind, regardless of the host OS version. In a sense, we reinstall our OS every time we run docker build. Didn’t you complain about the pain of reinstalling? Now you get to do it all the time. LOL.

§Reference

Docker Cheat Sheet