
Every Build in Its Own Docker Container

Docker is a command-line tool that runs a shell command inside a virtual Linux environment, in an isolated file system. Every time we build our projects, we want them to run in their own Docker containers. Take this Maven project, for example:
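A minimal sketch of such a command (the mount path and container paths are my assumptions, and the image is assumed to already have Maven installed, since plain ubuntu doesn't ship with it):

```shell
# Start a fresh Ubuntu container, mount the current project into it,
# and run the Maven build inside it (the image must have Maven installed)
docker run --rm -v "$(pwd)":/main -w /main ubuntu mvn clean test
```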

This command will start a new Ubuntu system and execute mvn clean test inside it. Rultor, our virtual assistant, does exactly that with our builds when we deploy, package, test, and merge them.

Why Docker?

What benefits does it give us? And why Docker, when there are many other virtualization technologies, such as LXC?

Well, there are a few very important benefits:

- a public repository of ready-to-use images;
- versioning of every change made to an image;
- an application-centric design.

Let's discuss them in detail.

Image Repository

Docker enables image sharing through its public repository, the Docker Hub. This means that after I prepare a working environment for my application, I can make an image out of it and push it to the hub.

Let's say I want my Maven build to be executed in a container with a pre-installed graphviz package (in order to enable the dot command-line tool). First, I would start a plain vanilla Ubuntu container and install graphviz inside it:
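A sketch of those two steps (the choice of apt-get commands is my assumption):

```shell
# On the host: start an interactive Ubuntu container
docker run -i -t ubuntu bash

# Inside the container: install graphviz, then leave
apt-get update
apt-get install -y graphviz
exit
```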

Now I have a container that stopped a few seconds ago. The container's ID is 215d2696e8ad. To make it reusable for all further builds in Rultor, I have to create an image from it:
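Something along these lines, using docker commit with the container's ID and the name of the new image:

```shell
# Create a reusable image from the stopped container
docker commit 215d2696e8ad yegor256/beta
```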

I just committed my container to a new image, yegor256/beta. This image can be reused right now. I can create a new container from this image, and it will have graphviz installed inside!

Now it's time to share my image at the Docker Hub, in order to make it available for Rultor:
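Pushing is a single command (it assumes you have already authenticated with docker login):

```shell
# Publish the image to the Docker Hub
docker push yegor256/beta
```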

The last step is to configure Rultor to use this image in all builds. To do this, I will edit .rultor.yml in the root directory of my GitHub repository:
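The snippet would look something like this (assuming the docker/image keys of the Rultor configuration format):

```yaml
docker:
  image: yegor256/beta
```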

That's it. From now on, Rultor will use my custom Docker image, with pre-installed graphviz, in every build (merge, release, deploy, etc.).

Moreover, if and when I want to add something else to the image, it's easy to do. Say I want to install Ruby into my build image. I start a container from the image and install Ruby inside it (pay attention: this time I'm starting the container not from the ubuntu image, as I did before, but from yegor256/beta):
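A sketch of the same procedure as before, only with a different starting image (again, the exact apt-get commands are my assumption):

```shell
# On the host: start a container from the custom image, not from ubuntu
docker run -i -t yegor256/beta bash

# Inside the container: install Ruby, then leave
apt-get update
apt-get install -y ruby
exit
```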

You can now see that I have two containers. The first one is the one I'm using right now; it contains Ruby. The second one is the one I was using before, and it contains graphviz.
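That list of containers would come from something like:

```shell
# Show all containers, including stopped ones
docker ps --all
```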

Now I have to commit again and push:
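The same two commands as before; the container ID is a placeholder for whatever docker ps reports for the new container:

```shell
# Commit the new container (replace the placeholder with the real ID)
docker commit <container-id> yegor256/beta

# Push the updated image to the Docker Hub
docker push yegor256/beta
```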

Thus, the Docker Hub is a very convenient feature for Rultor and similar systems.


Versioning of Images

As you saw in the example above, every change to a Docker image has its own version (hash), and it's possible to track changes. It is also possible to roll back to any particular change.

Rultor doesn't use this functionality itself, but Rultor users can control their build configurations with much better precision.


Application-Centric Containers

Docker, unlike LXC or Vagrant, for example, is application-centric. This means that when we start a container, we start an application. With other virtualization technologies, when you get a virtual machine, you get a fully functional Unix environment, where you can log in through SSH and do whatever you want.

Docker makes things simpler. It doesn't give you SSH access to the container; it just runs an application inside it and shows you the output. This is exactly what we need in Rultor. We need to run an automated build (for example, Maven or Bundler), see its output, and get its exit code. If the code is not zero, we fail the build and report to the user.

This is how we run Maven build:
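A sketch of that invocation (the host directory and container paths are my assumptions):

```shell
# Run the build in a throw-away container built from the custom image;
# --rm destroys the container as soon as Maven finishes
docker run --rm -v "$(pwd)":/main -w /main yegor256/beta mvn clean test
```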

As you can see, Maven starts immediately. We don't worry about the internals of the container. We just start an application inside it.

Furthermore, thanks to the --rm option, the container gets destroyed immediately after Maven execution is finished.

This is what application-centric is about.

Our overall impression of Docker is highly positive.

P.S. A compact version of this article was published at