Finding: Docker May Increase Your Memory Requirements

I've been experimenting with Docker lately (like much of the world), and specifically I wanted to try running this site (my blog, and some ancillary applications) with it.

I figured this would be a perfect place to start, as the software involved (WordPress, nginx, php-fpm, MySQL) is all reasonably lightweight and easy to run. Additionally, there are images available on Docker Hub for all of these.

I run my site on a simple DigitalOcean droplet, and have historically used the 512MB size, which has been great.

So, for this round, I decided to deploy a 1GB CoreOS droplet (which is designed for running nothing but Docker images), as it has pretty low resource utilization (among many other benefits).

What I discovered was interesting. I deployed the WordPress Docker image, which went fine. I then deployed the MySQL image, which also went fine. Lastly, I deployed an nginx reverse proxy and tried to start everything up. What I found was that not everything could start: I could run WordPress and nginx, or WordPress and MySQL, or nginx and MySQL, but not all three at once...I ran out of memory.
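As a rough sketch, the setup looked something like the commands below. The image names, link aliases, and the MySQL password here are illustrative, not a transcript of exactly what I ran:

    # MySQL container (the official image takes its root password from an env var)
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=changeme mysql
    # WordPress container, linked to the database
    docker run -d --name blog --link db:mysql ctlc/wordpress
    # nginx reverse proxy in front of WordPress, published on port 80
    docker run -d --name proxy --link blog:wordpress -p 80:80 nginx

Whichever of the three happened to start last was the one that failed to come up.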

I felt like this was odd, because this was the exact same software I had been running on a 512MB droplet before under CentOS. I did a bit more digging, and found that this problem stems from the very nature of Docker Hub and how it works.

Each Docker image uses layered filesystems to progressively build an application up from a base OS (which could be CentOS, Ubuntu, etc.), adding the application, its requirements, and then its configuration.
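To make that concrete, here is a purely illustrative build (not the actual ctlc/wordpress Dockerfile): each instruction in the Dockerfile adds another filesystem layer on top of the base image.

    # Write out a minimal, illustrative Dockerfile
    cat > Dockerfile <<'EOF'
    # base OS layer
    FROM ubuntu:trusty
    # application + dependency layer
    RUN apt-get update && apt-get install -y apache2 php5
    # configuration layer (a stand-in for real configuration)
    RUN echo 'example config' > /etc/example.conf
    EOF
    # Build it; each instruction above becomes a layer in the image
    docker build -t example/wordpress .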

Now, this seems great, until you realize that there is no standardization for these lower layers.

As an example, the ctlc/wordpress Dockerfile is based on ctlc/apache-php, which itself is based on ubuntu:trusty, while the mysql and nginx images are based on debian:wheezy. ctlc/wordpress also installs perl and runs an apt-get update; the mysql image runs an apt-get update as well, but nginx does not.

All of this is hidden from casual inspection (you have to go check each Dockerfile). It means that, effectively, each of these Docker images/containers is running a different set of files and, more importantly, libraries...and so we end up with three copies of effectively-the-same-but-slightly-different libraries residing in memory.
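You don't have to read every Dockerfile to spot this, though: docker history lists the layers an image was built from, so comparing the chains side by side makes the divergence obvious.

    # Compare the layer stacks of the three images
    docker history ctlc/wordpress
    docker history mysql
    docker history nginx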

In theory, Docker can handle some of this with the equivalent of VMware's page sharing, but only if:

  1. You use the same base images (debian:wheezy, or ubuntu:trusty) and
  2. You use aufs (which CoreOS doesn't do by default - it uses btrfs; see the check below)
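
You can check which storage driver your own host is using with docker info; on stock CoreOS this reports btrfs rather than aufs.

    # Show the storage driver the Docker daemon is using
    docker info | grep -i 'storage driver'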

So, if you use CoreOS as distributed today and use Docker Hub images (as most examples on the web do), you won't get that benefit, and you're very likely to run into the same scenario I did, where running with Docker requires more memory than running without it in a single-host environment.

An interesting caveat...and it was enough for me to decide that I valued saving the memory (and thus money) more than the deployment benefits of Docker. Certainly, I could have deployed an aufs-based base operating system rather than CoreOS, and I could have rolled my own Docker images...and I would probably do that in a corporate environment. But here, my whole focus was ease of use.

Now, I'm not an expert on Docker - I could have missed something here.  If I did, please let me know.