Every once in a while I'll learn something new, apply it to something real, and make a difference. I'm no DevOps guy, but I've always wanted to learn the tricks that streamline the development process, in a way that is painless and seen through the eyes of a developer. Four months ago, I learned Docker out of curiosity. I was a skeptic at first, but a simple change in the way I think about containers convinced me to move over. In this article, I'll show you what changed and how I use Docker.
To compare containers against VMs, let's start with how each works. A VM emulates an entire machine from the hardware upwards. Each time you spawn an instance, it's like building a full physical machine in software. Containers instead take the "greatest common denominator" of those spawned VMs, the hardware and the kernel, and delegate them to the host. What remains is what's left for the container to contain.
                           Hypervisor
                               vv
       .----- Host ---.----..----------.--- VM ----.-----------.------------.
VM #1: |              |    || Hardware | Kernel    | Bins/Libs | Your stuff |
VM #2: |   Hardware   | OS || Hardware | Kernel    | Bins/Libs | Your stuff |
VM #3: |              |    || Hardware | Kernel    | Bins/Libs | Your stuff |
       '--------------'----''<------- this ------->'-----------'------------'
                                    | is now
                                    v
              .<---- within this ----->..-----------.------------.
Container #1: |                        || Bins/Libs | Your stuff |
Container #2: |    Hardware   |   OS   || Bins/Libs | Your stuff |
Container #3: |                        || Bins/Libs | Your stuff |
              '-------- Host ---------''------ Container -------'
                          ^^ Docker
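You can verify the kernel-sharing half of the diagram yourself. A minimal sketch, assuming Docker is installed and can pull the official `ubuntu` image:

```shell
# Spawning a "machine" takes one command and a second or two,
# because only the bins/libs layer is new; the hardware and
# kernel come from the host.
docker run --rm ubuntu uname -r
# This prints the HOST's kernel version: the container shares the
# host kernel, whereas a VM would boot a kernel of its own.
```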
Normally, the local instance of your system runs on your local machine and the remote instance runs, well... on a remote machine. This is no different in a VM setup: you SSH into the VM and work inside it like a local machine while the remote stays remote. It's this way of thinking that makes it hard to move over to Docker. With this mindset, plus the assumption that containers are like VMs, you'll be tempted to treat a container like a VM and start working inside it.
Instead, think of Docker containers as remote machines. Say you have a Drupal website in production. You'll normally be running Linux, so let's say it's Ubuntu. Apache and MySQL are rarely on the same machine, which makes it two Ubuntus. To manage your website via Drush, you need to install it both locally and on the remote machine serving the site. Drush remote management works by forwarding Drush commands to the remote Drush over SSH, which means we also need an SSH server running on that machine. The following ASCII art summarizes the setup so far:
 ___________________________
|         The Cloud         |
|                           |
|   Ubuntu         Ubuntu   |
|  _________      _______   |
| | Apache  | <->| MySQL |  |
| | Drupal  |    '-------'  |
| | Drush   |               |
| | OpenSSH |               |
| '---------'               |
|      ^                    |
'------|--------------------'
       | (ssh)
 ______|________
| Local|Machine |
|   ___|___     |
|  | Drush |    |
|  '-------'    |
'---------------'
Now let's say that instead of the traditional multi-machine setup, you used Docker containers. Nothing should change, except that the remote may now be just one machine. You'd still access it the same way over SSH, but now you'd be SSH'ing into the container.
 ___________________________
|    Ubuntu in the Cloud    |
|                           |
|  Container     Container  |
|  _________      _______   |
| | Apache  | <->| MySQL |  |
| | Drupal  |    '-------'  |
| | Drush   |               |
| | OpenSSH |               |
| '---------'               |
|      ^                    |
'------|--------------------'
       | (ssh)
 ______|________
| Local|Machine |
|   ___|___     |
|  | Drush |    |
|  '-------'    |
'---------------'
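A sketch of how that single cloud machine might be brought up. The image names, user, host, and ports here are assumptions for illustration, not a prescribed setup:

```shell
# One cloud host, two containers. "mysite/web" is a hypothetical
# image bundling Apache, Drupal, Drush, and an OpenSSH server.
docker run -d --name db mysql
docker run -d --name web --link db:mysql \
    -p 80:80 -p 2222:22 mysite/web

# From your local machine, Drush still talks to a "remote":
# SSH lands inside the web container via the mapped port.
ssh -p 2222 root@mysite.example.com
```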
Now, since you can run Docker containers locally, the diagram for a local setup shouldn't look any different from the production one. The only difference is that instead of SSH'ing to a remote IP, Drush will now SSH to 127.0.0.1, the IP of your machine from its own point of view.
 ___________________________
|       Local Machine       |
|                           |
|  Container     Container  |
|  _________      _______   |
| | Apache  | <->| MySQL |  |
| | Drupal  |    '-------'  |
| | Drush   |               |
| | OpenSSH |               |
| '---------'    _______    |
|      ^        | Drush |   |
|      |        '-------'   |
'------|------------|-------'
       '------------'
            (ssh)
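In Drush terms, this means your local site alias treats the container as a remote host at 127.0.0.1. A hypothetical Drush 6/7-style alias; the path, port, and names are made up for illustration:

```shell
# Write a site alias that SSHes to 127.0.0.1:2222, assuming the
# container's sshd is published on host port 2222.
cat > ~/.drush/local.aliases.drushrc.php <<'PHP'
<?php
$aliases['site'] = array(
  'root'        => '/var/www/html',
  'remote-host' => '127.0.0.1',
  'remote-user' => 'root',
  'ssh-options' => '-p 2222',
);
PHP

drush @local.site status   # forwarded over SSH into the container
```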
For some people, development with an AMP stack means using XAMPP. But your production environment is never XAMPP. Production environments are usually DIY servers whose overall setup differs from your local XAMPP environment. If you're doing this, you're effectively writing code against one environment while deploying to a totally different one. This can spell disaster during deploys, as well as cause maintenance issues in the long run.
In Docker, on the other hand, "What You Build Is What You Deploy". Unlike the traditional flow where you progress from development to production, a Docker setup works the other way around. You build the production image first, making it the standard setup. Then you model your development workflow around this standard image, adding auxiliary tools and procedures. In essence, you're developing code against the production setup from day one.
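One way to realize that "production first" order, sketched with hypothetical image tags and filenames; the point is only the direction of the dependency (dev builds on prod, never the reverse):

```shell
# 1. Build the production image first -- this is the standard.
docker build -t mysite:prod .

# 2. Derive a development image from it (Dockerfile.dev would start
#    with "FROM mysite:prod" and add debuggers, extra tools, etc.).
docker build -t mysite:dev -f Dockerfile.dev .

# 3. Develop against it, mounting your working copy into the
#    same path production serves from.
docker run -d -p 8080:80 -v "$PWD":/var/www/html mysite:dev
```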
In an AMP production environment, Apache will most likely live on a separate machine from MySQL; they'll only see each other over the network. In a development environment, this usually isn't the case: you'll most likely have Apache and MySQL on the same machine, along with some other tools. With that alone, you've already failed to mimic your production environment. Soon you'll encounter issues that exist in one environment but never the other, and you'll see WTFs and "Works On My Machine™" flying around the team chat.
In Docker, containers are like separate machines connected over a network. Docker's linking functionality is in fact a form of network link: if you inspect linked containers, their hosts files contain entries that point to the other containers. Even if you don't use Docker in production, you can still take advantage of this feature to emulate the production environment, letting you reproduce production problems to some extent.
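You can see the network-link nature of linking directly. A sketch using the `--link` flag; the container names and the exact IP shown are illustrative:

```shell
# Start a database container, then link a second container to it
# and print that container's hosts file.
docker run -d --name db mysql
docker run --rm --link db:mysql ubuntu cat /etc/hosts
# Among the output is a line along the lines of:
#   172.17.0.2   mysql
# i.e. Docker wrote a hosts entry so the linked container can reach
# the "db" container by the hostname "mysql" -- a plain network link.
```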
I've been playing with Docker for around four months now, and my setups keep improving to meet the needs of my tasks. I can easily spawn and destroy multi-service setups, pull in images for the services I want running, and even let our sysadmin craft images so that I never have to worry about a single configuration issue again. Best of all, I can move projects around without ever hearing it again: "It Works On My Machine™".