Vagrant up up and away!
The primary output of my hackweek endeavours was a Docker Swarm cluster in a Vagrant environment. This post will go over how to get it spun up and then how to interact with it.
What is it?
This is a fully functional Docker Swarm cluster contained within a Vagrant environment. The environment consists of 4 nodes:
The Docker nodes (dockerhost01-3) are running the Docker daemon as well as a couple of supporting services. The main processes of interest on the Docker hosts are:
- Docker daemon: Running with a set of tags
- Registrator daemon: This daemon connects to Consul in order to register and de-register containers that have their ports exposed. The entries from this service can be seen under the `/services` path in Consul's key/value store
- Swarm client: The Swarm client is what maintains the list of Swarm nodes in Consul. This list is kept under `/swarm` and contains the `<ip>:<port>` of each Swarm node participating in the cluster
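For reference, the tags mentioned above are plain Docker daemon labels. As a sketch, a Docker host's daemon options might look something like this; the real flags are set by the repo's Ansible roles, and the label names here are assumptions based on the scheduling constraints used later in this post:

```shell
# Hypothetical /etc/default/docker on a Docker host; the label names (zone,
# status) are assumptions matching the constraints used in the demo commands.
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 \
  --label zone=external --label status=master"
```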
The Docker Swarm node (dockerswarm01) is also running a few services. Since this is just an example, a lot of services have been condensed onto a single machine; for production, I would not recommend this exact layout.
- Swarm daemon: Acting as master and listening on the network for Docker commands while proxying them to the Docker hosts
- Consul: A single-node Consul instance is running. Its UI is available at http://dockerswarm01/ui/#/test/
- Nginx: Proxying to Consul for the UI
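The Nginx piece is just a thin proxy in front of Consul's HTTP port. A minimal sketch of such a config, assuming Consul's default HTTP port of 8500 (the demo's actual Ansible-managed config may differ):

```nginx
# Hypothetical minimal proxy config; assumes Consul's HTTP API/UI
# listens locally on its default port 8500.
server {
    listen 80;
    server_name dockerswarm01;

    location / {
        proxy_pass http://127.0.0.1:8500;
        proxy_set_header Host $host;
    }
}
```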
How to provision the cluster
1. Set up prerequisites
- The GitHub Repo: https://github.com/technolo-g/docker-swarm-demo
- Vagrant (latest): https://www.vagrantup.com/downloads.html
- Vagrant hosts plugin: `vagrant plugin install vagrant-hosts`
- VirtualBox: https://www.virtualbox.org/wiki/Downloads
- Ansible: `brew install ansible`
- Host entries: Add the following lines to /etc/hosts:
```
10.100.199.200 dockerswarm01
10.100.199.201 dockerhost01
10.100.199.202 dockerhost02
10.100.199.203 dockerhost03
```
2a. Clone && Vagrant up (No TLS)
This process may take a while and will download a few gigs of data. In this case we are not using any TLS. If you want to use TLS with Swarm, go to 2b.
```shell
# Clone our repo
git clone https://github.com/technolo-g/docker-swarm-demo.git
cd docker-swarm-demo

# Bring up the cluster with Vagrant
vagrant up

# Provision the host files on the vagrant hosts
vagrant provision --provision-with hosts

# Activate your environment
source bin/env
```
2b. Clone && Vagrant up (With TLS)
This will generate certificates and bring up the cluster with TLS enabled.
```shell
# Clone our repo
git clone https://github.com/technolo-g/docker-swarm-demo.git
cd docker-swarm-demo

# Generate certs
./bin/gen_ssl.sh

# Enable TLS for the cluster
echo -e "use_tls: True\ndocker_port: 2376" > ansible/group_vars/all.yml

# Bring up the cluster with Vagrant
vagrant up

# Provision the host files on the vagrant hosts
vagrant provision --provision-with hosts

# Activate your TLS enabled environment
source bin/env_tls
```
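If you are curious what sourcing the TLS environment does, it roughly amounts to exporting the standard Docker client variables below. This is a sketch, not the real file (which lives in the repo); the cert path is a hypothetical placeholder:

```shell
# Rough sketch of what bin/env_tls likely exports; see the repo for the
# real file. The cert path below is a hypothetical placeholder.
export DOCKER_HOST=tcp://dockerswarm01:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$PWD/certs"   # hypothetical location of gen_ssl.sh output
```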
3. Confirm it’s working
Now that the cluster is provisioned and running, you should be able to confirm it. We'll do that a few ways. First, let's take a look with the Docker client:
```shell
$ docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.4
Git commit (client): 5bc2ff8
OS/Arch (client): darwin/amd64
Server version: swarm/0.0.1
Server API version: 1.16
Go version (server): go1.2.1
Git commit (server): n/a

$ docker info
Containers: 0
Nodes: 3
 dockerhost02: 10.100.199.202:2376
 dockerhost01: 10.100.199.201:2376
 dockerhost03: 10.100.199.203:2376
```
Now browse to Consul at http://dockerswarm01/ui/#/test/kv/swarm/ and confirm that the Docker hosts are listed with their proper port like so:
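The same data is also available from Consul's HTTP KV API (e.g. `GET http://dockerswarm01/v1/kv/swarm?recurse`). Values come back base64-encoded, so they need decoding; as a sketch, the sample value below is what dockerhost01's entry should decode to:

```shell
# Consul returns KV values base64-encoded; decode to get the <ip>:<port>
# that the Swarm client registered (sample value matches dockerhost01).
echo "MTAuMTAwLjE5OS4yMDE6MjM3Ng==" | base64 -d   # -> 10.100.199.201:2376
```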
The cluster seems to be alive, so let’s provision a (fake) app to it!
How to use it
You can now interact with the Swarm cluster to provision containers. The images used in this demo were pulled down during the Vagrant provision, so these commands should work in order to spin up 2x external proxy containers and 3x internal webapp containers. Two things to note about the commands:
- The constraints need to match tags that were assigned when Docker was started. This is how Swarm’s filter knows what Docker hosts are available for scheduling.
- The SERVICE_NAME variable is set for Registrator. Since we are using a generic container (nginx) we are instead specifying the service name in this manner.
```shell
# Primary load balancer
docker run -d \
  -e constraint:zone==external \
  -e constraint:status==master \
  -e SERVICE_NAME=proxy \
  -p 80:80 \
  nginx:latest

# Secondary load balancer
docker run -d \
  -e constraint:zone==external \
  -e constraint:status==non-master \
  -e SERVICE_NAME=proxy \
  -p 80:80 \
  nginx:latest

# 3 instances of the webapp
docker run -d \
  -e constraint:zone==internal \
  -e SERVICE_NAME=webapp \
  -p 80 \
  nginx:latest
docker run -d \
  -e constraint:zone==internal \
  -e SERVICE_NAME=webapp \
  -p 80 \
  nginx:latest
docker run -d \
  -e constraint:zone==internal \
  -e SERVICE_NAME=webapp \
  -p 80 \
  nginx:latest
```
Now if you do a `docker ps` or browse to the Consul UI, you can see the two services registered! Since the routing and service discovery part is extra credit, this app will not actually work, but I think you get the idea.
I hope you have enjoyed this series on Docker Swarm. What I have discovered is that Docker Swarm is a very promising application developed by a fast-moving team of great developers. I believe that it will change the way we treat our Docker hosts and will greatly simplify running complex applications.
All of the research behind these blog posts was made possible due to the awesome company I work for: Rally Software in Boulder, CO. We get at least 1 hack week per quarter and it enables us to hack on awesome things like Docker Swarm. If you would like to cut to the chase and directly start playing with a Vagrant example, here is the repo that is the output of my Q1 2014 hack week efforts: