
Docker

Complete Guide

Easy Start:

from Zero to Professional User

By Ryan Lister

Copyright 2017 by Ryan Lister

All Rights Reserved


Copyright 2017 by Ryan Lister

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any
form or by any means, including photocopying, recording, or other electronic or mechanical methods,
without the prior written permission of the author, except in the case of brief quotations embodied in
critical reviews and certain other noncommercial uses permitted by copyright law.
Table of Contents
Introduction
Chapter 1- Getting Started with Docker
Chapter 2- Building a Docker Image
Chapter 3- The Docker Hub
Chapter 4- Creating a Docker Environment in Azure
Chapter 5- Docker Machine and Azure Driver
Chapter 6- Changing the Default Subnet
Chapter 7- How to Run Services with Docker Swarm
Chapter 8- Building Docker Containers for WordPress
Chapter 9- Environment Variables with Docker and Elixir
Conclusion
Disclaimer

While all attempts have been made to verify the information provided in this book, the author does not assume any responsibility for errors, omissions, or contrary interpretations of the subject matter contained within. The information provided in this book is for educational and entertainment purposes only. The reader is responsible for his or her own actions, and the author does not accept any responsibility for any liabilities or damages, real or perceived, resulting from the use of this information.

The trademarks that are used are used without consent, and the publication of the trademarks is without permission or backing by the trademark owners. All trademarks and brands within this book are used for clarifying purposes only and are owned by their respective owners, who are not affiliated with this document.
Introduction

Docker is a very useful tool in computing. The development of an application involves the use of libraries, packages and other tools. These must be available when the application is run; if they are not provided, the application will fail. This calls for a way of packaging all of these components so that they can be used together with the application on any platform. Docker is a tool that will help you to achieve this. This book guides you on how to use Docker. Enjoy reading!
Chapter 1- Getting Started with Docker

What is Docker?

Docker is the world's leading software platform for containerization. It makes it easy for you to create, run and deploy applications using containers. A container makes it possible for a developer to package his or her application with all the parts that are needed, including libraries as well as other dependencies, and have all of these shipped as one package. With this, the developer will be sure that the application will run successfully on any other machine running a similar operating system, regardless of the settings it might have.

Docker for Mac

To begin using Docker for Mac, you must first install it. Installers can be downloaded from the stable or beta channel. The installation can then be done by following the steps given below:

1. Double click on Docker.dmg to launch the installer and drag Moby the whale to your Applications folder. You will then be prompted to authorize the installation of Docker.app on your computer by typing your password.

2. Double click on Docker.app to launch Docker. The whale found in the top status bar is an indication that Docker is running and that it can be accessed via the terminal.

If the installation was done successfully, a success message will be presented to you, giving you links to the documentation as well as the suggested next steps. You only have to click on the whale located in the status bar and this message will be dismissed.

3. Click on the whale to obtain the preferences as well as other options.

4. Choose About Docker to find out whether you are running the latest version or not.

To check whether the docker, docker-machine and docker-compose versions you have are the latest, run the following commands:

$ docker --version
Docker version 1.12.0, build 8eab29e

$ docker-compose --version

docker-compose version 1.8.0, build f3628c7

$ docker-machine --version
docker-machine version 0.8.0, build b85aac1

Docker for Windows

Docker installers for the Windows operating system can be downloaded from stable or beta channel.
Once the download is complete, install it by following the steps given below:

1. Double click on the InstallDocker.msi to launch the installer.

2. Follow the instructions on the screen and accept the license. Then authorize the installation by typing your password and proceed with the installation.

3. From the dialog box for set up complete, click on Finish to finish the installation.

Once the installation of Docker has been completed, Docker will automatically be launched. You will see a whale on the status bar, which is an indication that Docker is running and that you can access it from the terminal. If the steps are completed successfully, you will get a message congratulating you and a link leading to the documentation.

You can then open any shell that you like and use it to check the versions of docker, docker-machine and docker-compose that you are running. The following commands will help you do this:

PS C:\Users\nicohsam> docker --version


Docker version 1.12.0, build 8eab29e, experimental

PS C:\Users\nicohsam> docker-compose --version


docker-compose version 1.8.0, build d988a55

PS C:\Users\nicohsam> docker-machine --version


docker-machine version 0.8.0, build b85aac1

Running the docker version command will give you the result given below:

Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3

Git commit: 8eab29e


Built: Thu Jan 26 21:04:48 2017
OS/Arch: windows/amd64
Experimental: true

Server:
Version: 1.12.0
API version: 1.24

Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jan 26 21:04:48 2017
OS/Arch: linux/amd64
Experimental: true

Note that all the details are shown above. The version of Docker that you have installed on your system is given. You can also see the type of operating system on which you are running Docker; in our case, the client is running on a 64-bit Windows operating system, as shown above. Those are the details about the client. The details of the server are also shown: its version, as well as the API version and the type of operating system on which it is running, are part of the output.

The docker info command gives the following result once executed:

Containers: 0
Running: 0

Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Dirperm1 Supported: true

Logging Driver: json-file


Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp

Kernel Version: 4.4.16-moby

Operating System: Alpine Linux v3.4


OSType: linux

Architecture: x86_64
CPUs: 2
Total Memory: 1.95 GiB
Name: moby
ID: BG6O:2VMH:OLNV:DDLF:SCSV:URRH:BW6M:INBW:OLAC:J7PX:XZVL:ADNB

Docker Root Dir: /var/lib/docker


Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8

The details regarding the containers are presented as shown in the above output. As you can see, there are no running, stopped or paused containers. The root directory for the aufs storage driver is shown, and this is /var/lib/docker/aufs. The backing filesystem is extfs (the extended filesystem); this is the filesystem on which Docker stores its data.

The version of the kernel is shown, together with the type of operating system on which you are running Docker. Note that the values given above reflect the details of my Docker installation and my computer. This means that yours will be specific to the details of your computer and the version of Docker you are using.

To test whether Docker is capable of pulling an image from the Docker Hub, you just have to run the following command:

PS C:\Users\nicohsam> docker run hello-world

The command then gives the following:

Hello from Docker.

The message is just an indication that Docker is running correctly, so the installation was successful.
Docker for Linux

Docker is supported on a number of Linux distributions. The installation instructions vary from one Linux distribution to another. In this case, we will show you how to install Docker on CentOS.

To be able to install Docker, you should have a 64-bit OS and a kernel version that is not less than 3.10. If you need to know the version of the kernel that you are running, use the uname -r command as shown below:

$ uname -r
3.10.0-229.el7.x86_64

It's a good idea to update your system. As shown above, my kernel is okay for the installation of Docker, as it is not less than 3.10. Updating the system is an effective way of avoiding some bugs that might occur in the future.

The Docker Engine can then be installed using either of two methods. You can install Docker using the yum package manager or using curl. With the second method, an installation script is executed, and this also installs Docker by use of the yum package manager.

Installation with yum

The following steps will help you install Docker via the yum package manager:

1. Log in to your computer and ensure that you have root or sudo privileges.

2. Update your packages using the following command:

$ sudo yum update

3. Add the yum repository as follows:

$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'


[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

As shown in the baseurl line, the installation of Docker is being done on CentOS 7.

4. Use the following command to install the Docker package:

$ sudo yum install docker-engine

The command will install the Docker Engine on your computer.
5. Enable the service by executing the following command:

$ sudo systemctl enable docker.service

The command enables the Docker service so that it starts at boot.

6. In the next step, you should start the Docker daemon. The following command will help you do this:

$ sudo systemctl start docker

It's now time for us to verify whether the installation of Docker on our system was successful or not. To do this, we must run a test image in a container. The following command can help us do this:

$ sudo docker run --rm hello-world

The command runs the hello-world image in a Docker container and will help us find out whether the installation was successful or not. The command gives me the following result:

Unable to find image 'hello-world:latest' locally


latest: Pulling from library/hello-world
b04b13da8e14: Pull complete
Digest: sha256:1256e8a36e2070f7cf2d0b0783dbacdd67723412411de4cdcf9761a1feb60fe4

Status: Downloaded newer image for hello-world:latest


Hello from Docker!

This shows that Docker was installed successfully and that it is running as expected. What happened is that Docker pulled the image from the Docker repository. The image was first searched for locally, but it was not found, because you had not downloaded this image before. If you had downloaded it, it would have been found here. The container's output was then printed on the terminal.
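
If you run the command again, the image will be found locally and nothing will be downloaded. A quick way to confirm that the image is now cached on your machine is the docker images command:

$ sudo docker images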

If you need to add an HTTP proxy, use a different partition or directory for your Docker runtime files, or make some other customizations, you can configure the Docker daemon before starting it. On systemd-based systems such as CentOS 7, this is typically done through systemd drop-in files.
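As an illustration only, and not part of the original installation steps, here is one common way to point the Docker daemon at an HTTP proxy; proxy.example.com:80 is a placeholder for your own proxy address:

$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<-'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

The daemon-reload step makes systemd reread the drop-in file, and the restart applies the setting to the running daemon.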
Installation with Script

This type of installation can be done as shown below:

1. Log in to your computer with root or sudo privileges.

2. Update your packages to the latest version using the following command:

$ sudo yum update

3. Run the script to install Docker. This can be done as shown below:

$ curl -fsSL https://get.docker.com/ | sh

The script works by adding the docker.repo repository and then installing Docker.

4. Use the command given below to enable the service:

$ sudo systemctl enable docker.service

5. Use the following command to launch the Docker daemon:

$ sudo systemctl start docker


6. Execute a test image to verify your Docker installation. This can be done as follows:

$ sudo docker run hello-world

You may need to add an HTTP proxy, use a different partition or directory for your Docker runtime files, or make some other customizations. You can follow the same approach described at the end of the yum installation section above.
Chapter 2- Building a Docker Image

We want to improve the whalesay image by creating a new version of it that will only require a few words to run.

Writing the Dockerfile

You should write a short Dockerfile in a text editor. The Dockerfile describes the software that will be baked into the image. It instructs Docker on the commands to run and the environment to be used. Follow the steps given below:

1. Open the terminal or the command window of your operating system.

2. Create a new directory using the mkdir (make directory) command. Give the directory the
name mydockerdirectory. The command should be as follows:

$ mkdir mydockerdirectory

The directory will then be used as the context for the build. This means that everything you need for building your image will be contained here.

3. Change your directory to the new directory. This can be done using the cd (change directory)
command as shown below:
$ cd mydockerdirectory

At this point, our directory has nothing in it, as we have not added anything to it yet.

4. Create your Dockerfile inside this directory. You just have to use the touch command,
followed by the name of the file and then hit the RETURN key:

$ touch Dockerfile

You may think that the command did nothing in your directory, but in fact it created a file inside it. You can use the ls (list) command to check the contents of the directory; the file will be listed:

$ ls Dockerfile
Dockerfile

5. Open your Dockerfile in a visual text editor of choice, such as Sublime or Atom, or in a terminal text editor such as nano or vi. Add the following line to the file:

FROM docker/whalesay:latest

The FROM keyword tells Docker the kind of image from which we will build our own image. In our case, our image will be built from the whalesay image, and that is what has been specified in the line given above.

6. The fortunes program can now be added to our image. This can be done as follows:
RUN apt-get -y update && apt-get install -y fortunes

The fortunes program prints wise sayings; this command adds it to our image. Note that in the above command, we began by updating the package lists and then installed the program. When the image is built, the software will be installed into it.

7. Since we have the software already in the image, we can instruct it to execute once the image has been loaded. This can be done using the following line:

CMD /usr/games/fortune -a | cowsay

This line instructs the fortune program to pass a quote to the cowsay program.
Your file should now have the following inside it:

FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes

CMD /usr/games/fortune -a | cowsay

8. Save and then close your Dockerfile. You now have all the ingredients, as well as the software behavior, added to your file. It is now time to begin building your Docker image.

Building the Image

1. Navigate to the location in which you have put your Dockerfile. Ensure that the Dockerfile is accessible in that directory by using the cat command as shown below:

$ cat Dockerfile
FROM docker/whalesay:latest

RUN apt-get -y update && apt-get install -y fortunes

CMD /usr/games/fortune -a | cowsay

The cat command displays the contents of a particular file on the terminal. When we used it on the Dockerfile, the contents of the Dockerfile were displayed on the terminal.

2. We can now begin to build the image. This can be done by typing the command docker build -t docker-whale . on the terminal, without forgetting the trailing period (.). This is shown below:

$ docker build -t docker-whale .


Sending build context to Docker daemon 2.048 kB

...snip...
Removing intermediate container a8e6faa88df3
Successfully built 6d9485d03435

Note that the above command will take some time to run, so be patient until it prints the results on the terminal. The Docker image will now have been built. It's time for you to learn how the build process works before you begin to use the image.
The Image Build Process

The command docker build -t docker-whale . works by taking the Dockerfile from your current directory and building an image named docker-whale on the local machine. The command will take about a minute to run, and its output will look long and somewhat complex. Let's explore the meaning of these messages:

Docker begins by checking whether or not everything needed is available:

Sending build context to Docker daemon 2.048 kB

After that, it will load the whalesay image. This image is available locally, as we stated earlier, so Docker will not have to download it.

FROM docker/whalesay:latest
---> fb434121fc77

Next, Docker updates the package manager using the apt-get command. A lot of lines will be listed here, but you don't have to be concerned about them.

RUN apt-get -y update && apt-get install -y fortunes


---> Running in 27d224dfa5b2
Ign http://archive.ubuntu.com trusty InRelease
Ign http://archive.ubuntu.com trusty-updates InRelease
Ign http://archive.ubuntu.com trusty-security InRelease

Hit http://archive.ubuntu.com trusty Release.gpg

....snip...
Get:15 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [14.8 kB]
Get:16 http://archive.ubuntu.com trusty-security/universe amd64 Packages [134 kB]

Reading package lists...


---> eb06e47a01d2

Docker then installs the fortunes software. The process for this is shown below:

Reading package lists...


Building dependency tree...
Reading state information...
The following extra packages will be installed:
fortune-mod fortunes-min librecode0
Suggested packages:
x11-utils bsdmainutils

The following NEW packages will be installed:


fortune-mod fortunes fortunes-min librecode0
0 upgraded, 4 newly installed, 0 to remove and 3 not upgraded.
Need to get 1961 kB of archives.
After this operation, 4817 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/main librecode0 amd64 3.6-21 [771 kB]

...snip......
Setting up fortunes (1:1.99.1-7) ...
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
---> c81071adeeb5

Removing intermediate container 23aa52c1897c

The last step involves Docker completing the build process and then reporting the result. This is shown below:

CMD /usr/games/fortune -a | cowsay


---> Running in a8e6faa88df3

---> 6d9485d03435
Removing intermediate container a8e6faa88df3
Successfully built 6d9485d03435

As shown above, the Docker image was built successfully and its identifier is shown. If the process had not been successful, you would see the corresponding error instead.
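
If you are curious about the layers that each Dockerfile instruction produced, you can inspect the finished image with the docker history command (the IDs on your machine will differ from mine):

$ docker history docker-whale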
Running the New Docker Whale

To run the newly created image, you should first confirm that it is available on your computer and then run it. The following steps will help you accomplish this:

1. Open the command line terminal.

2. Type the command docker images and then hit the Return key. In my case, the command
gives the following result:

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

docker-whale latest 6d9485d03435 4 minutes ago 273.7 MB

docker/whalesay latest fb434121fc77 3 hours ago 247 MB

hello-world latest 81c63931e724 7 weeks ago 910 B

Note that the command returns the list of images that are available on the local machine. I only have three images on my local system. The image IDs, as well as the image sizes, are also shown.

3. You can then execute the new image. You only have to type the docker run docker-whale
command and then hit the Return key. This is shown below:

$ docker run docker-whale


Hit the Return key after typing the above command, and you will see that the whale has something new to say: the fortune program passes a different saying to cowsay each time the container runs. Since the image is available locally, you will notice that Docker doesn't download anything to the local computer.
Chapter 3- The Docker Hub

The Docker Hub is a cloud-based registry service. It enables you to establish links to code repositories, build and test your own images, store manually pushed images, and link to Docker Cloud in order to deploy images to your hosts. Docker Hub provides a central resource that can be used for image discovery, distribution and change management, workflow automation, and user and team management.

The following are the major features that are provided by Docker Hub:

1. Image Repositories- this is a space in which you can push your images, pull images from, and manage the images in the accounts for which you have access permissions.

2. Automated Builds- these automatically create new images whenever changes are made to source code contained in the repository.

3. Webhooks- these allow you to invoke actions once an image has been pushed successfully to the repository. They are a feature of Automated Builds.

4. Organizations- these help you create work groups for managing access to the image repositories.

The Docker ID

To explore the Docker Hub, follow the directions provided on the Docker ID page to create an account. You should be aware that it is possible to search for images and pull them without having to log in. However, if you need to push an image, you must be logged in.
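
As a hedged sketch of what pushing looks like once you are logged in, where <your Docker ID> is a placeholder for your own account name and docker-whale is the image built in Chapter 2:

$ docker login
$ docker tag docker-whale <your Docker ID>/docker-whale
$ docker push <your Docker ID>/docker-whale

The docker tag step renames the image into your namespace, so the Hub knows which account's repository it belongs to.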

Your Docker ID gives you one private Docker Hub repository for free. If you need additional private repositories, you have to upgrade from the free account to a paid plan.
The Repositories

There are two ways you can find images and public repositories on Docker Hub. First, you can search from the Docker Hub website. Secondly, you can run the docker search command from the Docker command line tool. A good example is when you are looking for the Ubuntu image. In this case, you can open your Docker command line tool and then type in the following command:

$ docker search ubuntu

When you use either of these two methods, the public repositories that are available on the Docker Hub and match your search will be displayed. However, you should be aware that private repositories will not be shown in this type of search. If you need to view all the repositories that you are permitted to access, together with their status, open the Dashboard page on the Docker Hub.

The Docker Hub has multiple Official Repositories. These are public, certified repositories from the contributors and vendors of Docker. They contain images that you can use to build your own applications.
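
For example, pulling an image from an Official Repository requires nothing more than the repository name; ubuntu below is one of the official images:

$ docker pull ubuntu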
Chapter 4- Creating a Docker Environment in Azure

Azure provides us with numerous ways to deploy Docker, and the right choice is determined by what we need. In this chapter, we will show you how to use the Docker VM extension and the Azure Resource Manager templates.

The Azure Docker VM extension installs and then configures the Docker client, the Docker daemon and Docker Compose on a Linux virtual machine (VM). By using the Docker VM extension, you have more features and control compared to using Docker Machine or creating a Docker host yourself. Additional features like Docker Compose make the Azure Docker VM extension more suitable for robust, production environments.

The Azure Resource Manager templates are used for defining the whole structure of the environment. The templates allow you to create and then configure common resources like Role-Based Access Control (RBAC), storage, Docker host VMs and diagnostics. You can choose to reuse these templates to create additional deployments consistently.
Deployment of the Template

We'll make use of an existing template to create an Ubuntu VM that uses the Azure Docker VM extension for the installation and configuration of the Docker host.

You must use the latest Azure CLI and then log in using Resource Manager mode. This is shown below:

azure config mode arm

The template should be deployed using the Azure CLI, with the template URI specified. In the example given below, we will create a group by the name myResourceGroup, located in West US. You can choose the name that you need for the resource group and then use the location that you need. This is shown below:

azure group create --name myResourceGroup --location "West US" \


--template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-
templates/master/docker-simple-on-ubuntu/azuredeploy.json

You will be prompted to answer some questions and provide a username and password, as well as a DNS name. You will get an output that resembles the one given below:

info: Executing command group create


+ Getting resource group myResourceGroup
+ Updating resource group myResourceGroup
info: Updated resource group myResourceGroup
info: Supply values for the following parameters

newStorageAccountName: storageAccount

adminUsername: nicohsam
adminPassword: password*

dnsNameForPublicIP: publicip
+ Initializing template configurations and parameters
+ Creating a deployment
info: Created template deployment "azuredeploy"
data: Id: /subscriptions/guid/resourceGroups/myResourceGroup

data: Name: myResourceGroup


data: Location: westus
data: Provisioning State: Succeeded
data: Tags: null
data:
info: group create command OK

The prompt will be returned to you after a few seconds, but the process of creating the Docker host will still be running in the background under the Azure Docker VM extension. The deployment will take only a few minutes to complete. If you need to see the status of the Docker host, use the azure vm show command.

In the following example, we are checking the status of a VM with the name aDockerVM, which is located in the resource group named myResourceGroup. You can change these names to reflect the actual names in your system. The command is as follows:
azure vm show -g myResourceGroup -n aDockerVM

The azure vm show command usually gives a result similar to the one given below:

info: Executing command vm show


+ Looking up the VM "aDockerVM"
+ Looking up the NIC "aVMNicD"
+ Looking up the public ip "aPublicIPD"
data: Id
:/subscriptions/guid/resourceGroups/myresourcegroup/providers/Microsoft.Compute/virtualMach

data: ProvisioningState :Succeeded


data: Name :ADockerVM
data: Location :westus
data: Type :Microsoft.Compute/virtualMachines
[...]
data:
data: Network Profile:

data: Network Interfaces:


data: Network Interface #1:
data: Primary :true
data: MAC Address :00-0D-4A-12-D3-87
data: Provisioning State :Succeeded
data: Name :aVMNicD
data: Location :westus
data: Public IP address :13.92.107.234
data: FQDN :publicip.westus.cloudapp.azure.com]
data:

data: Diagnostics Instance View:

info: vm show command OK

The ProvisioningState parameter of the VM is given near the top of the output. If you see its status as Succeeded, then you know that everything executed successfully and it is possible for you to SSH to the VM.

The FQDN is shown near the end of the output; this is the fully qualified domain name of the Docker host. It is the one we will be using to SSH to our Docker host.
Deploying Nginx Container

Now that you are done with the deployment, you can SSH to the new Docker host from your local computer. You need to know your username as well as the FQDN. The SSH command should be as follows:

ssh nicohsam@publicip.westus.cloudapp.azure.com

Note that nicohsam is my username, and publicip.westus.cloudapp.azure.com is the FQDN from the earlier output. You have to replace these parameters with the relevant ones for your system.
Once the login succeeds, we can go ahead and run an nginx container. This is shown below:

sudo docker run -d -p 80:80 nginx

This will download an image and the nginx container will then be started. The output will appear as
shown below:

Unable to find image 'nginx:latest' locally


latest: Pulling from library/nginx
afc26edd9597: Pull complete
b3ed98caeb01: Pull complete
a48df1751a97: Pull complete
8ddc2d7beb91: Pull complete
Digest: sha256:1ca2638e55319b7bc1d7d028234ea69c1368a35b01383e66dfe7e4f43780926c
Status: Downloaded newer image for nginx:latest
c6ec109fb743a762ff31a4606dd38d3e5b35aff43fa7f12e8a4ed1d920b0cd71

You can then use the following command to check the status of any containers that are running on the Docker host:

sudo docker ps

The output from the above command will be similar to the one given below. It shows that your nginx container is running and that TCP ports 80 and 443 are being forwarded:

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES

b3ed98caeb01 nginx "nginx -g 'daemon off" About a minute ago Up About a


minute 0.0.0.0:80->80/tcp, 443/tcp adoring_payne

You should verify from the browser that your nginx container is working well. Just open the browser and then type the FQDN of the Docker host. You will see the Nginx welcome page displayed in the browser.
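
If you prefer the command line, and assuming curl is installed on your local machine, the same check can be performed against the FQDN from the earlier output:

curl -I http://publicip.westus.cloudapp.azure.com

A response beginning with HTTP/1.1 200 OK indicates that nginx is serving the welcome page.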

In the example, we have used a quickstart template. It is possible to use a Resource Manager template of your own to deploy the Azure Docker VM extension. To do this, you have to add the following to your Resource Manager template, and ensure that the vmName variable of the VM has been defined properly. Here it is:

{
"type": "Microsoft.Compute/virtualMachines/extensions",

"name": "[concat(variables('vmName'), '/DockerExtension'))]",

"apiVersion": "2017-01-27-preview",
"location": "[parameters('location')]",

"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"

],
"properties": {

"publisher": "Microsoft.Azure.Extensions",
"type": "DockerExtension",
"typeHandlerVersion": "1.1",
"autoUpgradeMinorVersion": true,
"settings": {},
"protectedSettings": {}
}
}
As you can see in the first line of the template, it is a virtual machine extension resource. The name of the extension has been given, as well as its corresponding API version. The extension has been published by Microsoft.Azure.Extensions, and its type is DockerExtension. Setting the autoUpgradeMinorVersion parameter to true means that minor versions of the extension will be upgraded automatically.
Chapter 5- Docker Machine and Azure Driver

In Azure, we can create Docker host VMs using the docker-machine create command, combining it with the azure driver option and other arguments. The -d flag represents the driver option.

In the following example, we will be using the default values, but we will be opening port 80 on the VM to the internet so that testing can be done with an nginx container. The user nicohsam will be made the logon user for SSH, and the new VM will be called machine.

If you need to view the available options as well as their default values, just type the docker-machine create --driver azure command. Our example command is shown below:

docker-machine create -d azure \

--azure-ssh-user nicohsam \

--azure-subscription-id <AZURE_SUBSCRIPTION_ID> \

--azure-open-port 80 \

machine

As you can see, in the second line of the command we have specified the user for SSH, and this is the user named nicohsam. This should be replaced with the appropriate name for the user you want to permit.

The output for this will be determined by whether or not you have enabled two-factor authentication on the account. However, it should be similar to the one given below:

Creating CA: /Users/user/.docker/machine/certs/ca.pem

Creating client certificate: /Users/user/.docker/machine/certs/cert.pem

Running pre-create checks...

(machine) Microsoft Azure: To sign in, use a web browser to open the page
https://aka.ms/devicelogin. Enter the code <code> to authenticate.

(machine) Completed machine pre-create checks.

Creating machine...

(machine) Querying existing resource group. name="machine"

(machine) Creating resource group. name="machine" location="eastus"

(machine) Configuring availability set. name="docker-machine"

(machine) Configuring network security group. name="machine-firewall" location="eastus"

(machine) Querying if virtual network already exists. name="docker-machine-vnet"


location="eastus"

(machine) Configuring subnet. name="docker-machine" vnet="docker-machine-vnet"


cidr="192.168.0.0/16"
(machine) Creating public IP address. name="machine-ip" static=false

(machine) Creating network interface. name="machine-nic"

(machine) Creating storage account. name="ghddolksdjalkjlmgyg7" location="eastus"


(machine) Creating virtual machine. name="machine" location="eastus" size="Standard_A2"
username="nicohsam" osImage="canonical:UbuntuServer:15.10:latest"

Waiting for machine to be running, this may take a few minutes...

Detecting operating system of created instance...

Waiting for SSH to be available...

Detecting the provisioner...

Provisioning with ubuntu(systemd)...

Installing Docker...

Copying certs to the local machine directory...

Copying certs to the remote machine...

Setting Docker configuration on the remote daemon...

Checking connection to Docker...

Docker is up and running!

To see how to connect your Docker Client to the Docker Engine running on this virtual machine,
run: docker-machine env machine

Note that the command created a number of certificates, as shown in the above output. Just as in the output, the name of each certificate ends with a .pem extension. The pre-create checks are done so that Docker Machine can be sure that everything is okay before the machine is created. Once this is completed, a message informing you of the same is shown; in this case, the message is (machine) Completed machine pre-create checks. The resource group that is being used in this case is named machine. The network security group has also been configured, as the output indicates.

The virtual network must also be checked to be sure that it exists; in this case it is named docker-machine-vnet. The details regarding the network, such as the subnet, the public IP address and the network interface, have also been created, as these are very essential for the Docker VM. Note the name of the storage account that has been created, as well as its location. It is after this that the process of creating the virtual machine is started.

After the virtual machine has been created, it is run immediately so the testing process can begin. The process of starting it can take a number of minutes. The operating system of the created instance is detected, the provisioner is determined, and then the process of installing Docker on the VM begins. The certificates that were created earlier are copied to the directory of the local machine, as well as to the remote machine. The Docker configuration is then set on the remote daemon. Docker will then be up and running, and that is what the output indicates!

Configuring the Docker Shell

If you need to see what is required to configure the Docker shell, use the following command:
docker-machine env <VM name>

In our case, the name of the virtual machine is machine, and so, our command should be as follows:

docker-machine env machine

The command will return information regarding the environment, as shown below. The IP address that is shown will be needed to test the VM. Here is the output from the command:

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://191.237.46.89:2365"
export DOCKER_CERT_PATH="/Users/nicoh/.docker/machine/machines/machine"

export DOCKER_MACHINE_NAME="machine"
# Run this command to configure your shell:
# eval $(docker-machine env machine)

That's it! You can choose to run the configuration command that has been given above, or you can choose to set the environment variables on your own.

Running the Container

It's time to run a simple web server and test whether everything is working as expected. We will use a standard nginx image, instruct it to listen on port 80, and ensure that if the VM is restarted, the container is also restarted. The following is the command that should be used:
docker run -d -p 80:80 --restart=always nginx

That's it. Note that we have instructed the image to listen on port 80, and the --restart=always option is the one that makes the container restart once the VM is restarted. The output from the command will be as shown below:

Unable to find image 'nginx:latest' locally

latest: Pulling from library/nginx

efd26ecc9548: Pull complete

a3ed95caeb02: Pull complete

83f52fbfa5f8: Pull complete

fa664caa1402: Pull complete

Digest: sha256:22137e06a75bda1022fbd4ea231f5545a1885aad4679e3921482db3b57383b3c

Status: Downloaded newer image for nginx:latest

15942c24d86fe27c688d0c08ad478f36cc9c16929b0e1c2971cb14eb4d0ab834

The first line in the above output shows that the image is not available on your local machine. Note that Docker first checks whether the image is available locally. The image was not found, and this explains the message. Since the image is not available locally, it has to be pulled from the repository; that is why you see the messages with Pull. Once the image is completely pulled, you are notified of the same via a message, as shown in the output.
Testing the Container

The running containers can be observed by running the docker ps command. In my case, the
command gives the following result:

CONTAINER ID IMAGE COMMAND CREATED


STATUS PORTS NAMES

d5b78f27b335 nginx "nginx -g 'daemon off" 7 minutes ago Up 7 minutes


0.0.0.0:80->80/tcp, 443/tcp goofy_mahavira

If you have forgotten the IP address of the VM running the container, just run the docker-machine ip <VM name> command and it will be shown to you.
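
For example, using the VM name from this chapter, the address returned should match the DOCKER_HOST value shown earlier:

docker-machine ip machine
191.237.46.89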
Chapter 6- Changing the Default Subnet

In Docker, the subnet 172.17.0.0/16 is used as the default for container networking. If this subnet is not accessible or available to the Docker installation on your machine, simply because you have already used it on your network, then you have to change the default value. This change can be made across all the hosts in the system, or only on the hosts that have been deployed to environments in which this subnet is not available. In multi-host deployments, it is not a requirement that all the hosts use the same subnet for Docker container communications.

To change this default subnet, do the following:

1. Stop the Resource Manager services that are running on the host. If this is being done on the master server, then the whole Resource Manager application must be stopped.

2. Shut down serviced and Docker by executing the following commands on the command line:

$ systemctl stop serviced


$ systemctl stop docker

3. Remove the MASQUERADE rules from the POSTROUTING chain in iptables. The following command can help you do this:
iptables -t nat -F POSTROUTING

4. Remove the IP address from the Docker bridge device. The following commands can help you do this:

$ ip link set dev docker0 down


$ ip addr del 172.17.42.1/16 dev docker0

Note that the first command brings the bridge device down. The IP address cannot be removed while the device is up, so you should first bring it down and then remove the IP address. You should know the IP address of the device, as this is what you will use to remove it. In my case, this IP address is 172.17.42.1/16, where /16 is the subnet mask for the IP address.

5. Pick any subnet on your network that you will not need to route to or from. A /24 mask is good unless you need more than about 250 containers on the host. In this example, I will be using 192.168.160.2/24:

$ ip addr add 192.168.160.2/24 dev docker0


$ ip link set dev docker0 up

6. Verify that the interface is using the correct IP address. The following command can help you with this:
$ ip addr show docker0

The command will give you a result that is related to the one given below:

docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 192.168.160.2/24 scope global docker0
valid_lft forever preferred_lft forever

As you can see in the output, the state is DOWN. This is because Docker has not been started, which we will be doing in our next step.

7. You can then start Docker. The following command can help you accomplish this:

$ systemctl start docker

At this point, Docker will be up and running.

8. Verify that the MASQUERADE rule for the new subnet was added to the POSTROUTING chain. The following command will help you accomplish this:

$ iptables -t nat -L -n

The following should then be part of the response for your Docker subnet:

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.160.0/24 0.0.0.0/0


Since those are the expected results, we can assume that everything is okay. You can go ahead and launch the services. This can be done using the command given below:

$ systemctl start serviced

The serviced service will then be started.
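
As an optional sanity check, which is my own addition rather than part of the original procedure, you can start a throwaway container and confirm that it receives an address from the new subnet (this pulls the small busybox image if it is not already present):

$ docker run --rm busybox ip addr show eth0

The inet line in the output should show an address in 192.168.160.0/24.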


Chapter 7- How to Run Services with Docker
Swarm

In this chapter, we will show you how to deploy a service using Docker's swarm mode.

To deploy a service on the Docker Engine Swarm, we should first set up a swarm cluster. The latest version of the Docker Engine will also be installed. Note that we will be doing the installation on Ubuntu 16.04, which means that we will follow the standard installation method that is used on Ubuntu. This method uses the Apt package manager. Since we need to install the latest version of the Docker Engine, Apt must be configured to pull the docker-engine package from the official Apt repository of the Docker project, rather than from the repositories that come preconfigured on the system.
Adding Docker's Public Key

To configure Apt to use a new repository, the first step should be adding the public key of the repository into the cache of Apt using the apt-key command. This can be achieved as follows:

# apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys


64118E43F3A912747C032ADBF64221590C52621C

Note that we have specified the key server, which will be contacted via port 80. Also note that the key has been specified, and it is a long key. What is happening is that we are using the apt-key command to request the key we have specified from the key server. Whenever we download a package from the repository, the public key will be used for validating it.
Specifying the Location of Docker's Repository

Now that we have imported the public key of Docker, Apt can be configured to make use of the repository server of the Docker project. This is achieved by adding an entry to the /etc/apt/sources.list.d/ directory. This is shown below:

# echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" >>


/etc/apt/sources.list.d/docker.list

After you refresh the package cache of Apt, Apt will look through all the files located in the sources.list.d/ directory to find any new package repositories. Our previous command creates a file named docker.list, which contains an entry adding the apt.dockerproject.org repository.
Updating Package Cache of Apt

To refresh the package cache of Apt, you have to run the apt-get command together with the update option. This is shown below:

# apt-get update

Note that the update option has been added to the end of the command. With this command, Apt repopulates its list of repositories by rereading the configuration files, including the ones that have been added recently. The repositories are also queried to cache the list of all packages that are available.
Installing the linux-image-extra Prerequisite

Before the installation of the Docker Engine, we should begin by installing a prerequisite package. The linux-image-extra package is a kernel-specific package that is needed on Ubuntu systems for the purpose of supporting the aufs storage driver. Docker uses this driver for the purpose of mounting volumes in containers.

To install the package, we need to use the apt-get command, together with install option. This is
shown below:

# apt-get install linux-image-extra-$(uname -r)

In the above command, uname -r gives us the version of the kernel that is currently running. Any update made to the kernel should be followed by installation of the linux-image-extra package version that coincides with your new kernel version. If you fail to update this, then you may experience issues when Docker needs to mount volumes.

Installing the Docker Engine

Now that we have configured Apt and installed the linux-image-extra prerequisite package, we can move on and install Docker. This will be done using the apt-get command together with the install option, which will install the docker-engine package. Just use the command given below:
# apt-get install docker-engine

The command will install the latest version of the Docker Engine on your system. If you need to know the version of Docker that you are running, you just have to execute the docker command together with the version option. This is shown below:

# docker version
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Sat Jan 28 13:11:10 2017
OS/Arch: linux/amd64

Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Sat Jan 28 13:11:10 2017
OS/Arch: linux/amd64
Those are the versions for both the client and the server. The version is 1.12.0, which was the latest version of the Docker Engine at the time of writing; yours may be newer, and it does not need to be 1.12.0. The output also indicates that the process of installing the Docker Engine ran successfully. After that, it is possible for us to move on and begin to create the Swarm.
The Docker Swarm

Our tasks will now be executed on a number of different machines. The swarm cluster will be started with two nodes. These nodes have the Docker Engine installed following our previous instructions.

During the process of creating the Swarm cluster, one of the nodes must be made the node manager. In our case, the node named swarm-01 will be used as the node manager. To do this, first create the Swarm cluster by executing the docker command together with the swarm init options on this node. This is shown below:

root@swarm-01:~# docker swarm init --advertise-addr 10.0.0.1


Swarm initialized: current node (bxwiap1z6vtxponawdsndl0a5) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join \


--token SWMTKN-1-51pzs5ax8dmp3h0ic72m9wq9vtagevp1ncrgik115qwo058ie6-
3fokbd3onl2i8r7dowtlwh7kb \

10.0.0.1:2377

To add a manager to this swarm, run the following command:

docker swarm join \


--token SWMTKN-1-51pzs5ax8dmp3h0ic72m9wq9vtagevp1ncrgik115qwo058ie6-
bwex7fd4u5aov4naa5trcxs34 \

10.0.0.1:2377

Note that other than the swarm init option, we have also used the --advertise-addr flag, and this has taken a value of 10.0.0.1, which is an IP address. This is the IP address that will be used by the Swarm node manager for the purpose of advertising the Swarm cluster service. Although it is possible to use a private IP address for this, you must be aware that for your nodes to join the Swarm, they must be able to connect to the node manager at this IP address. Note that this will be done on port 2377.

Once we execute the docker swarm init command, you will notice that the swarm-01 node was assigned the ID bxwiap1z6vtxponawdsndl0a5 and has already been made the manager for the swarm. As shown in the above output, we have also been provided with two commands: one of the commands can be used for adding a node worker to the swarm, and the second command can be used for adding another node manager to our swarm.

With the Docker Swarm Mode, it is possible to have many node managers. However, one of these will have to be elected so that it can act as the primary node manager, and it will be tasked with orchestration in your Swarm.
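
For illustration, and as a hedged sketch rather than a step in this walkthrough, an existing worker can be promoted to a manager at any time with the docker node promote command, using the hostnames from this chapter:

root@swarm-01:~# docker node promote swarm-02.example.com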

Addition of Node Worker

Now that we have created our Swarm Cluster, a new node worker can be added using the docker
command that the output gave us once we created the Swarm cluster. This is shown below:
root@swarm-02:~# docker swarm join \
> --token SWMTKN-1-61pzs5ax9dmp3h0ic14m9wq7vtagevp1ncrgik234qwo091ie2-
4fokbd2onl2i8r6dowtlwh5kc \

> 10.0.0.1:2377
This node joined a swarm as a worker.

As shown in the above example, the command prompt reads swarm-02, which means that we are executing the command from the swarm-02 node. Once we have executed the command, this node will have been added to the cluster as a node worker. A node worker refers to a member of the Swarm cluster that is tasked with the responsibility of running tasks; in this case, the tasks will run as containers. The node manager is then left with the work of orchestrating the tasks, as well as the management of the Swarm cluster.

In addition to its management duties, the node manager is also a node worker, meaning that it will also be running tasks for the Swarm cluster.

At this point, we have two nodes in our Swarm cluster. To see the status of our cluster, we have to run the docker command together with the node ls options. This is shown below:

root@swarm-01:~# docker node ls


ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS

12evr8hmiujjanbnu2n14dphj swarm-02.example.com Ready Active

bwex7fd4u5aov4naa5trcxs34 * swarm-01.example.com Ready Active


The output shows that both swarm-01 and swarm-02 are Ready and Active. With this, we are able to begin deploying services to our Swarm cluster.


Creation of a Service

With the Docker Swarm Mode, a service refers to a long-running Docker container that you are able to deploy to any of the node workers. Both remote systems and other containers are able to connect to it and consume its services.

We will demonstrate this by deploying a Redis service.


Deployment of a Replicated Service

A replicated service refers to a Docker Swarm service that has a specified number of replicas running. The replicas are made up of multiple instances of the specified Docker container.

To create a new service, you must use the docker command and then specify the options for service create. In our command given below, we will be creating a service with the name redis; it will have 2 replicas, and port 6379 will be published across the cluster. Here is the command:

root@swarm-01:~# docker service create --name redis --replicas 2 --publish 6379:6379 redis

fr256pvukeqdev12nfmh7q3kr

Other than the specification of the service create options, we have used the --name flag to give the service the name redis, and the --replicas flag to specify that our service will be running on 2 different nodes. If we need to validate whether or not our service is running on both of the nodes, we have to execute the docker command together with the service ls options. This is shown below:

root@swarm-01:~# docker service ls


ID NAME REPLICAS IMAGE COMMAND
fr256pvukeqd redis 2/2 redis
As shown in the above output from the command, the 2/2 indicates that 2 of the 2 desired replicas are running. To see more details as far as this service is concerned, we just have to run the docker command together with the service ps option. Here is this command together with its output:

root@swarm-01:~# docker service ps redis


ID NAME IMAGE NODE DESIRED STATE CURRENT
STATE ERROR

6lr23nbpy43csmc87cew2cul2 redis.1 redis swarm-02.example.com Running Running 30


minutes ago

2t77jsg90qajxxdekbenl3pgj redis.2 redis swarm-01.example.com Running Running 30


minutes ago

The service ps option shows us the tasks for the specified service. Here, it is very clear that the redis service has a task, or container, running on both of the swarm nodes.
Connection to Redis Service

Now that we are sure that our service is running, we can use a remote system and then connect to this
service using a redis-cli client. This can be done as follows:

vagrant@vagrant:~$ redis-cli -h swarm-01.example.com -p 6379

swarm-01.example.com:6379>

As seen in the output, our connection to redis service was successful. This is an indication that our
service is up and available.

Publishing Services

During the creation of the redis service, we used the --publish flag in the docker service create command. The flag is responsible for telling Docker to publish port 6379 as a port that is available for the redis service.

When Docker publishes a port for a service, it listens on that port across all the nodes that are available in the Swarm cluster. When traffic arrives on that port, it is redirected to a container that is running the service. To demonstrate how this works, we will add a new node to our Swarm cluster.
To add this new node, we have to follow the steps we used previously; please refer to the relevant instructions above. Once you have added the new node, you can check the status of the nodes using the following command:

root@swarm-01:~# docker node ls

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS

12evr8hmiujjanbnu2n14dphj swarm-02.example.com Ready Active

bwex7fd4u5aov4naa5trcxs34 * swarm-01.example.com Ready Active Leader

f4ymm23081ooms0gs4iyn8vtj swarm-03.example.com Ready Active

From the above output, it is very clear that there are three hosts in our cluster. Remember that when we created a service with two replicas, a task or container was created on swarm-01 and swarm-02. Now that we have added a new node worker, let's see whether the same has happened again:

root@swarm-01:~# docker service ps redis


ID NAME IMAGE NODE DESIRED STATE CURRENT
STATE ERROR

12evr8hmiujjanbnu2n14dphj redis.1 redis swarm-02.example.com Running Running 45


minutes ago

bwex7fd4u5aov4naa5trcxs34 redis.2 redis swarm-01.example.com Running Running 45


minutes ago

With services that are replicated, the goal of Docker Swarm is to ensure that there is a container or task running for each replica that has been specified. When we created the redis service, we specified that we should have only 2 replicas. This means that although we have a third node, there is no need for Docker to start a new task on this new node.

Interestingly, at this point there is a service that is currently running on 2 of the available 3 Swarm nodes. If we were in a non-Swarm environment, the redis service would not be accessible from the third Swarm node. However, because we are in Swarm Mode, this is not the case.

We now need to know what will happen once we try to connect to swarm-03 over the published port for redis. This is shown below:

vagrant@vagrant:~$ redis-cli -h swarm-03.example.com -p 6379


swarm-03.example.com:6379>

The most interesting thing with this is that our connection succeeded. Although swarm-03 isn't running any of the redis containers, the connection ran successfully. The reason this worked is that Docker reroutes the redis service traffic to a node worker that is running a redis container.

In Docker, this is known as ingress load balancing. It works in such a way that all the nodes listen on the published ports. If the service is called by external systems, the receiving node will accept the traffic and then load balance it internally using the internal DNS service that is maintained by Docker.
This means that even after scaling the Swarm out to 100 node workers, the end users of the redis service will simply be able to connect to any node worker. They will then be redirected to one of the two Docker hosts that are running the service's tasks, or containers.

All of the load balancing and rerouting is done transparently to the end users, within the Swarm cluster.
Making the Service Global

We now have a redis service that has been set up to run with 2 replicas, which means the containers run on 2 of the 3 nodes.

If we needed the redis service to have an instance on each worker node, that could be done easily: we would simply change the number of desired replicas for the service from 2 to 3. It also means, however, that every time a worker node is added or removed, we would have to adjust the replica number by hand.
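For example, assuming the replicated service from earlier still exists, raising the count is a single command (docker service scale is the standard way to change the desired replica count):

root@swarm-01:~# docker service scale redis=3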

This can also be accomplished automatically by making the service a global one. A global service creates a task that runs on each worker node automatically. This is very useful for services such as Redis, which may be leveraged internally by other services.

To see this, let's recreate the redis service as a global service (if the old service still exists, remove it first with docker service rm redis). This is shown below:

root@swarm-01:~# docker service create --name redis --mode global --publish 6379:6379 redis
6o9m249zmsped0cmqe1guh3to

The command for creating a global service is similar to the docker service create command that was used for creating the replicated service. The difference is the use of the --mode flag with the value global.

Now that the service has been created, we can observe how the Docker distributed the tasks of our service. This is achieved by executing the docker command with the service ps options, as shown below:

root@swarm-01:~# docker service ps redis
ID                         NAME      IMAGE  NODE                  DESIRED STATE  CURRENT STATE           ERROR
15d6q7bgmyjvty9jvp2k010ul  redis     redis  swarm-03.example.com  Running        Running 30 seconds ago
1xojjkqvlw7934qj6j0ca60xx  \_ redis  redis  swarm-02.example.com  Running        Running 39 seconds ago
22wrdkun5f5t9lku6sbprqi1k  \_ redis  redis  swarm-01.example.com  Running        Running 39 seconds ago

As shown in the above output, the service was created as a global service, and a task was started on each worker node within the Swarm Cluster.
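When you are done experimenting, the service can be removed with docker service rm, which tears down its tasks on every node:

root@swarm-01:~# docker service rm redis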
Chapter 8- Building Docker Containers for WordPress

It is possible to manually create the Docker containers that we can use for WordPress.

MySQL Setup

Each WordPress installation needs a MySQL database. Begin by opening the Docker Hub and finding the MySQL image.

The Docker team provides a MySQL image that is ready for our use. Before executing any commands on the terminal, be sure that you have read the documentation for the image.

Using this image, the container can be set up as follows:

docker run --name wordpressdb -d mysql:5.7


You should ensure that you use the latest version of the image. In my case, I am using version 5.7 of this image, as it is the latest one. If the image is not on your local machine, the Docker pulls it from the Docker Hub. The --name option gives a name to the container, and the -d option runs the container in the background.
After running the docker ps command, you will realize that your wordpressdb container is not in a running state, even though it should be. Type the command docker logs wordpressdb and observe the output, which should be as follows:

error: database is uninitialized and MYSQL_ROOT_PASSWORD not set

Did you forget to add -e MYSQL_ROOT_PASSWORD=... ?

The cause of this error is that a root password was not passed as an argument when the container was created. That is what we need to fix. Remember that we created our container with the name wordpressdb. We must first delete it, because we did not set the root password; this is accomplished with the docker rm wordpressdb command. The reason for deleting it is that we want to reuse the same name, and it is impossible to have the same name on two different containers.
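Concretely, the cleanup is a single command (the container has already exited, so a plain rm is enough):

docker rm wordpressdb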

Now that you have deleted the container, we can go ahead and recreate it. This time, an environment variable should be passed to the container when it is first created, as shown in the following command:

docker run --name wordpressdb -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7

The -e MYSQL_ROOT_PASSWORD=password part sets an environment variable. As the container is created from the image, this variable is read and the password for the root user is set accordingly. The value of the password is whatever you set; in our case, it is simply password.

If you check docker logs wordpressdb, a long message will be displayed, but you don't have to worry about it; this means everything is working well. Execute the docker ps command and you will observe that your wordpressdb container is now up and running.

Note that other environment variables can also be passed to the command. Consider the next example:

docker run --name wordpressdb -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=wordpress -d mysql:5.7

If you try to remove the container named wordpressdb now, you will notice that it fails. This is because the container is active and running. There are two options that let you get through this: you can force remove it by passing the -f option, or you can stop it and then delete it. The first option is shown below:

docker rm -f wordpressdb

Note that in the above command, we are force removing the container using the -f option. The container is removed from the system whether it is active or not.
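The other option mentioned above, stopping the container first and then removing it, would look like this:

docker stop wordpressdb
docker rm wordpressdb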

The MYSQL_DATABASE variable ensures that a database is created with the name we have specified. This is a good way of being sure we have a database with that name. You can also create another user and assign them their own password, as sketched below.
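For instance, the official MySQL image also accepts the MYSQL_USER and MYSQL_PASSWORD variables for creating such an additional user (the user and password values here are placeholders you would replace with your own):

docker run --name wordpressdb -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wpuser -e MYSQL_PASSWORD=wppass -d mysql:5.7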

To find out more about how the image was built, you can look at its Dockerfile. It is based on Debian Wheezy, and the setup is done with bash commands that pull MySQL from the repository and start mysqld. The commands in the Dockerfile are executed while the image is being built; when a container runs from the image, only mysqld is executed.
Building the WordPress Container

We will be using a PHP image for this container. Although the PHP repository provides several image variants, we will be using the one that comes with Apache.

docker run --name wordpress php:5.6-apache

Since we did not use the -d option, the container does not run in the background, and every output from the container is shown in the terminal.

As you can see in the output, an IP address has been automatically assigned to the container. Note the IP address in your case, but be aware that if you open it in the browser, you will get a Forbidden error. The reason is that the folder /var/www/html is empty.

This folder lives inside your container, where it is invisible to you, although that will not be the case for long. Create a new folder on your machine and navigate into it. Ensure that you have removed your old wordpress container, then run the following command:

docker run --name wordpress -v "$PWD/":/var/www/html php:5.6-apache

The -v option maps two folders onto each other: the first one is on the host OS, while the second one is in the container's filesystem. On Linux systems, "$PWD" gives the directory the terminal is in as we run the command. When you first start a terminal, this is your home directory. The variable is similar to the cd command in Windows, which prints the current directory when run without arguments.

In our example, $PWD/ is the first directory and /var/www/html/ is the second. The -v option requires that both be full paths. If you look in the working directory now, you will see that it has no files yet. Create a new file, name it index.php, and add the code given below to it:

<?php

phpinfo();

?>

You can now check the container's IP address in the browser. You will notice that the IP has changed, since a new container was created and every new container gets a new IP. If you see the PHP info page in your browser, everything went as expected.

Now let's observe what happens once we put some WordPress files in there. Execute the docker stop wordpress command to stop the container. Get the latest version of WordPress and add its files to your project folder, then restart the container by executing the docker start wordpress command. You should also ensure that the files are readable. On Unix systems, you only have to run the following command:

chmod -R 777 projectfolder

Here, projectfolder is the folder in which you have stored your project. This grants full permissions on the folder, so the files contained in it will be readable. Once you reload the browser, you will get the following message:

Your PHP installation appears to be missing the MySQL extension which is required by
WordPress.

By default, the PHP image does not come with the MySQL extension installed. However, it is possible to solve this: we are going to build a custom image via a Dockerfile. You are already aware of how Dockerfiles work. They start from a base image, some processing is done, and one command is executed at the end.

Begin by making a new Dockerfile, using the php:5.6-apache image as the base. We start with the following line:

FROM php:5.6-apache

Next, it's a good time to install the mysqli extension. The following command does this:

RUN docker-php-ext-install mysqli


Note that this command installs mysqli as a PHP extension. In the last step, we execute the apache2-foreground command, just as the base image does, since our only change is to install the MySQL extension. Here is the command:

CMD ["apache2-foreground"]

It's now possible for us to build an image from this Dockerfile, as follows:

docker build -t phpwithmysql .

The -t option lets us give a name to the image. The dot (.) tells the Docker where to find our Dockerfile: since the file is in our working directory, the dot, which represents the current working directory, is used.

If you use the docker images command to list the images, you will see the new image with the latest tag. You can then run a container from it, just as we did with the php:5.6-apache image. This is shown below:

docker run --name wordpress -v "$PWD/":/var/www/html phpwithmysql

You can then open your container's IP address in your browser. A welcome message will be presented to you. If you see this message, you did everything the right way, and you will be in a position to enjoy the benefits offered by the Docker.

The next step is to link WordPress to the database, that is, to link the wordpress container to the wordpressdb container. This is achieved by simply linking the two containers. Just run the command given below:

docker run --name wordpress --link wordpressdb:mysql -v "$PWD/":/var/www/html phpwithmysql

Note that we have introduced a new option, --link. Here, wordpressdb is the name of the container we want to link to, and the second part, mysql, is the alias. The Docker modifies the hosts file of the wordpress container, pointing the hostname mysql at the IP of wordpressdb. This means that when filling in the database information in the WordPress configuration, the host should be set to mysql.
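To make that concrete, the database section of wp-config.php ends up with values like these (a sketch; the database name and password match the wordpressdb container created earlier):

// wp-config.php (sketch)
define('DB_NAME', 'wordpress');    // from MYSQL_DATABASE
define('DB_USER', 'root');
define('DB_PASSWORD', 'password'); // from MYSQL_ROOT_PASSWORD
define('DB_HOST', 'mysql');        // the --link alias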

You can then open your container's IP in the browser, fill in the database details, and log in to the administrator panel. Attempt to install a new theme, however, and you will see an error message. You might be asking yourself the reason behind this. The reason is that the user running the web server does not have write access to the filesystem. This makes things a bit difficult for us, so we will build a new version of the phpwithmysql image. Open your Dockerfile and modify it so that it appears as follows:

FROM php:5.6-apache

RUN docker-php-ext-install mysqli

COPY entrypoint.sh /entrypoint.sh

RUN chmod 777 /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2-foreground"]

We have not yet created the file named entrypoint.sh; we will create it shortly. The COPY instruction copies entrypoint.sh to the / directory inside the container. The chmod 777 /entrypoint.sh command makes the file executable, and the ENTRYPOINT instruction executes it. In the directory where you saved the Dockerfile, create a file named entrypoint.sh and add the following code to it:

#!/bin/bash
# Give the Apache user (www-data) ownership of the web root so WordPress can write to it
chown -R www-data:www-data .
# Hand control over to the CMD (apache2-foreground)
exec "$@"
This is a simplified version of the workaround used by the official WordPress image, and it ensures that we have write access to the container's filesystem. The following command will build the new version of our image:

docker build -t phpwithmysql:v2 .

Note that we have again used the docker command with the build option to build our image, this time tagging it v2. You can now remove the old containers and create new ones. The following commands remove the old containers:

docker rm -f wordpress
docker rm -f wordpressdb

Note that we have force removed the old containers using the -f option; whether the containers are active or not, they are simply removed from the system. Since wordpressdb was removed as well, recreate it with the same docker run command as before, and then run the following command to create the new wordpress container:

docker run --name wordpress --link wordpressdb:mysql -v "$PWD/":/var/www/html phpwithmysql:v2

You can then go ahead and remove your old wp-config.php file. Note that we have again linked the database to the container using the --link option. Open the IP address in your browser, and you will now be able to install themes and plugins and make changes to the container's filesystem.

Some of these steps may seem complex. This is because there are many official images for the various languages and frameworks, and each has its own specifications on how it should work. Ideally, an application would not need to write to its own container's filesystem at all; instead, we could create a third container used purely for file storage, and the applications would write their files there, resulting in a more modular architecture. For frameworks that you are unable to change in this way, it is possible to work around the problem, as we just did.

There is one more problem that occurs after you stop the wordpress container and start it again. It is caused by the fact that WordPress saves your last IP address as its Site and Home URLs. When the container starts again, it has a new IP, and if you use it in your browser, you will notice that the JavaScript, CSS, and image files are not properly included. The solution is very simple: open the wp-config.php file and add the following lines of code:

define('WP_HOME',$_SERVER['SERVER_ADDR']);
define('WP_SITEURL',$_SERVER['SERVER_ADDR']);

Note that once these settings have been added to wp-config.php, you will no longer be able to change them from the General Settings screen.

That is how you can build the containers for WordPress.


Chapter 9- Environment Variables with Docker and Elixir

If you try to run an Elixir app in a Docker container, you will hit errors that you would not see when running, say, a Ruby app. Elixir compiles first, and the environment variables used in the configuration are hardcoded at compile time. This prevents you from using the same binary image in both the staging and the production environments.

In this case, we will be using Phoenix. The config/prod.exs file will contain the following:

# config/prod.exs
config :FooApp, FooApp.Repo,
  adapter: Ecto.Adapters.MySQL,
  username: System.get_env("DB_USER"),
  password: System.get_env("DB_PASS"),
  database: "foo_app",
  hostname: System.get_env("DB_HOST"),
  pool_size: 20
Note that we are using a pool size of 20, and that the database details, such as the username, the password, and the host on which the database lives, are read from environment variables. You then build a release with exrm in your Dockerfile, with a line such as the following:

RUN MIX_ENV=prod mix do deps.get, compile, release
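For context, a minimal Dockerfile around that line might look roughly like this (the base image, working directory, and surrounding steps are assumptions for illustration, not taken from this chapter; the app must already list exrm as a dependency):

# Sketch of a build Dockerfile for the release (details are assumptions)
FROM elixir:1.3
WORKDIR /app
COPY . .
RUN mix local.hex --force && mix local.rebar --force
RUN MIX_ENV=prod mix do deps.get, compile, release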

After doing that, your config will contain whatever values DB_USER and the other variables had while the Docker image was being built, which is not what we want. You will get the following after running the image:

$ docker build -t foo .
$ docker run -e "PORT=4000" \
>   -e "DB_USER=foo" \
>   -e "DB_PASS=secret" \
>   -e "DB_HOST=my.mysql.server" \
>   foo ./rel/foo/bin/foo foreground
Exec: /rel/foo/erts-8.0/bin/erlexec -noshell -noinput +Bd -boot
  /rel/foo/releases/0.0.1/foo -mode embedded -config
  /rel/foo/running-config/sys.config -boot_var ERTS_LIB_DIR
  /rel/foo/erts-8.0/../lib -env ERL_LIBS /rel/foo/lib -pa
  /rel/foo/lib/foo-0.0.1/consolidated -args_file
  /rel/foo/running-config/vm.args -- foreground
Root: /rel/foo
17:27:20.429 [info] Running Foo.Endpoint with Cowboy using http://localhost:4000
17:27:20.435 [error] Mariaex.Protocol (#PID<0.1049.0>) failed to connect: ** (Mariaex.Error) tcp connect: nxdomain
17:27:20.435 [error] Mariaex.Protocol (#PID<0.1048.0>) failed to connect: ** (Mariaex.Error) tcp connect: nxdomain
...

Note that we began by building the image and then ran it, specifying the HTTP port along with the database variables. The container runs in the foreground, which is why its output appears on the terminal. Notice the nxdomain errors: the database hostname was baked in at compile time, when DB_HOST was not set, so the runtime value is ignored and the host cannot be resolved.

To solve the problem, each environment variable can be written as "${ENV_VAR}" in the config, and the release run with RELX_REPLACE_OS_VARS=true.

The Ecto config block should then be configured with the following:

# config/prod.exs
config :FooApp, FooApp.Repo,
  adapter: Ecto.Adapters.MySQL,
  username: "${DB_USER}",
  password: "${DB_PASS}",
  database: "foo_app",
  hostname: "${DB_HOST}",
  pool_size: 20

You can then run the following commands:


$ docker build -t foo .
$ docker run -e "PORT=4000" \
>   -e "DB_USER=foo" \
>   -e "DB_PASS=secret" \
>   -e "DB_HOST=my.mysql.server" \
>   -e "RELX_REPLACE_OS_VARS=true" \
>   foo ./rel/foo/bin/foo foreground

Again, after implementing the change, we built the image first and then ran it. This time there are no connection errors, which shows that everything is set up correctly.

Our aim now is to create and then migrate the database with mix do ecto.create --quiet, ecto.migrate. This can be attempted as follows:

$ docker run --link mysql \
>   -e "PORT=4000" \
>   -e "DB_HOST=mysql" \
>   -e "DB_USER=foo" \
>   -e "DB_PASS=secret" \
>   -e "MIX_ENV=prod" \
>   foo mix do ecto.create --quiet, ecto.migrate
** (Mix) The database for Foo.Repo couldn't be created: ERROR 2005 (HY000): Unknown MySQL server host '${DB_HOST}' (110)
As shown in the output, the command gave us an error. This is because mix does not know how to replace "${ENV_VAR}" with our environment variables; this command runs through mix directly, not through the exrm release.

The solution to this problem is very simple. We combine the System.get_env calls we started with and the placeholders as a fallback, as shown below:

# config/prod.exs
config :FooApp, FooApp.Repo,
  adapter: Ecto.Adapters.MySQL,
  username: System.get_env("DB_USER") || "${DB_USER}",
  password: System.get_env("DB_PASS") || "${DB_PASS}",
  database: "foo_app",
  hostname: System.get_env("DB_HOST") || "${DB_HOST}",
  pool_size: 20

With this, we have two options: the settings can be supplied at compile time, as when running mix tasks, or left unspecified, in which case the running release uses the runtime environment variables via the "${ENV_VAR}" placeholders. With Docker, Phoenix, Elixir, and exrm, you can build a release once and run the same binary in both staging and production.
Conclusion

We have come to the end of this book. You should now be able to see how helpful the Docker is, and you can build Docker containers and use them with WordPress. Remember that when you use a particular Docker image, it is first looked for on your local computer, and if it is not found there, it is downloaded. This process is known as pulling, and the image is pulled from a repository.
