
nschoe's labs - Docker: Taming the Beast - Part II


Introduction
Installing Docker
Mac OS X
Windows
Docker Fundamentals
Images vs. Containers
Docker Networks
Making Data Persistent
Interacting With Images
Listing Images
Pulling and Pushing Images
Keeping a Clean System
Interacting With Containers
Creating a Container
Deleting a Container
Starting And Stopping Containers
Getting a Listing of Containers
Copying Files In/From Containers
Running a Command Inside a Container
Monitoring Application Inside a Container

Introduction

Welcome to the second part of the Docker: Taming the Beast series. In Part I we talked about the core principles behind Docker: what is going on under the hood.

This sharpened our intuition, and now we are ready to dive in: let's play with Docker!
In this post, we will briefly see how to install Docker, and then we will focus on what I call the Docker Fundamentals. These will be the Docker concepts (and commands!) that you will use on a daily basis.

Be warned: this part is dense, and long. Here, everything will be new (depending, of course, on your level) and everything will be important. I suggest you take time to practice the examples and spend time on each section and new concept, until you fully understand it. It will come back to bite you you-know-where later if you try to take shortcuts.
Now, let's get ready and see how to install Docker!

Installing Docker

As you should recall from Part I, Docker runs on the Linux kernel, so it needs the Linux kernel. Hence, it should be installed on Linux.
Now, what do you do when you are running Windows or Mac OS X? Well, this is simple: you create a Virtual Machine running Linux and you install Docker in it.

Well, yes, but no. Basically, this is what the Windows and Mac OS X installers do when you install Docker on them. Only the Docker guys have automated the process (so you don't need to manually create the VM, install Linux, install Docker and launch it). Additionally, they take care of a lot of other stuff dealing with networking, sharing data, etc. These concepts are covered below.

All of this is to say that currently, the safest and easiest way of running Docker is from a plain old Linux. So do that if you can, and choose the Windows / Mac OS X install only if you really have to. Full disclosure: I run Linux and have never used Docker under Windows or Mac OS X.

Now, before we really begin, let me say a few words about which Linux distribution is best for Docker. Newcomers on the #docker channel sometimes ask this question: "What's the best Linux distribution for Docker?"
Usually the answer is: it doesn't matter. In fact, Docker was designed not to care. As you will learn in this article, Docker is all about creating a controlled and reproducible environment, precisely to be independent from the host, so the answer is: it doesn't matter.


There is one thing, however, that should be considered: the version of the docker-engine software. Docker is fairly new and thus moving rapidly. Some distributions have a stricter policy for upgrading packages, and very often the default package provided by your distribution is outdated. So you should really install the latest release of Docker, grabbed from their site.

The installation on Linux is fairly easy, especially on the most popular distributions. The Docker documentation page covers the installation. Basically, it boils down to installing the packages fetched from the Docker site and installing the necessary driver for storage.

By default, Docker needs to be run as root, and every subsequent command too. Since this is a pain, the docker group is available: just make your user a member of this group with usermod -aG docker <user> (run as root), log out and log back in to make it effective, and you are good to go.
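In command form, that is (run the first command as root, and replace <user> with your user name):

$ usermod -aG docker <user>
# after logging out and back in:
$ groups          # 'docker' should now appear in the list
$ docker info     # should now work without sudo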

Mac OS X
The Docker for Mac project has been moving a lot lately; it is changing very often. I'm not using Docker for Mac, so it would not make much sense for me to tell you how to install it. For now, as far as I know, the updated doc is here, so I suggest you follow that link.

Windows
For the Windows users, there is a project in beta (same as Docker for Mac). I have not been following it because I don't develop on Windows, so you will have to follow this and hope for the best.

Understand that Docker runs on the Linux kernel, so Docker for Mac and Docker for Windows are ambitious projects, still in beta phase.

Docker Fundamentals

Okay, that was the boring part; but now that we have Docker installed on our system, it's time we talked about it!
This is the part where you should start paying real attention :-)

The first thing we need to understand is that Docker follows a daemon / client model. Every command we will call makes use of the docker client. The docker client sends the command to the docker daemon, and the daemon is the one doing the hard work.

"Why?" you should be wondering. Well, in order to understand why, let's examine a slightly bigger picture, and see how the client and the daemon communicate.

The way the docker client and daemon exchange messages is through a socket; classic. Most of the time (this is the case if you have just installed Docker and did not do any fancy configuration) they communicate through a Unix socket or file descriptor.

Now something interesting should begin to pop into your mind (don't worry if it's not): since the client and the daemon communicate through a socket, it should be possible to separate them physically. And indeed, this is a huge deal in serious Docker setups: you can configure your docker daemon to listen on a TCP socket on your server, and you can talk to this remote daemon with your local docker client.

Confused? Let me say it again: suppose you have a remote server, hosted on the cloud ("cloud" seems to be the cool word these days). You install docker on this remote server and you configure the docker daemon so that it listens on a TCP socket (an open port on your server and firewall).
Now, on your laptop or local computer, you also install docker, and you configure the docker client to send its commands to the remote (host address, port) pair. In this fashion, you can administrate the Docker on your server from the command line on your local computer. This comes in very handy to administrate several remote servers.
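As a rough sketch of what this looks like (the host name is made up, the flags are illustrative, and in a real setup you would protect that port with TLS):

# On the server: make the daemon listen on TCP in addition to the Unix socket
$ dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# On your local machine: point the client at the remote daemon
$ export DOCKER_HOST=tcp://my-server.example.com:2375
$ docker ps    # lists the containers of the remote daemon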

Images vs. Containers

Now that we have understood the Docker daemon / client model, let's talk about the most confusing concept of Docker (in my opinion). This is the topic that is at the root of a vast majority of the problems beginners have.

I'm talking about those three magic words: images, containers and Dockerfiles.

What are those, and what's the problem with them?

This whole thing is actually fairly easy to understand; you just need to pay attention and make sure you really understand, and not settle for that feeling of "yeah, yeah, I kinda see the story here".

Now that I have warmed you up, here it is: Dockerfiles are used to build images, not containers.

That's it. Seems like nothing, but here it is again: Dockerfiles are used to build images, not containers.

Stay with me: I'm aware it doesn't seem like a big deal right now, and it (probably) doesn't make much sense.
Let's talk about all of that in detail.


The first, top-most things that Docker is concerned about are images. If you come from Object-Oriented Programming (OOP), you can think of an image as a class. The same way a class in OOP is nothing in itself, a Docker image doesn't do anything; it is just built and stored. It's nothing more than a model for your containers.

A Dockerfile is simply the recipe: it describes how to build the image. It's no coincidence that Dockerfile sounds much like Makefile, because that is exactly what this is: a Docker Makefile. And as we will see later in the article, build is the command we use to build an image.

I know that this all sounds trivial and you don't think you are learning much right now, or that what I'm talking about is a big deal, but even though you do not realize it yet, I am introducing crucial vocabulary right at this moment.
I suggest you scroll back a few paragraphs and read again, paying attention to every vocabulary word I used. Trust me, this will make your life much easier, especially when you will be asking for help on forums, Stack Overflow or IRC.
When I used build, this is what I meant, not create, start or something else. Likewise, I will use create, start, run later; they have specific meanings, so make sure you notice them :-)

Let's get back to our OOP analogy!

Now, what do you do when you have a class in OOP? You create instances of that class, which are called objects. Well, in Docker, this is the same: from an image, you create containers.

If you are not familiar with OOP, consider it like this: what do you do with a model, a set of blueprints? You replicate it, you create objects based on the blueprints. Well, this is what containers are to images.

If you think you understand, good! Take a break (really, get the hell out of that screen for a while) and when you come back, scroll back a screen and read all that part about containers and images again. Really, you have to, because it seems so simple that, I know for a fact, you will want to just go on. But you will end up coming on IRC and asking "Can I do XXX in my Dockerfile?" and I will most likely answer "This does not make sense".

And then I will probably tell you this (so I decided to write it here, so it serves as another layer of explanation): what all of this means is that we are talking about two different, separate, unrelated times here.
There is the time of Dockerfiles and images: this is the build time. And there is the time of the containers: this is the run time.

Why? Because an image is built while a container is run (see what I meant about using specific vocabulary?).

Let's see a concrete example and play with images and containers. For now, we will not build our own image (we will cover this later, though). We will use the official Ubuntu image for now.
Let's download it first: docker pull ubuntu.

Hey hey hey! This was our first docker command! This is great. If this was not obvious, docker pull is used to fetch (pull) an image. From where? By default, when you don't specify anything (like we just did) it pulls from the Docker Hub. This is a big repository hosted by Docker where tons of people can push their newly-built images. You will be able to do it too, after reading these articles.

A Small, But Crucial Parenthesis


Notice something interesting in the output of your terminal: there is some talk about pulling layers. This should ring a bell from Part I.

Well, the guys at Docker are very smart and they did not design images as big blobs. Instead, an image is comprised of stacked layers.

"What's the big deal?" you might be wondering. Well, I should have said: an image is comprised of reusable, stacked layers. This is much more interesting indeed: it means that layers in images are shared and reused when possible.

Suppose you build a 500 MB image for your development environment. Now you need a slight variation of the environment to try a different version of a particular software: this new software is 1 MB in size. So you create the same image, but add this 1 MB file to it. Well, if images were dumb blobs, you would now have two images on your computer, and a total used disk space of 500 MB + 501 MB = 1001 MB. This is costly:
it's safe to say that your little 1 MB file cost you 501 MB of disk space.

But because the Docker guys are smart and designed images as reusable layers, when you build your image, Docker sees that most of it (the first 500 MB) is already present on the system, so it only adds the 1 MB layer.
You end up with two images, but only 500 MB + 1 MB of disk used. Pretty smart!

Anyway, before we head back to our experiment, just remember that when you pull an image, you are actually pulling all the layers that make it up.

So, we have just pulled the Ubuntu image; is it running yet? Try to answer before reading on.

Two possibilities from here:


- you thought either "yes, I think this is running" or "no, I don't think it is running yet";
- you scratched your head (and possibly started insulting me).

Well, if you are in the second category, I congratulate you! And even more if you thought about insulting me because I was talking nonsense (did you think I had slipped here?!). If you are, however, in the first category, I highly suggest you scroll back two or three screens and study Images vs. Containers again.
I was talking nonsense: what we have just pulled is an image; it is inert, it serves as a model. An image doesn't run, doesn't execute.
We have just acquired blueprints; we haven't built anything with them yet.

Now it's time to create a container from this image. Do docker run ubuntu.

Well, that was disappointing... What happened here?

Lots of things happened actually: when we ran docker run ubuntu, we instructed Docker to create and start a container from the ubuntu image. The first thing it did was check if it had said image.
In our case, it did, since we had just pulled it moments before.
Then it created a container from this image.

What does that mean, and how can you verify it?

Let's start with the latter question, as it is the simplest. In Docker, we use docker ps to list the containers. Go ahead and do it.

What? Are you kidding me? It says here that there is nothing!

Whoa, I forgot to tell you that docker ps lists the running containers. If you want to list all containers (so running and non-running containers), it's docker ps -a; the -a is, as in many Linux commands, short for "all". Now go ahead and do it, you should see your container.
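In short (the second command should reveal the container we just created):

$ docker ps      # running containers only: our stopped container is absent
$ docker ps -a   # all containers: the stopped ubuntu container shows up here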
docker ps suspiciously looks like ps, which is a Linux command listing processes. And once again, this is not by chance, and I'll use this note as an opportunity to tell you my favorite Docker sentence: a container is just a process.

This is so important, so fundamental, that I suggest you actually write it down on a post-it and stick it on your computer's screen. For the first few months when I used Docker, I literally had that post-it snapped on my screen. For real.
As I usually say on IRC, 90% of the time, when you (as a beginner) have a problem with Docker, read that sentence again, and it should solve it.

Back to our container: why did we have to use docker ps -a, which lists all containers (so in our case our stopped container), to see it?
Well, it's because it is stopped, really. Now the real question is: why is our container stopped, then?

And that, my friend, is the right question (bonus point for those who caught the reference). The truth is: I have already answered this question, only indirectly. And you need to be very good to find that out.

I answered it when I said that a container is just a process. Let's look again at how we started our container: docker run ubuntu. Decomposing it:

- docker: this is the docker client, OK;
- run: this is the docker command we use to instantiate a container from an image and run a command inside;
- ubuntu: this is the name of the image from which we instantiate the container.

But where is our command / process? There is none! Indeed, this is why our container is stopped: since we did not specify anything to run inside it, it did not run anything.

Okay, I sense that some of you might be getting confused at this point, and this is perfect, because it allows me to revisit something we have talked about.

The reason you might be confused right now is that your intuition makes you consider Docker containers the same way you consider virtual machines.

When you have a virtual machine, you start the hypervisor (e.g. VirtualBox), select your machine and click "start". Then your virtual machine boots, starts, and then you are ready to use it: you're inside.

Docker containers are not like that, at all. This is what I was talking about with the post-it; go read it again. In my very early Docker days, someone on IRC told me a slightly modified version of my favorite Docker sentence: "A container is just a fancy way of running a process." And I think it captures the idea very well: do not overthink containers. Containers are just a slightly different (improved, one may well say) way of running a process. When you run a process inside a Docker container, it's the exact same thing as running the process outside, only you add some isolation and everything we saw in the first post.

With docker run ubuntu, we created and started a container from the ubuntu image, but we did not actually specify which program/process we wanted to run, so it didn't run anything. It is as if, on your terminal, you typed exec. Okay, you wanted to run a program, but which one?! It's the same thing happening now.
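To see the difference, we can give the container an actual process to run by appending a command (echo is available in the ubuntu image):

$ docker run ubuntu echo "Hello from the container"
Hello from the container

The container started, ran echo, and as soon as echo terminated, the container stopped again, which leads nicely to the next point.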

This allows me to highlight another specificity of Docker containers: they stop when their PID 1 process terminates. This is important: when the process you are running inside your container exits/returns/terminates/crashes, the Docker container stops, because it has nothing left to run.

There it is. I hope I was crystal clear about the differences between images and containers, because that is arguably the most important part of Docker for a newcomer. If you have any hesitation, do not go further: take a break, re-read this part and shoot me an email if you have to.

I have been using the class / object analogy quite a lot, but this is just a simple analogy to give you the feeling. It doesn't mean that they are implemented in terms of classes and objects. Besides, don't push the analogy further than what I intended: do not look for concepts of inheritance, polymorphism, etc.
It should be obvious, but one is never careful enough and I just wanted to highlight that.

Docker Networks

Okay, this part: some people might tell you that it is not a Docker fundamental and therefore should not be in here. In essence, I think they're right, but here we are: Docker has a history, and with history come reflexes. Some of these reflexes are now deprecated and bad; people should not do them anymore.

But they still do, and I see people coming on IRC asking about problems that clearly show they lack updated knowledge about Docker networks.

Besides, Docker networks can solve some tough problems very easily, and thus I've decided to include them in the fundamentals.

That being said, I will only describe and show some very basic examples here, sort of like an introduction. After all, the goal is not to make complex networks in this part; it is to introduce you to the fundamentals, meaning the things you will use all the time.
Then I will make more advanced articles focusing on networks.

Docker Networks: Why and What?


So what is this Docker network thing?
Let's consider what we have learned about Docker so far: it can execute processes in an isolated manner, meaning it hides the filesystem and the other processes.

That's good, but very limiting: complete isolation doesn't really serve us; how do we make our processes communicate with each other? Here come Docker networks.

Suppose we are running a basic webserver infrastructure, comprised of:


- some static HTML files we want to serve;
- an nginx instance acting as a web server;
- a PostgreSQL database to store data.

Some notes about this:

- First, it's an example, so if you're more used to Apache instead of nginx and/or MySQL instead of PostgreSQL, you can swap those terms: everything remains valid!
- Second, it's an example(!), so the astute reader might wonder what we can possibly store in our database if we only have a static website. This is true, but I don't want to introduce another difficulty by talking about PHP or another backend. Let's keep this simple and imagine that we fetch data from the database.

Okay, so what's the deal?


Docker is about containerization and isolation, so we will containerize! The principle is to start container #1, which will run the nginx instance, and container #2, which will run the PostgreSQL instance.

In this setup, we have containerized / separated the instances. If, hypothetically, the nginx server is compromised or crashes, it will not affect the database, which is pretty awesome.

The problem in this setup is that the nginx instance has no way to know about (let alone contact) the SQL instance, since each runs completely isolated.
We want to break that isolation in a controlled manner: this is what Docker networks are for.

The principle is very simple: we will create a private, isolated network of which the nginx and PostgreSQL instances will be a part. Since they will be on the same virtual network, they can see and talk to each other; and since this is a private network, other containers will not be able to see them.


This is very handy.
You can think of it as a VLAN between the two containers :-)
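As a quick preview of where we are heading (we will detail the administration commands just below; the names webapp-net, db and web are made up for the example):

$ docker network create webapp-net
$ docker run -d --name db  --net webapp-net postgres
$ docker run -d --name web --net webapp-net nginx
# db and web can now see and reach each other, while containers
# outside webapp-net cannot see them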

How is This Done?
I'm glad you asked, because it's important to have at least a notion of the things going on.
I won't describe the very low level, but enough so that you can have a pretty solid intuition about Docker networks.

When you installed Docker, if you were curious and looked at your network interfaces (if you did not, do it now), you should have found a surprise. Use ip addr show (if you are modern) or ifconfig (if you are an outdated caveman).

You should see a new network interface named docker0. It's as if you had plugged in another network card, only this is a virtual network interface, handled by the kernel.
This interface is said to be a bridge (this is the official term). Why? Because it acts as a bridge between several interfaces. At the moment, it doesn't bridge anything, but it will eventually.

Said simply, bridges are ways of grouping several network interfaces into one. What this will do is group all the interfaces of our Docker containers together: this is how our containers can have Internet access!
Run docker network ls (this is a new command, by the way!). You should normally see 3 networks:

- The none network is for when you want to explicitly disable networking for a container. In this case, it will specifically not use the bridge.
- The host network is a bit special. What it does is un-containerize the network part of the container. Basically, it makes all the network interfaces of the host available to the container directly. You are not likely to use it, unless you have a very specific use case.
- The bridge network is the default private network every container joins when you don't specify another network. There is a specificity, however, that we will revisit in a later article, but I'll say it here anyway: even though they are part of the bridge network by default, the containers inside it are neither visible nor discoverable by their names, which means you need to use their IPs to contact them.
I won't say more for now because that implies some other things, and I will reserve that for the article about networks.

Back to our network interfaces. The idea is that when you create a container that is part of this bridge network, you will create yet another network interface. From inside the container, this network interface will be named eth0! For the container, this is a normal, Ethernet-based network interface.
From the host's point of view, it will be a virtual Ethernet interface and will have a name something like veth-xxx. The veth part is of course for virtual Ethernet, and the xxx is a unique name.

Since I said that every container you create (and for which you do not specify a custom setting) will join the default bridge interface, it is not very private.
This is why we will actually create our own private, isolated, bridge-based network and will use that network for our setup.
That was a lot of words; let's check their meaning one at a time, to be sure we don't leave any obscure part:

- private: means that we will explicitly choose which containers join it, and thus it won't be polluted by other containers;
- isolated: means that the containers which are not part of the network won't be able to see, reach or communicate with our containers inside it;
- bridge-based: this is a tiny bit more complex to explain right now, but it has to do with the fact that there is a fourth Docker network type that we have not talked about yet, overlay, and this is to highlight that our network is not one of those. Overlay is by far the most interesting Docker network, and we will definitely talk about it in the article about networks!

Though this doesn't have any crucial implications, make sure you understand the difference between private and isolated. The former states that containers won't implicitly join the network; the latter states that containers outside and inside the network can neither talk to nor see each other.

A Bit of Practice
You won't be entirely satisfied now (I know it) because I will only show you administration commands and not how to use networks. But this is because we haven't yet seen the basics of container manipulation. So be a little more patient, keep reading and you will be pleased later!

As you might have guessed, docker network is the command we will use to interact with Docker networks. docker network --help will give you a list of the commands it accepts (this is also true for every docker command).

We can list the networks we have with docker network ls. As always with Docker, you will see both a Name and an ID column. The name is the name you give when you create a network. But Docker uniquely identifies objects (whether they are networks, images, containers or volumes; I'll come back to volumes later, don't worry) by IDs, so you can always replace names with IDs in your commands. Except in the create command, of course: you cannot choose the ID.

We can create a network with docker network create my-private-network. Again, you can get some help on the subcommand with docker network create --help. And once again, I should congratulate the Docker guys for their very nice documentation.


Okay, so now that we have created our network, you can check it with docker network ls again. As you see, there is a third column, DRIVER, which states the driver (or type) of the network. As I said before, without any additional options, new networks are created with the bridge driver.

What can we do next with networks? Delete them, of course: docker network rm my-private-network. Easy.

Two important notes:
- You can delete the networks you created, but it's not possible to delete the three default networks: none, host and bridge.
- You cannot delete a network if there are containers connected to it; you have to disconnect them first.
And how do I do the second point? With docker network connect my-private-network my-container and docker network disconnect my-private-network my-container. Mind the order: first the network, then the container.
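Putting the administration commands together, a typical session might look like this (my-container is a hypothetical, already-existing container):

$ docker network create my-private-network
$ docker network connect my-private-network my-container
$ docker network ls                                           # our network is listed
$ docker network disconnect my-private-network my-container
$ docker network rm my-private-network                        # allowed now: no containers attached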

This is very important and not everybody gets it, partly because people tend to skip the basics and want to use Docker as fast as possible, and partly because this is relatively new, even in Docker (there was another way of doing this, but it's now deprecated; at least it should be). So I want to change that and make it crystal clear.

Making Data Persistent

Again, this is an article about fundamentals, so I will mainly talk about and explain the possibilities, but not show you a full-blown example just yet. I know this is frustrating, but Docker being very powerful, you can do a lot of things with it. And I have enough material to make a complete article for every aspect I introduce here.
This is what I'd like to do, rather than a very sparse, incomplete little paragraph.

So here we go: what's the problem?


Remember Part I, where I was talking about layers?

Let's recap the idea, phrasing it in another way, which should help clarify any remaining doubts. This is highly related to Images vs. Containers.

What's an image? We saw that an image was a custom environment that we created (or other people created) like we want it to be: install the necessary tools, software and libraries, define custom environment variables, create a custom directory architecture, etc. This is a Linux environment shaped like we need it to be.

From such an image, we instantiate (or create) containers, which are running instances of this image (think objects instantiated from a class in OOP).

And we already saw that images were not stored as big chunks of obscure binary data, but rather as layers: small, potentially reusable parts of images, which stack up a little like git diffs. This allows for smart reusability and space saving.

Alright, but we never really talked about what a container actually was. How do you make a running instance of an image? The answer is very simple and uses the same idea that images do: layers.

Here is how it happens: when you have an image, which is composed of several layers (three in the example below), they are stacked like this:

+----------------------------------+
|             LAYER #3             |
+----------------------------------+
|             LAYER #2             |
+----------------------------------+
|             LAYER #1             |
+----------------------------------+

The second layer is based on layer 1 and makes some changes; then layer 3 is based on layer 2 and makes some other changes. Alright, that is simple enough.

All these layers are read-only, because together they make an immutable image. You can't write to or modify any of these layers.

Now witness the magic behind creating a container out of this image:

+----------------------------------+
|             RW LAYER             |
+----------------------------------+
|             LAYER #3             |
+----------------------------------+
|             LAYER #2             |
+----------------------------------+
|             LAYER #1             |
+----------------------------------+


It is that smart: a container is simply an additional, read-write layer on top of the image's layers. So yes, a container is based on an image, and every modification you make (write a new file, modify an existing file, remove files, create a new user, etc.) goes into a separate, read-write layer. This is very smart because, once again, you share the space. If you have a 500 MB image right now and instantiate 2,000 containers from it, as long as you don't write data or modify anything, your disk space has not changed!
This is because each container uses the same read-only image layers, and currently their read-write layers (also called the container's layer) are empty.

See how that compares to copying 500 MB 2,000 times? This is awesome. Take some time to appreciate the beauty of it.

So always keep this in mind: whenever you create or modify data in a container, it goes into this top-most, read-write layer. By the way, just as a reminder, all of this is made possible by the union filesystem.

So You Were Talking About a Problem?


Ah yes, I'm coming to it!

Say you create a container (we briefly saw that it was done with docker run <image-name> in a previous part, but we'll come back to it in more detail later), then you write some data in it: touch test.txt at the root /.
Running ls will give you several directories (the typical ones you find at the root of a Linux system: etc/, home/, usr/, etc.) and test.txt.
If you followed, you know that test.txt is in the container's layer (note that from now on, when I use "container's layer" it means the top-most layer of the container, the one which is read-write and sits on top of the image's read-only layers).

Now, I hope you have not forgotten that a container is just a process, so there must be a process running in the container (otherwise it would quit). For our particular example, it doesn't matter and has actually nothing to do with making data persistent, but let's always keep in mind that a container must have a process running inside; so let's say, for the sake of the example, that an nginx HTTP server is running, and forget about it.

Now suppose you stop the container, either because you issued docker stop <container> from the outside or because your nginx crashed for some reason. So the container is stopped: it's not a running process on your host anymore.

Well, let's start it back with docker start <container>; if all is okay, this container should start. And if you run ls at the root, what are you supposed to see?

The same as moments ago: the directory structure and the test.txt file. I am insisting on this point because in my early Docker days, some people told me the contrary and it misled me. In other words, they told me that once I restart my container, the data inside it will be lost; this is wrong.

Why is it the same, then? In other words, why is our file/data still here?

It's because we only stopped the container: we just made it non-running. The process inside the container was killed (either gracefully or brutally), but on our host, the container's layer is still present; it still has data in it.

So let me say it once and for all: stopping a container does not delete its data.

And that's perfectly logical, after all. Don't let anybody talk you out of it: you can have 10 GB worth of data in your container, you can stop and/or kill it any number of times you want; when you start it back up, the data will still be there.
Because the container's read-write layer will still exist and be used by your container.

So what's the fuss about making data persistent, then?

Well, here is the problem: Docker was made so that (and you should always have this in mind as well) containers can be destroyed and recreated at any moment.

What does this mean? It means that at any moment, you should be able to destroy a container (I did not say stop, I said destroy) and recreate a container, with little to no consequences.

We saw that the hard and heavy step was building the image through the Dockerfile, but once that image is built and stored on your host's hard drive, it is instantaneous to create a container from it; or a thousand. There's a reason this is so fast: the implementation reason is that it's just an additional read-write layer; the logical reason is that containers are (and should be) only processes. It's a utility, it's a software.


So what happens when you destroy a container? (Pay attention now: we will often stop and destroy containers, and these are not the same thing.)

Destroying a container is easy: first you need to stop (or kill) it. If you try to destroy a running container, Docker will insult you, telling you it's not nice to kill a running container; it's like killing little puffy kittens, you know?
So stop it first. Once your container is stopped (data is still there at this point, remember), destroying it is just a matter of deleting its container's layer.

And the data in it is gone: it was in the container's read-write layer, and we have just deleted it!
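In terms of commands (docker rm is the "destroy" command; my-container is a hypothetical container name):

$ docker stop my-container    # process killed, read-write layer kept: data safe
$ docker start my-container   # same layer reused: data still there
$ docker stop my-container
$ docker rm my-container      # read-write layer deleted: data gone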

The Solution(s)
By now you should be confused: how can we keep data? Because honestly, you cannot think this is a limitation of Docker!
The fact that containers should be able to be destroyed and recreated on demand seems to contradict the fact that data is stored in the container's layer.

Remember that pretty much everything that has to do with files in Docker is implemented with a union filesystem (unionFS)? Well, this is very good for all the advantages we've seen so far, but this is where it begins posing problems. So what we'd like is the ability to bypass that unionFS, and Docker gives you three ways to do that.

Now, that's another area where some people and I disagree on IRC (not so many, don't worry): Docker provides you with three ways to bypass the unionFS, and they are meant for three different use cases; they are not equivalent!

Either you are in use case #1 and use solution #1, or you are in use case #2 and use solution #2, or you are in use case #3 and thus use solution #3. But you have to know.
Note that the three solutions are compatible, so you can be in use cases #1 and #2 and use both solutions (or any mix, for that matter!).

Here are the three use cases:

- You need to make data persistent, and by that I mean that you need to keep your data even after your container is destroyed, and you want to be able to recreate a container that can use this data.
- You need to share data with your host, and by that I mean that you need your container(s) to access files or directories that are on your host, and/or vice-versa. Remember that the unionFS shadows the host's filesystem and was designed specifically to prevent you from doing this.
- You need to share data between containers, and by that I mean that you need several containers to be able to read/modify the same data, without the host having anything to do with it.

You need to carefully consider these 3 use cases, understand how they are different, and when the time comes, know which one(s) you are in.

I'll now describe the solutions, and give you a basic example of when you are in each use case.

Let me begin with solution #2, because it's the easiest one.

Sharing a Host's Directory


Suppose you are building an image to test your code, so that you can compile, execute and test your code in a contained, controlled way. You have your code in /home/nschoe/workspace (I'm talking about source files, like *.c, *.hs, *.js files, etc.) and you want to compile and test it.
When we see Dockerfiles later, you might be tempted to simply COPY the /home/nschoe/workspace directory inside your image, so that each time you create a container from that image, you have a new copy of the code. This might work indeed, but the problem here is that if you test the code inside the container, find out there's a bug and change the code on your host, then you have to rebuild the image, because the code changed. This is very inefficient, because remember that building an image is the hard and long part.

So the good solution is to build your image with all the necessary tools (gcc, ld, valgrind, ghc, nodejs, etc.) to build and execute your code, and to share the /home/nschoe/workspace directory with the container.
Note that this is different from having a copy of it: I'm talking about sharing, i.e. giving them both (the host and the container) access to the same data.

In Docker, this is called mounting the directory, and you can think of it exactly like mounting a partition. The idea is that you will mount /home/nschoe/workspace into a mountpoint inside the container, for instance /app/code. You can choose the same mountpoint (thus mounting it at /home/nschoe/workspace inside the container), but you don't have to.

To do that, we use the -v option of docker run. Something like this: docker run -v /home/nschoe/workspace:/app/code <image-name>. As you have guessed, the syntax is -v /path/on/host:/mount/point/on/container.

And from now on, both the host and the container share the same data:
- for the host, this data is located in /home/nschoe/workspace;
- for the container, the data is located in /app/code.

Be careful with this: because they share the data, any modification from one will affect the other. We are bypassing the protective union filesystem. In our case, this is very handy: if we find a bug when compiling or testing our code in the container, we can have our favorite editor opened, modify the file /home/nschoe/workspace/Main.hs for instance, and it will be immediately changed in the container; no need to restart it, or anything: it-is-the-same-data!

But this can be very dangerous too: what if we delete a file (or the whole directory!) from inside the container? Well, congratulations, you've just deleted your entire code!

For this reason, you can (and I strongly encourage you to do so) make the mount read-only, like such: docker run -v /home/nschoe/workspace:/code/app:ro <image-name>. Mind the :ro for read-only. In that case, it all works exactly the same: when you modify files on the host, the change will be effective immediately in the container, but your container cannot modify/delete/erase anything in its /code/app directory.
Think of it as mounting a CD-ROM (yes, that existed, okay!) or a write-protected USB key: you can see and access the content, but not modify it.

Sharing code with your container for execution is one common use case; sharing /etc/localtime with your container is also a common one, to have the container synchronized with your host's time. But you may find some others!
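For instance, the /etc/localtime case uses the same read-only syntax we just saw:

$ docker run -v /etc/localtime:/etc/localtime:ro ubuntu date
# 'date' inside the container now reports the host's local time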

Named Volumes
For solution #1, we've talked about keeping data persistent; the typical example is database storage. Let's use PostgreSQL. Typically, when you have a software that needs to talk to a database, you will use at least two containers.
We can take the example of a webserver (e.g. nginx) which talks to a PostgreSQL database. Let's focus on the PostgreSQL instance. It's a software (the SQL database server) which stores some data (the database data). That data had better be persistent, because that's PostgreSQL's job!

But said like that, it seems to break the assumption that containers should be able to be destroyed and recreated at any moment. To fix that, we will separate the process running the PostgreSQL server from the actual location of the data. We'll use a Named Volume.
It's almost the same as before, with one minor difference: rather than using a path in the first part of the mount option, you specify a name, so you would do: docker run -v my-website-data:/path/on/container <image-name>.

What the above command does is create an instance of image <image-name> and mount a Named Volume at location /path/on/container in the container. The analogy with a partition is even more valid in this case: every piece of data written to /path/on/container inside the container will bypass the unionFS and be carried out to a special directory on the host. When you destroy the container, the data will be safely kept somewhere on the host, and the Named Volume my-website-data will still exist with its data in it. And you will be able to create another container that uses it.

This is very handy if you want to update your container, for instance. Let's suppose you'd like to activate a new option in a PostgreSQL file, change the .conf file, or update the PostgreSQL version; you will need to destroy that container, update the Dockerfile (or use a newly-created PostgreSQL image from the Docker Hub) and then recreate the container, instructing it to use this Named Volume so the data gets automatically reused.

Isn't that awesome?
You need to precisely understand the difference between sharing a host's directory and using a Named Volume.
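A hypothetical upgrade workflow, using the official postgres image (the mount point is the data directory that image uses; the version tags are just examples):

$ docker run -d --name db -v my-website-data:/var/lib/postgresql/data postgres:9.5
$ docker stop db
$ docker rm db        # the container is gone, the volume is untouched
$ docker run -d --name db -v my-website-data:/var/lib/postgresql/data postgres:9.6
# the new container finds all the data in my-website-data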

There is nothing magic about all of this: when using a Named Volume mounted to a mount point inside a container, all it does is instruct Docker not to use the union filesystem when writing to the mount point, but rather to write directly on disk. Even if you don't need to know, I will tell you where those Named Volumes are stored on your host, because I hate when there are magical things happening.

So, your Named Volumes are stored in /var/lib/docker/volumes by default. There should thus be a /var/lib/docker/volumes/my-website-data; this directory (like every Named Volume's directory) contains exactly one folder, named _data, which contains the data in the mountpoint.

Be very careful here: I told you about this location so that you understand what's going on (making data persistent is nothing more than storing data in a known location and bypassing the unionFS), but you should never, never ever edit, modify, write or delete anything in the folder /var/lib/docker/volumes. Never. Treat it as a black box; well, actually a gray box, since I've explained to you how this works.

If you have been playing with Docker more than this article does, you might notice some weird, very long names in /var/lib/docker/volumes; these are volumes too. (Spoiler for other articles: you can see all your Named Volumes with docker volume ls; mind that it's volume (singular) and not volumes (plural); this is weird and inconsistent with docker images, but it's like this.)

These weirdly-named volumes are called Anonymous Named Volumes (which is a weird concept ^^). It happens when you ask a container to use (and create) a Named Volume, but don't specify the name, something like docker run -v /path/on/container; here you see that there is neither a path nor a name before the mountpoint. In this case, the docker daemon generates a unique hash and uses that as the name for this volume.
It is exactly as if you had done docker run -v 792e7d8e336b133e1675b24c0ead99605e62a98ad30fdd107200b5be3c9db3658:/path/to/container <image-name>, only you did not choose the name 792e7d8e336b133e1675b24c0ead99605e62a98ad30fdd107200b5be3c9db3658 yourself.

Sharing Data Between Containers

Now it's time to think about solution #3: sharing data between containers. What could be a use case for this, and how does it work? It's very simple.

Suppose we are inside a company that produces code. We need a way to centralize this code and do versioning; we'll use git. We want a repository, similar to GitHub, but private, internal, something like GitLab. Don't overthink this if you don't know GitLab; it's not where the interesting part is. For the sake of it, GitLab = GitHub on a local network.


The developers in your company create some code, and upload it to your GitLab server when they use git push. Now, the interesting part: since you are a very concerned developer leader and care a lot about your code being robust, you want automatic testing done when people upload code to your GitLab account.
So the workflow is as such: a developer uploads some code with git push, it's stored in your GitLab, then automatically you perform some tests on the code, and depending on the success or error, you either accept the push or reject it, possibly sending this developer an insult-filled email.

That's the workflow, now the pseudo-implementation. Since you've understood the concept and importance of Named Volumes, you use a Named Volume to store the actual code in your GitLab container, so that you can safely update GitLab when a new version comes out. Let's say, for the sake of the example, that the code repository is stored in /var/gitlab/code. What you would do is create the container with the option -v gitlab-code:/var/gitlab/code. Now you're safe.

What about the automatic testing of the code? We could make it run in the same container, but we are going to use another Docker container; in this context, it makes more sense. So the principle is to run another container, which will read the newly-pushed code, test it, and send an email to the developer when he pushed some broken code.

How can this container access the code, since it's currently stored in a Named Volume but should work in harmony with the container running the GitLab server? One possibility is to run it with -v gitlab-code:/path/in/container. But that's not what we are going to do, for several reasons: first, it's a pain to write the same option twice, and second, suppose the GitLab server uses 10 Named Volumes (maybe one Named Volume per project); then we'd have to re-write the 10 -v options!

No way! And semantically, it would make more sense to say "I want to use the same volumes as this other container". And that's exactly what Docker allows us to do: we will run our second container with the --volumes-from <first-container-name> option.

It's rather verbose: it means "use the same volumes as the specified container, at the same mount points". So in our second container (the one we use to do the code testing) we will have the code available at path /path/in/container. If the first container has 10 Named Volumes, at 10 different mount points, we can re-use all of them with one simple option: --volumes-from. Awesome!
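A sketch of the setup (the image names are made up for the example):

$ docker run -d --name gitlab -v gitlab-code:/var/gitlab/code my-gitlab-image
$ docker run -d --name code-tester --volumes-from gitlab my-testing-image
# code-tester now sees the gitlab-code volume at /var/gitlab/code too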

The Plague of So-Called Data-Only Containers

Let's stop playing now, and let's start being serious for a moment. I'm about to talk about something that I don't want to talk about. It's a plague.

I want to be extra clear on that: I don't want to talk about Data-Only Containers, because they should not exist anymore. It's a concept from the past, from ancient Docker times (which in actual time is still very close ^^). It's like talking about floppy disks in 2016, do you get the idea? The concept is still valid (floppy disks still work and still store data), but you would never use them today, would you?

I've hesitated a long time before deciding whether or not to write about them, but they are still heavily talked about, still documented on websites, and still used by people (and not only beginners).
I've decided that it was best to tell you about them so you do not use them, rather than hiding them from you, leaving the chance that you would use them, not knowing how bad they are.

Let me be clear: I'm talking about them because you might be using them, and in this case I'd like you to understand how much and why you are wrong; you may have heard the term and be wondering whether to use it or not; or you might simply be curious and want to know about them.

So here I am: I'll make a small paragraph about them, but I want you to forget all about it immediately afterward (forget as in never-use-it, but keep-it-in-mind-so-that-you-know-what-not-to-do).

Data-only containers are what people used to make data persistent before the Named Volumes API was functional. The idea is simple, albeit a bit twisted: if you are designing a custom image, you write your Dockerfile, put in the instructions you want to build an image from it, and use a VOLUME statement. That statement will create an Anonymous Named Volume (which is a Named Volume with a hash for a name) containing the data you want to have persistent, when you create a container from it.

The idea is then to create a container from that image (read: from the image built from this Dockerfile) and do nothing with it: don't specify any process to run in it. Since you did not specify a process to run, the container will simply stop, like we've seen before. Yes, but there's a catch: when you started it, it did create its Anonymous Named Volume. And the idea behind all of this is that now, to create your real container (the one you will have running), you will use the --volumes-from option.
Doing this will make it so that the same volume will be used for running your container. When/if you destroy and recreate this container, provided you re-run it with this option, your data will be safe.

Really, this idea is bloated. I perfectly understand that it was needed at some point, but now that we have proper Named Volumes, this should be abandoned; so please, don't do this.

Okay, so that was a pretty big part, but the Docker Fundamentals that we have learned here are very important, even if they sound boring and abstract: I have laid some bricks in your mind that we will reuse in more in-depth articles, if I did my job correctly and you read carefully.

You should probably take a break now, because you need to make a context switch in your brain to let it process what you have learned so far. I know this sounds like a pain in the butt, but knowing when to take a break is an integral part of learning.


Interacting With Images

It is now time to take a little step forward and see slightly more concrete examples. In this part, we will learn how to interact with images, so we will see some docker commands (yay!).

As we know by now, an image acts as a model or base instance from which we can instantiate (or create) containers. I know I've said this several times already, but let's see it again:

- build time: this is where the image lies, along with its Dockerfile. This is considered the heavy part, which takes some time to build.
- run time: this is where the container lies; it is created from an image, and creating a container is instantaneous: it can be (and actually is) done all the time.

In this part, we will focus on the images.

Listing Images

So what can we do with images on our system? What are some useful docker commands?

Well, first, it's good to know what images we have on our host; we can list them with docker images (yes, images, plural; Docker lacks some uniformity here, remember it was docker volume, singular).

The output is pretty straightforward; on my machine it currently gives:

REPOSITORY              TAG      IMAGE ID       CREATED       SIZE
solita/ubuntu-systemd   latest   58676da6fce1   2 weeks ago   122 MB
nginx                   latest   0d409d33b27e   3 weeks ago   182.7 MB

Here you can find the image name (column REPOSITORY) and its unique ID. We will come back to these in a moment; first, let's have a quick overview of the other, more trivial fields.
CREATED gives you the date at which the image was created. I insisted on image because what it gives you is the date at which the image was built from its Dockerfile, not the date at which the image ended up on your computer (it's not the "last modified" timestamp on your filesystem). If the image was created by somebody 10 months ago and you have just downloaded it 10 minutes ago, the CREATED field will be set to 10 months ago.
SIZE is the size of all the layers composing the image.

If you're like me, one, two or three things should be bothering you right now: why is there a REPOSITORY field rather than a NAME field? What is that IMAGE ID? And what's that TAG thing?

Why a REPOSITORY Field and Not a NAME Field? Also, What's that TAG Thing?
I could tell you that an image name is the combination of the REPOSITORY and the TAG, but that would piss you off even more (it's true, though, so keep it in mind). While that doesn't explain the rest, it does explain why there isn't a NAME field: the image's name is the combination of the REPOSITORY and the TAG, separated by a colon, so the two image names in the above example would be solita/ubuntu-systemd:latest and nginx:latest.
Note that this is a technicality; most of the time (unless explicitly stated), when I refer to the image name, I will in fact refer to the REPOSITORY and say things like "the image named nginx".

Let's begin with the TAG, as it is the simplest. The TAG is really a version of the image. For instance, if you search the Docker Hub for an image named ubuntu, you will find several TAGs (versions): 14.04, 16.04, latest, etc. Please keep in mind that from now on, in Docker syntax, a colon separates an image's name from its tag. So when I say that you will find three TAGs or versions named ubuntu:14.04, ubuntu:16.04 and ubuntu:latest, it should be obvious which part is actually the TAG.
Please take a minute to familiarize yourself with that, because from now on, I will take some language shortcuts.

The TAG is really a version of the image. Suppose you build a custom environment for your software, so you install the needed libraries and dependencies. You might want to make two tags: development and production. In the development version you will install everything the production environment has, plus all the -dev versions of the libraries (I suppose here ubuntu is the base image) and some additional debuggers, valgrind, etc. The production version will only contain the runtime libraries to distribute to clients.
Since it's basically the same environment that you build, it makes sense to call the image by the same name, and make it a different version. This is the first purpose of the TAG field.


The second purpose is to ensure immutability. Let me explain: we've already seen that Docker was about creating an environment that you completely control: you know exactly what is installed and where. This is very handy: you develop and test your software in that controlled environment, then you can ship your code/app in that same Docker environment to your client and you're guaranteed it will work.

Let's suppose you built your environment from the ubuntu image and it's 2015, so the ubuntu version is 14.04. Everything works well and you're happy.
Now comes April 2016, and Ubuntu releases their 16.04 version. Surely some people will be interested in having a Docker version of that. But in the meantime, it's not wise to update the image so that ubuntu is now a 16.04 system, because your code might very well break. So it's important that your base image stays a 14.04 system. And unless you know what you are doing, when you write Dockerfiles to build some images, you should always specify the TAG of the image from which you base your own image.
This is because everywhere in Docker, when you refer to an image, if you don't specify its TAG, it's considered :latest by default. So if you want to download an image from the Docker Hub and you don't specify the TAG you want, it will download the latest tag.
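The same default applies on the command line, so being explicit costs nothing (a small sketch):

$ docker pull ubuntu         # implicitly ubuntu:latest, which may point elsewhere tomorrow
$ docker pull ubuntu:14.04   # pinned: you always get the 14.04 base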

This is all very nice, but what's this weird thing about a REPOSITORY? It has to do with several things, and I'm not sure which one is predominant, if any. Let's see them all.

First of all, if you hang out on the Docker Hub and search for some images, you will find a pattern: some images are just one name (like ubuntu or nginx) and some others come with a slash, like solita/ubuntu. The reason is that usually, single names like ubuntu and nginx indicate that these are official images. So you know that when you download the ubuntu image or the nginx image, it's a Docker image that was built by the official Ubuntu or nginx team, so you should be able to expect a decent default configuration.
Note that, as far as I know, this is not enforced by Docker; there is no real guarantee, it's just that it seems to be a pattern, so take this statement with care and always double check. On the Docker Hub, you can see the number of stars that were given to an image: the more stars an image has, the more popular it is. Likewise, Docker tells you how many times an image was downloaded, so it's an indicator of the image's quality too.

When a user (like you or me) wants to upload one of his images to the hub, he could very well be uploading some shady shit; imagine if that user named his image ubuntu: you would be thinking you're safe when you are in fact using some unknown code. This is why you need to specify a prefix name. If you want to push to the Docker Hub, you first need to create an account; say your account is peroxide, then the images you push should be named peroxide/image-name. Another bonus is that it allows you to directly see all the images uploaded by a same user: all of your images will be prefixed by peroxide, so that's another reason.

Note that on your computer, you can name your image however you want: you can build an image and name it ubuntu or nginx, and it's perfectly valid. It's just that you won't be able to push it to the Docker Hub; in order to do that, you will need to rename it.

That's about it for now; we may come back to revisit this when/if we talk about Docker registries, but this is a topic for later.

Again, let me insist on the fact that you need to be clear about these concepts, because from now on, I will use "name" to denote the REPOSITORY. I will say things like "the image's name is nginx", where I should really be saying "the image's name is nginx:latest", but most of the time, it's clear enough so we don't need it.

So technically, the image's name is the combination of the REPOSITORY and the TAG, but most of the time, when it's not ambiguous, we'll simply say "name".
The TAG is a version of your image, so instead of having several images named web-app-dev and web-app-prod you can have the same image name but with a different TAG, like web-app:dev, web-app:prod.
Note that you can also use it to tag your image based on some version of your libraries, like web-app:1.0, web-app:1.5, etc.
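As an illustration, a plausible tagging workflow might look like this (the image name web-app is hypothetical):

docker build -t web-app:dev .        # build the image with an explicit tag
docker tag web-app:dev web-app:1.0   # give the very same image an additional tag

Both names then point to the same IMAGE ID: tags are just labels.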

What is an IMAGE ID and Why Do We Need It?


It serves a very useful and important role: it uniquely identifies an image. In fact, everything in Docker (read: images, containers, networks, volumes) has an ID that uniquely identifies it. It's a way to identify or refer to objects (images in our case) so you can know that you are dealing with the right thing.

This must seem redundant at this point, because from what we have seen so far, the name should suffice; but if you have been paying attention, you already have the answer to that.

Remember that I said you could name your image however you want on your computer, that you could even name it ubuntu if you wanted? Well then, if you run docker images and you see an image named ubuntu, how can you know if it's yours, or the official one? This is where the IMAGE ID comes in.

The IMAGE ID uniquely identifies an image, but I never said it was random, right? That's because it's not: the IMAGE ID is computed from the image's content, and it's deterministic: with the same inputs, you will create the same image, and thus you will have the same ID. So if you are in doubt, you can check the official ubuntu's IMAGE ID and compare it with your own. Actually, this is exactly what Docker does when you try to download an image: it looks at your IMAGE IDs to see if you already have it or not; it doesn't care about the name.
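If you want to perform that check yourself, a quick way might be (the trailing argument filters the listing by repository name):

docker images ubuntu              # list only the images named ubuntu, with their (shortened) IDs
docker images --no-trunc ubuntu   # same listing, with the full IDs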

Another use case: suppose you built your image and named it web-app (since you did not specify a TAG, it's latest by default; get used to it!). Then you find out that you need to change something in your image (perhaps install another library, or change an environment variable, etc.) such that it doesn't make sense to keep using the image without the modifications.
So you make the change, and you want to rebuild the image, with the same name web-app (and same TAG, latest). What should happen? Since you already have one image named web-app:latest, should it conflict? Should it tell you that you already have an image named like this? It very well could have been like that, but that would mean you'd have to delete the image before trying to rebuild it. It's possible, but trust me: when you are a bit more advanced in Docker and you start building your own images, this scenario will happen a lot! Build your image, check it, notice something is missing, change it and rebuild again. It would be a huge drawback to have to delete the image manually every time.

Besides, it's dangerous: suppose that you cannot, for the love of god, manage to find the missing library and you always fail to build the new image; or it's taking you longer than expected. What if you'd like to temporarily fall back to your previous, suboptimal but still working image? Well, you can't, because you'd have deleted it!

All of this to say that you don't have to delete your old image in order to rebuild one with the same name. What happens when you rebuild an image and you already have one with that name? Simple: your previous image will lose its name, and your new image will be called web-app:latest.

What does it mean for an image to lose its name? It means this:

REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
<none>                  <none>    ed0206fc5a9c   11 days ago   353.5 MB
<none>                  <none>    4e5c2e3d6118   11 days ago   122 MB
solita/ubuntu-systemd   latest    58676da6fce1   2 weeks ago   122 MB
nginx                   latest    0d409d33b27e   3 weeks ago   182.7 MB

Here you can see that when I run docker images to list the images on my computer, I have two images named <none>: they don't have a name anymore. It's because they had a name, but I built new images with the same name, so they lost theirs.

Of course, there's nothing special about them: you can still create a container from them, but you'd use their IMAGE ID to instantiate them; so rather than docker run web-app, you would do docker run 4e5c2e3d6118 for instance. This way you can still use the image.

Now you know what this IMAGE ID is and what it is for.

The IMAGE IDs you see when running docker images are shortened versions of the full IDs, exactly like git abbreviates commit hashes. If you want to see the real, complete ones, you have to use the --no-trunc option: docker images --no-trunc. It's useless most of the time, but still good to know, just in case.

We have been talking about images for quite some time now, but we need a way to get some images. There are two ways: you can either build them or download them. Downloading an image is called pulling it, and uploading is called pushing it.

By default, when you don't do anything fancy, images are pulled from and pushed to the global Docker Hub registry. This is a giant repository of publicly-available images.
You can search for images with docker search <name>; this will search for and return a list of images that match your (partial) name on the Docker Hub.

There are tons of images on the hub, so you might want to filter them a bit. You can add the --automated option, which will only show you automated builds; use it like this: docker search --automated <name>. Or you can use -s, or its long equivalent --stars, with a number, to filter images that have received some stars from the community.

Personally I like to search with docker search -s 1 <name> to filter out images that have been uploaded as a test, never used, and which are not meaningful. This already filters out quite a lot.

In order to push images to the Docker Hub, you first need to create an account on it, and then you need to log in; this is done with docker login. Once you're logged in, you can push your image with docker push account/image-name[:tag]. If you don't specify the tag, as usual, it will push :latest. Once the image is pushed to the repository, it's available on the Docker Hub: everybody can search for it and pull it.
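Put together, a push session might look like this (peroxide and web-app being, of course, placeholder names):

docker login                                  # asks for your Docker Hub credentials
docker tag web-app:latest peroxide/web-app    # the hub requires the account prefix
docker push peroxide/web-app                  # uploads peroxide/web-app:latest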

Note that it's possible to create a private account on the Docker Hub, in which you can push your images so that they can only be seen by you and authorized accounts. I won't cover it because it's not really useful for the majority of people, but go read about Docker Hub private accounts on the docker site if you want.

This part is important, because docker is notorious for eating disk space faster than a thirsty Englishman can drink a pint.

Disk space is consumed mostly in two ways: images and volumes.

Remember what we have seen about the ability to build an image and name it the same as another image, in which case the old one loses its name and becomes <none>? Well, as we have seen, these images still exist and still take up some space. Even with the layer system, when you use docker for quite some time, there are always some layers that end up being unused.


The docker images command has a pretty useful option, -f or --filter, that you can use to, err... filter the displayed images. Now I will admit that docker does lack some documentation about which filters you can use, but there is one particularly useful: dangling=true. This will display only images that are not used anymore by any (running or stopped) containers.

Note that it doesn't necessarily mean that you should delete them: it might just be that right now no containers use such an image, but you still want it. Hopefully this is very rare and should be temporary.

What you want, most of the time, is to delete these dangling images, because you don't need them anymore and they pollute your system. The command to delete an image is:

docker rmi <image>

rmi stands for rm (remove), i (image). In place of <image> you can put the image name (but be careful: if you don't specify a tag, it will delete :latest) or the IMAGE ID (safer: you don't run any risk of mixing things up). Personally, I never use the image name when I delete, I always use the image's ID (that you can copy/paste with the mouse's middle button, you know).
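For instance (web-app is a hypothetical name; the ID is one of the dangling images from the listing above):

docker rmi web-app:dev        # removes exactly this tag
docker rmi web-app            # careful: no tag given, so this removes web-app:latest
docker rmi 4e5c2e3d6118       # removes by IMAGE ID: no ambiguity possible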

To keep your system clean, it is advised to periodically remove the unused (or dangling) images. Sometimes, running docker images -f dangling=true gives you a lot of images, and it's a pain to select each one of them and delete them one by one. There is a shortcut: the docker guys anticipated everything. There is an option -q or --quiet which outputs only the IMAGE IDs.

Run docker images -q and you will see. This is much less readable, but incredibly useful for a machine. Now you can pass this list to docker rmi to delete all dangling images in one line.

Two ways (depending on your preferences):

Either you generate the list of IMAGE IDs only (with the --quiet option) and use that as an argument to docker rmi, as such: docker rmi `docker images -qf dangling=true`

Or you generate the list of IMAGE IDs only (still with the --quiet option) and pipe it to xargs, as such: docker images -qf dangling=true | xargs docker rmi

Both commands achieve the same result: remove the dangling images.

Okay, we are done with images for now; I believe you have the basic tools needed to deal with images for your docker operations.

So we saw how to interact with images, and this is cool. But images by themselves are pretty uninteresting: we can't do much without instances of images, i.e. containers.
Let's see some cool commands!

Well, obviously the first thing to do with containers is create them, or instantiate them from an image. By the way, for a quick vocabulary checkpoint: we already said (albeit never formally defined) that an image is built.
A container, however, is said to be either created, instantiated or run. Now that it is said, be ready to read all three terms indifferently.

So how do we create a container? It's the command docker run. It takes two parameters: the first one is mandatory, it is the image to instantiate the container from; the second one is optional and is the command to run inside the container.

Wait... What? I Thought You Said Containers Were Just a Process?


I'm glad you noticed this: it means you are following!

Indeed, when we build an image, we usually specify the command or process that it should be running. Except we don't have to.

How come? Well, it's easy to understand, and besides, we have already seen this when we created some containers based on the ubuntu image.

The idea is that Docker is a tool that allows us to define and build a controlled, determined environment: a set of tools and variables defined according to your preferences or needs. So building an image is really defining all of these settings. But once you have that, nobody forces you to actually run a process inside it. Just having an image without a process running inside it is pointless in itself, but maybe you have several processes that you want to test and they all need to run in the same environment, in which case you will create several Dockerfiles that use this one as a base, each with a defined process to run.
Or what you do is create the environment in your Dockerfile, build the image, and only run the process when you create the container: this is a valid use case of an image without a defined process.

So docker run can take an additional parameter: the command (or process) to run inside the container. Note that in the case of an image with a defined process, you can override it with that additional parameter, but it takes some getting used to, because there are some peculiarities that we will address a bit later.


the time being, lets use the second parameters only on images without a defined process.

So let's say we want to run a process based on the ubuntu image; the command would start with docker run ubuntu, and since the ubuntu image doesn't define a process by itself, we have to give it a command. Let's do that, let's start bash in it, so we can have a shell:

docker run ubuntu bash

Aaargh! What just happened here?

We don't have a shell, and if we try running docker ps, our container is not listed there, which means it is not running. We can check this with docker ps -a:

CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS                      PORTS   NAMES
52473c3f       ubuntu   "bash"    12 seconds ago   Exited (0) 11 seconds ago           big_panini

So what happened? Let's take a little look at the information available to us: the first column is CONTAINER ID; as we previously saw, this is just the unique ID for the container.

IMAGE: ubuntu, so far so good, this is what we wanted.

COMMAND tells us the container was running the program bash; again, so far so good.

CREATED: 12 seconds ago, seems alright; obviously your value here might change depending on when you ran docker ps -a.

STATUS gives us the state our container is in; currently it is exited, so it's not running anymore, and the number between parentheses is the return code the process returned when it exited. In this case 0, which usually means that the program returned without any error.

PORTS is empty, and this is perfect because that's beyond this article's scope for now, but we will come back to it, promise. By the way, if you are a little adventurous (which I recommend you to be!) and you tried with other base images, you might have something written in PORTS. Don't worry, we'll come back to this.

NAMES (I have no idea why the sudden plural here) gives you the name of the container. Note that we did not specify any, so docker generated a random one. It's always two words separated by an underscore, and it's often something humorous. So it's a nice feature.

This is All Very Good, But I Still Don't Have a Running Container!
Ah yes, that is perfectly right.

So what happened here? Because the return code was 0, everything seems fine. In fact, it turns out we did something special. As the running process we asked to run bash, which is a shell. And a shell doesn't do much by itself, just like that. It needs a command to run, something to read, something to process.

We are not wary of this because usually, when we run a shell, it waits for us, with a blinking cursor. Most of the time, it's because we started the shell in a terminal emulator (Gnome Terminal, Konsole, Guake, etc.). And these terminal emulators run the shell in interactive mode.
I'm not here to make a course about shells and terminals, but for the sake of this article, let's assume that all "interactive mode" means is that the shell waits for commands on stdin, the keyboard.

docker run has an option just for doing that, and it's the -i option (like bash's). As always, with every docker command, you can get information with docker run --help.

There we see the option: -i, --interactive  Keep STDIN open even if not attached. It's pretty clear: by running a container with -i, you will keep its stdin open. Let's try:

docker run -i ubuntu bash

Sorry. But this time, it fails differently, right? Different means new information, and so it's interesting. You must have noticed that this time it seems hung: you don't get a shell, but you don't get your original shell back either. It seems to be stuck.

But the good news is that if you open another terminal (don't kill the one which seems stuck) and run docker ps, you should have something like:

CONTAINER ID   IMAGE    COMMAND   CREATED         STATUS         PORTS   NAMES
fd3f988b       ubuntu   "bash"    3 minutes ago   Up 3 minutes           tiny_engelbart

Note that I did not need to use the -a option this time: the container is running, as confirmed by the STATUS: Up 3 minutes. Which means it has been up for about 3 minutes.


So what happened again? It all works as intended: we have a running process, bash, inside a container. This shell is waiting for user input on its stdin, and we have bound the container's stdin to our own terminal. It's just missing one little thing: a TTY!

And it makes sense: if you have a remote server, VPS or something, try logging in to it via ssh, and once you're logged in, run who. It's a command which tells you who is connected to the server. And you should see your user (potentially others, if others are logged in) and a TTY number.

Again, I'm not here to make a course about how Linux works, but you can think of a TTY roughly as a connection, a slot on the server.

To continue the rough analogy, since we haven't allocated a TTY to our container, you can think of what is happening with our container as this: it is running a shell which is waiting for input from the keyboard, but there are no users connected, no slots taken. So well, there's no chance it receives anything.

By the way, if you want to unstick your stuck container, try CTRL + A; P (this is CTRL + A, release, then P). The documentation says it allows you to detach from your container, but it almost always fails on me. If it fails for you too, then you have to either kill your terminal, or from another terminal run docker stop on your stuck container.

Last but not least (I promise this will work after that) we have to allocate a (pseudo-)TTY. And this is done with the -t option. In fact this is so common that you will always group -i and -t together, as -it or -ti.
Now we can do some serious stuff: docker run -it ubuntu bash.

Oh yeah! It works now: we have a shell inside our container!

We can check it as usual:

CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS          PORTS   NAMES
691e64bc       ubuntu   "bash"    13 seconds ago   Up 11 seconds           insane_carson

From here on, everything you do in this shell is done inside the container, so it should not affect your system. Try playing a little: run some operations, create, modify and delete files, have fun!
You can exit the container with CTRL + D, which exits the shell. And since the shell is the only running process, this will stop the container.
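To recap the whole dance, a minimal session might look like this (the container ID in the prompt is made up):

docker run -it ubuntu bash                  # create the container, keep stdin open, allocate a tty
root@3fa2c87e51d2:/# touch /tmp/hello.txt   # we are inside the container now
root@3fa2c87e51d2:/# exit                   # or CTRL + D: the shell exits, the container stops
docker ps -a                                # back on the host: the container shows as Exited (0)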

Just Some Bonus About Naming Containers


Unless you recently received molten lava in your eyes, you should have noticed that every time you create a new container, it is given a (funny) name. It's fun when you are debugging or beginning with docker, but pretty quickly you'd want to name containers yourself.

In order to do that, it's as simple as passing the --name your-custom-name-here option to docker run. So you can write docker run -it --name something ubuntu bash and we will have the same working example as before, only named something.

A little bit of warning, though: you cannot reuse a name, so if you create your container with --name something just like we did, and then you stop your container, you cannot simply re-run the command as-is, because docker will complain and tell you the name something is already taken by a stopped container.
You can then delete the container (we'll see below how to do that) and then you can reuse the name. Or you can be clever and automatically delete the container when it stops. This feature (again: huge thanks to the docker guys!) is very useful when you want to debug your image or when you want to perform one-off commands on an image.

Let me explain: suppose you have a fairly complex image that you set up correctly with your Dockerfile, then you want to check something, say whether your host and the containers based on your image have the same time. What you want to do is create a container from the image, run date for instance, then exit the container; then you can destroy it because you don't need it anymore. You can do all of this with the --rm option. This option deletes the container when it exits. So it is never Stopped or Exited. It's done with docker run --rm --name foo ubuntu bash for instance (the --rm being the important bit, of course).
After you did this, if you open a new terminal and type docker ps, you will see your container with the name foo. But when you exit the container's shell, the container will exit (since it has no other running processes) and it will immediately be destroyed. So you can use arrow up and re-run the exact same command and it will work: no more conflicting name.

Be careful with the --rm option: it does come in handy to keep a clean system (and avoid having dozens of stopped, useless containers) but make sure never to use it with a container from which you'd like to recover data before destroying it!

So that was the basics of container creation. The two most common forms you will use are docker run --name container-name <image> for images which have a defined process, and docker run --name container-name -it <image> <command> for when you want to interactively run a specific command inside the container. Most of the time, the command will be bash.
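Concretely, that gives something like this (web-server and sandbox being example names):

docker run --name web-server nginx          # nginx defines its own process: no command needed
docker run --name sandbox -it ubuntu bash   # one-off interactive shell in a bare image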


Now that we have been playing a bit with docker and containers, you should have a number of stopped containers, leaving your system in a pretty messy state. Run docker ps -a to list them, and you'll see what I'm talking about.

It's time to learn how to remove containers. Well, this is easy: docker rm <container-name>; of course, as always, <container-name> can either be the container name or the container ID.

Though it is fairly easy, as we've just seen, there are a couple of things of importance:


First, you cannot remove a running container; if you try, docker will complain with:

Failed to remove container (<container-name>): Error response from daemon: Conflict, You cannot remove a running container. Stop the container before attempting removal or use -f

As explained by the error message, you should stop the container first, with docker stop <container-name>. In the case where you want to be messy, or you need to be fast, or you don't care if some baby deer die, you can use the -f option to force the removal, in which case docker sends a SIGKILL to terminate your running process, quick and dirty.
Bottom line: stop your containers before destroying them, but the possibility exists not to do so.
Second, when a container makes use of Named Volumes, they won't be deleted when you run docker rm on it. This is a security feature so that even if you managed to mess up your typing and deleted a container, its data is safe. At least the data stored inside the Named Volumes; but hey, since you've been paying attention, you know that containers should be ephemeral and should be able to be stopped, destroyed and restarted with little to no consequences, which means (among other things) that all important data should be kept in Named Volumes; so it's all fine.
If you are absolutely sure that you want and can destroy everything regarding the container, including the persistent data inside the Named Volumes, then you can pass the -v option (for volumes) to docker rm, and it will delete everything about the container, persistent data included. I have made a couple of wrappers around docker commands for the people in my company, and I have not enabled the possibility of running docker rm with the -v option for several containers at once; I have actually gone so far as to name the command nuke and ask the user for a confirmation password before executing it: this is how dangerous this command is.

I insist a lot on this point because when you are more familiar with docker and you work with it every day, you start to really enjoy the containerization it provides, the isolation, and the fact that your data is always securely stored inside a Named Volume. And there comes a moment when, after too much debugging, too much coffee, and too much staying-late-or-rather-early, you will issue that docker rm -v <container-name> on an important container. And you will regret it. A lot.
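To make the distinction concrete, here are the variants side by side, from safe to nuclear (size-test being an example container name):

docker stop size-test && docker rm size-test   # graceful: stop first, then remove
docker rm -f size-test                         # forced: SIGKILL, then remove
docker rm -v size-test                         # also destroys the container's volumes: think twice!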

Stopping containers will be a short little paragraph, because it's pretty straightforward. When you have a running container, you can stop it with docker stop <container-name>. What's going on behind the scenes is that docker sends a SIGTERM signal to the running process of your container; more precisely (and we will see this later) it sends the SIGTERM to the process that has PID 1.
After a given time, if the container has not actually stopped (SIGTERM is a signal that can be caught by a program, so it can be ignored), docker sends a SIGKILL, which cannot be dealt with.
By default, this timeout is 10 seconds, which you can override with the -t or --time option: docker stop -t 3 <container-name>.

Starting a container is done with the docker start <container-name> command. Simple. When it's a container running a server or a process on its own, just docker start it and it's running.
When you need to feed it some input (think interactive shell, like before) you need to do the same kind of wizardry as before, with the interactive and tty thing.
There is the same -i or --interactive option we saw with docker run, annnnnd now we have a severe case of amnesia! For a reason that I don't know, the -t option from docker run, which allowed us to allocate a (pseudo-)tty to attach to the container, has transformed and is now named -a or --attach. I can't explain it, so you'll just have to remember it.
So it's docker run -it <image> <command> to create the container, and it's docker start -ai <container-name> to start it when it's stopped.
Remember it and you'll be fine!

docker run is Actually Two Commands


So far, docker run was used to create or instantiate a container from an image. Actually, I have been slightly lying to you. It really does two things: it creates the container from the image (at which point the container is in the Created state) and then it starts it.
You can create the container without starting it right away if you need to. It's done with docker create <image>. When you do that, your container will be created, and when you want to start it, you use docker start.
So docker run is a combination of docker create and docker start.
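In other words, the following two-step sequence is equivalent to a plain docker run (web-server being an example name):

docker create --name web-server nginx   # the container now exists, in the Created state
docker start web-server                 # and now it is actually running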

It's Possible to be More Radical


We saw that in order to stop a container we had to use docker stop <container-name>. But it takes a timeout, by default 10 seconds. Sometimes, the application we dockerized doesn't handle SIGTERM well. It may be that it's simply not designed to (in which case you should go back and change it to handle it!) or it might be that the application crashed to a point where there's no more hope.
In these cases, you can save a little time and directly send a SIGKILL. Note that it's dirty, because it leaves no chance at all for the application to gracefully shut down. So you should only use it in cases of emergency.
It's docker kill <container-name>.


Another, more graceful use of this command is if you want to send another signal to your containerized process, like SIGUSR1 or any other Unix signal, in which case it's the -s or --signal option, used like this: docker kill -s SIGUSR1 <container-name>.
Not everyone needs it, but it's still good to know that Docker has some tools to do that.

We saw how to create containers from an image, how to stop a running container and how to delete a container. Let's now see how to list them.
There is the vanilla docker ps, which lists all running containers. We can add the -a (or --all) option to list them all, running and non-running.

We already saw the -f (or --filter) option that allowed us to filter the output; we used it to display dangling (i.e. unused) images. Well, there is this option for containers too. You can filter by name, etc.

Another option that you will find useful for debugging is the -l or --latest option. It only shows you the latest created container. This is useful when you are doing tests and debugging and you find yourself starting, stopping, deleting the same container all over again and you want to inspect some values in between.
In the same spirit, you can use the -n <number> option to display the <number> latest created containers. It's like -l, but for several containers. I never used it myself, though.

Another nice option for inspection is the -s or --size option. When you use this option, docker ps will add another column to the output, to display the size your container takes on disk.
Two values are included: the size of the container's read-write layer, and the size of the base image (it's the one between parentheses).
It's a nice feature because it allows you to see if you have containers growing out of control.

Let's take an example; first we create a container from a base image, let's use nginx for a change:

docker run -it --name size-test nginx bash

Now you should have a prompt like this:

root@ac71e120b023:/#

We're inside the container. Let's exit it with CTRL + D. Now we want to display it, with its size. Since this is the last created container, we will use this opportunity to use the -l flag that we just learned about, together with -s:

docker ps -ls

CONTAINER ID   IMAGE   COMMAND   CREATED              STATUS                      PORTS   NAMES       SIZE
ac71e120b023   nginx   "bash"    About a minute ago   Exited (0) 20 seconds ago           size-test   0 B (virtual 182.7 MB)

And just so there is no ambiguity here: docker ps is the command to display the processes, and the -ls thing is really two short options, -l and -s, chained together to avoid repeating the dash. It has nothing to do with the ls Linux command, right?

Back to business. So we have our container, which is based on the nginx image and which is named size-test, as we intended.
Now for its size: it's 0 B (virtual 182.7 MB). What that means is that the container's read-write layer's size is 0 bytes, and its base image is 182.7 MB. It simply means that we have added nothing compared to the base image.

We can quickly check that what we are saying is correct:

REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   0d409d33b27e   3 weeks ago   182.7 MB

And that is correct: the base nginx image is 182.7 MB.


Okay, so we have a container that is, for now, a perfect copy of the image. So perfect in fact, that for now, thanks to this layer thing, the container is actually nothing more than the image itself: it uses its layers, and (at the moment) nothing else.

Let's add some data to that container. First, let's start it; since we want a shell inside, we'll use the -a and the -i options:

docker start -ai size-test

root@ac71e120b023:/#


Note that we used this opportunity to make use of what we previously saw: we just started a container, attached (-a) to it, and started an interactive (-i) session. We are presented with the shell.

An ls inside shows that we are at the root of the filesystem (the prompt indicated this, but now we can see for ourselves) and how it perfectly mimics the filesystem of an actual Linux installation:

root@ac71e120b023:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Let's create a file: touch testfile.txt. It creates an empty file. Let's exit (CTRL + D) and run docker ps -ls again:

CONTAINER ID   IMAGE   COMMAND   CREATED          STATUS                     PORTS   NAMES       SIZE
ac71e120b023   nginx   "bash"    16 minutes ago   Exited (0) 3 seconds ago           size-test   25 B (virtual 182.7 MB)

I think the new size you get here may vary depending on the backing filesystem. It doesn't matter: it simply shows that even creating an empty file adds size to the container, because there must be something in the read-write layer of the container. As you can see, creating an empty file doesn't take much space; if we had copied a 1 kB file into the container, it would have grown by approximately 1 kB.
The --size option is important and you should use it regularly to check on your containers.

Let's get back to our docker ps options. There is the same -q or --quiet option that we saw for images; it's useful to pass its output to docker stop or other commands, for instance.
If you want to stop all running containers in one command, how would you do that? Easy: docker ps -q | xargs docker stop or docker stop `docker ps -q`.

There is one last interesting option, --format, but we will see it at the end of the next section, about inspecting a container.

We are approaching the end of this article! This is good news (or not?) but usually, when it's the case, people tend to rush the last paragraphs to get it over with. This is bad, so I suggest you take a break; remember what I said about letting your brain process the information a little? And just for that, I'll display my usual cup-of-coffee picture:

Inspecting a Particular Container


Back, still with us?!
So we learned how to get some useful information with docker ps. But sometimes, we want really complete information about a container. Let's see how.

For that purpose exists the docker inspect command. This command outputs a lot of information, and I mean it: a lot.
Try docker inspect <container-name>; I won't paste the output here because that would grow the article's length for nothing.
docker inspect gives you all possible information about a container. Often, there is too much of it, so it's a good idea to be able to filter it.

In order to do that, you can add the -f or --format option, with a Go template. You can go read and learn about Go templates, but as a start, know that it's usually used like this: --format "{{<template>}}".
<template> is generally the name of the section you want to display, preceded by a dot.

For instance, if you run docker inspect <container-name> and you scroll back to the top, there should be a section called State. You can filter and display only this section with --format "{{.State}}". Try it: docker inspect --format "{{.State}}" <container-name> and you will get something like:

{running true false false false false 12216 0 2016-06-27T17:07:19.517535765Z 0001-01-01T00:00:00Z}

Now of course, you need to know the fields to know what corresponds to what. And if you are interested only in knowing the status of a container, run docker inspect -f "{{.State.Status}}" <container-name> and you will get running.

You might have noticed that the full output of docker inspect <container-name> is a huge JSON string. This is very helpful because other programs (custom wrappers or 3rd-party tools) can call docker inspect and directly parse the output as a valid JSON object.
But when we filtered with docker inspect --format "{{.State}}" <container-name>, all we got is this senseless string: {running true false false false false 12216 0 2016-06-27T17:07:19.517535765Z 0001-01-01T00:00:00Z}.


We lost the JSON format, and we lost the meaning of each field. To get it back, you can use the json Go template function and insert it just before the filter, like this: docker inspect -f "{{json .State}}" <container-name> and now you should have output like this:

{"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":12216,"ExitCode":0,"Error":"","StartedAt":"2016-06-27T17:07:19.517535765Z","FinishedAt":"0001-01-01T00:00:00Z"}

It makes much more sense this way.


king good use of the --format with Go template is a science on its own, but its not the focus of the article. In fact, I seldom use more comp
ings that what Ive just explained, and Im fine. So I think its okay to start with that and let you experiment with chaining fields. Basically, you
mat any section.
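As one more example of chaining fields, here is how you might extract a container's IP address (assuming the container is attached to the default bridge network, where this field is populated):

docker inspect -f "{{.NetworkSettings.IPAddress}}" <container-name>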

The list of features is long and I could easily have doubled the length of this article. But you've got to stop somewhere, don't you?
I decided to include this section because it's something that I discovered once I started using Docker in a bit more advanced way, but since it's very simple and potentially life-saving, let's see it here.

Containers are meant for isolation. So there's little communication possible between the host and the containers, because that's what they were designed for.
But sometimes, you need to transfer some files. You might want your container to use a new dump of your database, which you did not include in the Dockerfile; or conversely, you might want to extract some logs from the container. It all boils down to being able to run cp between the host and a container.

The docker guys invented the docker cp command for that. It's easy, and its syntax is the same as copying over ssh with scp. There is no difference between copying a file and a directory, no need for any -r option. So in this section, when I use <file> it means either a file or a directory.

To copy a file from the host to the container, it's docker cp </path/to/file/on/host> <container-name>[:</path/to/destination/in/container>]. So for instance, docker cp sql_dump.sql size-test:/home will copy the file sql_dump.sql from the current directory (on the host) into the container size-test, and put it in /home inside that container.

To extract some files out of the container, the syntax is simply reversed: docker cp size-test:/var/log/syslog ./ copies /var/log/syslog from the container size-test to the current directory on the host.

This should help you deal with almost all use cases.

So we have learned quite a few useful docker commands, right?


When you start playing a little with docker, there is a use case that will start bothering you. It's about maintenance.
Picture a dockerized application (this is the term we use to say we run a particular application/software/process inside a docker container; for instance if you run your PostgreSQL database inside a docker container, we say you dockerized PostgreSQL) on which something went wrong, or on which you'd like to perform a check.
Your application is self-contained, so it runs its own process (say PostgreSQL, or nginx, or apache) and you need to go take a look at the log files.

How would you go about that? What you really want to do is run a command inside a running container. Actually, take a second or two to think about it, and try finding how you would do that. I'm waiting.

The reason I gave you time and made you explicitly think about it is because I wanted you to think "ah yes, we have already done that: it's docker run" and then scold you very hard.

They say you learn better by making mistakes, so I was trying to have you make one, so you can better understand the difference I am going to explain:

If you did not actually take time to think about this, well, I can't force you; you've just missed an opportunity to learn. It's no big deal, you will still learn the right way to do it, but it won't be printed in your brain as hard as I had hoped.

If you did take some time, but had really no idea (and did not even think about docker run) it probably is because you read the article too fast and did not take the proper breaks each time. Remember that even though docker makes things look easy, it is a complex and difficult beast; the article is not named "Taming The Beast" for nothing!

If you did take some time and thought that docker run would do the trick, it's perfect: you got it wrong like I intended, but now you will be much more receptive, and I think you will never mix the two commands again. I'll introduce the real command in a second.

If you did take some time, thought about docker run and rejected it for the reasons I am about to explain, then congratulations! Either you're not a complete docker beginner, or you are very astute and followed and understood the article very well. Congratulations to you!

Okay, so all of this to say that docker run is not the right solution. Why?


Remember how and why we used docker run? We used it in two flavors: docker run -it ubuntu bash or docker run nginx. The former specifies the command to run (bash) while the latter doesn't: that's because the command, or process to run, is already included in the Dockerfile. But both flavors do the same thing: they run a command in a new container.

With docker run, we had a new container created from the base image, here ubuntu or nginx. Remember: we even used --name to name this new container so it doesn't get a random (yet funny) name!

What we want here is to run a command inside an already running container; this is very different!

Say you have a running PostgreSQL container and you want to check its /var/log/syslog; if you do docker run postgres-base-image cat /var/log/syslog, all you do is create a new container based on postgres-base-image, and display the content of this brand-new /var/log/syslog. We don't care about that: what we want is to see the content of /var/log/syslog for the <already-running-postgresql-container>.

So the command to run a command in an existing, running container is docker exec <container-name> <command>.
This is a new docker command, and one that you will use a lot.

The easy part first: if we want to see the content of /var/log/syslog inside our container named website-db, we will run docker exec website-db cat /var/log/syslog. Simple.
Likewise, you can run anything; want to check the disk space? docker exec website-db df -h. Easy.

These are one-off, non-interactive commands. What if you want to start a shell inside the already running container? You can't simply do docker exec <container-name> bash, for the same reason we already saw: you need your session to be interactive and you need a (pseudo-)tty. Well, good news: we have the -i and -t options again.

So to "log in" to a container (by "log in" I mean start a shell inside the container, in which we can perform operations) you just have to run:

docker exec -it <container-name> bash

Simple!

You will most likely use this command a lot, so I advise you to make an alias. I have aliased docker exec -it to dexit myself, because it saves so much typing.
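For what it's worth, a sketch of what that alias could look like in your ~/.bashrc (dexit is just the name I picked, pick your own):

alias dexit='docker exec -it'
dexit website-db bash    # equivalent to: docker exec -it website-db bash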

In the other articles, unless obviously not the case, docker exec -it <container-name> bash is what I will mean when I talk about "logging in" to a container.

We now have a pretty big set of tools and commands to use with docker, but we are still missing an important aspect: logging.
Let me say it right now: logging inside a docker container can be done in a number of ways and is a science in itself. I am not talking about advanced logging in this section, only the basics.

Docker provides the docker logs command. It is pretty useful but not magic; by that I mean that it won't know what your logs are. People usually expect it to show them the error (whatever that means) or the interesting logs. But it doesn't. Not necessarily.

Let me write it once and for all, and please keep that in mind at all times: what docker logs does is redirect the container's stdout and stderr. In other words, everything that was written or is being written to the container's stdout and stderr is displayed on your console.

It's not the content of /var/log, nor the content of the syslog, nor anything else. By default (read: unless you made explicit settings changes) that is all it will do.
So when you design applications that you know will be dockerized, you redirect the log output to stdout or stderr so that you can fetch it with docker logs.
But if you designed your application to write logs to a file only, then docker logs won't give them to you. So you need to keep this in mind: just because docker logs doesn't show you the error doesn't mean it did not happen.
It's possible to set up more advanced, complex and comprehensive logging systems with docker, but this is for another article.

Now some useful options. By default, docker logs <container-name> gives you a snapshot: it's like running cat on a file. If you want to follow the logs (meaning that new entries will be displayed as soon as they appear, rather than the next time you call docker logs) you need to pass the -f or --follow option.

The problem with that is, if your container has been running for quite some time and/or is verbose, when you run docker logs -f <container-name>, it will flood your console with thousands of log entries, and before you actually reach the end, it might be a loooooong time.

Once again, the docker guys thought about this and implemented another option: --tail, to which you give a number, and docker logs will only show that many of the last entries. So I always advise you to use --tail N when you look at some logs, because you then know in advance the number of lines it will display. A typical call might look like: docker logs -f --tail 50 <container-name>. It will show you the last 50 entries, and stay in live mode.

Another useful option is -t or --timestamps (mind the plural) which shows the date before each entry; pretty indispensable if you ask me!


An alternative to specifying the last N lines might be to display the logs since a certain date. Well, I'll give it to you: the docker guys already thought about that too!

It's the --since option, which takes a timestamp.


Well, it sucks a bit, because timestamps are not human-friendly. We would like to be able to say "show me the logs since yesterday". It turns out the date command is very useful here.

A quick introduction if you don't know about this feature: date can convert a date into the format you want; it's done with the + option, to which you pass the standard date symbols. So you can do date +%m to get the current month, or date +%m/%d/%Y to get the month/day/year.
It turns out there is the %s symbol, which returns the timestamp. So if you do date +%s you will get the timestamp (with seconds resolution) of the current time.

Now it turns out that date can take an additional parameter, -d, to apply the previous operation to a specific date. And the magic is that a lot of parsing is done so that -d can not only support timestamps, but also strings. Like "yesterday". So if you do date +%s -d yesterday it will give you the timestamp (with seconds resolution) of yesterday.

Now that you have that, it's possible to use this output as the input for docker logs. Look:

docker logs --since `date +%s -d "yesterday"` <container-name>

and quite logically it will display the logs since yesterday. Replace yesterday with "2 days ago" or "3 weeks ago", experiment a little!

This is very handy. The commands being a bit verbose, it's advised to alias them, of course.

Here we are: we've come a long way, haven't we?


These first two articles were (I think) the most theoretical. But it was necessary, and I tried to include as many examples and actual commands as I could, to make them as little boring as possible. I am aware that the article is long, but there was a lot to learn!

I hope I made the docker concepts clear; this was the goal of this article. From now on, the articles will be shorter (at least I think!) and more applied. I believe I have covered the majority of the docker concepts needed to build complex setups. It's important that you keep everything we saw in mind, because from now on, in the following articles, we will apply all of these concepts.

As usual, I will try to give as many examples as possible, to give you insights and material to go on. I'm not yet settled on the next article, but there is a high probability that we will cover the process of building images and writing Dockerfiles. If you have a personal suggestion, don't hesitate to send it my way and I'll see what I can do!
