This tutorial guides you through creating your first Docker container. It draws heavily on the official Docker Getting Started Guide.
This tutorial comes with a troubleshooting section that lists the most common problems.
A virtual machine emulates a fully working, isolated OS. It requires the same resources from the host as a normal OS would: it loads its kernel into memory, loads all necessary kernel modules and all the libraries the software needs, and only then allocates resources for a user application. If you run 100 identical VMs, they occupy roughly 100 times the resources.
A Docker container, on the other hand, runs natively on Linux and shares the kernel of the host machine with other containers. It runs a discrete process, taking no more memory than any other executable, making it lightweight.
The difference can be intuitively demonstrated by the following image.
Source: Docker: Get Started
Follow the official guidelines to install Docker. Mac and Windows users should install Docker Desktop. Linux users should follow the instructions for servers.
Linux users should also complete the post-installation instructions to manage Docker as a non-root user.
We assume you have already installed Docker on your OS. Open your favorite terminal emulator and try running
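For example, the following checks that the Docker client is installed and the daemon is reachable (version numbers will differ on your machine):

```shell
docker --version
docker info
```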
Avoid using Docker installed as a snap package. People have reported multiple issues with this type of installation.
16/09/2019
Test that your installation works by running the simple Docker image, hello-world:
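```shell
docker run hello-world
```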
List local images and find hello-world that was downloaded to your machine:
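```shell
docker image ls
```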
List the hello-world container (spawned by the image), which exits after displaying its message. If it were still running, you would not need the --all option:
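```shell
docker container ls --all
```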
A container is launched by running an image. An image is an executable package that includes everything needed to run an application – code, runtime, libraries, environment variables, and configuration files.
A container is a runtime instance of an image – what the image becomes in memory when executed (that is, an image with state, or a user process). You can see a list of your running containers with the command docker ps, just as you would in Linux.
A Dockerfile defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system, so you need to map ports to the outside world and be specific about what files you want to “copy in” to that environment. However, after doing that, you can expect that the build of your app defined in this Dockerfile behaves exactly the same wherever it runs.
Create a Dockerfile (a simple text document, no extension) with the following content:
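The original listing is not reproduced in this copy; a minimal Dockerfile along the lines of the official Get Started guide (the base image tag is an assumption) looks like this:

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any packages specified in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define an environment variable used by app.py
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
```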
Try to understand what this file defines. This Dockerfile refers to a couple of files we haven’t created yet, namely app.py and requirements.txt. Create them and put them in the same folder with the Dockerfile.
Create requirements.txt with the following content:
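Assuming the Flask-plus-Redis app from the official Get Started guide:

```
Flask
Redis
```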
and app.py:
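The original listing is not reproduced here; a version consistent with the official Get Started guide is sketched below. It serves a greeting page on port 80 and tries to count visits in a Redis instance reachable at hostname redis:

```python
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis (hostname "redis" will resolve once a redis service exists)
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"),
                       hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```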
We are ready to build the app. Make sure you are still at the top level of your new directory. Here’s what ls should show:
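```shell
ls
# app.py  Dockerfile  requirements.txt
```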
Now run the build command. This creates a Docker image, which we’re going to name using the --tag option (use -t if you want the shorter form).
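Assuming the image name friendlyhello used throughout this tutorial, and that you run the command from the directory containing the Dockerfile:

```shell
docker build --tag=friendlyhello .
```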
The image is placed into Docker's local image registry:
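You can verify this by listing local images again:

```shell
docker image ls
```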
Note how the tag defaulted to latest. The full syntax for the tag option would be something like --tag=friendlyhello:v0.0.1.
Run the app, mapping your machine’s port 4000 to the container’s published port 80 using -p:
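```shell
docker run -p 4000:80 friendlyhello
```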
If you are unsure what a port is, we have some additional homework for you :)
You should see a message that Python is serving your app at http://0.0.0.0:80. But that message is coming from inside the container, which doesn’t know you mapped port 80 of that container to 4000, making the correct URL http://localhost:4000.
Go to that URL in a web browser to see the display content served up on a web page. You can stop the web server by hitting CTRL+C in your terminal.
Now let’s run the app in the background, in detached mode:
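```shell
docker run -d -p 4000:80 friendlyhello
```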
You will get a container ID in return. You can check available containers with docker container ls. Use the ID to stop the container:
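```shell
docker container stop <container_id>
```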
Try uploading your image to a Docker registry. The following commands are self-sufficient:
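A sketch assuming Docker Hub; replace username/repository:tag with your own account and repository name:

```shell
docker login
docker tag friendlyhello username/repository:tag
docker push username/repository:tag
```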
In a distributed application, different pieces of the app are called “services”. For example, if you imagine a video sharing site, it probably includes a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on.
Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs — what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process. And containers help to create a controlled environment for software execution.
First, verify that you have docker-compose installed:
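```shell
docker-compose --version
```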
If not, read how to install Docker Compose on your system.
With Docker it is easy to define, run, and scale services – just write a docker-compose.yml file:
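The file itself was not preserved in this copy; based on the description below and the official Get Started guide, it looks roughly like this (replace username/repository:tag with the image you pushed earlier):

```yaml
version: "3"
services:
  web:
    # The image you uploaded to the registry earlier
    image: username/repository:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
```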
This docker-compose.yml file tells Docker to do the following:
- Pull the image we uploaded before from the registry
- Run 5 instances of that image as a service called web, limiting each one to use, at most, 10% of a single core of CPU time (this could also be e.g. “1.5” to mean one and a half cores each), and 50MB of RAM
- Immediately restart containers if one fails
- Map port 4000 on the host to web’s port 80
- Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web’s port 80 at an ephemeral port)
- Define the webnet network with the default settings (which is a load-balanced overlay network)
Initialize Docker swarm:
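```shell
docker swarm init
```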
Now let’s run it. You need to give your app a name. Here, it is set to getstartedlab:
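```shell
docker stack deploy -c docker-compose.yml getstartedlab
```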
You can list running services with docker service ls. A single container running in a service is called a task. Tasks are given unique IDs that numerically increment, up to the number of replicas you defined in docker-compose.yml. List the tasks for your service:
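Assuming the stack name getstartedlab from above (services are named stack_service):

```shell
docker service ps getstartedlab_web
```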
Alternatively, you can list your running containers with docker container ls -q.
You can run curl -4 http://localhost:4000 several times in a row, or go to that URL in your browser and hit refresh a few times.
You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:
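```shell
docker stack deploy -c docker-compose.yml getstartedlab
```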
Docker performs an in-place update; there is no need to tear the stack down first or kill any containers.
Now, re-run docker container ls -q to see the deployed instances reconfigured. If you scaled up the replicas, more tasks, and hence more containers, are started.
Take the app down with docker stack rm:
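```shell
docker stack rm getstartedlab
```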
Take down the swarm.
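```shell
docker swarm leave --force
```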
Make sure you have docker-machine installed:
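```shell
docker-machine --version
```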
If not, follow the instructions for your OS here.
A swarm is made up of multiple nodes, which can be either physical or virtual machines. The basic concept is simple enough: run docker swarm init to enable swarm mode and make your current machine a swarm manager, then run docker swarm join on other machines to have them join the swarm as workers. We use VMs to quickly create a two-machine cluster and turn it into a swarm.
Create a couple of VMs using docker-machine, using the VirtualBox driver:
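The machine names myvm1 and myvm2 are used throughout the rest of this tutorial:

```shell
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2
```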
Alternatively, you can use real servers or other VMs that have Docker installed. You can read more about docker-machine in the official documentation. List the VMs and get their IP addresses:
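```shell
docker-machine ls
```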
Now you can initialize the swarm and add nodes. The first machine acts as the manager, which executes management commands and authenticates workers to join the swarm, and the second is a worker.
You can send commands to your VMs using docker-machine ssh. Instruct myvm1 to become a swarm manager with docker swarm init and look for output like this:
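A sketch of the command and the shape of its output; the node ID, token, and IP address are placeholders that will differ on your machine:

```shell
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1-ip>"
# Swarm initialized: current node <node-id> is now a manager.
#
# To add a worker to this swarm, run the following command:
#
#   docker swarm join --token <token> <myvm1-ip>:2377
```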
As you can see, the response to docker swarm init contains a pre-configured docker swarm join command for you to run on any nodes you want to add. Copy this command, and send it to myvm2 via docker-machine ssh to have myvm2 join your new swarm as a worker:
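Substitute the token and IP from your own docker swarm init output:

```shell
docker-machine ssh myvm2 "docker swarm join --token <token> <myvm1-ip>:2377"
```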
Run docker node ls on the manager to view the nodes in this swarm:
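```shell
docker-machine ssh myvm1 "docker node ls"
```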
So far, you’ve been wrapping Docker commands in docker-machine ssh to talk to the VMs. Another option is to run docker-machine env <machine> to get and run a command that configures your current shell to talk to the Docker daemon on the VM. This method works better for the next step because it allows you to use your local docker-compose.yml file to deploy the app “remotely” without having to copy it anywhere.
Type docker-machine env myvm1, then copy-paste and run the command provided as the last line of the output to configure your shell to talk to myvm1, the swarm manager.
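On Linux/macOS the output looks roughly like this (paths and the IP address will differ on your machine):

```shell
docker-machine env myvm1
# export DOCKER_TLS_VERIFY="1"
# export DOCKER_HOST="tcp://192.168.99.100:2376"
# export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/myvm1"
# export DOCKER_MACHINE_NAME="myvm1"
# # Run this command to configure your shell:
# # eval $(docker-machine env myvm1)

eval $(docker-machine env myvm1)
```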
Run the given command to configure your shell to talk to myvm1. On Windows the output looks different, but it likewise ends with the command you need to run.
Run docker-machine ls to verify that myvm1 is now the active machine, as indicated by the asterisk next to it.
Now that you have configured your environment to access myvm1, your swarm manager, you can easily deploy your application on the cluster:
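```shell
docker stack deploy -c docker-compose.yml getstartedlab
```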
Now you can access your app from the IP address of either myvm1 or myvm2.
The reason both IP addresses work is that nodes in a swarm participate in an ingress routing mesh. This ensures that a service deployed at a certain port within your swarm always has that port reserved to itself, no matter what node is actually running the container. Here’s a diagram of how a routing mesh for a service called my-web published at port 8080 on a three-node swarm would look:
You can tear down the stack with docker stack rm. For example:
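```shell
docker stack rm getstartedlab
```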
A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).
It’s easy to add services to our docker-compose.yml file. First, let’s add a free visualizer service that lets us look at how our swarm is scheduling containers.
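The exact fragment was not preserved in this copy; following the official guide, a visualizer entry added under services: alongside web would look roughly like this:

```yaml
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      # Give the visualizer access to the host's Docker socket
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        # Only ever run on a swarm manager, never a worker
        constraints: [node.role == manager]
    networks:
      - webnet
```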
The only new thing here is the peer service to web, named visualizer. Notice two new keys: a volumes key, giving the visualizer access to the host’s socket file for Docker, and a placement key, ensuring that this service only ever runs on a swarm manager – never a worker. That’s because this container, built from an open source project created by Docker, displays Docker services running on a swarm in a diagram.
We talk more about placement constraints and volumes in a moment. Make sure your shell is configured to talk to myvm1. Run docker-machine ls to list machines and make sure you are connected to myvm1, as indicated by an asterisk next to it. If needed, re-run docker-machine env myvm1. Now, re-run the docker stack deploy command on the manager, and whatever services need updating are updated:
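```shell
docker stack deploy -c docker-compose.yml getstartedlab
```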
Now you can take a look at the visualizer in your browser at port 8080. Does the display match what you would expect?
Let's add a Redis database for storing app data. Modify docker-compose.yml:
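The fragment itself was not preserved in this copy; a redis entry under services: consistent with the description below would look roughly like this:

```yaml
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      # Map ./data on the host to /data inside the container,
      # where Redis stores its data
      - "./data:/data"
    deploy:
      placement:
        # Pin redis to the manager so it always sees the same filesystem
        constraints: [node.role == manager]
    networks:
      - webnet
```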
Redis has an official image in the Docker library and has been granted the short image name of just redis, so no username/repo notation here. The Redis port, 6379, has been pre-configured by Redis to be exposed from the container to the host, and here in our Compose file we expose it from the host to the world, so you can actually enter the IP for any of your nodes into Redis Desktop Manager and manage this Redis instance, if you so choose.
Most importantly, there are a couple of things in the redis specification that make data persist between deployments of this stack:
- redis always runs on the manager, so it’s always using the same filesystem.
- redis accesses a directory in the host’s file system that is linked to the directory /data inside the container, which is where Redis stores data.

Together, this creates a “source of truth” in your host’s physical filesystem for the Redis data. Without this, Redis would store its data in /data inside the container’s filesystem, which would get wiped out if that container were ever redeployed.
This source of truth has two components:
- The placement constraint, which keeps redis on the same host across redeployments.
- The volume mapping ./data (on the host) to /data (inside the Redis container). While containers come and go, the files stored in ./data on the specified host persist, enabling continuity.

You are ready to deploy your new Redis-using stack. Create a ./data directory on the manager:
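```shell
docker-machine ssh myvm1 "mkdir ./data"
```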
Run docker stack deploy one more time:
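```shell
docker stack deploy -c docker-compose.yml getstartedlab
```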
And visit the web page that you serve on the manager in your browser.
You have completed two tutorials on virtualization. In the next classes, you are going to stick with vagrant to create virtual machines. If you decide to try using Docker for the next several lab sessions, you have this freedom, but you will be on your own (no technical support).
If your starting script fails with a message saying that it cannot find docker-machine.exe or vboxmanage.exe, open the script itself, i.e. C:\Program Files\Docker Toolbox\start.sh, and enter the respective paths explicitly.
boot2docker.iso image
The Docker Toolbox installation comes with boot2docker.iso. Locate it in C:\Program Files\Docker Toolbox\boot2docker.iso and copy it to C:/Users/User/.docker/cache
docker-machine does not see VirtualBox
You will see an error similar to this:
eval $(sudo docker-machine env myvm1) does not do what I want
Solution: configure Docker to work without sudo.
docker-machine does not work without sudo
Some problems with Docker arise from the snap version of the application. If your Docker is installed as a snap package, remove it and install the latest version from the website.
docker-machine complains about lack of virtualization support
If you run Docker in a VM, make sure nested virtualization is enabled for your VM.
If you work on your host machine, make sure virtualization is enabled in BIOS.
Some versions of Windows do not support virtualization.
Error example:
docker-machine does not work on Mac
Sometimes after starting docker-machine on a Mac you will get the message Killed. This likely means that docker-machine execution was blocked by your privacy settings. Go to Settings, Privacy and Security, General tab, and allow the execution of docker-machine.
Ready state
Verify the integrity of the container that you have pushed into the repository
Useful resource: Docker Documentation
docker exec -it container_id bash  # open an interactive shell inside a running container
docker build -t my_user/repo_name:1.0 .  # build an image from the Dockerfile in the current directory
docker commit -m "My first update" container_ID user_name/repository_name  # save container changes as a new image
docker push user_name/repository_name  # upload the image to a registry
docker ps  # list running containers
docker images  # list local images
docker ps -a  # list all containers, including stopped ones