```
Author : Jayaraj Peediyakal Sivadasan
```
# Docker
## Pre-requisites
### Pre Req: Very basic "Must Know" Linux for learning Docker
Linux is open source. There are different flavors of Linux known as distros.
Popular distros: Ubuntu, Debian, Alpine, Fedora, and CentOS, to mention a few.
Get & Run Ubuntu docker image: `docker run ubuntu`
View running containers: `docker ps`
View all containers, including stopped ones: `docker ps -a`
Use the ubuntu container to learn linux commands by running container interactively: `docker run -it ubuntu`
`/` represents the root directory in Linux.
### A few linux commands
```
echo # echoes an argument to the console.
whoami # shows current user
echo $0 # shows the location of the shell program
# Linux is case sensitive.
history # shows the list of commands that have been executed. Use ! followed by the ordinal of a command in the history to re-execute it. Example: !3
```
### Ubuntu package manager `apt`
List available packages in package database `apt list`
Update package database `apt update`
Install a package
`apt install nano`
`apt install python`
Uninstall a package
`apt remove nano`
`apt remove python`
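A typical flow inside a fresh Ubuntu container looks like this (a minimal sketch; the package name is just an example):
```
# refresh the package database first, then install non-interactively
apt update
apt install -y nano
# later, remove the package again
apt remove -y nano
```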
### Linux Directory Structure
- `/` #root directory. Top of the hierarchy.
- `/bin ` #binaries
- `/boot` #files related to booting.
- `/dev ` #devices. In Linux everything is a file (even directories)
- `/etc ` #editable text configuration files.
- `/home` #home directory. Each user will have a home directory.
- `/root` #home directory of the root user (Admin)
- `/lib ` #software library dependencies
- `/var ` #files that are updated frequently. (Log files, application data etc)
- `/proc` #running processes.
### Some Basic Commands
`pwd` #Print working directory. Prints the current directory
`ls` #list the contents of the current directory
`ls -1` #list one entry per line. (ls -2 is not an option)
`ls -l` #Long list. Permissions, owner, date etc.
`cd ~` #Change into the home directory.
`touch <filename>` # create a new empty file
`touch <filename1> <filename2> <filename3>` #create multiple empty files.
`more <filename>` # to view a large file in multiple pages
`less <filename>` # to view a large file with scrolling ability. Replaces more
`head -n 5 <filename>` # to show first 5 lines
`tail -n 5 <filename> ` # to show last 5 lines
`cat file1.txt file2.txt` # concatenate contents of two or more files.
`grep` #*Global Regular Expression Print*. Searches for a word in a file
`grep hello file1.txt` #search for the word hello in file1.txt
`grep -i hello file1.txt` #ignore case and search
Example: `grep root /etc/passwd`
`grep -i -r hello .` #search recursively in the current directory, ignoring case
`grep -ir hello .` #same as above, with the flags combined
`find` # find files in a given directory recursively.
`find -type d` # find only directories
`find -type f` # find only regular files
Example: Find all python files in root. `find / -type f -name "*.py"`
#### Command chaining (Awesome feature in linux)
`mkdir test; cd test; echo done`
`mkdir test && cd test && echo done` # with &&, a command runs only if the previous one succeeded.
`mkdir test || echo 'dir exists'` # the second command runs only if the first fails.
Pipe `ls /bin | less` # pipe the output of the ls command to the less command
#### Environment Variables
List environment variables `printenv`
`printenv PATH` #show the contents of path variable
`echo $PATH` #same as above.
`export DB_USER=jay` #Create an environment variable in the current session. (Lost when the session is closed)
To create a persistent environment variable:
`cd ~`
`nano .bashrc` #edit and add the environment variable
`source .bashrc` #to make the variable available in the current session
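A quick sketch of the whole flow (variable name and value are illustrative):
```
cd ~                                  # go to the home directory
echo 'export DB_USER=jay' >> .bashrc  # append the variable to .bashrc
source .bashrc                        # reload .bashrc in the current session
echo $DB_USER                         # verify: prints jay
```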
BASH = *Bourne-Again SHell*, an enhanced replacement for the original Bourne shell and the default shell on most distros.
`ps` # to view the currently running processes.
PTS = Pseudo Terminal
pts/0 is the first terminal window. If you open another terminal you will see pts/1
`kill <processid>` #to force a process to quit.
#### Creating a user
`useradd -m jay` # -m is for creating the home directory.
The user information can be found in /etc/passwd as shown below:
`jay:x:1000:1000::/home/jay:/bin/sh`
```username : password (x = stored elsewhere) : user id : group id : comment : home directory : shell program used when the user logs in```
#### Modifying the user to use bash
`usermod -s /bin/bash jay`
The /etc/passwd will be modified as shown below:
`jay:x:1000:1000::/home/jay:/bin/bash`
Passwords are stored in `/etc/shadow`
Every user is automatically added to a group with the same name. This is called the primary group and it can be changed.
#### Create a new group
`groupadd <groupname>`
```
groupadd developers # Create group developers
cat /etc/group # View the list of groups. Something like this: developers:x:1001:
usermod -aG developers jay # add the user to the group (-a appends instead of replacing existing groups)
groups jay # list the groups the user belongs to
```
#### Permissions rwx - read / write / execute
3 sets of permissions
set 1: Permissions for the user who owns the file.
set 2: Permissions for the group that owns the file.
set 3: Permissions for everyone else.
`chmod` command is used to change permissions.
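A short sketch of `chmod` in both symbolic and octal notation (the file name is illustrative):
```
touch deploy.sh        # new file, not executable yet
ls -l deploy.sh        # shows something like -rw-r--r--
chmod u+x deploy.sh    # add execute permission for the owning user
chmod 755 deploy.sh    # octal form: rwx for owner, r-x for group and others
chmod o-r deploy.sh    # remove read permission for everyone else
```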
###### tags: Docker
## Docker
Docker is a platform for creating and running containers that makes it easy to install and run software on any computer without worrying about installation and dependencies.
A container is an instance of an image, whereas an image is a snapshot of the dependencies and configuration needed to run a specific program.
There are two parts to Docker: the CLI (or client) and the Daemon (or server).
#### Hypervisors:
- VirtualBox
- VMware
- Hyper-V (Windows)
### Container:
- Provides an isolated environment; running containers are isolated from each other.
- Can be stopped and started.
- A container is a special process with its own file system, network, etc.
- Each container is an isolated environment (Even if it is created from same image)
- Containers are lightweight compared to VMs. No need to manage licenses.
### Basic Commands
`docker ps` -- List running containers
`docker ps -a` -- List all containers, even those that have exited.
`docker create` -- Create a container
`docker start` -- Run the container
`docker start -a` -- Run the container and attach the output to the console
`docker run` -- This has the effect of docker create + docker start -a
`docker system prune` -- Delete all containers, images and build cache that are no longer used.
`docker container prune` -- Delete unused / non-running containers.
`docker logs` -- Getting the output generated by the container even after it is stopped.
`docker stop` -- Sends a `SIGTERM` message to shut down the process, giving it time to clean up. (After a 10-second grace period it will kill the container)
`docker kill` -- Sends a `SIGKILL` shutdown right now. No cleanup or additional work.
`docker exec` -- Execute an additional command inside a container.
- Example: Execute Redis CLI`docker exec -it 15668c745ed2 redis-cli`
- Example: Execute Shell `docker exec -it 15668c745ed2 sh`
- NOTE:
- Docker containers run inside a Linux environment even if you are on Windows or Mac. (Try the `docker version` command and you can see the details of the underlying Linux VM)
- Every linux process has three communication channels STDIN, STDOUT and STDERR
- the `-it` flag is a combination of -i and -t. -i tells the Linux environment to attach STDIN to our terminal; -t allocates a pseudo-terminal, which gives a cleaner interface.
### Steps involved in `docker run`
- docker client sends a request to the daemon
- the daemon looks for the image in the image cache
- if not found then it goes to the registry and pulls down image locally
- create the container (Instance of the image)
- run the startup command.
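The same flow can be reproduced manually with individual commands (a sketch; the image name is just an example):
```
docker pull ubuntu               # fetch the image into the local image cache
docker create ubuntu             # create a container from it (prints the container id)
docker start -a <container-id>   # start it and attach the output to the console
```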
## Dockerization Workflow
- Add a Dockerfile to the project.
- A Dockerfile is a set of instructions to package the application to an image.
- (A cut-down OS, runtime environment, app files, libraries, environment variables)
- The Docker client provides the file to the Daemon
- The Daemon takes the Dockerfile and builds a usable image
- Use the image to start the container. (Use docker run..)
- Push the image to a Docker registry. (Quay / Docker Hub)
- Pull the image on any machine (Dev/Test/Prod). *This helps resolve environment disparities.* See the command sketch below.
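The workflow condensed into commands (a sketch; image and repository names are illustrative):
```
docker build -t myorg/my-app:1.0.0 .          # package the app into an image using the Dockerfile
docker run -d -p 80:3000 myorg/my-app:1.0.0   # start a container from the image
docker push myorg/my-app:1.0.0                # publish it to a registry (Docker Hub, Quay, ...)
docker pull myorg/my-app:1.0.0                # pull it on any other machine
```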
## Dockerfile
- Specify a base image
- Run some commands to install additional programs/libraries
- Specify a command to run on container startup.
Example:
```
# Use existing docker image as a base
FROM alpine
# Download and install a dependency
RUN apk add --update redis
# Tell the image what to do when the container starts.
CMD [ "redis-server" ]
```
`FROM =>` Specifies the base Image
`RUN =>` Specifies the Linux command to be run after getting the base image.
`CMD =>` Specifies the command to be run at startup.
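To try the example above, build and run it from the directory containing the Dockerfile (the tag name is arbitrary):
```
docker build -t redis-image .   # build the image from the Dockerfile in the current directory
docker run redis-image          # start a container; redis-server runs as the startup command
```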
## Docker Build
- Send build context to the daemon (Build context is the set of files in the specified path)
- Looks at local build cache for base image.
- If not found, reach out to the registry and download the image.
- Create a temporary container
- Run the specified command inside the temporary container.
- Take a snapshot of the container and create a temporary image (*this is cached*).
- Remove the temporary container.
- Create a temporary container from the temporary image created above.
- Set the primary command (this does not run the command).
- Remove the intermediate container.
- Take a snapshot of the file system and create the final image.
NOTE: Each time you run the build, Docker will try to use the cached intermediate images to speed up the workflow.
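If you need to bypass the cache (for example after a base image or package update), every step can be forced to run again (a quick sketch; the tag is illustrative):
```
docker build --no-cache -t my-app .   # rebuild all layers, ignoring the build cache
```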
### Dockerizing a small nodejs app
Steps
- Create a nodejs app
- Add a docker file
- Build image
- Run the container and map the port of the container to a local port
```
# Specify the base image
FROM node:current-alpine3.18
# Specify a work directory so that files don't get copied over to the image's root
WORKDIR /usr/node-app
# Copy the contents of the build context to the WORKDIR created in the above step
COPY ./ ./
# Install npm packages
RUN npm install
# Start the app
CMD [ "npm","start"]
```
`docker run -p 5000:8080 12f35508d2`
Access the app from `localhost:5000`
### Dockerizing a react app
Steps you would otherwise run manually:
- install node
- npm install
- npm start
To dockerize instead, add a Dockerfile. The most common instructions are listed below.
```
FROM <base image>   # specify the base image
WORKDIR             # specify the working directory for the following instructions
COPY                # copy files/directories from the build context into the image
ADD                 # like COPY, but can also fetch URLs and extract archives
RUN                 # run a command while building the image
ENV                 # set an environment variable
EXPOSE              # document the port the container listens on
USER                # set the user that runs the following instructions and the app
CMD                 # default command to run when the container starts
ENTRYPOINT          # like CMD, but not as easily overridden at run time
```
NOTE: Windows images can be huge
Once the Dockerfile is defined, use the docker build command to build the image.
`docker build -t react-app .`
#the . is the build context (current directory)
#-t sets the tag (name) of the image
Once the build is successful, check that the image was created by issuing the `docker images` or `docker image ls` command.
Start the container interactively
`docker run -it react-app`
Start an interactive shell session inside the container
`docker run -it react-app sh`
(Note that the alpine image does not ship with bash, so we use the default shell, `sh`)
To exclude files and directories from the build context, create a `.dockerignore` file.
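For a Node.js app the most important entry is `node_modules`, which is large and will be recreated by `npm install` inside the image anyway (a minimal sketch):
```
# create a .dockerignore next to the Dockerfile
echo "node_modules" > .dockerignore
```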
### Optimizing Build
Docker uses layers; an image is a collection of layers, and each instruction in the Dockerfile creates a layer.
`docker history react-app` produces an output similar to the one below. Look at the size of the npm install step.
```
IMAGE CREATED CREATED BY SIZE COMMENT
8fdad5540f35 8 minutes ago /bin/sh -c npm install 247MB
b8a3f2f31179 9 minutes ago /bin/sh -c #(nop) COPY dir:2f1e3ccdd48b26047… 2.18MB
c0980f7cbedf 4 hours ago /bin/sh -c #(nop) WORKDIR /app 0B
15f5ebb46aae 3 weeks ago /bin/sh -c #(nop) CMD ["node"] 0B
<missing> 3 weeks ago /bin/sh -c #(nop) ENTRYPOINT ["docker-entry… 0B
<missing> 3 weeks ago /bin/sh -c #(nop) COPY file:238737301d473041… 116B
<missing> 3 weeks ago /bin/sh -c apk add --no-cache --virtual .bui… 7.84MB
<missing> 3 weeks ago /bin/sh -c #(nop) ENV YARN_VERSION=1.22.5 0B
<missing> 3 weeks ago /bin/sh -c addgroup -g 1000 node && addu… 98.8MB
<missing> 3 weeks ago /bin/sh -c #(nop) ENV NODE_VERSION=15.14.0 0B
<missing> 3 weeks ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
<missing> 3 weeks ago /bin/sh -c #(nop) ADD file:8ec69d882e7f29f06… 5.61MB
```
To reuse layers from the cache wherever possible, copy the dependency manifests and install dependencies before copying the rest of the source.
Example Dockerfile:
```
# Get the base alpine image with node.
FROM node:15.14.0-alpine3.13
#create a user and group for the app
RUN addgroup -S appgrp && adduser -S appusr -G appgrp
# set the user so that the rest of the commands (and the app) run as this user.
USER appusr
# Set the working directory. all instructions following WORKDIR will be executed inside workdir
WORKDIR /app
# Copy package.json and package-lock.json first so this layer can be cached.
COPY package*.json .
# Run npm install. The cached layer is reused if the package*.json files have not changed.
RUN npm install
# Copy the rest of the contents.
COPY . .
# set environment variables if any
ENV API_URL=http://google.com/api
#expose ports
EXPOSE 3000
CMD ["npm","start"]
```
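A quick way to see the caching benefit (a sketch):
```
docker build -t react-app .   # first build: every layer is executed, including npm install
# edit a source file (but not package.json / package-lock.json), then rebuild
docker build -t react-app .   # COPY package*.json and RUN npm install now come from the cache
```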
To remove unwanted images (Dangling images) `docker image prune`
To remove dangling containers `docker container prune`
To remove a regular image
`docker image rm <imagename>`
`docker rmi <image_id>`
### Tagging
Do not rely on the `latest` tag for production.
Tag while building:
`docker build -t react-app:1.0.0 .`
**Publish to a docker registry**(Such as quay, dockerhub, mcr, ecr etc)
For example, log on to hub.docker.com and create a repo.
Tag the image locally: `docker image tag d299 awsjayaraj/react-app:2.0.0`
Use `docker login` to log on to the account from the terminal and enter the username and password. You should get a `Login Succeeded` message.
Push the container image to the registry:
`docker push awsjayaraj/react-app:2.0.0`
It may take a while to push all the layers of the image to the repository. Go back to dockerhub.com to verify that the image has been published.

#### Saving an image to compressed file
`docker image save -o react-app.tar awsjayaraj/react-app:2.0.0`
Unzipping the tar file will show the list of layers contained in the image.
To load the image on another machine :
`docker image load -i react-app.tar `
To view running containers
`docker ps`
To run a container:
`docker run react-app:1.0.0`
To run the container in the background (detached mode)
`docker run -d react-app:1.0.0`
To run a container in background and give it an identifier
`docker run -d --name horacious-bird react-app:1.0.0`
To view the logs from a detached container:
`docker logs <container-id>`
`docker logs 5e8cc40aea5b`
To view last n lines (5 in this case)
`docker logs -n 5 5e8cc40aea5b`
#### Port Mapping
The container listens on its own port, which is not mapped to the host by default. To map it:
`docker run -d --name angry-bird -p 80:3000 react-app:1.0.0`
Now we have mapped port 80 of the current host to 3000 of the container and should be able to access the react app using http://localhost.
#### Executing Commands against running containers
`docker exec angry-bird ls` # This will display the contents of the /app directory.
Interacting with a running container:
`docker exec -it angry-bird sh`
Type `exit` when done. (This won't impact the container; only the shell session is terminated)
#### Stopping a container
`docker stop angry-bird`
#### Starting a stopped container:
`docker start angry-bird`
#### Removing a container
`docker rm angry-bird --force`
#### View stopped containers
`docker ps -a`
#### Container File System
Each container has its own file system. This storage is ephemeral, so never store data in the container's file system.
#### Persist Data using Volume
A volume is storage that lives outside the container.
```
# Create a volume
docker volume create app-data
docker volume inspect app-data
docker run -d -p 4000:3000 -v app-data:/app/data react-app:1.0.0
```
In the above example Docker automatically creates the volume. This may cause permission issues if the process in the container runs as a non-root user. To avoid this, add a `mkdir` step to the Dockerfile so the directory is created with the right owner.
Since the volume lives on the host, it will still exist even after the container is deleted.
This also helps sharing volumes amongst multiple containers.
We can use the same technique to immediately reflect code changes in the development environment without rebuilding the image. For this purpose, when we run the container, map the current directory on the host to the /app directory in the container.
`docker run -d -p 80:3000 -v $(pwd):/app react-app:1.0.0`
Example: `docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app 1b63a02260f3`
**Copy File from container to host**
The example below copies a file from the container to the current directory on the host:
`docker cp <containerid>:/app/<filename> .`
And vice versa, from host to container:
`docker cp <filename> <containerid>:/app/`
#### Removing docker containers and images
#list container and image ids
`docker container ls -qa` # -q prints only the ids; -a includes stopped containers as well
`docker image ls -q`
Pass the list to the rm command:
`docker container rm --force $(docker container ls -aq)`
`docker image rm --force $(docker image ls -q)`
## Docker Compose
Docker Compose helps define and run multiple containers at a time.
A compose file typically has the following sections:
```
version : "3.8" # you can find this from compose documentation.
services: # here you will ist your services
<serivicename>: # any arbitrary name.
build: ./<foldername> #Each service will have its on docker file
ports: # hostport : container port.
environment:
<environmentvariablename>:<variablevalue>
volumes:
- <volumename>:<containerdirectory>
<servicename2>:
image: # specify an image name instead of building an image.
volumes: # just repeat the volume names here without value section.
<volumename>:
```
Example docker-compose.yml is given below.
```
version: "3.8"
services:
ui:
depends_on:
- api
build: ./frontend
ports:
- 3000:3000
api:
depends_on:
- db
build: ./backend
ports:
- 3001:3001
environment:
DB_URL: mongodb://db/app
command: ./docker-entrypoint.sh
db:
image: mongo:4.0-xenial
ports:
- 27017:27017
volumes:
- app:/data/db
volumes:
app:
```
Docker Compose is built on top of the Docker engine. All operations are available through Docker Compose as well.
`docker-compose ` # shows all possible options for docker-compose
`docker-compose build` # builds images. The image names will be prefixed with the dir name.
`docker-compose build --no-cache` #forced rebuild without using cached layers.
`docker-compose up ` # run the containers.
`docker-compose up -d ` # run the containers in detached mode.
`docker-compose up --build` # build and run
`docker-compose ps` # show only containers related to the given compose file.
`docker-compose down` #stop running containers and remove them.
`docker-compose logs` #View logs of all running containers.
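A typical development loop with the compose file above (a sketch; service names match the example):
```
docker-compose up -d --build   # rebuild the images and start all services in the background
docker-compose ps              # check that ui, api and db are running
docker-compose logs -f api     # follow the logs of a single service
docker-compose down            # stop and remove the containers when done
```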
## Docker Networking
`docker network ls` # list Docker networks
Docker comes with an embedded DNS server that holds the names and IP addresses of the containers.
Inside each container there is a DNS resolver that communicates with this DNS server to find the IP addresses of the other containers.
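A quick sketch to see name resolution in action (network and container names are illustrative):
```
docker network create my-net                      # create a user-defined bridge network
docker run -d --name web --network my-net nginx   # start a container called "web" on it
docker run -it --network my-net alpine ping web   # another container resolves "web" by name (Ctrl+C to stop)
```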
## Deployment
### Deployment Options
- Single Host
- Cluster (requires orchestration tools like Docker Swarm or Kubernetes)
Run the container in a managed service such as AWS ECS or EKS.