Running the laboratory practices with Docker

Docker container technology has revolutionized the way software applications are developed, deployed, and managed in modern computing environments. It provides a lightweight, portable, and efficient way to package an application and its dependencies into a single unit called a "container." These containers encapsulate everything needed to run an application, including the code, runtime, libraries, and system tools, ensuring consistency and reproducibility across different computing environments.

Docker's containerization technology offers several key benefits, such as improved resource utilization, faster application deployment, and simplified management. Containers are isolated from each other and from the host system, ensuring security and preventing conflicts between applications. Moreover, Docker containers can be easily moved between different environments, from development to testing to production, making it an essential tool for modern software development and DevOps practices. In this introduction, we will explore the core concepts, components, and advantages of Docker container technology, highlighting its importance in the rapidly evolving world of software development and deployment.

In the context of this laboratory, and of software development activities in general, containers provide two great advantages. First, developing with containers provides environment consistency. Containers encapsulate the entire application and its dependencies, ensuring that developers work in consistent environments: what works on a developer's machine will work the same way in other environments, such as testing and production. Second, developers can work with multiple containers simultaneously, isolating different projects with different configurations, which reduces conflicts and inconsistencies.

In this document, I will guide you through the process of using Docker in the laboratory to develop and run your practices.

Install Docker on your machine

Follow the instructions for your operating system and architecture in the following link:

https://docs.docker.com/engine/install/

Containers and images

When using container technologies, it is important to know what containers and images are.

A container is a lightweight, standalone, and executable package that contains everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. Containers are isolated from each other and from the host system, ensuring that an application runs consistently and reliably across different environments. It is like an isolated machine that runs a single process, which in our case will normally be the compiler or the running practice itself.

An image, on the other hand, is a read-only template or blueprint used to create containers. It's a snapshot of a file system that contains the application code, libraries, dependencies, and configurations required to run an application. Docker images are typically built from a set of instructions in a Dockerfile, which specifies how to create the image. For this particular use case, we are interested in images that come already configured with everything we need to develop our software. Images are versioned and can be shared and distributed through container registries like Docker Hub. Although it is possible to build our own custom images, in this case we will reuse an official release of the gcc compiler that we can find on Docker Hub (https://hub.docker.com/_/gcc).

To download the image, run:

$ docker pull gcc
Unable to find image 'gcc:latest' locally
latest: Pulling from library/gcc
796cc43785ac: Already exists
91290b4a0590: Pull complete
7d8d278f2f73: Pull complete
c1d99d3ae80a: Pull complete
5a0234245b3d: Pull complete
47dd1e4ceff1: Pull complete
32d1f8184ba0: Pull complete
67557f839e22: Pull complete
Digest: sha256:ad7f580ef21b6dcc1a70b33686c514edf5d624bb8c7b160460c6198e7f18cfc7
Status: Downloaded newer image for gcc:latest

You can also list the locally available images with:

$ docker image ls

Running your build environment in a container

We can use a container, created from an image, to run any process, but in the context of this lab we will focus on running the following:

  • the gcc compiler and related tooling to build our programs
  • a bash shell, just to inspect what's inside the container.

Please note that we are explicitly excluding tools related to editing the software itself, as well as version control tools like git. This is because one of the advantages of this development architecture is that we can use the editors, IDEs, and other tools on the host machine, which is very convenient.

Running a containerized compiler

Let's use the following C code as an example to work with:

#include <stdio.h>

int main() {
    printf("Hello there. I will run in a container\n");
    return 0;
}

Please, use your favorite editor to write this code in a file called test.c.

To run the compiler inside a container you can use the following instruction:

$ docker run -it --rm --name aso-lab -v $(pwd)/.:/local -w /local gcc gcc test.c -o test

If you are running this on Windows, it's better to use this syntax:

$ docker run -it --rm --name aso-lab -v .:/local -w /local gcc gcc test.c -o test

The provided syntax is a command used with the Docker command-line interface (CLI) to run a Docker container. Let's break down each part of the command:

  1. docker run: This is the basic command for running a Docker container.

  2. -it: These are two options combined:

    • -i: It stands for "interactive," allowing you to interact with the container's command-line interface.
    • -t: It stands for "TTY" or "pseudo-terminal," which allocates a terminal inside the container.

In particular, -it is not strictly needed just to compile, but it will be useful later.

  3. --rm: This option tells Docker to automatically remove the container when it exits. This is useful to clean up after the container is done running.

  4. --name aso-lab: This option assigns the name "aso-lab" to the running container. It helps in identifying and managing containers more easily.

  5. -v $(pwd)/.:/local: This is the volume mounting syntax. It maps the current directory on the host (obtained using $(pwd)) to the /local directory inside the container. This allows files in the current directory to be accessible from within the container.

  6. -w /local: This sets the working directory inside the container to /local. When you run subsequent commands inside the container, they will be executed in this directory.

  7. gcc: This is the name of the container image you want to run. In this case, it is the official image that contains the GNU Compiler Collection (GCC) for compiling C programs.

  8. gcc test.c -o test: This is the command that will be executed inside the running container. It compiles a C source file named "test.c" into an executable named "test."

So, when you run this command, it does the following:

  • Creates and starts a Docker container based on the "gcc" image.
  • The container is run interactively with a terminal.
  • The current directory on the host is mounted as a volume inside the container.
  • The working directory inside the container is set to /local.
  • It executes the gcc command to compile a C program named "test.c" and create an executable named "test" inside the container.

This is a common way to use Docker for development tasks, as it allows you to run development tools and compilers in isolated environments while keeping your source code and project files accessible and synchronized between the host and the container.
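Because the same long command gets reused with different programs to run (the compiler, the freshly built binary, or the bash shell for inspection mentioned earlier), it can be handy to wrap it in a small shell function. The following is just a convenience sketch, not part of the lab material, and the name in_gcc is made up:

```shell
#!/bin/sh
# Hypothetical helper: run any command inside the gcc container with
# the current directory mounted at /local (same flags as above).
in_gcc() {
    docker run -it --rm --name aso-lab -v "$(pwd)":/local -w /local gcc "$@"
}

# Typical uses (type these in an interactive terminal):
#   in_gcc gcc test.c -o test    # compile
#   in_gcc ./test                # run the compiled binary
#   in_gcc bash                  # open a shell to inspect the container
```

Note that the binary produced this way is a Linux executable; on macOS or Windows hosts it will not run natively, so running it through the container (in_gcc ./test) is the natural choice there.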

Create your own container image

Let's create a new image that also contains the vim editor.

  1. Create a Dockerfile with the following content:

FROM gcc:latest

WORKDIR /app/

RUN apt-get update && apt-get install -y vim

  2. Build a new image from the instructions in this Dockerfile:
$ docker build . -t mi-soa-dev:latest
  3. Run the newly built image.
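Running the new image works exactly like before, only with the new image name. The sketch below just prints the command so it can be pasted into a terminal; starting bash lets you check that vim is now available (for example with vim --version):

```shell
# Print the run command for the newly built development image;
# paste the printed line into your terminal to execute it.
CMD='docker run -it --rm -v "$(pwd)":/local -w /local mi-soa-dev:latest bash'
echo "$CMD"
```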

Go into production

Once the program is ready, we can package it into its own image and run it anywhere. Let's use this slightly extended version of test.c as our "production" software:

#include <stdio.h>
#include <unistd.h>

int main() {
    int i;

    printf("Hello there. I will run in a container\n");
    printf("Now this is my final program ready for production\n");
    for (i = 0; i < 10; i++) {
        printf("Working hard on task %i\n", i);
        sleep(i);
    }
    printf("Done!\n");
    printf("By Óscar®\n");
    return 0;
}

New Dockerfile for production. We will call this file Dockerfile.pro to differentiate it from the one we have just used for development.

FROM gcc:latest 

WORKDIR /app/

COPY test.c /app

RUN gcc test.c -o test

CMD ["/app/test"]

Now we build the new image. Notice that this time we have to specify the name of the Dockerfile with the -f option, since it is not the default name:

$ docker build . -f Dockerfile.pro -t mi-soa-pro

Now we can execute the software directly inside the container by doing:

$ docker run -it --rm mi-soa-pro

Publish your image in Dockerhub

To publish your images in a container image registry, such as Docker Hub, you need to follow the instructions given by the registry. In particular, for Docker Hub:

  1. Authenticate your docker executable with your credentials
$ docker login
# Login with your Docker ID ...
Username: opobla
Password:
  2. Tag the image that you have built again, following the special format Docker Hub expects: <username>/<repository-name>:<tag>
$ docker tag mi-soa-pro opobla/mi-soa-pro:latest
  3. And finally, push your image to the repository:
$ docker push opobla/mi-soa-pro:latest
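Once pushed, the image can be pulled and run from any machine with Docker, which is the whole point of publishing it. The sketch below prints the two commands to try, using the same opobla account name from the example above; note that docker run will pull the image automatically if it is not present locally, so the explicit pull is optional:

```shell
# Print the commands to fetch and run the published image; paste the
# printed lines into a terminal on any machine with Docker installed.
PULL='docker pull opobla/mi-soa-pro:latest'
RUN='docker run -it --rm opobla/mi-soa-pro:latest'
echo "$PULL"
echo "$RUN"
```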