# SOFTWARE ENGINEERING AT BLOCKFUSE LABS: WEEK 18

If you have ever wondered about virtualization, or tried to virtualize and build an application on your local machine, you will have come across issues with system resource consumption and dependencies: the popular "It works on your machine but it does not work on mine". Containerization technology such as Docker solves the dependency problem far more efficiently than virtualizing your local/host machine.

**WEEK 18 RECAP TOPICS/COURSES**

**DOCKER**

**Why Docker?**

We first need to understand why we need Docker before getting to know more about it. Docker is a technology for containerizing applications for software development, testing and deployment. It has fast become the industry standard for developing and deploying apps through the use of containers.

**Why use containers?**

Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. This decoupling allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data centre, the public cloud, or even a developer's personal laptop. This gives developers the ability to create predictable environments that are isolated from the rest of the applications and can run anywhere.

Containers also take a different approach to isolation: by leveraging the low-level mechanics of the host operating system, they provide most of the isolation of virtual machines at a fraction of the computing power. Put simply, containers consume far less system resources than virtual machines.

**DOCKER CONCEPTS**

**Docker Image:** Think of it like a recipe or blueprint. It is a file that has everything needed to run an app: the code, settings and tools. On its own it is just sitting there, not doing anything yet. Images are simply the blueprints of our application, and they form the basis of containers.
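To make the "blueprint" idea concrete, here is a minimal sketch of the file Docker builds an image from (a Dockerfile, described further below). The Node.js base image, port number and `index.js` entry point are assumptions for illustration only:

```dockerfile
# Start from an assumed official Node.js base image
FROM node:20-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests first so the install step can be cached as a layer
COPY package*.json ./
RUN npm install

# Copy the rest of the application source into the image
COPY . .

# Document the port the app is expected to listen on
EXPOSE 3000

# Default command run when a container is started from this image
CMD ["node", "index.js"]
```

Nothing here runs yet: building this file just produces an inert image that containers can later be started from.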
**Docker Container:** This is when you actually use the recipe. It is the running version of the image: your app is alive and working. You can have many containers from one image, like baking multiple cakes from the same recipe.

**Docker Daemon:** The background service running on the host that manages building, running and distributing Docker containers. The daemon is the process that runs in the operating system and that clients talk to.

**Docker Client:** The command-line tool that allows the user to interact with the daemon. More generally, there can be other forms of clients too, such as Kitematic, which provides a GUI to users.

**Docker Hub:** A registry of Docker images. You can think of the registry as a directory of all available Docker images. If required, you can host your own Docker registry and use it for pulling images.

**Dockerfile:** A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It is a simple way to automate the image creation process. The best part is that the commands you write in a Dockerfile are almost identical to their equivalent Linux commands, so you do not really have to learn a new syntax to create your own Dockerfiles.

**Docker Installation & Setup:** Docker can be downloaded and used with a GUI (Docker Desktop) or with the CLI (Docker Engine), depending on developer preference. You can download Docker from the official site at [https://docs.docker.com/engine/install/](https://docs.docker.com/engine/install/).

**CONCLUSION**

Containerization has become the industry standard way to build, ship and deploy software applications, and Docker is one of the best tools for making that easy. The information above covers just a few key bits: Docker is an ocean of its own, and it is worth every bit of study.