written by:
modified by:
To get a better understanding of how Kubernetes works, here are the basic concepts and terms that will be mentioned several times in this documentation.
Docker is an open-source containerization platform that allows developers to package applications and their dependencies into lightweight, portable containers that can be easily deployed across different environments, such as development, testing, and production.
Docker provides a simple and consistent way to build, distribute, and run applications, making it easier to manage and deploy applications across multiple environments. It also provides tools for managing images (the base building block of containers), networking, and storage.
Kubernetes is an open-source platform developed by Google and now maintained by CNCF. It automates deployment, scaling, and management of containerized applications across multiple hosts. It abstracts the underlying infrastructure and provides tools for developers to focus on writing code and operations teams to manage the infrastructure and applications in a consistent and reliable way.
To learn more about Kubernetes, please refer to this documentation.
Containerd is an open-source container runtime designed to provide a low-level interface for managing containers on a host. It is lightweight and portable, making it useful for various deployment scenarios. Containerd offers a consistent API for managing container images, containers, and their associated resources. One of its key features is its support for the Open Container Initiative (OCI) runtime and image specification, which ensures standardized container image building, sharing, and running across different container runtimes and platforms.
CNI plugins are responsible for setting up the network namespace, configuring network interfaces, and routing traffic between containers and the external network. The CNI specification allows for multiple networking plugins to be used in a Kubernetes cluster, giving users the flexibility to choose the networking solution that best suits their needs.
When a container is created in Kubernetes, the Kubernetes networking system invokes the CNI plugin to set up the networking for the container. The CNI plugin can be used to configure different types of network setups, such as overlay networks or network bridges, depending on the requirements of the containerized application.
This documentation covers the Docker installation on Ubuntu 20.04. In addition, it includes some Kubernetes installation steps and the errors I encountered during the process.
Note: This documentation mostly follows this link, with slight modifications for error handling
Sometimes the server time set on Ubuntu does not match our local time. It is important to synchronize it so that we can access the secured websites we will use during installation.
We start by running the administration as root
Update all packages to the latest version
Optional: Although we run the administration as root, it is useful to still include sudo in most of the commands we run.
Do a safe upgrade
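Under the usual Ubuntu 20.04 tooling, these first steps can be sketched as follows:

```shell
sudo -i                  # open a root shell
sudo apt-get update      # refresh the package index
sudo apt-get upgrade -y  # "safe" upgrade: never removes installed packages
```

`apt-get upgrade` is the safe variant because, unlike `dist-upgrade`, it never removes existing packages to resolve dependencies.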
Check if swap is enabled
If there is no output shown, try to run this command:
Disable it with
Then remove the existing swapfile
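A sketch of the swap checks and removal, assuming the default /swapfile location (adjust the path if your system uses a different swap file):

```shell
sudo swapon --show   # lists active swap devices; no output means swap is already off
free -h              # alternative check: look at the "Swap" row
sudo swapoff -a      # disable all active swap
sudo rm -f /swapfile # remove the swap file (path may differ on your system)
```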
Swap is a portion of hard drive storage used to temporarily hold data that no longer fits in RAM. Disabling it will be useful for the cgroup driver configuration in a later step.
Remove the line below from /etc/fstab
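The swap entry to remove or comment out typically looks like the line below; the exact path and options may differ on your system:

```
/swapfile none swap sw 0 0
```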
Install the Ubuntu ntp package
Note: You can also do the installation one by one to ensure all the packages are installed. The NTP package will be used to synchronize the server's time with our local timezone.
Restart the ntp daemon
Check the status of the ntp daemon with
Make sure time is matched with your local time
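The NTP steps above can be sketched as:

```shell
sudo apt-get install -y ntp  # install the NTP package
sudo systemctl restart ntp   # restart the NTP daemon
sudo systemctl status ntp    # check that the daemon is active
ntpq -p                      # optionally, list the peers being synchronized with
date                         # verify the time matches your local time
```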
If the time is not updated to the current time, DO NOT continue, and do not simply change the time manually — fix the NTP synchronization first!
If you are using WireGuard as your VPN, the proxy's IP and port are shown below
Add docker's official GPG key
Add docker's repository
Update the package cache
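For Ubuntu 20.04 these three steps usually look like the following (`add-apt-repository` is provided by the software-properties-common package):

```shell
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add Docker's apt repository for this Ubuntu release
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Refresh the package cache
sudo apt-get update
```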
To install a specific version of Docker Engine, start by listing the available versions in the repository
Select the desired version and install version 19.03.15
You can also choose a newer version if you wish
Fix the package version so that a distribution upgrade leaves the package at the correct version
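A sketch of the version selection; the exact 19.03.15 version string below is an assumption taken from the typical focal repository listing, so copy it from your own `madison` output:

```shell
# List the Docker Engine versions available in the repository
apt-cache madison docker-ce
# Install a specific version (exact string comes from the listing above)
sudo apt-get install -y docker-ce=5:19.03.15~3-0~ubuntu-focal \
                        docker-ce-cli=5:19.03.15~3-0~ubuntu-focal
# Hold the packages so a distribution upgrade does not replace them
sudo apt-mark hold docker-ce docker-ce-cli
```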
Load two modules in the current running environment and configure them to load on boot
Add modules to config
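The two modules in question are overlay and br_netfilter, which the standard containerd/Kubernetes setup requires:

```shell
sudo modprobe overlay       # overlay filesystem used for container images
sudo modprobe br_netfilter  # lets iptables see bridged container traffic
# Load both modules automatically on boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
```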
Configure required sysctl to persist across system reboots
Apply the sysctl parameters to the current running environment without a reboot
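The commonly required parameters are the bridge-netfilter and IP-forwarding settings:

```shell
# Persist the required kernel parameters across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply them to the running system without a reboot
sudo sysctl --system
```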
Install containerd packages
Create a containerd configuration file
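These two steps can be sketched as:

```shell
sudo apt-get install -y containerd.io   # install the containerd runtime
sudo mkdir -p /etc/containerd
# Generate the default configuration file
containerd config default | sudo tee /etc/containerd/config.toml
```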
Edit the configuration in /etc/containerd/config.toml as shown at the end of this section
Around line 112, change the value of SystemdCgroup from false to true.
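After the change, the relevant part of /etc/containerd/config.toml looks roughly like this (the exact line number depends on your containerd version):

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```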
Restart containerd with the new configuration
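For example:

```shell
sudo systemctl restart containerd
sudo systemctl status containerd   # confirm it is active (running)
```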
If you are behind a proxy, set the proxies:
The proxy configurations in this documentation are all optional; adjust them if they are required in a later step (see Part V, step 7)
Make sure that the following IP address (or ranges) are part of the NO_PROXY list:
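One common way to set the proxies for containerd is a systemd drop-in file. In this sketch, `<proxy_ip>` and `<proxy_port>` are placeholders for your own proxy, and 10.0.0.0/8 is the range that covers the pod CIDR used later in this guide:

```shell
sudo mkdir -p /etc/systemd/system/containerd.service.d
cat <<EOF | sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://<proxy_ip>:<proxy_port>"
Environment="HTTPS_PROXY=http://<proxy_ip>:<proxy_port>"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8"
EOF
sudo systemctl daemon-reload
sudo systemctl restart containerd
```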
Add the kubernetes repo:
If you see output like the one below, you can continue.
Add version to kubernetes list
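For the Kubernetes version this guide targets (1.23), the apt repository was the legacy apt.kubernetes.io one; note that newer Kubernetes releases have since moved to pkgs.k8s.io. A sketch:

```shell
# Add the Kubernetes apt signing key
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# Add the repository to the apt sources
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
```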
Update the apt repository:
List all available versions of Kubernetes
Install kubernetes command line tools version 1.23.12
Prevent a distribution upgrade from also upgrading the command-line tools
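A sketch of these steps, assuming the 1.23.12-00 package revision shown by the listing:

```shell
apt-cache madison kubeadm   # list the available Kubernetes versions
sudo apt-get install -y kubelet=1.23.12-00 kubeadm=1.23.12-00 kubectl=1.23.12-00
sudo apt-mark hold kubelet kubeadm kubectl   # keep them at this version
```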
The images can now be pulled
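With kubeadm installed, the control-plane images are pre-pulled with:

```shell
sudo kubeadm config images pull
```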
After a while, you should see the following line in the log output, indicating that the above command was successful:
Do not continue if you don't see this message!
If you encounter an error like the one below
you first need to reset everything and remove the existing folders
then you can execute the kubeadm init again
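A sketch of the reset-and-retry sequence; the 10.244.0.0/16 pod CIDR matches the one used elsewhere in this guide:

```shell
sudo kubeadm reset -f                    # undo the changes made by kubeadm init
sudo rm -rf /etc/cni/net.d $HOME/.kube   # remove leftover CNI and kubeconfig folders
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # run the init again
```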
Alternatively, if you are the root user, you can run:
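kubeadm's own output suggests the following for the root user:

```shell
export KUBECONFIG=/etc/kubernetes/admin.conf
```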
When you are behind a proxy, it is possible that you will face an issue such as the one below:
This is because Calico propagates the proxy settings, including to the pod CIDR. To solve the issue, follow these steps:
In the case above we use 10.0.0.0/8, since we used 10.244.0.0/16 when running kubeadm init. Change this according to your CIDR.
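A minimal sketch of the idea: extend NO_PROXY so pod-network traffic bypasses the proxy (the same range should also go into any systemd proxy drop-ins you created earlier).

```shell
# 10.0.0.0/8 covers the 10.244.0.0/16 pod CIDR passed to kubeadm init
export NO_PROXY="${NO_PROXY:+$NO_PROXY,}10.0.0.0/8"
export no_proxy="$NO_PROXY"   # some tools only read the lowercase variable
echo "$NO_PROXY"
```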
If your pod does not get scheduled, your controller node probably has a taint on it that prevents it from scheduling pods. Let's untaint it.
You can run kubectl get nodes to find your node name
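A sketch of the untainting, with `<node_name>` as a placeholder; on v1.23 the control-plane taint key is node-role.kubernetes.io/master (newer releases use node-role.kubernetes.io/control-plane):

```shell
kubectl get nodes   # find the name of your control-plane node
# Remove the control-plane taint (trailing "-" means remove)
kubectl taint nodes <node_name> node-role.kubernetes.io/master-
```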
To join the cluster, a node must already have kubelet, kubeadm, kubectl, and a container runtime installed. If it does not, you can repeat the steps from Configuration-and-Additional-Packages up until the Install Kubernetes Components section. Then continue with the steps below:
If the node status is still NotReady after a while, it means something is wrong with the node. To solve this, follow these steps:
1. Drain the node (drain takes the node name directly; the --ignore-daemonsets and --delete-local-data flags belong to drain, not delete)
kubectl drain <node_name> --ignore-daemonsets --delete-local-data
2. Delete the node
kubectl delete node <node_name>
3. On the worker node, uninstall Kubernetes and then clean all existing directories and files
4. Reboot worker node
5. Do the kubeadm join again
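A sketch of steps 3-5 on the worker node, assuming the packages were installed via apt as above:

```shell
sudo kubeadm reset -f                          # undo the kubeadm join changes
sudo apt-get purge -y kubelet kubeadm kubectl  # uninstall the Kubernetes packages
sudo rm -rf /etc/cni/net.d /etc/kubernetes $HOME/.kube  # clean leftover directories
sudo reboot
# After the reboot, reinstall the packages and run the kubeadm join command again
```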
The installation of Helm, the package manager for Kubernetes, is quite straightforward:
Note: if you already have Helm charts, these need to be removed first.
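One common way to install Helm 3 is its official installer script:

```shell
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version   # verify the installation
```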
These notes list the errors I encountered during the installation process, along with their solutions.
When I tried to add the Kubernetes repo as below
It took a long time until an error notification like this came up.
I suspect this was due to the proxy setting mentioned in the earlier step. I tried other combinations (e.g. deleting the Flannel CNI IP address, deactivating and reactivating the VPN tunnel, and changing the listening port). However, each attempt resulted in the same error.
Solution: Solution was provided by Fransiscus Bimo.
I tried to create a Kubernetes cluster using the following command (Part IV, step 1)
However, the output only showed the error messages below
I tried to remove the existing folder and restart it using this command, but the output still came back with the same error message.
I also have re-checked the proxy IPs that I have adjusted earlier in (part II, step 1) and (part III, step 1)
It had already been added as mentioned. Nevertheless, the same error message still appeared.
Solution: Remove all the proxy configurations and run the command again; the warning notification disappears, but the error still occurs. It turns out I had not configured the cgroup driver. So, after setting the container runtime and the kubelet to use the appropriate (systemd) cgroup driver, we can reset and then restart the process with the following commands:
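A sketch of that reset-and-retry, assuming the systemd cgroup driver has now been configured for both containerd and the kubelet:

```shell
sudo kubeadm reset -f                      # reset the half-initialized cluster
sudo systemctl restart containerd kubelet  # pick up the new cgroup driver setting
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```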
Now the Kubernetes cluster has been successfully created.