I found this book online that explains the Kubernetes ecosystem, and I will be reading it as my primary resource from now on.
Docker installation
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
I also set up a Debian VM to follow along with the practical examples, on which I have Docker and k3d installed. To install k3d, I first need to install kubectl
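(the exact kubectl commands I used aren't written down here; the upstream way on Debian amd64 is roughly the following)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl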
and then k3d itself
wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
From my understanding so far, Kubernetes is an orchestrator that runs and deploys multiple Docker (or other) containers, and k3d and minikube are tools that act as wrappers around the k8s architecture.
Run k3d cluster create mycluster to create a sample cluster, and kubectl cluster-info to verify that the cluster was created.
I wanted to set up the dashboard so I can visually see what's going on; here are the installation steps
Deploy dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Get an auth token to log in to the dashboard for the default user (keep the token handy)
kubectl create token default
Serve the API
kubectl proxy --address='0.0.0.0'
Head to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ and the dashboard login page should appear. Choose the token option and paste the token you generated earlier to log in.
ArgoCD will be our GitOps CD tool; install it with the commands below.
To add ArgoCD to an existing cluster, one needs to create a namespace for it
kubectl create namespace argocd
then change the state of the cluster with the manifest from their repo, which will create service accounts, custom resources, services, deployments, roles and the like
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
We will be using k3d to create the cluster and initial namespaces
We will now install ArgoCD in our cluster and launch it
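The UI can be reached by port-forwarding the argocd-server service; this is what makes it available on localhost:8080:
kubectl port-forward svc/argocd-server -n argocd 8080:443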
Once that runs, you can head to localhost:8080 to access the ArgoCD admin dashboard. This will be our GitOps tool, so we will visit it often
To set the initial password for argocd, run the script
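The script itself isn't captured in these notes; a sketch of one way to do it, assuming a recent argocd CLI is installed (it generates a bcrypt hash of Passw0rd and patches the secret ArgoCD reads the admin password from):
# bcrypt-hash the new password
HASH=$(argocd account bcrypt --password 'Passw0rd')
# patch the secret that stores the admin credentials
kubectl -n argocd patch secret argocd-secret \
  -p "{\"stringData\": {\"admin.password\": \"$HASH\", \"admin.passwordMtime\": \"$(date +%FT%T%Z)\"}}"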
This will set your password to Passw0rd. You will log in with the username admin and this password.
Create an app to deploy by creating a directory as the repo's root, and create a build directory where your manifest files live.
In build.yaml, you will create the manifest for your pods / deployment. In service.yaml, you will create your Service resource, which exposes your containers to external clients.
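A minimal sketch of what these two files could contain (image, labels and ports are placeholders):
# build.yaml - a single-replica deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: victory-royale
spec:
  replicas: 1
  selector:
    matchLabels:
      app: victory-royale
  template:
    metadata:
      labels:
        app: victory-royale
    spec:
      containers:
        - name: victory-royale
          image: nginx:alpine   # placeholder image
          ports:
            - containerPort: 80
---
# service.yaml - exposes the pods on port 80
apiVersion: v1
kind: Service
metadata:
  name: victory-royale
spec:
  selector:
    app: victory-royale
  ports:
    - port: 80
      targetPort: 80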
Make sure to push these. victory-royale is the application name; you can change it to whichever name you see fit.
Use ArgoCD to track the app you created
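The exact command isn't captured here; with the argocd CLI it would look something like this (the repo URL and destination namespace are placeholders, the path is the build directory from earlier):
argocd app create victory-royale \
  --repo https://github.com/<your-user>/<your-repo>.git \
  --path build \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated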
Of course, change the repo link and the names as you see fit
You can use the argocd CLI or the web UI to view the app that you created (for example, argocd app get victory-royale).
Run the following commands to build and deploy the app
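The exact commands aren't in these notes; a guess at the shape of this step (image name, registry and tag are placeholders, adjust to your setup):
docker build -t <registry>/victory-royale:v1 .
docker push <registry>/victory-royale:v1
argocd app sync victory-royale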
Now try changing the app and pushing a change; ArgoCD should pick that up and redeploy the app automatically.
Create the gitlab namespace
kubectl create namespace gitlab
Set the current namespace to gitlab for convenience
kubectl config set-context --current --namespace=gitlab
Download and install gitlab to the cluster via the helm chart
helm repo add gitlab https://charts.gitlab.io
helm search repo -l gitlab/gitlab
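The install command itself isn't captured above; a minimal sketch (the domain, email and timeout are placeholders, and the values will likely need tuning for a local k3d cluster):
helm install gitlab gitlab/gitlab -n gitlab \
  --set global.hosts.domain=localhost \
  --set certmanager-issuer.email=admin@example.com \
  --timeout 600s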
Expose the gitlab frontend
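One possible way, assuming the chart's default webservice name and workhorse port (these may differ per release):
kubectl port-forward -n gitlab svc/gitlab-webservice-default 8181:8181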
Obtain the root password for login
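Assuming the Helm release is named gitlab, the chart stores the initial root password in a secret:
kubectl get secret gitlab-gitlab-initial-root-password -o jsonpath='{.data.password}' | base64 -d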
https://stackoverflow.com/questions/42793382/exec-commands-on-kubernetes-pods-with-root-access
If you want to run a command in a k3d node, you should first understand that k3d nodes are essentially Docker containers. Therefore, you can use Docker commands to interact with these nodes, including running commands inside them.
Assign an external IP for the gitlab ingress service so the argocd server can connect
kubectl patch svc gitlab-nginx-ingress-controller -p '{"spec":{"externalIPs":["10.43.2.138"]}}'
List out all the pods and look for the pod name of the argocd repo server (this pod is responsible for the cloning action)
kubectl get pods -n argocd
Get the containerID of said pod; we will use this ID later to manually change the hosts file
kubectl get pod $repoPodName -n argocd -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's,.*//,,'
Change /etc/hosts in argocd repo container
docker exec -it k3d-mycluster-server-0 /bin/sh
runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 $containerID sh
echo "10.43.2.138 gitlab.localhost" >> /etc/hosts
Register the gitlab repo with ArgoCD
argocd repo add https://gitlab.localhost/root/iot.git --insecure-skip-server-verification
If done right, there should be no errors.
Since these changes are manual, this needs to be done again if the pod restarts.
Create namespace for the app
kubectl create namespace dev2
Create the app object in ArgoCD
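The create step isn't captured here; with the argocd CLI it could look like this (the path is assumed to be the same build directory as before):
argocd app create victory-royale2 \
  --repo https://gitlab.localhost/root/iot.git \
  --path build \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace dev2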
Sync the app
argocd app sync victory-royale2
Install non-graphical debian-bullseye (11.6)
On Windows, type the command bcdedit /set hypervisorlaunchtype off in an admin PowerShell to disable the host's hypervisor, so it won't clash with the VM's one.
To enable it again, type the command bcdedit /set hypervisorlaunchtype auto in the same shell.
The machine needs to be restarted to apply the changes. Virtualization-dependent applications like Docker and WSL will cease to function if this is done correctly.
Below is how to set up a very basic machine on Vagrant named "Server".
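A minimal sketch of such a Vagrantfile (the box name and IP are placeholders; interface options vary by provider):
server_ip = "192.168.56.110"

Vagrant.configure("2") do |config|
  config.vm.define "Server" do |server|
    server.vm.box = "debian/bullseye64"   # placeholder box
    server.vm.hostname = "Server"
    # private network with the IP referenced as "server_ip"
    server.vm.network "private_network", ip: server_ip
  end
end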
Setting IP
This sets the private network interface (en1 here) to whatever IP is specified.
This machine can be reached from the host, or from another machine within the same Vagrant network, using this "server_ip".
Each individual machine can have its own provider-specific settings, or they can all share one common configuration.
These settings are specific to the provider, so check the provider's documentation page.
After setting up the private network in the Vagrantfile, start up Vagrant with vagrant up.
There are a few ways to connect to a Vagrant machine over SSH
Method 1
This method involves using the vagrant ssh-config output as an ssh config
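For example (the machine name "Server" comes from the Vagrantfile):
vagrant ssh-config Server > vagrant-ssh-config
ssh -F vagrant-ssh-config Server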
OR
You can add it directly to your user ssh config
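Appending the same output to your own config lets a plain ssh work afterwards:
vagrant ssh-config Server >> ~/.ssh/config
ssh Server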
Method 2
Connect via vagrant's inbuilt ssh feature
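For example, run this from the directory containing the Vagrantfile:
vagrant ssh Server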
You can achieve this by generating an SSH key and adding the public key to each machine's authorized_keys, or it can be automated in the Vagrantfile provisioning script.
This has to be done for each machine
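One manual way (assuming the default vagrant user, password auth enabled on the box, and the private-network IP from the Vagrantfile):
ssh-keygen -t ed25519                # only if you don't already have a key
ssh-copy-id vagrant@192.168.56.110   # repeat with each machine's IP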
k3s has to be installed on each machine once.
The server and the agent have slightly different installation methods.
- A server node is defined as a host running the k3s server command, with control-plane and datastore components managed by K3s.
- An agent node is defined as a host running the k3s agent command, without any datastore or control-plane components.
More info about the design architecture can be found in the k3s documentation.
For the server (k3s server):
- --node-external-ip=value : IPv4/IPv6 external IP addresses to advertise for the node.
- --advertise-address=value : IP address that the api-server uses to advertise to members of the cluster. This sets the default IP agents use when they try to connect.
- --node-ip=value : IPv4/IPv6 addresses to advertise for the node. This sets the internal IP shown for the node when kubectl get nodes -o wide is called.
- --write-kubeconfig-mode=value : Write the kubeconfig with this mode. The mode needs to be set to 644 (world-readable) if other tools or non-root users need to read the kubeconfig.
- --agent-token=value : Shared secret used to join agents to the cluster, but not servers. Custom tokens can be set with this for easier testing. If no token value is specified, a token will be generated automatically in the file /var/lib/rancher/k3s/server/node-token on the server's host machine.

For the agent (k3s agent):
- --server=value : Server to connect to. If --node-ip or --advertise-address was used in the server setup, use the value specified there.
- --token=value : Token to use for authentication.
- --node-ip=value : IP address to advertise for the node. This sets the internal IP shown for the node when kubectl get nodes -o wide is called.

If the token was generated automatically, you can ssh / scp into the server's host machine and copy the file /var/lib/rancher/k3s/server/node-token to acquire the token.
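Putting the flags above together, a sketch of how the installs could look (the IPs are examples from the Vagrant private network; the agent token has to be filled in from the server):
# on the server machine:
curl -sfL https://get.k3s.io | sh -s - server --node-ip=192.168.56.110 --write-kubeconfig-mode=644
# on an agent machine:
curl -sfL https://get.k3s.io | sh -s - agent --server=https://192.168.56.110:6443 --token=<node-token from the server> --node-ip=192.168.56.111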
This part's Vagrant setup is similar to part 1's: create a Vagrantfile with the following contents
And change the server startup script to
The current configuration works for the app1 and app3 hosts; however, for app2's ReplicaSet, there is no guarantee that the traffic is distributed across the replicas. If the traffic is not distributed, having a ReplicaSet serves no purpose.
To make the changes, we will need to change ingress.yaml to the following.
We also need to change services.yaml to the following.
To initiate the balancing feature, we will first change the service type for app2 to LoadBalancer. This spawns a LoadBalancer service at the node level that listens on ports 80 and 443 by default. It will try to find a node in the cluster with port 80 free; if no node has that port available, the service remains pending.
You know what else listens on port 80 at the node level? K3s's built-in Traefik controller. This creates a port conflict with the load balancer service, hence some changes are needed.
We can't change the Traefik controller's listen port, but we are able to change the port the LoadBalancer service listens on.
So our first change is to make the LoadBalancer service listen on port 81 instead. We can then change the Traefik rule to forward whatever comes in on port 80 (the default) to the load balancer at port 81 for app2.
The load balancer will then distribute the traffic to the pods in the app2 ReplicaSet through port 80.
For app1 and app3, however, there are no load balancers, so they could remain unchanged. For the sake of consistency, though, we will change their service listen ports and Traefik rules to port 81 as well.
Since the LoadBalancer service listens on its port at the node level, it is also exposed outside the cluster: if you make a request on port 81, you will hit the load balancer and be served app2.
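A rough sketch of the app2 pieces described above (names, hosts and selectors are assumptions; the real files in the repo will differ):
# services.yaml - app2 becomes a LoadBalancer listening on 81, targeting the pods on 80
apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  type: LoadBalancer
  selector:
    app: app2
  ports:
    - port: 81        # node-level port served by the load balancer
      targetPort: 80  # port the app2 pods actually listen on
---
# ingress.yaml - Traefik forwards traffic arriving on 80 to the service on 81
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
    - host: app2.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 81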