---
tags: RP1
---
:::success
# Research Project 1
* **Mikhail Syropyatov**
* **Rail Iaushev**
:::
## Table of contents
[TOC]
## Main goals
1) Design an architecture for logging in Kubernetes.
2) Integrate and deploy FastAPI microservices in Docker containers.
3) Configure CNI.
4) Deploy layered logging in Kubernetes.
5) Configure an operations Service Level Agreement.
6) Detect anomalous activities/attacks in the Kubernetes logs.
7) Create a real-time event monitoring app with log tracking that sends Telegram notifications when there is a problem.
## 1. Introduction
The goal of our research project is to understand the operation of the Docker open platform for application containerization, as well as the operation of the Kubernetes platform, which is designed specifically for managing containerized workloads and services.
The second and main task is to set up logging in the Kubernetes system. To do this, we create two microservices, put them into containers using Docker, and then set up logging inside Kubernetes using Prometheus and a Telegram bot for notifications. We also try to find a way to secure our applications with some tools.
Plan to achieve the goals:
1. Create source code for microservices.
2. Place microservices in Docker containers.
3. Create a local cluster using Minikube.
4. Set up CNI within the cluster.
5. Add the ability to monitor the cluster.
6. Add alert manager for telegram.
7. Explore the possibility of protecting the cluster.
## 2. Methodology
### 2.1 Architecture of the project system
Figure 4 shows the architecture that we want to achieve in this project.
<center>

Figure 4: Architecture of our project
</center>
### 2.2 Technology stack
#### 2.2.1 Docker
Docker is an open-source platform for developing, delivering, and operating applications. Docker is designed to get applications up and running faster. We used Docker to create containers and place them into pods.
#### 2.2.2 Kubernetes
Kubernetes is an open-source project designed to manage a cluster of Linux containers as a single system. Kubernetes manages and runs Docker containers on a large number of hosts, and provides co-hosting and replication of a large number of containers.
When using Docker containers, the question arises of how to scale and run containers on a large number of Docker hosts at once, and how to balance the load between them. Kubernetes offers a high-level API that defines a logical grouping of containers, allowing you to define container pools, balance the load, and set container placement.
#### 2.2.3 Minikube
To manage the cluster, we will use Minikube, a tool that runs a single-node Kubernetes cluster. This tool allows us to run the cluster locally.
#### 2.2.4 Prometheus
Prometheus is a monitoring system. Its main advantages are flexible queries over the data, storage of metric values in a time-series database, and the possibility of automating administration. We use it for monitoring.
#### 2.2.5 FastAPI
FastAPI is a web framework for building APIs in Python, and one of the fastest and most popular web frameworks. We used it to create the web applications inside our containers.
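The actual services live in the repository linked in section 3.1. Since FastAPI itself is a third-party package, here is a stdlib-only sketch of the kind of minimal JSON-over-HTTP endpoint such a microservice exposes; the `/health` route and port 6080 are illustrative assumptions, not our real API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal JSON endpoint, analogous to a one-route FastAPI app (illustrative route)."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def serve(port: int = 6080) -> HTTPServer:
    """Start the server on a background thread and return it (port 6080 matches the pod example)."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A container built around such a service is then exactly what `kubectl run ... --port=6080` and `kubectl port-forward` operate on later in this report.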
## 3. Implementation
### 3.1 Project links
Here are links to the source code we created for our project:
1. <a href="https://hub.docker.com/repositories/syr94">Docker hub with our images</a>
2. <a href="https://github.com/syr94/Innopolis_RP_1">Services sources and configs</a>
3. <a href="https://github.com/rail-yaushev/AlertBot">Alert bot source</a>
### 3.2 Installing and configuring
#### 3.2.1 Kubernetes
To configure Kubernetes, we download the latest kubectl release from the official site together with its checksum file, verify the download, and then move the binary into a directory of executable binaries. After that it just works.
Downloading the binary:
```curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"```
Downloading the checksum file and verifying:
```curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"```
```echo "$(cat kubectl.sha256) kubectl" | sha256sum --check```
Result:
```kubectl: OK```
Installation:
```sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl```
Checking the version:
```kubectl version --client```
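The check that `sha256sum --check` performs above can be sketched in Python with only the standard library; the file names are whatever you downloaded.

```python
import hashlib
from pathlib import Path

def sha256_ok(artifact: Path, checksum_file: Path) -> bool:
    """Compare a file's actual SHA-256 digest with the digest recorded in a
    sha256sum-style checksum file ("<hex digest>  <name>")."""
    expected = checksum_file.read_text().split()[0].lower()
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected
```

This is the same logic the shell pipeline expresses: hash the binary, compare with the published digest, and only install on a match.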
#### 3.2.2 Minikube
```curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/```
Starting Minikube:
```minikube start```
#### 3.2.3 Docker
Building the images:
```docker build . -t project_api```
```docker build . -t project_auth```
Tagging them for Docker Hub:
```docker tag project_api:latest syr94/sne_project_api:latest```
```docker tag project_auth:latest syr94/sne_project_auth:latest```
Pushing them:
```docker push syr94/sne_project_api:latest```
```docker push syr94/sne_project_auth:latest```
#### 3.2.4 Configuring Minikube
```minikube config set driver docker```
```minikube delete```
```minikube start```

```minikube start --feature-gates=EphemeralContainers=true```
```minikube start --extra-config=apiserver.v=10 --extra-config=kubelet.max-pods=5```

```kubectl run hello --image=syr94/sne_project_api:latest --port=6080```


### 3.3 Configuring CNI
**CNI** (Container Network Interface) is the networking interface and standard for Linux containers. It was configured automatically when we started our pods.
#### 3.3.1 API pod
Output of `kubectl describe pod api` (truncated):
```
Name:             api
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Sat, 21 Jan 2023 23:13:02 +0500
Labels:           run=api
Annotations:      <none>
Status:           Running
IP:               172.17.0.8
IPs:
  IP:  172.17.0.8
```
#### 3.3.2 AUTH pod
Output of `kubectl describe pod auth` (truncated):
```
Name:             auth
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Sat, 21 Jan 2023 23:11:52 +0500
Labels:           run=auth
Annotations:      <none>
Status:           Running
IP:               172.17.0.6
IPs:
  IP:  172.17.0.6
```
### 3.4 Configuring docker containers inside pods
Opening a shell inside the **api** pod's container:
```kubectl exec -it api sh```

Forwarding the pod's port to the host:
```kubectl port-forward api 6080:6080```
Checking the connection:

Inside the pod there was a problem: we tried to connect via the Google Chrome browser, which for some reason always tries to use **https**, while our server uses **http**:

Response from the **API** pod:

### 3.5 Logging in kubectl
Apart from tool-specific ways of logging, kubectl has its own logging command:
```kubectl logs "POD_NAME"```
```kubectl logs api```

Or we can check all the information about a pod:
```kubectl describe pod "POD_NAME"```
```
Name:             api
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Thu, 19 Jan 2023 22:49:05 +0500
Labels:           run=api
Annotations:      <none>
Status:           Running
IP:               172.17.0.5
IPs:
  IP:  172.17.0.5
Containers:
  api:
    Container ID:   docker://aa6d51d9f35793c6091134ab54774f918e0aec05986282cd23c7ca82a5b257fc
    Image:          syr94/sne_project_api
    Image ID:       docker-pullable://syr94/sne_project_api@sha256:c6a561f0eb888be84839643c3272c66318d99b7cdc4a908331f59e2ee07356e7
    Port:           9997/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 19 Jan 2023 23:10:42 +0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 19 Jan 2023 23:05:27 +0500
      Finished:     Thu, 19 Jan 2023 23:05:27 +0500
    Ready:          True
    Restart Count:  9
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6wn5z (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-6wn5z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
```
So this gives the full information about the pod at the current moment, including an **Events** field that describes the latest events for the pod; we can check a pod's **Errors** here.
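When hunting for errors, the Warning rows can be filtered out of `kubectl describe pod` text programmatically; a small stdlib-only sketch (the exact column spacing of the real output may vary):

```python
def warning_events(describe_output: str) -> list:
    """Return the Warning rows from the Events section of `kubectl describe pod` output."""
    in_events = False
    warnings = []
    for line in describe_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Events:"):
            in_events = True  # everything after this header is the events table
            continue
        if in_events and stripped.startswith("Warning"):
            warnings.append(stripped)
    return warnings
```

Feeding it the `describe` output shown in section 5.2, for example, would surface the `BackOff` warning directly.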
### 3.6 Security in Kubernetes (for detecting anomalous activities)
For security we are using **Cortex** from this <a href="https://hub.docker.com/r/ubuntu/cortex">link</a>.
Starting the Cortex Docker container and following its logs:
```docker run -d --name cortex-container -e TZ=UTC -p 32709:9009 ubuntu/cortex:1.11-22.04_beta```
```docker logs -f cortex-container```


To apply the config files (from the same directory):
```
kubectl create configmap cortex-config --from-file=main-config=cortex.yaml
kubectl apply -f cortex-deployment.yml
```
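The real `cortex.yaml` and `cortex-deployment.yml` are in the repository linked in section 3.1. Purely as a sketch, a Deployment wiring that ConfigMap to the image from the `docker run` line above might look like this; the label names and the mount path are illustrative assumptions, not our actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cortex-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cortex
  template:
    metadata:
      labels:
        app: cortex
    spec:
      containers:
        - name: cortex
          image: ubuntu/cortex:1.11-22.04_beta
          ports:
            - containerPort: 9009       # matches the container port exposed above
          volumeMounts:
            - name: config
              mountPath: /etc/cortex    # hypothetical mount path for cortex.yaml
      volumes:
        - name: config
          configMap:
            name: cortex-config          # created by the kubectl create configmap step
```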
### 3.7 Monitoring
We wanted to use Zabbix and Prometheus, but for some reason there were a lot of problems with Prometheus (described in the **Troubles we faced** part).
**Zabbix** is a free system for monitoring and tracking the status of various computer network services, servers, and network equipment.
**Prometheus** is a monitoring system designed specifically for dynamically changing environments.
To install these we used **Helm**, a package manager for Kubernetes that configures and deploys applications and services on a Kubernetes cluster; it uses Helm charts to simplify the development and deployment process.

#### 3.7.1 Installing Helm:
```
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
```
#### 3.7.2 Zabbix installation:
```
helm repo add zabbix-chart-6.0 https://cdn.zabbix.com/zabbix/integrations/kubernetes-helm/6.0
helm show values zabbix-chart-6.0/zabbix-helm-chrt > $HOME/zabbix_values.yaml
kubectl create namespace monitoring
helm install zabbix zabbix-chart-6.0/zabbix-helm-chrt --dependency-update -f $HOME/zabbix_values.yaml -n monitoring
```
#### 3.7.3 Installing Prometheus
The standard method does not work (see section 5.3), so instead:
```helm repo add my-repo https://charts.bitnami.com/bitnami```
```helm install my-release my-repo/kube-prometheus```
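Once Prometheus is reachable (e.g. via the port-forward shown in section 4.4), its HTTP API at `/api/v1/query` returns JSON. A sketch of flattening an instant-vector response; the metric and instance names in the sample are illustrative, not taken from our cluster:

```python
import json

def instant_values(response_text: str) -> dict:
    """Flatten a Prometheus /api/v1/query instant-vector response into {instance: value}."""
    payload = json.loads(response_text)
    if payload.get("status") != "success":
        raise ValueError("Prometheus query failed")
    values = {}
    for item in payload["data"]["result"]:
        instance = item["metric"].get("instance", "unknown")
        values[instance] = float(item["value"][1])  # value is [timestamp, "number-as-string"]
    return values

# Example response for the query `up` (instance names are illustrative)
sample = json.dumps({
    "status": "success",
    "data": {"resultType": "vector", "result": [
        {"metric": {"__name__": "up", "instance": "api:9997"}, "value": [1674300000, "1"]},
        {"metric": {"__name__": "up", "instance": "auth:6082"}, "value": [1674300000, "0"]},
    ]},
})
```

With the port-forward active, the same JSON could be fetched from `http://localhost:9090/api/v1/query?query=up`.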
### 3.8 Alert bot deployment (real time monitoring app)
The monitoring and logging tools we use do not meet our needs for good, timely, and, most importantly, convenient user notification of critical errors. We therefore added a simple Telegram bot to the project: it accepts POST requests with a JSON payload from the alert manager, which is part of the Prometheus Operator infrastructure, and translates them into Telegram push messages.
Why is it useful?
Suppose a network company has a huge infrastructure; it then has people responsible for supporting either this infrastructure or customer infrastructure. We can create one Telegram channel for all employees who may need information about the state of the nodes, add our bot to it, and configure the bot so that the responsible people see and solve problems in a timely manner.
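The full bot source is linked in section 3.1. The core translation step — turning an Alertmanager webhook payload into message text — can be sketched with the standard library only; the alert name and summary in the sample are illustrative:

```python
import json

def format_alerts(webhook_body: str) -> str:
    """Render an Alertmanager webhook payload as a plain-text push message."""
    payload = json.loads(webhook_body)
    lines = []
    for alert in payload.get("alerts", []):
        status = alert.get("status", "unknown").upper()          # "firing" or "resolved"
        name = alert.get("labels", {}).get("alertname", "unknown")
        summary = alert.get("annotations", {}).get("summary", "")
        lines.append(f"[{status}] {name}: {summary}")
    return "\n".join(lines)

# A trimmed example of what Alertmanager POSTs (alert names are illustrative)
sample = json.dumps({"alerts": [{
    "status": "firing",
    "labels": {"alertname": "KubePodCrashLooping"},
    "annotations": {"summary": "Pod auth is restarting frequently"},
}]})
```

The resulting string is what the bot would then pass to Telegram's `sendMessage` API with its own token and chat id.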
## 4. Results
**Kubernetes**
In order to configure Kubernetes, we first downloaded the latest version from the official website, along with a file containing the checksum. We verified the downloaded file, transferred it to the necessary directories, and then installed it. The version was then checked to ensure proper installation.
**Minikube**
Minikube was also downloaded and installed. It was then started and configured to use the Docker driver.
**Docker**
Docker was used to build and tag our project images, and then pushed to a public Docker hub repository.
**Configuring Minikube**
Minikube was configured to use the EphemeralContainers feature, and set to start with specific extra config options. Pods were then run and the connection between them was tested.
**Configuring CNI**
CNI (Container Networking Interface) was automatically configured when the pods were started. We verified the configuration by pinging the containers within the pods.
**Configuring Docker containers inside pods**
We connected to the containers inside the pods using kubectl exec and port-forwarding. We encountered a problem when trying to connect via a browser, but were able to troubleshoot and resolve the issue.
**Monitoring and logging in Kubernetes**
Overall, we successfully implemented the architecture for logging in Kubernetes, integrated and deployed microservices using Docker containers, configured CNI, and set up layered logging using Prometheus and a Telegram bot for notifications. We also set up an operations Service Level Agreement and detected anomalous activities/attacks in the Kubernetes logs. The final outcome of the project was a real-time event monitoring app with log tracking that sends Telegram notifications if there is a problem.
### 4.1 Project Sources
Here are links to the source code we created for our project:
1. <a href="https://hub.docker.com/repositories/syr94">Docker hub with our images</a>
2. <a href="https://github.com/syr94/Innopolis_RP_1">Services sources and configs</a>
3. <a href="https://github.com/rail-yaushev/AlertBot">Alert bot source</a>
### 4.2 Project Architecture
#### 4.2.1 Nodes
```kubectl get nodes```
```
NAME       STATUS   ROLES           AGE     VERSION
minikube   Ready    control-plane   13d9h   v1.25.3
```
#### 4.2.2 Pods
```kubectl get pods -n default```
```
NAME                                                            READY   STATUS    RESTARTS      AGE
alertmanager-my-release-kube-prometheus-alertmanager-0          2/2     Running   0             4h40m
api                                                             1/1     Running   0             24h
api-55bdf4ff6f-qcrkq                                            1/1     Running   2 (27h ago)   3d
auth                                                            1/1     Running   0             24h
cortex-deployment-64fdc5c86c-sd9wb                              1/1     Running   1 (27h ago)   2d3h
my-release-kube-prometheus-blackbox-exporter-77dbb4855f-9jdpn   1/1     Running   0             4h40m
my-release-kube-prometheus-operator-85f4c98c9d-q9n7v            1/1     Running   0             4h40m
my-release-kube-state-metrics-76bc59d6b7-n5rlj                  1/1     Running   0             4h40m
my-release-node-exporter-d5nlc                                  1/1     Running   0             4h40m
prometheus-my-release-kube-prometheus-prometheus-0              2/2     Running   0             4h40m
```
```kubectl get pods -n monitoring```
```
NAME                                         READY   STATUS    RESTARTS   AGE
zabbix-agent-qfw6b                           1/1     Running   0          5h52m
zabbix-kube-state-metrics-67d5bcb795-cv2tl   1/1     Running   0          5h52m
zabbix-proxy-887b59dbb-wbx7k                 1/1     Running   0          5h52m
```
#### 4.2.3 Services
```
NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                          ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   36m
api                                            NodePort    10.105.10.171    <none>        9997:30831/TCP               3d20h
cortex-service                                 NodePort    10.104.166.60    <none>        9009:32709/TCP               46h
kubernetes                                     ClusterIP   10.96.0.1        <none>        443/TCP                      3d9h
my-release-kube-prometheus-alertmanager        ClusterIP   10.97.29.38      <none>        9093/TCP                     37m
my-release-kube-prometheus-blackbox-exporter   ClusterIP   10.109.9.9       <none>        19115/TCP                    37m
my-release-kube-prometheus-operator            ClusterIP   10.104.34.142    <none>        8080/TCP                     37m
my-release-kube-prometheus-prometheus          ClusterIP   10.100.142.108   <none>        9090/TCP                     37m
my-release-kube-state-metrics                  ClusterIP   10.99.5.179      <none>        8080/TCP                     37m
my-release-node-exporter                       ClusterIP   10.96.146.101    <none>        9100/TCP                     37m
prometheus-operated                            ClusterIP   None             <none>        9090/TCP                     36m
```
### 4.3 CNI inside cluster
Pinging **auth** container from **api**:

Pinging **api** container from **auth**:

### 4.4 Monitoring
We have a running Prometheus service:
```kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090```

We have a running Prometheus Alertmanager service:
```kubectl port-forward --namespace default svc/my-release-kube-prometheus-alertmanager 9093:9093```

### 4.5 Security
We have a running Cortex service:
```kubectl port-forward --namespace default svc/cortex-service 9009:9009```

## 5. Troubles we faced
### 5.1 Problems with cloud clusters:
Many of them have problems with access from Russia.
One that can be accessed from Russia is **Yandex k8s clusters**:
Installing Yandex CLI:
```curl -sSL https://storage.yandexcloud.net/yandexcloud-yc/install.sh | bash```

```yc vpc subnet create --name sne-project-1-subnet --description "subnet for project" --folder-id b1g2ll9msc2d3bcaka5i --network-id enp8jal46h3jlp80olfj```
```yc iam service-account create --name my-robot --description "this is my favorite service account"```
### 5.2 Problem with starting a simple pod
The **auth** container kept exiting immediately after starting (State: Terminated, Reason: Completed), so the kubelet backed off restarting it:
```
Name:             auth
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.49.2
Start Time:       Sat, 21 Jan 2023 20:28:39 +0500
Labels:           run=auth
Annotations:      <none>
Status:           Running
IP:               172.17.0.8
IPs:
  IP:  172.17.0.8
Containers:
  auth:
    Container ID:   docker://859f728630d027510a9c3e41d7644bcd76b8a3f8eb8368bbe459d2e41b90789a
    Image:          syr94/sne_project_auth:latest
    Image ID:       docker-pullable://syr94/sne_project_auth@sha256:6f5c65f4b97ace67da8f1dc5a3b255769c3120bcf0647c14ea3de7de61a848d6
    Port:           6082/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 21 Jan 2023 20:30:24 +0500
      Finished:     Sat, 21 Jan 2023 20:30:24 +0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 21 Jan 2023 20:29:31 +0500
      Finished:     Sat, 21 Jan 2023 20:29:31 +0500
    Ready:          False
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n69dc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-n69dc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  106s                default-scheduler  Successfully assigned default/auth to minikube
  Normal   Pulled     103s                kubelet            Successfully pulled image "syr94/sne_project_auth:latest" in 2.135630025s
  Normal   Pulled     100s                kubelet            Successfully pulled image "syr94/sne_project_auth:latest" in 2.074820365s
  Normal   Pulled     83s                 kubelet            Successfully pulled image "syr94/sne_project_auth:latest" in 2.093669816s
  Normal   Created    54s (x4 over 103s)  kubelet            Created container auth
  Normal   Started    54s (x4 over 103s)  kubelet            Started container auth
  Normal   Pulled     54s                 kubelet            Successfully pulled image "syr94/sne_project_auth:latest" in 2.0170994s
  Warning  BackOff    16s (x8 over 99s)   kubelet            Back-off restarting failed container
  Normal   Pulling    3s (x5 over 105s)   kubelet            Pulling image "syr94/sne_project_auth:latest"
```
### 5.3 Problem with installing Prometheus with Helm:

It just freezes, and that is the end of it.
## 6. Conclusion
In conclusion, our research project aimed to design and implement an architecture for logging in Kubernetes, as well as to integrate and deploy microservices using Docker containers and the FastAPI framework. We also configured CNI and deployed layered logging in Kubernetes. Additionally, we established an operations Service Level Agreement and implemented a monitoring solution for detecting anomalies and potential security threats in Kubernetes logs. The final outcome of the project was an event monitoring app with real-time log tracking and Telegram notifications alerting about any issues. Overall, the project aimed to gain a deeper understanding of containerization and the management of workloads using Docker and Kubernetes, and to enhance the security and monitoring capabilities of our applications.
## 7. Video Presentation link
https://disk.yandex.ru/d/dhVKcVHH2RWQ1Q
## 8. References
1. https://thechief.io/c/hrishikesh/4-challenges-kubernetes-log-processing/
2. https://kubernetes.io/ru/docs/tasks/tools/install-minikube/
3. https://minikube.sigs.k8s.io/docs/handbook/config/
4. https://kubernetes.io/ru/docs/setup/learning-environment/minikube/
5. https://hub.docker.com/r/ubuntu/cortex
6. https://habr.com/ru/company/agima/blog/524654/
7. https://helm.sh/docs/intro/install/
8. https://blog.marcnuri.com/prometheus-grafana-setup-minikube
9. https://www.youtube.com/watch?v=-lLT0vlaBpk
10. https://stackoverflow.com/questions/70470152/alertmanager-cluster-status-is-disabled
11. https://helm.sh/docs/topics/chart_repository/