# Specification
| Server | CPU | RAM | Function | IP | OS |
| ------------- | ----- | ---- | -------- | --- | --- |
| kube-master | 2 CPU | 2 GB | Open5GS | 192.168.1.5 | Ubuntu Server 24.04 |
| kube-worker-1 | 2 CPU | 2 GB | UERANSIM | 192.168.1.6 | Ubuntu Server 24.04 |
# References
https://luislogs.com/posts/security-and-observability-with-cilium-on-my-5g-network/
# Prerequisites
1. Install Kubernetes and Helm (you're on your own for this :pray:)
2. Install the Cilium CLI
```
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
```
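Sanity-check the CLI (the running-image line may warn until Cilium itself is installed):
```
cilium version
```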
3. Install Cilium
```
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace kube-system \
  --set sctp.enabled=true \
  --set l7Proxy=true
```
Wait until initialization succeeds; check with:
```
kubectl get daemonset -n kube-system
```
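You can also let the Cilium CLI wait for readiness and report overall health:
```
cilium status --wait
```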
4. Add the kube-prometheus-stack Helm repo
```
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```
5. Create values.yml
```
grafana:
enabled: true
prometheus:
enabled: true
prometheusSpec:
# This is the crucial part. It tells Prometheus Operator which ServiceMonitors to use.
serviceMonitorSelector:
matchLabels:
release: kube-prometheus-stack # Make sure this matches your Helm release name!
# This tells Prometheus to look for ServiceMonitors in ALL namespaces.
# This is important if Open5GS is in a different namespace than your monitoring stack.
serviceMonitorNamespaceSelector: {}
# The following section for additionalScrapeConfigsSecret is not needed for Open5GS metrics
# and you can remove it or keep it disabled.
additionalScrapeConfigsSecret:
enabled: false
securityContext:
fsGroup: 1000
runAsUser: 65534
runAsNonRoot: true
alertmanager:
enabled: true
# Other components can be configured here as well (kube-state-metrics, node-exporter, etc.)
# For example, if you don't need node-exporter on every node:
# prometheus-node-exporter:
# enabled: true
# kube-state-metrics:
# enabled: true
```
6. Create a persistent volume
nano kube-prometheus-stack-pv.yaml
```
apiVersion: v1
kind: PersistentVolume
metadata:
name: kube-prometheus-stack-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/data/prometheus
```
nano kube-prometheus-stack-pvc.yaml
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: kube-prometheus-stack-pvc
namespace: monitoring
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
volumeName: kube-prometheus-stack-pv
```
Apply the YAML files:
```
kubectl create namespace monitoring
kubectl apply -f kube-prometheus-stack-pv.yaml
kubectl apply -f kube-prometheus-stack-pvc.yaml
kubectl delete secret kube-prometheus-stack-admission -n monitoring --ignore-not-found=true
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring -f values.yml
```
Check the Grafana admin password:
```
kubectl get secret --namespace monitoring kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
Then fix the ownership of the Prometheus data directory (Prometheus runs as UID 65534, matching the securityContext above):
```
sudo chown -R 65534:65534 /mnt/data/prometheus
```
7. Enable Prometheus metrics in Cilium
```
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set prometheus.enabled=true \
  --set operator.prometheus.enabled=true
```
8. Enable Hubble in Cilium
```
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true \
  --set hubble.ui.frontend.server.ipv6.enabled=false \
  --set hubble.metrics.enableOpenMetrics=true \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"
```
Check Hubble status with:
```
cilium status --wait
```
9. Install the Hubble client
```
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
```
Verify the client with:
```
hubble version
```
10. Access Prometheus & Grafana
Expose them via NodePort:
```
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--reuse-values \
--set prometheus.service.type=NodePort \
--set grafana.service.type=NodePort
```
Find the assigned ports:
```
kubectl get svc -n monitoring
```
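For example, if Grafana's NodePort turns out to be 31000 (yours will differ), the UI is at http://192.168.1.5:31000 and a quick health check looks like this (the port here is a placeholder):
```
curl -s http://192.168.1.5:31000/api/health
```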
# Open5GS with Cilium
1. Make Hubble able to monitor packets
Create the YAML file `l4-egress-to-dns.yaml` (an L7 DNS rule routes queries through Cilium's DNS proxy, which is what lets Hubble observe them):
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-egress-to-dns"
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/instance: open5gs
egress:
- toPorts:
- ports:
- port: "53"
protocol: ANY
rules:
dns:
- matchPattern: "*"
```
Apply it with:
```
kubectl apply -f l4-egress-to-dns.yaml
```
Port-forward the Hubble Relay:
```
cilium hubble port-forward &
```
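With the relay forwarded, DNS flows should start showing up:
```
hubble observe --protocol dns --last 20
```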
## Open5GS
1. Create the values file
Create `open5gs.yaml`:
```
hss:
enabled: false
# SCP is enabled and gets the SBI label
scp:
enabled: true
podLabels:
sbi: enabled
mme:
enabled: false
pcrf:
enabled: false
# PCF is enabled and gets the SBI label
pcf:
enabled: true
podLabels:
sbi: enabled
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
release: kube-prometheus-stack
# SMF is enabled and gets the SBI label
smf:
enabled: true
podLabels:
sbi: enabled
config:
pcrf:
enabled: false
pcf:
enabled: true
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
release: kube-prometheus-stack
sgwc:
enabled: false
sgwu:
enabled: false
# AMF is enabled and gets the SBI label
amf:
enabled: true
podLabels:
sbi: enabled
config:
guamiList:
- plmn_id:
mcc: "999"
mnc: "70"
amf_id:
region: 2
set: 1
taiList:
- plmn_id:
mcc: "999"
mnc: "70"
tac: [1]
plmnList:
- plmn_id:
mcc: "999"
mnc: "70"
s_nssai:
- sst: 1
sd: "0x111111"
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
release: kube-prometheus-stack
# UPF is enabled and gets the SBI label
upf:
enabled: true
podLabels:
sbi: enabled
# CORRECT configuration using the chart's schema
config:
subnetList:
- addr: "10.45.0.0/16" # IMPORTANT: Match your UE IP Pool
dnn: "internet" # Match the DNN for this network
enableNAT: true # This is the crucial setting that enables NAT
dev: "ogstun" # The virtual device for the NAT interface
metrics:
enabled: true
serviceMonitor:
enabled: true
additionalLabels:
release: kube-prometheus-stack
# NSSF is enabled and gets the SBI label
nssf:
enabled: true
podLabels:
sbi: enabled
config:
nsiList:
- uri: ""
sst: 1
sd: "0x111111"
# Adding other default components to ensure they also get the label
nrf:
enabled: true
podLabels:
sbi: enabled
ausf:
enabled: true
podLabels:
sbi: enabled
udm:
enabled: true
podLabels:
sbi: enabled
udr:
enabled: true
podLabels:
sbi: enabled
bsf:
enabled: true
podLabels:
sbi: enabled
webui:
ingress:
enabled: false
populate:
enabled: true
initCommands:
- open5gs-dbctl add_ue_with_slice 999700000000001 465B5CE8B199B49FAA5F0A2EE238A6BC E8ED289DEBA952E4283B54E88E6183CA internet 1 111111
- open5gs-dbctl add_ue_with_slice 999700000000002 465B5CE8B199B49FAA5F0A2EE238A6BC E8ED289DEBA952E4283B54E88E6183CA internet 1 111111
mongodb:
enabled: true
auth:
enabled: false
readinessProbe:
initialDelaySeconds: 60
periodSeconds: 20
timeoutSeconds: 15
failureThreshold: 6
successThreshold: 1
livenessProbe:
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 15
failureThreshold: 6
persistence:
enabled: true
size: 2Gi
storageClass: "open5gs-mongodb-pv"
```
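The keys above have to match the Gradiant chart's schema; to cross-check against the exact chart version, dump its default values:
```
helm show values oci://registry-1.docker.io/gradiantcharts/open5gs --version 2.2.9
```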
nano pv-mongodb.yaml
```
apiVersion: v1
kind: PersistentVolume
metadata:
name: open5gs-mongodb-pv # You can choose a unique name for your PV
spec:
capacity:
storage: 2Gi # Must be equal to or greater than the PVC request
volumeMode: Filesystem
accessModes:
- ReadWriteOnce # MongoDB typically requires RWO
persistentVolumeReclaimPolicy: Retain # Or Delete, depending on your needs
storageClassName: open5gs-mongodb-pv
hostPath:
path: /mnt/data/mongodb # The directory you created on your node
```
nano sc-open5gs-mongodb.yaml
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: open5gs-mongodb-pv # This name must match what the PVC and PV are using
provisioner: kubernetes.io/no-provisioner # Indicates PVs must be manually created
volumeBindingMode: WaitForFirstConsumer # Good for local storage like hostPath
```
2. Create the namespace & install via Helm
```
# on the node that will host the hostPath volume:
sudo mkdir -p /mnt/data/mongodb
kubectl create namespace open5gs
kubectl apply -f pv-mongodb.yaml
kubectl apply -f sc-open5gs-mongodb.yaml
helm install open5gs oci://registry-1.docker.io/gradiantcharts/open5gs --version 2.2.9 --values open5gs.yaml -n open5gs
```
Wait for all the pods to come up:
```
watch kubectl get pods -n open5gs
```
Once the MongoDB pod is Running but still 0/1 ready, run this on its node (the Bitnami image runs as UID 1001):
```
sudo chown -R 1001:1001 /mnt/data/mongodb
sudo chmod -R 770 /mnt/data/mongodb
```
### Troubleshoot
If you're using the lab server, install with this instead:
```
kubectl apply -f pv-mongodb.yaml
kubectl apply -f sc-open5gs-mongodb.yaml
helm install open5gs oci://registry-1.docker.io/gradiantcharts/open5gs \
--version 2.2.9 \
--namespace open5gs \
--create-namespace \
--values open5gs.yaml \
--set mongodb.image.repository=bitnami/mongodb \
--set mongodb.image.tag=4.4.15-debian-10-r8
```
then delete the MongoDB pod right away so it comes back with the overridden image.
If the UPF crashes with `Operation not permitted`, disable AppArmor on the Open5GS node:
```
sudo systemctl stop apparmor
sudo systemctl disable apparmor
sudo apt-get purge apparmor -y
```
### Uninstall
```
helm uninstall open5gs -n open5gs
kubectl delete pvc open5gs-mongodb -n open5gs
kubectl delete pv open5gs-mongodb-pv
kubectl delete sc open5gs-mongodb-pv
```
SSH to the MongoDB node and reset the data directory:
```
sudo rm -rf /mnt/data/mongodb/
sudo mkdir -p /mnt/data/mongodb
sudo chown -R 1001:1001 /mnt/data/mongodb
sudo chmod -R 770 /mnt/data/mongodb
```
## UERANSIM
1. Create the UERANSIM values file
nano ueransim.yaml
```
amf:
hostname: "open5gs-amf-ngap.open5gs.svc.cluster.local"
ip: null
mcc: '999'
mnc: '70'
sst: 1
sd: "0x111111"
tac: '0001'
ues:
enabled: true
count: 2
initialMSISDN: '0000000001'
```
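The `amf.hostname` above must resolve to the AMF's NGAP service, so verify that it exists:
```
kubectl get svc -n open5gs open5gs-amf-ngap
```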
nano l4-ingress-amf-from-gnb-sctp.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-ingress-from-gnb-sctp"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
"k8s:app.kubernetes.io/name": amf
ingress:
- fromEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": ueransim-gnb
"k8s:io.kubernetes.pod.namespace": ueransim
toPorts:
- ports:
- port: "38412"
protocol: SCTP
```
2. Deploy it
```
kubectl create namespace ueransim
kubectl label namespace ueransim name=ueransim
kubectl apply -f l4-ingress-amf-from-gnb-sctp.yaml
kubectl patch configmap -n kube-system cilium-config --type merge -p '{"data":{"enable-sctp":"true"}}'
helm install ueransim-gnb oci://registry-1.docker.io/gradiant/ueransim-gnb --version 0.2.6 --values ueransim.yaml -n ueransim
```
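If the NG Setup succeeds, the gNB logs it (the deployment name is assumed from the `ueransim-gnb` release name):
```
kubectl logs -n ueransim deployment/ueransim-gnb | grep -i "NG Setup"
```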
3. Access the Hubble UI
Start it:
```
cilium hubble ui
```
Then, from your workstation, open an SSH tunnel and browse to http://localhost:12000:
```
ssh -L 12000:localhost:12000 <USER>@<IP>
```
## Cilium
1. Allow ports for SMF-UPF, gNB-UPF, and Prometheus-AMF communication
nano l4-egress-smf-to-upf.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-egress-smf-to-core" # Or your chosen name
namespace: open5gs
spec:
endpointSelector:
matchLabels:
"k8s:app.kubernetes.io/name": smf
egress:
# Rule for CoreDNS
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- port: "53"
protocol: UDP
rules: { dns: [{ matchPattern: "*" }] }
- ports:
- port: "53"
protocol: TCP
rules: { dns: [{ matchPattern: "*" }] }
# Rule for UPF (PFCP N4 interface)
- toEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": upf
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "8805"
protocol: UDP
# Rule for NRF (SBI)
- toEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": nrf
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
# Rule for PCF (SBI)
- toEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": pcf
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
# Rule for UDM (SBI)
- toEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": udm
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
# Rule for AMF (SBI)
- toEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": amf
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
# --- ADD THIS RULE FOR SCP ---
- toEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": scp
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
# --- END OF SCP RULE ---
```
nano l4-ingress-upf-from-smf.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-ingress-upf-from-smf"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: upf
ingress:
- fromEndpoints:
- matchLabels:
app.kubernetes.io/name: smf
toPorts:
      # PFCP (N4) traffic from the SMF
- ports:
- port: "8805"
protocol: UDP
```
nano l4-ingress-upf-from-gnb.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-ingress-upf-from-gnb"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: upf
ingress:
- fromEndpoints:
- matchLabels:
app.kubernetes.io/component: gnb
toPorts:
      # GTP-U (N3) traffic from the gNB
- ports:
- port: "2152"
protocol: UDP
```
nano l3-egress-upf-to-any.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l3-egress-upf-to-any"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: upf
egress:
- toCIDRSet:
- cidr: 0.0.0.0/0
# except private CIDR
except:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
```
nano allow-prometheus-to-amf.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-prometheus-to-amf-metrics"
namespace: open5gs # Policy applied in the AMF's namespace
spec:
endpointSelector:
matchLabels:
"k8s:app.kubernetes.io/name": amf # Selects the AMF pods
ingress:
- fromEndpoints:
- matchLabels:
# Assuming Prometheus pods have these labels in the 'monitoring' namespace
# You'll need to verify the exact labels on your Prometheus pods
"k8s:io.kubernetes.pod.namespace": monitoring
"k8s:app.kubernetes.io/name": prometheus # Or "kube-prometheus-stack-prometheus", etc.
# Check your Prometheus pod labels with:
# kubectl get pods -n monitoring -l app.kubernetes.io/instance=kube-prometheus-stack --show-labels
toPorts:
- ports:
- port: "9090"
protocol: TCP
```
nano l7-egress-scp-to-nrf.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
# The name of the policy object in Kubernetes
name: "l7-egress-scp-to-nrf"
# The namespace where your Open5GS components are running
namespace: open5gs
spec:
# This policy applies to the SCP pod(s)
endpointSelector:
matchLabels:
app.kubernetes.io/name: scp
# This rule defines outgoing (egress) traffic that is allowed
egress:
- toEndpoints:
# The destination is the NRF pod(s)
- matchLabels:
app.kubernetes.io/name: nrf
toPorts:
- ports:
# The SBI port for 5G core communication
- port: "7777"
protocol: TCP
```
nano l4-egress-to-mongodb.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
# The name of the policy object in Kubernetes
name: "l4-egress-to-mongodb"
# The namespace where your Open5GS components are running
namespace: open5gs
spec:
# This policy applies to all pods with the label 'app.kubernetes.io/instance: open5gs'.
# This is a common label for all the Open5GS core components in your deployment.
endpointSelector:
matchLabels:
app.kubernetes.io/instance: open5gs
# This defines an outgoing (egress) rule for the selected pods.
egress:
- toEndpoints:
# The destination is the MongoDB pod(s).
- matchLabels:
app.kubernetes.io/name: mongodb
toPorts:
- ports:
# The standard port for MongoDB.
- port: "27017"
protocol: TCP
```
Apply them:
```
kubectl apply -f l4-egress-smf-to-upf.yaml
kubectl apply -f l4-ingress-upf-from-smf.yaml
kubectl apply -f l4-ingress-upf-from-gnb.yaml
kubectl apply -f l3-egress-upf-to-any.yaml
kubectl apply -f allow-prometheus-to-amf.yaml
kubectl apply -f l7-egress-scp-to-nrf.yaml
kubectl apply -f l4-egress-to-mongodb.yaml
```
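Check that the policies were created:
```
kubectl get cnp -n open5gs
```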
2. Allow MongoDB & SBI ports
nano l4-egress-populate-webui-udr-pcf-to-mongodb.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-egress-populate-to-mongodb"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/component: populate
egress:
- toEndpoints:
- matchLabels:
app.kubernetes.io/name: mongodb
toPorts:
# For mongodb access
- ports:
- port: "27017"
protocol: TCP
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-egress-webui-to-mongodb"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: webui
egress:
- toEndpoints:
- matchLabels:
app.kubernetes.io/name: mongodb
toPorts:
# For mongodb access
- ports:
- port: "27017"
protocol: TCP
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-egress-udr-to-mongodb"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: udr
egress:
- toEndpoints:
- matchLabels:
app.kubernetes.io/name: mongodb
toPorts:
# For mongodb access
- ports:
- port: "27017"
protocol: TCP
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-egress-pcf-to-mongodb"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: pcf
egress:
- toEndpoints:
- matchLabels:
app.kubernetes.io/name: mongodb
toPorts:
# For mongodb access
- ports:
- port: "27017"
protocol: TCP
```
nano l7-egress-to-scp.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l7-egress-to-scp"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
sbi: enabled
egress:
- toEndpoints:
- matchLabels:
app.kubernetes.io/name: scp
toPorts:
# For SBI communication
- ports:
- port: "7777"
protocol: TCP
rules:
http: [{}]
```
nano l7-egress-scp-to-core.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l7-egress-scp-to-core"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: scp
egress:
- toEndpoints:
- matchLabels:
sbi: enabled
toPorts:
# For SBI communication
- ports:
- port: "7777"
protocol: TCP
rules:
http: [{}]
```
nano l7-ingress-from-scp.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l7-ingress-from-scp"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
sbi: enabled
ingress:
- fromEndpoints:
- matchLabels:
app.kubernetes.io/name: scp
- matchLabels:
sbi: enabled
toPorts:
# For SBI communication
- ports:
- port: "7777"
protocol: TCP
rules:
http: [{}]
```
nano allow-prometheus-to-upf.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-prometheus-to-upf-metrics"
namespace: open5gs # Policy created in the UPF's namespace
spec:
endpointSelector:
matchLabels:
# Selects your UPF pod(s)
# Based on previous info, UPF pods have app.kubernetes.io/name=upf
"k8s:app.kubernetes.io/name": upf
ingress:
- fromEndpoints:
- matchLabels:
# Selects your Prometheus pod(s) in the 'monitoring' namespace
"k8s:io.kubernetes.pod.namespace": monitoring
"k8s:app.kubernetes.io/name": prometheus # This label was present on your Prometheus pod
toPorts:
- ports:
- port: "9090"
protocol: TCP
```
nano allow-prometheus-scrape.yaml
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-prometheus-scrape
namespace: open5gs # IMPORTANT: This policy must be in the target namespace
spec:
# Apply this policy to all pods in the 'open5gs' namespace
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
# Allow traffic from any pod in the 'monitoring' namespace...
- namespaceSelector:
matchLabels:
# This label is typically present on the monitoring namespace
# Verify with 'kubectl get ns monitoring --show-labels'
kubernetes.io/metadata.name: monitoring
# ...that also has this label (the Prometheus pods)
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
ports:
# Allow traffic specifically on the metrics port
- protocol: TCP
port: 9090
```
nano allow-scp-to-amf-sbi.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-scp-to-amf-sbi"
namespace: open5gs # Policy in AMF's namespace
spec:
endpointSelector:
matchLabels:
"k8s:app.kubernetes.io/name": amf # Selects AMF pods
ingress:
- fromEndpoints:
- matchLabels:
"k8s:app.kubernetes.io/name": scp # Selects SCP pods
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
```
nano allow-ingress-to-mongodb.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-ingress-to-mongodb"
namespace: open5gs
spec:
# This policy applies TO the MongoDB pod
endpointSelector:
matchLabels:
app.kubernetes.io/name: mongodb
# This rule defines what INCOMING (ingress) traffic is allowed
ingress:
- fromEndpoints:
# Allow traffic FROM any pod with this label
# (This is a common label for all your Open5GS components)
- matchLabels:
app.kubernetes.io/instance: open5gs
toPorts:
- ports:
# On the standard MongoDB port
- port: "27017"
protocol: TCP
```
Apply:
```
kubectl apply -f l7-egress-to-scp.yaml
kubectl apply -f l7-ingress-from-scp.yaml
kubectl apply -f l7-egress-scp-to-nrf.yaml
kubectl apply -f l4-egress-populate-webui-udr-pcf-to-mongodb.yaml
kubectl apply -f allow-prometheus-to-upf.yaml
kubectl apply -f allow-prometheus-scrape.yaml
kubectl apply -f allow-scp-to-amf-sbi.yaml
kubectl apply -f l7-egress-scp-to-core.yaml
kubectl apply -f allow-ingress-to-mongodb.yaml
```
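With these in place, Hubble can show whether anything is still being dropped:
```
hubble observe --namespace open5gs --verdict DROPPED --last 20
```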
3. Everything else (trial and error at this point)
nano allow-upf-gtpu-egress.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-open5gs-upf-gtpu-egress"
namespace: open5gs # Ensure this is the namespace of your UPF pods
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: upf # Selects your Open5GS UPF pods
# You can add app.kubernetes.io/instance: open5gs if needed for more specificity
# app.kubernetes.io/instance: open5gs
egress:
- toCIDR:
- "10.0.0.98/32" # The destination IP address for GTP-U traffic
toPorts:
- ports:
- port: "8805"
protocol: UDP
```
nano allow-smf-diameter-egress.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-open5gs-smf-diameter-egress"
namespace: open5gs # Ensure this is the namespace of your SMF pods
spec:
endpointSelector:
matchLabels:
app.kubernetes.io/name: smf # Selects your Open5GS SMF pods
# You can add app.kubernetes.io/instance: open5gs if needed for more specificity
# app.kubernetes.io/instance: open5gs
egress:
- toCIDR:
- "10.105.251.24/32" # The destination IP for Diameter (e.g., PCF/CHF)
toPorts:
- ports:
- port: "3868"
protocol: TCP
```
nano allow-pfcp-policy.yaml
```
# This file contains two policies to allow PFCP communication between SMF and UPF.
# Policy 1: Allow SMF to send PFCP traffic to UPF (Egress from SMF)
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-smf-to-upf-pfcp-egress"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
# Selects the Open5GS SMF pods
app.kubernetes.io/name: smf
egress:
- toEndpoints:
# Selects the Open5GS UPF pods as the destination
- matchLabels:
app.kubernetes.io/name: upf
toPorts:
- ports:
- port: "8805" # Standard PFCP port
protocol: UDP
---
# Policy 2: Allow UPF to receive PFCP traffic from SMF (Ingress to UPF)
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-pfcp-from-smf-to-upf-ingress"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
# Selects the Open5GS UPF pods
app.kubernetes.io/name: upf
ingress:
- fromEndpoints:
# Selects the Open5GS SMF pods as the source
- matchLabels:
app.kubernetes.io/name: smf
toPorts:
- ports:
- port: "8805" # Standard PFCP port
protocol: UDP
```
nano allow-dns-policy.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-dns-lookup"
namespace: open5gs # This policy will apply to the open5gs namespace
spec:
# This selects all pods in the namespace. You could make it more specific
# by adding a selector for the SMF pod if you wish.
endpointSelector: {}
egress:
- toEndpoints:
- matchLabels:
# This standard label selects the Kubernetes DNS service endpoint
k8s:io.kubernetes.pod.namespace: kube-system
k8s-app: kube-dns
toPorts:
- ports:
- port: "53"
protocol: UDP
# Optional but recommended: also allow DNS over TCP
rules:
dns:
- matchPattern: "*"
- ports:
- port: "53"
protocol: TCP
rules:
dns:
- matchPattern: "*"
```
nano allow-amf-sbi.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-amf-sbi-egress"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
# This policy applies to the AMF pod
app.kubernetes.io/name: amf
egress:
# This rule allows the AMF to talk to the AUSF, UDM, and SCP
- toEndpoints:
- matchLabels:
# Allow to AUSF
app.kubernetes.io/name: ausf
- matchLabels:
# Allow to UDM
app.kubernetes.io/name: udm
- matchLabels:
# Allow to SCP (Service Communication Proxy)
app.kubernetes.io/name: scp
toPorts:
- ports:
- port: "7777" # The standard SBI port
protocol: TCP
```
nano allow-scp-sbi.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-scp-sbi-egress"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
# This policy applies to the Service Communication Proxy (SCP)
app.kubernetes.io/name: scp
egress:
# This rule allows the SCP to talk to all other core NFs
- toEndpoints:
- matchLabels:
app.kubernetes.io/name: nrf
- matchLabels:
app.kubernetes.io/name: ausf
- matchLabels:
app.kubernetes.io/name: udm
- matchLabels:
app.kubernetes.io/name: pcf
- matchLabels:
app.kubernetes.io/name: nssf
- matchLabels:
app.kubernetes.io/name: smf
toPorts:
- ports:
- port: "7777" # The standard SBI port
protocol: TCP
```
nano allow-ausf-sbi.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-ausf-sbi-egress"
namespace: open5gs
spec:
endpointSelector:
matchLabels:
# This policy applies to the AUSF pod
app.kubernetes.io/name: ausf
egress:
# This rule allows the AUSF to send its registration to the SCP
- toEndpoints:
- matchLabels:
# Allow to SCP
app.kubernetes.io/name: scp
toPorts:
- ports:
- port: "7777"
protocol: TCP
```
nano allow-all-internal.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-all-open5gs-internal-sbi"
namespace: open5gs
spec:
# This policy applies to ALL pods in the open5gs namespace
endpointSelector: {}
# Allow them to send traffic to any other pod in the same namespace on the SBI port
egress:
- toEndpoints:
- matchLabels:
# any pod in the open5gs namespace
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
```
nano allow-open5gs-dns.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "l4-egress-to-dns" # You can rename this if you like
namespace: open5gs
spec:
# This selector applies the policy to all pods in the namespace
endpointSelector: {}
egress:
- toEndpoints:
- matchLabels:
# This standard label selects the Kubernetes DNS service
k8s:io.kubernetes.pod.namespace: kube-system
k8s-app: kube-dns
toPorts:
- ports:
- port: "53"
protocol: UDP
rules:
dns:
- matchPattern: "*"
- ports:
- port: "53"
protocol: TCP
rules:
dns:
- matchPattern: "*"
```
nano allow-all-internal-sbi.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-all-open5gs-internal-sbi"
namespace: open5gs
spec:
# This policy selects EVERY pod in the open5gs namespace
endpointSelector: {}
# This rule allows them to send traffic TO any other pod
# in the same namespace on the main communication port (7777)
egress:
- toEndpoints:
- matchLabels:
# The key "k8s:io.kubernetes.pod.namespace" is a special label
# that Cilium adds to identify pods in a namespace.
"k8s:io.kubernetes.pod.namespace": open5gs
toPorts:
- ports:
- port: "7777"
protocol: TCP
```
nano final-ueransim-policies.yaml
```
# final-ueransim-policies.yaml
# Policy #1: Defines traffic for the UE Simulator Pod(s)
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-ue-traffic"
namespace: ueransim
spec:
# Select only the UE pod(s) using their unique component label
endpointSelector:
matchLabels:
app.kubernetes.io/component: ues
# Egress Rule: Allow the UE to send traffic TO the gNB
egress:
- toEndpoints:
- matchLabels:
app.kubernetes.io/component: gnb
# Ingress Rule: Allow the UE to receive traffic FROM the gNB
ingress:
- fromEndpoints:
- matchLabels:
app.kubernetes.io/component: gnb
---
# Policy #2: Defines traffic for the gNB Pod(s)
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-gnb-traffic"
namespace: ueransim
spec:
# Select only the gNB pod(s) using their unique component label
endpointSelector:
matchLabels:
app.kubernetes.io/component: gnb
# Ingress Rules: Allow the gNB to receive traffic FROM...
ingress:
# ...the UE pod in its own namespace
- fromEndpoints:
- matchLabels:
app.kubernetes.io/component: ues
# ...and from any pod in the open5gs namespace (for AMF replies)
- fromEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": open5gs
# Egress Rules: Allow the gNB to send traffic TO...
egress:
# ...the CoreDNS service for name resolution
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- port: "53"
protocol: ANY
# ...and to any pod in the open5gs namespace (for N2/N3)
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": open5gs
```
nano allow-n2-policy.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
# This policy allows the N2 (SCTP) interface connection
name: "allow-n2-from-gnb-to-amf"
namespace: open5gs
spec:
# This policy applies to the AMF pod(s)
endpointSelector:
matchLabels:
app.kubernetes.io/name: amf
# This rule allows INCOMING (ingress) traffic
ingress:
- fromEndpoints:
# From pods in the 'ueransim' namespace that have the gnb label
- matchLabels:
"k8s:io.kubernetes.pod.namespace": ueransim
app.kubernetes.io/name: ueransim-gnb
toPorts:
- ports:
# The standard 5G NGAP port
- port: "38412"
protocol: SCTP
```
nano allow-ueransim-dns.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
# This policy will be created in the 'ueransim' namespace
name: "allow-dns-for-ueransim"
namespace: ueransim
spec:
# This policy applies to all pods in the ueransim namespace
endpointSelector: {}
# This rule allows outgoing (egress) traffic for DNS
egress:
- toEndpoints:
- matchLabels:
# This is the standard label for the Kubernetes DNS service
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- port: "53"
protocol: ANY # Allows both UDP and TCP for DNS
```
nano allow-n3-policy.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
# This policy allows the N3 (GTP-U) user plane connection
name: "allow-n3-from-gnb-to-upf"
namespace: open5gs
spec:
# This policy applies to the UPF pod(s)
endpointSelector:
matchLabels:
app.kubernetes.io/name: upf
# This rule allows INCOMING (ingress) traffic
ingress:
- fromEndpoints:
# From pods in the 'ueransim' namespace that have the gnb label
- matchLabels:
"k8s:io.kubernetes.pod.namespace": ueransim
app.kubernetes.io/name: ueransim-gnb
toPorts:
- ports:
# The standard 5G GTP-U port
- port: "2152"
protocol: UDP
```
apply:
```
kubectl apply -f allow-smf-diameter-egress.yaml
kubectl apply -f allow-upf-gtpu-egress.yaml
kubectl apply -f allow-pfcp-policy.yaml
kubectl apply -f allow-dns-policy.yaml
kubectl apply -f allow-amf-sbi.yaml
kubectl apply -f allow-scp-sbi.yaml
kubectl apply -f allow-ausf-sbi.yaml
kubectl apply -f allow-open5gs-dns.yaml
kubectl apply -f l4-ingress-amf-from-gnb-sctp.yaml
kubectl apply -f allow-ueransim-dns.yaml
kubectl apply -f allow-n2-policy.yaml
kubectl apply -f allow-n3-policy.yaml
```
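If UEs still fail to register at this point, the AMF log shows the attempts (deployment name `open5gs-amf` assumed from the chart's naming):
```
kubectl logs -n open5gs deployment/open5gs-amf | grep -i registration
```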
nano final-policies.yaml
```
# final-policies.yaml (Version 2)
# ===== POLICY FOR open5gs NAMESPACE =====
# Allows pods within open5gs to communicate with each other, DNS,
# AND allows incoming traffic from the ueransim namespace.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-open5gs-internal-and-dns"
namespace: open5gs
spec:
endpointSelector: {}
# Rule for Ingress (Incoming traffic)
ingress:
# Rule 1: Allow from other pods in the same (open5gs) namespace
- fromEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": open5gs
# Rule 2 (THE FIX): Also allow from any pod in the ueransim namespace
- fromEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": ueransim
# Rule for Egress (Outgoing traffic)
egress:
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": open5gs
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- { port: "53", protocol: ANY }
---
# ===== POLICY FOR ueransim NAMESPACE =====
# This policy is correct and does not need to be changed.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-ueransim-internal-and-egress"
namespace: ueransim
spec:
endpointSelector: {}
ingress:
- fromEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": ueransim
- matchLabels:
"k8s:io.kubernetes.pod.namespace": open5gs
egress:
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": ueransim
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": open5gs
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- { port: "53", protocol: ANY }
---
# ===== POLICY FOR UPF INTERNET ACCESS =====
# This policy is correct and does not need to be changed.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-upf-to-internet"
namespace: open5gs
spec:
endpointSelector:
matchLabels: { app.kubernetes.io/name: upf }
egress:
- toCIDR:
- 0.0.0.0/0
```
Apply:
```
kubectl apply -f final-policies.yaml
```
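Verify that each UE established a PDU session and got a `uesimtun0` address:
```
kubectl exec -n ueransim deployment/ueransim-gnb-ues -- ip addr show uesimtun0
```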
## Testing
1. Install iperf (master node)
nano iperf-server.yaml
```
# iperf-server.yaml
apiVersion: v1
kind: Service
metadata:
name: iperf-server-svc
namespace: default
spec:
ports:
- port: 5201
protocol: TCP
selector:
app: iperf-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: iperf-server
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: iperf-server
template:
metadata:
labels:
app: iperf-server
spec:
# This forces the pod to run on your master node
nodeSelector:
kubernetes.io/hostname: kube-master
# This allows the pod to run even if the master node is tainted
tolerations:
- key: "node-role.kubernetes.io/control-plane"
operator: "Exists"
effect: "NoSchedule"
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
containers:
- name: iperf-server
image: networkstatic/iperf3
args:
- "-s" # Run in server mode
ports:
- containerPort: 5201
```
nano allow-iperf-policy.yaml
```
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-upf-to-iperf-server"
# This policy must be in the UPF's namespace
namespace: open5gs
spec:
# This policy applies to the UPF pod(s)
endpointSelector:
matchLabels:
app.kubernetes.io/name: upf
# This rule allows outgoing (egress) traffic
egress:
- toEndpoints:
# To pods in the 'default' namespace with the 'iperf-server' label
- matchLabels:
"k8s:io.kubernetes.pod.namespace": default
app: iperf-server
toPorts:
- ports:
# On the iperf3 port
- port: "5201"
protocol: TCP
```
Apply:
```
kubectl apply -f iperf-server.yaml
kubectl apply -f allow-iperf-policy.yaml
```
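The server pod should report that it is listening (standard iperf3 startup output):
```
kubectl logs -n default deployment/iperf-server
```
Expect a line like `Server listening on 5201`.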
2. Install iperf on the UE
Exec into the UE pod:
```
kubectl exec deployment/ueransim-gnb-ues -ti -n ueransim -- bash
```
Install iperf3:
```
apt-get update
apt-get install -y iperf3
```
Find the IP of uesimtun0:
```
ip addr show uesimtun0
```
3. Time-based test
Run (replace `<uesimtun0>` with that interface's IP):
```
iperf3 -c iperf-server-svc.default.svc.cluster.local -B <uesimtun0> -i 1 -t 30
```
> -t 30 means run for 30 seconds
4. Size-based test
To run the test by transfer size instead:
```
iperf3 -c iperf-server-svc.default.svc.cluster.local -B <uesimtun0> -n 500M
```
> -n 500M means transfer 500 MB of data
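A UDP run works the same way if you want jitter/loss numbers; the target bitrate below is just an example:
```
iperf3 -c iperf-server-svc.default.svc.cluster.local -B <uesimtun0> -u -b 10M -t 10
```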