# Traefik vs Envoy Comparison with Podinfo
## Comparison Table
| Feature | Traefik | Envoy |
|---------|---------|-------|
| **Ease of use** | ⭐⭐⭐⭐⭐ Very intuitive | ⭐⭐⭐ Requires more expertise |
| **Auto-discovery** | ⭐⭐⭐⭐⭐ Native for Docker/K8s | ⭐⭐⭐ Needs configuration |
| **Performance** | ⭐⭐⭐⭐ Excellent | ⭐⭐⭐⭐⭐ Optimal |
| **Dashboard** | ⭐⭐⭐⭐⭐ Built-in web UI | ⭐⭐ Admin API only |
| **Configuration** | Labels/annotations | YAML files / xDS API |
| **Ecosystem** | Standalone | Foundation of service meshes (Istio) |
| **Automatic TLS** | ⭐⭐⭐⭐⭐ Built-in Let's Encrypt | ⭐⭐⭐ Manual |
| **Observability** | Good metrics | Very detailed metrics |
---
## 1. Docker Compose + Traefik
### Advantages
- Configuration through labels (very intuitive)
- Automatic discovery of containers
- Built-in web dashboard
- Zero-downtime deployments
- Native Let's Encrypt support
### Disadvantages
- Less granularity for advanced configuration
- More limited observability than Envoy
### docker-compose-traefik.yml
```yaml
version: '3.8'
services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--metrics.prometheus=true"
    ports:
      - "80:80"
      - "8080:8080" # Dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-net
  podinfo-1:
    image: stefanprodan/podinfo:latest
    container_name: podinfo-1
    environment:
      - PODINFO_UI_COLOR=#34577c
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.podinfo.rule=Host(`podinfo.localhost`)"
      - "traefik.http.routers.podinfo.entrypoints=web"
      - "traefik.http.services.podinfo.loadbalancer.server.port=9898"
      - "traefik.http.services.podinfo.loadbalancer.healthcheck.path=/healthz"
      - "traefik.http.services.podinfo.loadbalancer.healthcheck.interval=10s"
    networks:
      - traefik-net
  podinfo-2:
    image: stefanprodan/podinfo:latest
    container_name: podinfo-2
    environment:
      - PODINFO_UI_COLOR=#d9534f
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.podinfo.rule=Host(`podinfo.localhost`)"
      - "traefik.http.routers.podinfo.entrypoints=web"
      - "traefik.http.services.podinfo.loadbalancer.server.port=9898"
      - "traefik.http.services.podinfo.loadbalancer.healthcheck.path=/healthz"
      - "traefik.http.services.podinfo.loadbalancer.healthcheck.interval=10s"
    networks:
      - traefik-net
  podinfo-3:
    image: stefanprodan/podinfo:latest
    container_name: podinfo-3
    environment:
      - PODINFO_UI_COLOR=#5cb85c
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.podinfo.rule=Host(`podinfo.localhost`)"
      - "traefik.http.routers.podinfo.entrypoints=web"
      - "traefik.http.services.podinfo.loadbalancer.server.port=9898"
      - "traefik.http.services.podinfo.loadbalancer.healthcheck.path=/healthz"
      - "traefik.http.services.podinfo.loadbalancer.healthcheck.interval=10s"
    networks:
      - traefik-net
networks:
  traefik-net:
    driver: bridge
```
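The three podinfo services differ only in `PODINFO_UI_COLOR` and carry identical Traefik labels, so Traefik pools them into a single load-balanced service. If per-instance colors are not important to you, an alternative is to let Compose manage the replica count. This is only a sketch, not the file above: it assumes you collapse the three blocks into one `podinfo` service and drop `container_name`, which must be unique per container.

```bash
# Hypothetical alternative: with a single "podinfo" service block (no
# container_name), Compose creates the replicas itself.
docker-compose -f docker-compose-traefik.yml up -d --scale podinfo=3

# Traefik still discovers every replica through the Docker provider and
# load-balances across them, because they all share the same service labels.
```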
### How to test it
```bash
# Start the services
docker-compose -f docker-compose-traefik.yml up -d
# Open the Traefik dashboard
open http://localhost:8080
# Test the load balancing (run this several times)
curl http://podinfo.localhost
# View the metrics
curl http://localhost:8080/metrics
# Tear everything down
docker-compose -f docker-compose-traefik.yml down
```
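To actually see the round-robin across the three containers, poll the root endpoint and pick out the `hostname` field of podinfo's JSON response. A small sketch, assuming `jq` is installed; the explicit `Host` header also works on systems where `*.localhost` names do not resolve:

```bash
# Each response should report a different container hostname in rotation.
for i in $(seq 1 6); do
  curl -s -H "Host: podinfo.localhost" http://localhost | jq -r '.hostname'
done
```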
---
## 2. Docker Compose + Envoy
### Advantages
- Fine-grained control over load balancing
- Outstanding observability
- Very flexible configuration
- Superior performance in complex scenarios
- De facto standard for service meshes
### Disadvantages
- More complex configuration (long YAML files)
- No native auto-discovery
- Steeper learning curve
- No web dashboard out of the box
### docker-compose-envoy.yml
```yaml
version: '3.8'
services:
  envoy:
    image: envoyproxy/envoy:v1.29-latest
    container_name: envoy
    ports:
      - "80:10000"
      - "9901:9901" # Admin interface
    volumes:
      - ./envoy-config.yaml:/etc/envoy/envoy.yaml:ro
    command: ["-c", "/etc/envoy/envoy.yaml"]
    networks:
      - envoy-net
    depends_on:
      - podinfo-1
      - podinfo-2
      - podinfo-3
  podinfo-1:
    image: stefanprodan/podinfo:latest
    container_name: podinfo-1
    environment:
      - PODINFO_UI_COLOR=#34577c
    networks:
      - envoy-net
  podinfo-2:
    image: stefanprodan/podinfo:latest
    container_name: podinfo-2
    environment:
      - PODINFO_UI_COLOR=#d9534f
    networks:
      - envoy-net
  podinfo-3:
    image: stefanprodan/podinfo:latest
    container_name: podinfo-3
    environment:
      - PODINFO_UI_COLOR=#5cb85c
    networks:
      - envoy-net
networks:
  envoy-net:
    driver: bridge
```
### envoy-config.yaml
```yaml
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 10000
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                access_log:
                  - name: envoy.access_loggers.stdout
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: podinfo_service
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: podinfo_cluster
                            timeout: 15s
  clusters:
    - name: podinfo_cluster
      connect_timeout: 5s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      health_checks:
        - timeout: 1s
          interval: 10s
          unhealthy_threshold: 2
          healthy_threshold: 2
          http_health_check:
            path: "/healthz"
      load_assignment:
        cluster_name: podinfo_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: podinfo-1
                      port_value: 9898
              - endpoint:
                  address:
                    socket_address:
                      address: podinfo-2
                      port_value: 9898
              - endpoint:
                  address:
                    socket_address:
                      address: podinfo-3
                      port_value: 9898
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
```
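Before wiring the file into Compose, it can be handy to let Envoy itself validate it. A sketch using Envoy's `--mode validate` option, with the same image tag as the compose file:

```bash
# Parses the configuration, reports any errors and exits without starting the proxy.
docker run --rm \
  -v "$(pwd)/envoy-config.yaml:/etc/envoy/envoy.yaml:ro" \
  envoyproxy/envoy:v1.29-latest \
  --mode validate -c /etc/envoy/envoy.yaml
```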
### How to test it
```bash
# Create the Envoy configuration file (copy the content above)
# and save it as envoy-config.yaml in the same directory
# Start the services
docker-compose -f docker-compose-envoy.yml up -d
# Open the Envoy admin interface
open http://localhost:9901
# Test the load balancing (run this several times)
curl http://localhost
# View cluster statistics
curl http://localhost:9901/clusters
# View the running configuration
curl http://localhost:9901/config_dump
# Tear everything down
docker-compose -f docker-compose-envoy.yml down
```
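Before tearing things down, you can watch the active health checks react: stop one backend and inspect the per-endpoint health flags exposed by the admin interface. A sketch; the `grep` only narrows the fairly verbose `/clusters` output:

```bash
# Stop one backend; after two failed checks Envoy marks the endpoint unhealthy.
docker stop podinfo-2

# The health_flags entry for that endpoint switches away from "healthy".
curl -s http://localhost:9901/clusters | grep health_flags

# Bring the backend back and it rejoins the rotation once checks pass again.
docker start podinfo-2
```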
---
## 3. Kubernetes + Traefik
### Advantages
- Native Kubernetes Ingress Controller
- Configured via Ingress resources (the K8s standard)
- Automatic discovery of Services
- Seamless integration with K8s Services
- Web dashboard available
### Disadvantages
- Less fine-grained control than Envoy
- Not the standard choice for a service mesh
### traefik-kubernetes.yaml
```yaml
---
# Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: podinfo
---
# Traefik RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik
  namespace: podinfo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: traefik
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses", "ingressclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: traefik
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik
subjects:
  - kind: ServiceAccount
    name: traefik
    namespace: podinfo
---
# Traefik Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: podinfo
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik
      containers:
        - name: traefik
          image: traefik:v3.0
          args:
            - --api.insecure=true
            - --providers.kubernetesingress=true
            - --entrypoints.web.address=:80
            - --metrics.prometheus=true
          ports:
            - name: web
              containerPort: 80
            - name: dashboard
              containerPort: 8080
---
# Traefik Service
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: podinfo
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: web
      port: 80
      targetPort: 80
    - name: dashboard
      port: 8080
      targetPort: 8080
---
# Podinfo Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: podinfo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo:latest
          ports:
            - containerPort: 9898
              name: http
          env:
            - name: PODINFO_UI_COLOR
              value: "#34577c"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 9898
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /readyz
              port: 9898
            initialDelaySeconds: 5
            periodSeconds: 10
---
# Podinfo Service
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: podinfo
spec:
  selector:
    app: podinfo
  ports:
    - port: 80
      targetPort: 9898
      name: http
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
  namespace: podinfo
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  ingressClassName: traefik
  rules:
    - host: podinfo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo
                port:
                  number: 80
---
# IngressClass
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
spec:
  controller: traefik.io/ingress-controller
```
### How to test it
```bash
# Apply the configuration
kubectl apply -f traefik-kubernetes.yaml
# Wait until the pods are ready
kubectl wait --for=condition=ready pod -l app=podinfo -n podinfo --timeout=60s
kubectl wait --for=condition=ready pod -l app=traefik -n podinfo --timeout=60s
# Get the IP of the Traefik Service
kubectl get svc traefik -n podinfo
# If you are on Minikube
minikube service traefik -n podinfo
# Port-forward to access it locally
kubectl port-forward -n podinfo svc/traefik 8080:80
# In another terminal, test the load balancing
curl -H "Host: podinfo.local" http://localhost:8080
# Open the dashboard
kubectl port-forward -n podinfo svc/traefik 8081:8080
open http://localhost:8081/dashboard/
# Check the pods and logs
kubectl get pods -n podinfo
kubectl logs -n podinfo -l app=podinfo --tail=20
# Clean up
kubectl delete namespace podinfo
```
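Before the cleanup step, while the port-forward on port 8080 is still active, a short loop makes the balancing across the three pods visible. A sketch, assuming `jq` is installed:

```bash
# The hostname field should cycle through the three podinfo pod names.
for i in $(seq 1 6); do
  curl -s -H "Host: podinfo.local" http://localhost:8080 | jq -r '.hostname'
done
```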
---
## 4. Kubernetes + Envoy
### Advantages
- Maximum control over routing and policies
- Foundation for service meshes (Istio)
- First-class observability
- Very granular configuration
- Exceptional performance
### Disadvantages
- Considerably more complex configuration
- Requires manual endpoint management (see the note after the manifest below)
- Steep learning curve
- Not a standard Ingress Controller
### envoy-kubernetes.yaml
```yaml
---
# Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: podinfo
---
# Podinfo Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: podinfo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo:latest
          ports:
            - containerPort: 9898
              name: http
          env:
            - name: PODINFO_UI_COLOR
              value: "#34577c"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 9898
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /readyz
              port: 9898
            initialDelaySeconds: 5
            periodSeconds: 10
---
# Podinfo Service
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: podinfo
spec:
  selector:
    app: podinfo
  ports:
    - port: 9898
      targetPort: 9898
      name: http
  type: ClusterIP
---
# Envoy ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-config
  namespace: podinfo
data:
  envoy.yaml: |
    static_resources:
      listeners:
        - name: listener_0
          address:
            socket_address:
              address: 0.0.0.0
              port_value: 10000
          filter_chains:
            - filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    stat_prefix: ingress_http
                    access_log:
                      - name: envoy.access_loggers.stdout
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
                    http_filters:
                      - name: envoy.filters.http.router
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: podinfo_service
                          domains: ["*"]
                          routes:
                            - match:
                                prefix: "/"
                              route:
                                cluster: podinfo_cluster
                                timeout: 15s
      clusters:
        - name: podinfo_cluster
          connect_timeout: 5s
          type: STRICT_DNS
          lb_policy: ROUND_ROBIN
          health_checks:
            - timeout: 1s
              interval: 10s
              unhealthy_threshold: 2
              healthy_threshold: 2
              http_health_check:
                path: "/healthz"
          load_assignment:
            cluster_name: podinfo_cluster
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: podinfo.podinfo.svc.cluster.local
                          port_value: 9898
    admin:
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 9901
---
# Envoy Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy
  namespace: podinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      containers:
        - name: envoy
          image: envoyproxy/envoy:v1.29-latest
          command: ["/usr/local/bin/envoy"]
          args:
            - -c
            - /etc/envoy/envoy.yaml
          ports:
            - containerPort: 10000
              name: http
            - containerPort: 9901
              name: admin
          volumeMounts:
            - name: config
              mountPath: /etc/envoy
      volumes:
        - name: config
          configMap:
            name: envoy-config
---
# Envoy Service
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: podinfo
spec:
  type: LoadBalancer
  selector:
    app: envoy
  ports:
    - name: http
      port: 80
      targetPort: 10000
    - name: admin
      port: 9901
      targetPort: 9901
```
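One subtlety in this manifest: `podinfo_cluster` resolves the ClusterIP Service name with STRICT_DNS, so Envoy only ever sees a single address and the actual spreading across pods is done by kube-proxy. If you want Envoy to balance and health-check each pod itself, a common option is a headless Service, whose DNS name resolves to the individual pod IPs. This is only a sketch under that assumption; `podinfo-headless` is a name introduced here, not part of the manifest above:

```bash
# Hypothetical headless Service: clusterIP None makes DNS return one A record
# per ready pod, which the STRICT_DNS cluster picks up as separate endpoints.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: podinfo-headless
  namespace: podinfo
spec:
  clusterIP: None
  selector:
    app: podinfo
  ports:
    - port: 9898
      targetPort: 9898
      name: http
EOF

# Then change the endpoint address in the ConfigMap to
# podinfo-headless.podinfo.svc.cluster.local and restart the Envoy deployment.
```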
### How to test it
```bash
# Apply the configuration
kubectl apply -f envoy-kubernetes.yaml
# Wait until the pods are ready
kubectl wait --for=condition=ready pod -l app=podinfo -n podinfo --timeout=60s
kubectl wait --for=condition=ready pod -l app=envoy -n podinfo --timeout=60s
# Get the IP of the Envoy Service
kubectl get svc envoy -n podinfo
# If you are on Minikube
minikube service envoy -n podinfo
# Port-forward to access it locally
kubectl port-forward -n podinfo svc/envoy 8080:80
# In another terminal, test the load balancing
curl http://localhost:8080
# Open the admin interface
kubectl port-forward -n podinfo svc/envoy 9901:9901
open http://localhost:9901
# View detailed statistics
curl http://localhost:9901/stats
curl http://localhost:9901/clusters
# Check the pods and logs
kubectl get pods -n podinfo
kubectl logs -n podinfo -l app=envoy --tail=50
# Clean up
kubectl delete namespace podinfo
```
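Because Envoy targets the Service's ClusterIP in this setup (see the note after the manifest), scaling podinfo needs no change on the Envoy side; kube-proxy simply starts spreading connections over the new endpoints. A quick sketch, assuming the port-forward on port 8080 from the block above is still running and `jq` is installed:

```bash
# Scale the backend; Envoy's configuration stays untouched.
kubectl scale deployment/podinfo -n podinfo --replicas=5
kubectl get pods -n podinfo -l app=podinfo

# New pods start receiving traffic once their readiness probe passes.
for i in $(seq 1 10); do
  curl -s http://localhost:8080 | jq -r '.hostname'
done
```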
---
## Final Recommendations
### Use Traefik if:
- You need a quick, simple solution
- You want auto-discovery with no extra configuration
- You value the built-in web dashboard
- You are running typical web/API applications
- You prefer simple, declarative configuration
### Use Envoy if:
- You need fine-grained control over routing
- You want advanced observability
- You are building a service mesh
- You need maximum performance
- You have complex traffic-management requirements
- You want to follow modern cloud-native standards
### Testing tips
1. Run `curl` repeatedly to see the load balancing in action
2. Stop a container/pod and watch the health checks react (see the sketch after this list)
3. Explore the dashboards and metrics of each solution
4. Compare ease of configuration against flexibility
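
For the Docker Compose + Traefik setup, tip 2 can look like the following sketch. It assumes the stack from section 1 is running and uses the API exposed by `--api.insecure=true`; `jq` is optional:

```bash
# Stop one backend and confirm requests keep succeeding on the remaining two.
docker stop podinfo-3
curl -s -H "Host: podinfo.localhost" http://localhost | jq -r '.hostname'

# The Traefik API lists the podinfo service with per-server status, which
# should flip to DOWN for the stopped container once its health check fails.
curl -s http://localhost:8080/api/http/services | jq '.[] | select(.name | startswith("podinfo"))'

# Restore the backend.
docker start podinfo-3
```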
### Additional resources
- Traefik: https://doc.traefik.io/traefik/
- Envoy: https://www.envoyproxy.io/docs/envoy/latest/
- Podinfo: https://github.com/stefanprodan/podinfo