# Traffic control
## Routing requests with Istio
Clean up any existing resources:
```bash
kubectl delete deployment,svc,gateway,virtualservice,destinationrule --all -n istio-action
```
Deploy v1 of the catalog service:
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: catalog
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: catalog
  name: catalog
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: catalog
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: catalog
    version: v1
  name: catalog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog
      version: v1
  template:
    metadata:
      labels:
        app: catalog
        version: v1
    spec:
      serviceAccountName: catalog
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: istioinaction/catalog:latest
        imagePullPolicy: IfNotPresent
        name: catalog
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        securityContext:
          privileged: false
EOF
```
Create an Istio gateway:
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: catalog-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-catalog
      protocol: HTTP
    hosts:
    - "catalog.istioinaction.io"
EOF
```
Create a VirtualService:
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog-vs-from-gw
spec:
  hosts:
  - "catalog.istioinaction.io"
  gateways:
  - catalog-gateway
  http:
  - route:
    - destination:
        host: catalog
        port:
          number: 80
EOF
```
We can now reach the catalog service from outside the cluster by calling into the Istio gateway:
```bash
curl http://localhost/items -H "Host: catalog.istioinaction.io" ; echo
```
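If the call succeeds, the ingress gateway is forwarding traffic to the catalog service. As a quick check (a minimal sketch, assuming the default `istio-ingressgateway` deployment in `istio-system` and a recent `istioctl`; older versions may require the pod name instead of `deploy/<name>`), you can also confirm that the Gateway and VirtualService were picked up:
```bash
# Confirm the Istio resources exist
kubectl get gateway,virtualservice

# Inspect the routes programmed into the ingress gateway proxy;
# catalog.istioinaction.io should appear among the virtual hosts
istioctl proxy-config routes deploy/istio-ingressgateway -n istio-system
```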
Deploy v2 of the catalog service:
```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: catalog
    version: v2
  name: catalog-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog
      version: v2
  template:
    metadata:
      labels:
        app: catalog
        version: v2
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: SHOW_IMAGE
          value: "true"
        image: istioinaction/catalog:latest
        imagePullPolicy: IfNotPresent
        name: catalog
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        securityContext:
          privileged: false
EOF
```
If you call the catalog service multiple times now, some requests are routed to v2. Let’s route all traffic to v1 of the catalog service.
We need to give Istio a hint about how to identify which workloads are v1 and which are v2. In our Kubernetes Deployment resource for v1 of the catalog service, we use the labels `app: catalog` and `version: v1`. For the Deployment that specifies v2 of catalog, we use the labels `app: catalog` and `version: v2`.
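As a quick sanity check (assuming both Deployments run in the current namespace), we can confirm that the pods carry the labels Istio will match on:
```bash
# List the catalog pods with their labels; expect one pod labeled version=v1
# and one labeled version=v2
kubectl get pods -l app=catalog --show-labels
```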
For Istio, we create a DestinationRule that specifies these different versions as subsets:
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
spec:
  host: catalog
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
EOF
```
Let’s update our VirtualService to route all traffic to v1 of catalog:
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog-vs-from-gw
spec:
  hosts:
  - "catalog.istioinaction.io"
  gateways:
  - catalog-gateway
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v1
        port:
          number: 80
EOF
```
At this point, all traffic is routed to v1 of the catalog service.
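We can verify this by sending a batch of requests through the gateway. Only v2 (which runs with `SHOW_IMAGE=true`) includes the `imageUrl` field in its responses, so with the subset route in place the count below should be 0 (a small sketch of the same check used later in this section):
```bash
# Send repeated requests; count how many responses contain the v2-only
# imageUrl field. With all traffic pinned to v1, expect 0.
for i in {1..20}; do
  curl -s http://localhost/items -H "Host: catalog.istioinaction.io" | grep -i imageUrl
done | wc -l
```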

Now, we route any traffic that includes the HTTP header `x-istio-cohort: internal` to v2 of catalog.
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog-vs-from-gw
spec:
  hosts:
  - "catalog.istioinaction.io"
  gateways:
  - catalog-gateway
  http:
  - match:
    - headers:
        x-istio-cohort:
          exact: "internal"
    route:
    - destination:
        host: catalog
        subset: version-v2
  - route:
    - destination:
        host: catalog
        subset: version-v1
EOF
```
Let's test:
```bash
curl http://localhost/items -H "Host: catalog.istioinaction.io" -H "x-istio-cohort: internal"
```
When we call our service without the header, we still see v1 responses. However, if we send a request with the `x-istio-cohort` header set to `internal`, we are routed to v2 of the catalog service and see the expected response.
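A quick side-by-side comparison makes the routing decision visible:
```bash
# Without the header: routed to v1
curl -s http://localhost/items -H "Host: catalog.istioinaction.io"

# With the header: routed to v2 (the response should include the imageUrl field)
curl -s http://localhost/items \
  -H "Host: catalog.istioinaction.io" \
  -H "x-istio-cohort: internal"
```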

For routing between services inside the mesh (not through the ingress gateway), the VirtualService uses the special `mesh` gateway:
```yaml
gateways:
- mesh
```
For example:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  gateways:
  - mesh # the virtual service applies to all sidecars in the mesh
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v1
```
## Traffic shifting
In this section, we distribute live traffic across the versions of a particular service based on weights. For example, we can route 10% of the traffic to v2 while the remaining 90% still goes to v1.
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  gateways:
  - mesh
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v1
      weight: 90
    - destination:
        host: catalog
        subset: version-v2
      weight: 10
EOF
```
Let's test:
```bash
for i in {1..100}; do curl -s http://localhost/api/catalog -H "Host: webapp.istioinaction.io" | grep -i imageUrl; done | wc -l
```
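Roughly 10 of the 100 responses should contain the `imageUrl` field. As confidence in v2 grows, you shift more weight to it by reapplying the VirtualService with updated values (the weights must add up to 100). A minimal sketch for an even 50/50 split:
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  gateways:
  - mesh
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v1
      weight: 50
    - destination:
        host: catalog
        subset: version-v2
      weight: 50
EOF
```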
## Traffic mirroring
To reduce risk during the deployment process, we can use traffic mirroring. With this approach, production traffic is copied and sent to the new deployment out of band, so it never interferes with the traffic served to customers.

Using the mirroring approach, we can direct real production traffic at the new deployment and get real feedback about how the new code behaves without impacting users. Istio supports traffic mirroring, for example:
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  gateways:
  - mesh
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v1
      weight: 100
    mirror:
      host: catalog
      subset: version-v2
EOF
```
With this VirtualService definition, we route 100% of live traffic to v1 of the catalog service, but we also mirror the traffic to v2. This mirrored request cannot affect the real request because the Istio proxy that does the mirroring ignores any responses (success/failure) from the mirrored cluster.
```bash
curl -s http://localhost/api/catalog -H "Host: webapp.istioinaction.io"
```
If we check the logs of catalog v2, we also see log entries there. The request that makes it to v1 is the live request, and that’s the response we see; the request that makes it to v2 is mirrored and sent fire-and-forget.
Note that when the mirrored traffic reaches catalog v2, the *Host* header has been modified to indicate that it is mirrored/shadowed traffic: instead of `Host: catalog:8080`, it is `Host: catalog-shadow:8080`.
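To see the mirrored requests, check the logs of the v2 workload (using the Deployment and container names from above); there should be a log entry for each mirrored request even though no client ever received a response from v2:
```bash
# Tail the logs of catalog v2; mirrored requests show up here
kubectl logs deploy/catalog-v2 -c catalog --tail=20
```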
## Routing to services outside the cluster
Let’s configure Istio to block external traffic. Run the following command to change Istio’s default from `ALLOW_ANY` to `REGISTRY_ONLY`.
```bash
istioctl install --set profile=demo --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
```
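To confirm the setting took effect, you can inspect the mesh configuration (a minimal check, assuming the default `istio` ConfigMap in `istio-system`):
```bash
# The outbound traffic policy mode should now read REGISTRY_ONLY
kubectl get configmap istio -n istio-system -o yaml | grep -A1 outboundTrafficPolicy
```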

Since not all services live in the service mesh, we need a way for services inside the mesh to communicate with those outside the mesh.
Istio builds up an internal service registry of all the services that are known by the mesh and can be accessed within it. We use a ServiceEntry to add additional entries, such as external services, to this registry.

Let's practice. First, we create a sample forum application that uses `jsonplaceholder.typicode.com`:
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: forum
  name: forum
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: forum
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: forum
    version: v1
  name: forum
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: forum
      version: v1
  template:
    metadata:
      labels:
        app: forum
        version: v1
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: istioinaction/forum:latest
        imagePullPolicy: IfNotPresent
        name: forum
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        securityContext:
          privileged: false
EOF
```
Let’s try calling our new forum service from within the mesh:
```bash
curl http://localhost/api/users -H "Host: webapp.istioinaction.io"
```
We will receive an error:
```bash
error calling Forum service
```
Because all traffic going outside the cluster is denied, we need to allow this call explicitly. We can do that by creating an Istio ServiceEntry resource for the `jsonplaceholder.typicode.com` host:
```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: jsonplaceholder
spec:
  hosts:
  - jsonplaceholder.typicode.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
EOF
```
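Before retesting, you can verify that the entry was created and that the mesh now knows about the external host (the `istioctl` invocation is a sketch; depending on your version you may need to address the forum pod directly instead of the Deployment):
```bash
# The ServiceEntry should now appear among the mesh's known services
kubectl get serviceentry jsonplaceholder -o yaml

# Optionally, confirm the forum sidecar has a cluster for the external host
istioctl proxy-config clusters deploy/forum | grep jsonplaceholder
```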

Let's test:
```bash
curl http://localhost/api/users -H "Host: webapp.istioinaction.io" | jq -r
```
###### tags: `istio`