# Traffic Routing with a Service Mesh

In this hands-on, we will create a Kubernetes cluster, deploy the Istio service mesh, perform some deployments through the mesh, explore the observability aspects of traffic routing, and do a canary deployment with the help of Istio. If you already have a Kubernetes cluster, you can use it; otherwise, you can create a new Kubernetes cluster with Gardener.

# Create your Gardener Kubernetes cluster

1. Log on to the [Gardener Dashboard](https://dashboard.garden.canary.k8s.ondemand.com/login) and choose CREATE YOUR FIRST PROJECT.

   ![](https://i.imgur.com/0gAU1F3.png)

2. Provide a project Name, and optionally a Description and a Purpose, and choose CREATE.

   **Note:** You will not be able to change the project Name later. The rest of the details are editable.

   :bangbang: Since we are going to create only trial clusters, nothing will be charged. There is no need to enter a cost center.

   ![](https://i.imgur.com/muQ2lS8.png)

3. The result will look similar to this.

   ![](https://i.imgur.com/WAkvcZ7.png)

4. In the dashboard navigation on the left, choose CLUSTERS, and then choose the plus button.

   ![](https://i.imgur.com/qODnO51.png)

5. In the **Infrastructure** section, choose **GCP** as the IaaS provider. In the **Cluster Details** section of the configuration screen, change the autogenerated **Cluster Name**. Make sure that the trial secret selected in the **Infrastructure Details > Secret** dropdown is ```trial-secret-gcp```, and set the minimum number of **worker nodes to 3**.

   ![](https://i.imgur.com/CaQvLRg.png)

6. While the cluster is being created, you can read more about [Gardener](https://gardener.cloud) and [Istio](https://istio.io/).

## Accessing the cluster and toolset

1. Gardener has a useful feature to open a [web terminal](https://github.com/gardener/dashboard/blob/master/docs/Webterminals.md) and communicate with the Kubernetes cluster.
   We will be using this web terminal with a preconfigured image containing the tools needed for this hands-on.

2. Select your cluster --> Overview page --> Access --> Terminal

   ![](https://i.imgur.com/Zw0oo5f.png)

3. Choose **Cluster** as the terminal target, delete the pre-loaded image, enter the following image, and press **Create**:

   ```
   kesavanking/dkom-toolbelt:2.0
   ```

   ![](https://i.imgur.com/MfCMaBT.png)

4. Wait until the pod is created and you will see a terminal in your browser. Check cluster access with `kubectl cluster-info`.

# Install Istio

1. With `istioctl`, we can install Istio on our Kubernetes cluster easily.

   ```
   istioctl install --set profile=default
   ```

2. Wait until the Istio installation completes and check that the pods are running.

   ```
   kubectl get pods -n istio-system
   ```

3. We also need some addons: Prometheus, Kiali, and Jaeger. Deploy those manifests as well.

   ```
   kubectl apply -f handson/addons.yaml
   ```

4. Check that all the addons are running in the `istio-system` namespace.

   ```
   kubectl get pods -n istio-system
   ```

   ![](https://i.imgur.com/rvoUjr4.png)

## Microservices Deployment

This section deploys a 10-tier microservices application for a web-based e-commerce platform; the services are written in different languages and talk to each other.

Create a namespace and label it for Istio injection of the Envoy proxy:

```
kubectl create namespace dev
```

```
kubectl label namespace dev istio-injection=enabled
```

The deployment files are already placed in the **handson** directory of the web terminal image.

**Deploy workloads**

```
kubectl apply -f handson/kubernetes-manifests.yaml -n dev
```

**Deploy VirtualServices and Gateway**

```
kubectl apply -f handson/istio-manifests.yaml -n dev
```

Each pod now runs a special Envoy side-car proxy that intercepts all network communication between the microservices.

Wait until all pods are ready.
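A quick way to confirm that the sidecar injection worked is to check the READY column: every pod in `dev` should report `2/2` (the app container plus `istio-proxy`). The sketch below runs the filter against hard-coded sample data (the pod names are made up); in the web terminal, pipe the real output of `kubectl get pods -n dev --no-headers` through the same `awk` instead.

```shell
# Flag any pod whose READY column is not 2/2, i.e. the Envoy sidecar is
# missing or a container is still starting. The printf lines are sample
# data standing in for `kubectl get pods -n dev --no-headers`.
printf '%s\n' \
  'frontend-6c8d7df656-n49rf 2/2 Running 0 2m' \
  'cartservice-74f56fd4d-abcde 1/2 Running 0 2m' |
  awk '$2 != "2/2" {print "not ready:", $1}'
# prints: not ready: cartservice-74f56fd4d-abcde
```

If the filter prints nothing, all pods are ready and carry their sidecar.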
```
kubectl get pods -n dev
```

```
NAME                                         READY   STATUS    RESTARTS   AGE
pod/adservice-6759f54c89-pv5wm               2/2     Running   0          11h
pod/checkoutservice-cf87949cb-ws987          2/2     Running   0          11h
pod/currencyservice-76fcbdf5f-hch7r          2/2     Running   0          11h
pod/emailservice-7cf8496bc8-cc8gm            2/2     Running   0          11h
pod/frontend-6c8d7df656-n49rf                2/2     Running   0          11h
pod/loadgenerator-5fcb7c5f76-6h8kl           2/2     Running   0          11h
pod/paymentservice-f5fff455-86whd            2/2     Running   0          11h
pod/productcatalogservice-8459f66f56-lq8cg   2/2     Running   0          11h
pod/recommendationservice-5bbb4f867f-gcnxv   2/2     Running   0          11h
pod/shippingservice-7d7cdf797-2ddzx          2/2     Running   0          11h
```

## Access the application

Get the ingress gateway IP:

```
kubectl -n istio-system get service istio-ingressgateway
```

You will see an IP under **EXTERNAL-IP**. Open a browser at http://EXTERNAL_IP

![](https://i.imgur.com/N4LMHBT.jpg)

## Observability

Observability involves collecting logs and metrics, tracing the traffic, and visualizing the service mesh. We will go through traffic visualization and tracing.

## Observe the traffic flow

To get a graphical view, we deployed [Kiali](https://kiali.io/) to observe the service mesh and see the traffic flow.

```
kubectl get svc -n istio-system -l app=kiali
```

You will see an **EXTERNAL-IP**. Open a browser at http://EXTERNAL-IP:20001/

Credentials: admin/admin

![](https://i.imgur.com/ImGfygB.png)

Click on the `Graph` tab. Notice the load generator pod, which keeps sending requests to the frontend microservice; the frontend in turn connects to all the other microservices. Click on each microservice to see the traffic flowing through it.

![](https://i.imgur.com/T6ILeei.png)

You can monitor the complete workload and service flows.
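If you want to build the Kiali URL in one step, note that **EXTERNAL-IP** is the fourth column of the `kubectl get svc` output. A small sketch, shown against sample output (the IP and ports below are made up); in the web terminal, replace the `printf` with the real `kubectl get svc -n istio-system -l app=kiali --no-headers`:

```shell
# Turn a service listing into the Kiali URL; column 4 is EXTERNAL-IP.
# The printf line is sample data standing in for real kubectl output.
printf 'kiali LoadBalancer 10.0.21.5 34.89.10.20 20001:31234/TCP 5m\n' |
  awk '{print "http://" $4 ":20001/"}'
# prints: http://34.89.10.20:20001/
```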
If you accessed Kiali through a `kubectl port-forward` instead of the external IP, close it by pressing `ctrl+c`.

## Observe tracing

As on-the-ground microservice practitioners are quickly realizing, the majority of operational problems that arise when moving to a distributed architecture are ultimately grounded in two areas: **networking and observability.** [Jaeger](https://www.jaegertracing.io/) helps to troubleshoot microservices-based distributed systems, and it is already deployed along with Istio.

```
kubectl get svc -l app=jaeger -n istio-system
```

For the tracing service you should see an **EXTERNAL-IP**. Open a browser at http://EXTERNAL-IP

![](https://i.imgur.com/bz14l4c.png)

Select a service and find its traces. You can drill down into each request call.

For more information, see [Istio](https://istio.io/) and [Observability](https://istio.io/docs/tasks/observability/).

# Canary Deployment using Istio

Kubernetes already provides a way to do version rollouts and canary deployments. Although doing a rollout this way works in simple cases, it is very limited, especially in large-scale cloud environments that receive lots of (and especially varying amounts of) traffic, where autoscaling is needed.

In this task you will partially migrate traffic from an old to a new version of a service using Istio's weighted routing feature. The frontend relies on the productcatalog service to list the products; we will introduce a new version of `productcatalogservice`.

1. Add the label `version: v1` to the deployed productcatalogservice.

   ```
   kubectl patch deployments/productcatalogservice -p '{"spec":{"template":{"metadata":{"labels":{"version":"v1"}}}}}' -n dev
   ```

2. Create a [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/) to define policies that apply to traffic intended for a service after routing has occurred. Create a file using `vim`.
   ```
   apiVersion: networking.istio.io/v1alpha3
   kind: DestinationRule
   metadata:
     name: productcatalogservice
   spec:
     host: productcatalogservice
     subsets:
     - labels:
         version: v1
       name: v1
     - labels:
         version: v2
       name: v2
   ```

   Save the file, then apply it:

   ```
   kubectl apply -f <destinationrule file> -n dev
   ```

3. Deploy productcatalog version 2. Create a file using `vim`.

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: productcatalogservice-v2
   spec:
     selector:
       matchLabels:
         app: productcatalogservice
     template:
       metadata:
         labels:
           app: productcatalogservice
           version: v2
       spec:
         containers:
         - env:
           - name: PORT
             value: '3550'
           image: gcr.io/google-samples/microservices-demo/productcatalogservice:v0.1.3
           livenessProbe:
             exec:
               command:
               - /bin/grpc_health_probe
               - -addr=:3550
           name: server
           ports:
           - containerPort: 3550
           readinessProbe:
             exec:
               command:
               - /bin/grpc_health_probe
               - -addr=:3550
           resources:
             limits:
               cpu: 200m
               memory: 128Mi
             requests:
               cpu: 100m
               memory: 64Mi
         terminationGracePeriodSeconds: 5
   ```

   Save the file, then apply it:

   ```
   kubectl apply -f <product-catalog-v2 file> -n dev
   ```

4. Using `kubectl get pods`, verify that the `v2` pod is running.

   ```
   kubectl get pods -l app=productcatalogservice,version=v2 -n dev
   ```

5. Create an Istio [VirtualService](https://istio.io/latest/docs/concepts/traffic-management/#virtual-services) to split incoming productcatalog traffic between v1 (75%) and v2 (25%). Create a file using `vim`.
   ```
   apiVersion: networking.istio.io/v1alpha3
   kind: VirtualService
   metadata:
     name: productcatalogservice
   spec:
     hosts:
     - productcatalogservice
     http:
     - route:
       - destination:
           host: productcatalogservice
           subset: v1
         weight: 75
       - destination:
           host: productcatalogservice
           subset: v2
         weight: 25
   ```

   Save the file, then apply it:

   ```
   kubectl apply -f <virtual-service file> -n dev
   ```

Open Kiali and navigate to the service graph. Make sure to enable **requests percentage** under **display**.

![](https://i.imgur.com/peP6fOy.png)

When you **zoom in** to productcatalogservice, you should see that approximately 25% of productcatalog requests are going to v2.

![](https://i.imgur.com/TTybvuG.png)

In this way, Istio lets the two versions of the service scale up and down independently, without affecting the traffic distribution between them. Istio's service mesh provides the control necessary to manage traffic distribution with complete independence from deployment scaling. This allows for a simpler, yet significantly more functional, way to do canary testing and rollout.

----

# Delay Injection

In this task we will inject a fixed delay to test the resiliency of the application: on every call to the server, the response is sent only after 5 seconds. Create a file using `vim`.

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - fault:
      delay:
        fixedDelay: 5s
        percentage:
          value: 100
    route:
    - destination:
        host: productcatalogservice
        subset: v1
      weight: 75
    - destination:
        host: productcatalogservice
        subset: v2
      weight: 25
```

Save the file, then apply it:

```
kubectl apply -f <delay file> -n dev
```

In a web browser, navigate again to the ingress gateway frontend and refresh the homepage a few times. You should notice that the frontend is periodically slower to load. Open Jaeger and check the productcatalogservice traces.
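Both VirtualServices above split traffic with `weight: 75` and `weight: 25`, and each request independently picks a subset, so the percentages Kiali shows are statistical rather than exact. The following local simulation (plain `awk`, only a rough stand-in for Envoy's weighted load balancing) shows 1,000 requests converging on the configured ratio:

```shell
# Simulate 1000 requests routed with weight 75 (v1) vs 25 (v2).
# rand() is only a stand-in for Envoy's load-balancing choice.
awk 'BEGIN {
  srand()
  for (i = 0; i < 1000; i++) if (rand() < 0.75) v1++; else v2++
  print "v1=" v1, "v2=" v2
}'
# example output: v1=756 v2=244 (varies run to run)
```

The counts hover around 750/250 but fluctuate, which is exactly what the Kiali graph reflects.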
You should see requests with a response time of `5s`.

![](https://i.imgur.com/ZqrHGvs.png)

:vertical_traffic_light: HAPPY ROUTING! :vertical_traffic_light:

<img src="https://i.imgur.com/TuNd8iu.png" width="20"> https://istio.io
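## Cleanup (optional)

When you are done experimenting, you can remove the injected delay by re-applying the VirtualService without the `fault` block (this is the same definition used in the canary step), restoring the plain 75/25 split:

```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - route:
    - destination:
        host: productcatalogservice
        subset: v1
      weight: 75
    - destination:
        host: productcatalogservice
        subset: v2
      weight: 25
```

Apply it with `kubectl apply -f <virtual-service file> -n dev` as before.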