Workshop Materials: https://hackmd.io/@ryanjbaxter/spring-on-k8s-workshop
Ryan Baxter, Spring Cloud Engineer, VMware
Dave Syer, Spring Engineer, VMware
Everyone will need:
NOTE: If you are following these notes from a KubeAcademy event, all the prerequisites will be provided in the lab. You only need to worry about these if you are going to work through the lab on your own.
kubectl, to work with the clusters.
Log in to Strigo with the link and access code provided by KubeAcademy.
Configuring kubectl
Run this command in the terminal:
$ kind-setup
Cluster already active: kind
Setting up kubeconfig
Run the below command to verify kubectl is configured correctly
$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:43723
KubeDNS is running at https://127.0.0.1:43723/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
NOTE: it might take a minute or so after the VM launches to get the Kubernetes API server up and running, so your first few attempts at using kubectl may be very slow or fail. After that it should be responsive.
In the Lab:
Run these commands in your terminal (please copy them verbatim to make the rest of the lab run smoothly)
$ cd demo
$ curl https://start.spring.io/starter.tgz -d artifactId=k8s-demo-app -d name=k8s-demo-app -d packageName=com.example.demo -d dependencies=web,actuator -d javaVersion=11 | tar -xzf -
Open the IDE using the "IDE" button at the top of the lab - it might be obscured by the "Call for Assistance" button.
Working on your own:
Modify K8sDemoAppApplication.java and add a @RestController. Be sure to add the @RestController annotation on the class, not just the @GetMapping on the method:
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class K8sDemoAppApplication {

    public static void main(String[] args) {
        SpringApplication.run(K8sDemoAppApplication.class, args);
    }

    @GetMapping("/")
    public String hello() {
        return "Hello World";
    }
}
In a terminal window run
$ ./mvnw spring-boot:run
The app will start on port 8080
Make an HTTP request to http://localhost:8080 in another terminal
$ curl http://localhost:8080; echo
Hello World
Spring Boot Actuator adds several other endpoints to our app
$ curl localhost:8080/actuator | jq .
{
  "_links": {
    "self": {
      "href": "http://localhost:8080/actuator",
      "templated": false
    },
    "health": {
      "href": "http://localhost:8080/actuator/health",
      "templated": false
    },
    "info": {
      "href": "http://localhost:8080/actuator/info",
      "templated": false
    }
  }
}
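For example, the health endpoint listed above returns a simple status document; a quick local check might look like this (output assumes the app's default health configuration):
$ curl localhost:8080/actuator/health
{"status":"UP"}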
Be sure to stop the Java process before continuing, or else you might run into port binding issues since the app is using port 8080.
The first step in running the app on Kubernetes is producing a container image that we can then deploy to Kubernetes. Spring Boot's build-image goal does this for us:
$ ./mvnw spring-boot:build-image
Running docker images will allow you to see the built container:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s-demo-app 0.0.1-SNAPSHOT ab449be57b9d 5 minutes ago 124MB
$ docker run --name k8s-demo-app -p 8080:8080 k8s-demo-app:0.0.1-SNAPSHOT
$ curl http://localhost:8080; echo
Hello World
Be sure to stop the docker container before continuing. You can stop the container and remove it by running $ docker rm -f k8s-demo-app
To make the image available to the cluster, build it with a name that points at the local registry (localhost:5000 in the lab) and push it:
$ ./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=localhost:5000/apps/demo
$ docker push localhost:5000/apps/demo
You can verify the image is in the registry by querying its catalog:
$ curl localhost:5000/v2/_catalog
{"repositories":["apps/demo"]}
Next we generate the Kubernetes manifests for the app. The --dry-run flag allows us to generate the YAML without actually deploying anything to Kubernetes:
$ mkdir k8s
$ kubectl create deployment k8s-demo-app --image localhost:5000/apps/demo -o yaml --dry-run=client > k8s/deployment.yaml
The generated deployment.yaml should look similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-demo-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: k8s-demo-app
    spec:
      containers:
      - image: localhost:5000/apps/demo
        name: k8s-demo-app
        resources: {}
status: {}
$ kubectl create service clusterip k8s-demo-app --tcp 80:8080 -o yaml --dry-run=client > k8s/service.yaml
The generated service.yaml should look similar to this:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  ports:
  - name: 80-8080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: k8s-demo-app
  type: ClusterIP
status:
  loadBalancer: {}
Now you can deploy everything in the k8s directory to the cluster. If you have watch installed you can watch as the pods and services get created:
$ watch -n 1 kubectl get all
In another terminal, apply the manifests:
$ kubectl apply -f ./k8s
Every 1.0s: kubectl get all Ryans-MacBook-Pro.local: Wed Jan 29 17:23:28 2020
NAME READY STATUS RESTARTS AGE
pod/k8s-demo-app-d6dd4c4d4-7t8q5 1/1 Running 0 68m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k8s-demo-app ClusterIP 10.100.200.243 <none> 80/TCP 68m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/k8s-demo-app 1/1 1 1 68m
NAME DESIRED CURRENT READY AGE
replicaset.apps/k8s-demo-app-d6dd4c4d4 1 1 1 68m
watch is a useful command line tool that you can install on Linux and macOS. All it does is continuously execute the command you pass it. You can also just run the kubectl command specified after watch, but the output will be static as opposed to updating constantly.
Because the service uses the ClusterIP type, it is only reachable from within the cluster:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k8s-demo-app ClusterIP 10.100.200.243 <none> 80/TCP 68m
To test the app we can use kubectl port-forward:
$ kubectl port-forward service/k8s-demo-app 8080:80
Now we can curl localhost:8080 and it will be forwarded to the service in the cluster:
$ curl http://localhost:8080; echo
Hello World
Congrats you have deployed your first app to Kubernetes 🎉
Be sure to stop the kubectl port-forward process before moving on.
NOTE: LoadBalancer features are platform specific. The visibility of your app after changing the service type might depend a lot on where it is deployed (e.g. per cloud provider).
If we want to expose the service publicly we can change the service type to LoadBalancer. Open k8s/service.yaml and change ClusterIP to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  ...
  type: LoadBalancer
  ...
Now apply the updated service.yaml:
$ kubectl apply -f ./k8s
Because the cluster used in the lab has no load balancer integration, the service will show <pending> in the EXTERNAL-IP column. We can patch the service to assign it an external IP manually:
$ kubectl patch service k8s-demo-app -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.18.0.2"]}}'
$ kubectl get service k8s-demo-app -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8s-demo-app LoadBalancer 10.100.200.243 172.18.0.2 80:31428/TCP 85m
$ curl http://172.18.0.2; echo
Hello World
The -w option of kubectl lets you watch a single Kubernetes resource.
Kubernetes uses two probes to determine whether the app is ready to accept traffic and whether it is alive:
If the readiness probe does not return a 200, no traffic will be routed to the Pod.
If the liveness probe does not return a 200, Kubernetes will restart the Pod.
Spring Boot Actuator provides endpoints that fit these probes nicely:
The /health/readiness endpoint indicates whether the application is ready to handle traffic; this fits the readiness probe.
The /health/liveness endpoint indicates whether the application is alive; this fits the liveness probe.
The /health/readiness and /health/liveness endpoints are only available in Spring Boot 2.3.x. The /health and /info endpoints are reasonable starting points in earlier versions.
Open application.properties in /src/main/resources and add the following properties:
management.endpoint.health.probes.enabled=true
management.health.livenessState.enabled=true
management.health.readinessState.enabled=true
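With these properties in place you can sanity-check the new probe endpoints locally before wiring them into Kubernetes (a quick check, assuming the app is still running on port 8080):
$ curl localhost:8080/actuator/health/readiness
{"status":"UP"}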
Next, add a readinessProbe to the container spec in k8s/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  name: k8s-demo-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        ...
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
Then add a livenessProbe to the same container spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  name: k8s-demo-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        ...
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
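The probes above rely on Kubernetes' default timing. If your app needs longer to start, the standard probe fields can be tuned; this is only a sketch and the values are purely illustrative:
readinessProbe:
  httpGet:
    port: 8080
    path: /actuator/health/readiness
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3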
To give the Pod a chance to finish in-flight requests before it is shut down, add a preStop hook to the container spec of your deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  name: k8s-demo-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        ...
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 10"]
To enable graceful shutdown in the app itself, open application.properties in /src/main/resources and add:
server.shutdown=graceful
There is also a spring.lifecycle.timeout-per-shutdown-phase property that controls how long the app waits for in-flight work (default 30s).
server.shutdown is only available beginning in Spring Boot 2.3.x.
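Put together, a graceful-shutdown configuration that allows up to 20 seconds for in-flight requests (an illustrative value, not from the workshop) would look like this in application.properties:
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=20s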
Let's update the pom.xml to configure the image name explicitly:
<properties>
  ...
  <spring-boot.build-image.imageName>localhost:5000/apps/demo</spring-boot.build-image.imageName>
</properties>
Then we can build and push the changes and re-deploy:
$ ./mvnw clean spring-boot:build-image
$ docker push localhost:5000/apps/demo
$ kubectl apply -f ./k8s
If you still have watch -n 1 kubectl get all running to watch the Kubernetes resources, you will be able to see this happen in real time.
Before moving on, delete everything we deployed with kubectl:
$ kubectl delete -f ./k8s
Skaffold can automate building, pushing, and deploying the app for us. Verify it is installed:
$ skaffold version
v1.9.1
Create a file called skaffold.yaml in the root of the project:
apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: k-s-demo-app
build:
  artifacts:
  - image: localhost:5000/apps/demo
    buildpacks:
      builder: gcr.io/paketo-buildpacks/builder:base-platform-api-0.3
      dependencies:
        paths:
        - src
        - pom.xml
deploy:
  kubectl:
    manifests:
    - k8s/deployment.yaml
    - k8s/service.yaml
The builder is the same one used by Spring Boot when it builds a container from the build plugins (you would see it logged on the console when you build the image). Instead of the buildpacks builder you could use the custom one (with the same dependencies):
custom:
  buildCommand: ./mvnw spring-boot:build-image -D spring-boot.build-image.imageName=$IMAGE && docker push $IMAGE
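Put together, a build section using the custom builder would look roughly like this (a sketch based on the snippet above; for custom artifacts the dependencies sit under the custom key):
build:
  artifacts:
  - image: localhost:5000/apps/demo
    custom:
      buildCommand: ./mvnw spring-boot:build-image -D spring-boot.build-image.imageName=$IMAGE && docker push $IMAGE
      dependencies:
        paths:
        - src
        - pom.xml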
Now run Skaffold in development mode:
$ skaffold dev --port-forward
If you are still watching your Kubernetes resources you will see the same resources created as before. Because we passed --port-forward, you will see a line in your console that looks like:
Port forwarding service/k8s-demo-app in namespace rbaxter, remote port 80 -> address 127.0.0.1 port 4503
In this case local port 4503 will be forwarded to port 80 of the service:
$ curl localhost:4503; echo
Hello World
Now open K8sDemoAppApplication.java and change the hello method to return Hola World. Skaffold will notice the change, rebuild the image, and redeploy it. Once the new Pod is running, make the request again:
$ curl localhost:4503; echo
Hola World
When you kill the skaffold process, Skaffold will remove the deployment and service resources. Just hit CTRL-C in the terminal where skaffold is running:
...
WARN[2086] exit status 1
Cleaning up...
- deployment.apps "k8s-demo-app" deleted
- service "k8s-demo-app" deleted
Skaffold can also start the app in debug mode:
$ skaffold debug --port-forward
...
Port forwarding service/k8s-demo-app in namespace rbaxter, remote port 80 -> address 127.0.0.1 port 4503
Watching for changes...
Port forwarding pod/k8s-demo-app-75d4f4b664-2jqvx in namespace rbaxter, remote port 5005 -> address 127.0.0.1 port 5005
...
The debug command results in two ports being forwarded:
The service port, 4503 in the above example.
The remote debug port, 5005 in the above example.
Set a breakpoint where we return Hola World from the hello method in K8sDemoAppApplication.java, attach a debugger to port 5005, and then issue the curl command to hit the endpoint; you should be able to step through the code.
There are two options that work in VS Code locally as well as in the IDE in the lab. The first and simplest is Kubernetes specific. The other option is to set up a launcher that attaches to the remote process; this works for any remote process (it doesn't have to be running in Kubernetes).
In VS Code, create a launch.json file for the project and add a configuration that attaches to the K8sDemoAppApplication process:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "java",
      "name": "Debug (Attach)",
      "request": "attach",
      "hostName": "localhost",
      "port": 5005
    },
    {
      "type": "java",
      "name": "Debug (Launch) - Current File",
      "request": "launch",
      "mainClass": "${file}"
    },
    {
      "type": "java",
      "name": "Debug (Launch)-K8sDemoAppApplication<k8s-demo-app>",
      "request": "launch",
      "mainClass": "com.example.demo.K8sDemoAppApplication",
      "projectName": "k8s-demo-app"
    }
  ]
}
Select the Debug (Attach) option from the drop down and click the Run button. Be sure to detach the debugger and kill the skaffold process before continuing.
Create a directory called kustomize/base, move the deployment.yaml and service.yaml from the k8s directory into kustomize/base, and delete the k8s directory:
$ mkdir -p kustomize/base
$ mv k8s/* kustomize/base
$ rm -Rf k8s
In kustomize/base create a new file called kustomization.yaml and add the following to it:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
NOTE: Optionally, you can now remove all the labels and annotations in the metadata of both objects and specs inside objects. Kustomize adds default values that link a service to a deployment. If there is only one of each in your manifest then it will pick something sensible.
Create a new directory called qa under the kustomize directory. In kustomize/qa create a file called update-replicas.yaml; this is where we will provide the customizations for our QA environment. Add the following to kustomize/qa/update-replicas.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app
spec:
  replicas: 2
Then create a kustomization.yaml in kustomize/qa and add the following to it:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patchesStrategicMerge:
- update-replicas.yaml
This kustomization.yaml takes the resources from the base directory and patches them with the update-replicas.yaml file. Notice that in update-replicas.yaml we only specify the properties we care about, in this case the replicas.
Run kustomize build to compare the generated YAML for each variant:
$ kustomize build ./kustomize/base
$ kustomize build ./kustomize/qa
The qa variant includes the patched replica count:
...
spec:
  replicas: 2
...
The qa deployment has 2 replicas while the base deployment keeps the default of 1. We can pipe the output of kustomize into kubectl in order to use the generated YAML to deploy the app to Kubernetes:
$ kustomize build kustomize/qa | kubectl apply -f -
Every 1.0s: kubectl get all Ryans-MacBook-Pro.local: Mon Feb 3 12:00:04 2020
NAME READY STATUS RESTARTS AGE
pod/k8s-demo-app-647b8d5b7b-r2999 1/1 Running 0 83s
pod/k8s-demo-app-647b8d5b7b-x4t54 1/1 Running 0 83s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k8s-demo-app ClusterIP 10.100.200.200 <none> 80/TCP 84s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/k8s-demo-app 2/2 2 2 84s
NAME DESIRED CURRENT READY AGE
replicaset.apps/k8s-demo-app-647b8d5b7b 2 2 2 84s
Our k8s-demo-app service will load balance requests across these two pods. Before moving on, delete everything we just deployed:
$ kustomize build kustomize/qa | kubectl delete -f -
Up until now Skaffold has been using kubectl to deploy our artifacts, but we can change it to use Kustomize instead. Change skaffold.yaml to the following:
apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: k-s-demo-app
build:
  artifacts:
  - image: localhost:5000/apps/demo
    buildpacks:
      builder: gcr.io/paketo-buildpacks/builder:base-platform-api-0.3
      dependencies:
        paths:
        - src
        - pom.xml
deploy:
  kustomize:
    paths: ["kustomize/base"]
profiles:
- name: qa
  deploy:
    kustomize:
      paths: ["kustomize/qa"]
Notice that the deploy property has been changed from kubectl to Kustomize, and a qa profile has been added. By default, when running skaffold commands, Skaffold will use the deployment configuration from kustomize/base:
$ skaffold dev --port-forward
To use the QA configuration instead, activate the qa profile:
$ skaffold dev -p qa --port-forward
Be sure to kill the skaffold process before continuing.
Next let's look at externalizing configuration with ConfigMaps. We can create a ConfigMap with kubectl:
$ kubectl create configmap log-level --from-literal=LOGGING_LEVEL_ORG_SPRINGFRAMEWORK=DEBUG
View the resulting ConfigMap:
$ kubectl get configmap log-level -o yaml
apiVersion: v1
data:
  LOGGING_LEVEL_ORG_SPRINGFRAMEWORK: DEBUG
kind: ConfigMap
metadata:
  creationTimestamp: "2020-02-04T15:51:03Z"
  name: log-level
  namespace: rbaxter
  resourceVersion: "2145271"
  selfLink: /api/v1/namespaces/default/configmaps/log-level
  uid: 742f3d2a-ccd6-4de1-b2ba-d1514b223868
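The environment variable name works because of Spring Boot's relaxed binding: LOGGING_LEVEL_ORG_SPRINGFRAMEWORK is treated the same as setting this property (shown only for comparison, you don't need to add it anywhere):
logging.level.org.springframework=DEBUG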
To use this ConfigMap, modify deployment.yaml in kustomize/base to set environment variables from it:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: localhost:5000/apps/demo
        name: k8s-demo-app
        envFrom:
        - configMapRef:
            name: log-level
        ...
Notice the envFrom property above, which references our log-level ConfigMap.
Deploy the app with skaffold dev (so we can stream the logs); because the ConfigMap sets the org.springframework loggers to DEBUG, the output will be much more verbose.
Before continuing, delete the ConfigMap and, in kustomize/base/deployment.yaml, remove the envFrom properties we added:
$ kubectl delete configmap log-level
Another way of consuming a ConfigMap is to mount it into the container as a file. You could create a ConfigMap from a file with kubectl:
$ kubectl create configmap k8s-demo-app-config --from-file ./path/to/application.yaml
No need to execute the above command, it is just an example; the following sections will show a better way.
Instead, create a file called application.yaml in kustomize/base and add the following content:
logging:
  level:
    org:
      springframework: INFO
Then modify kustomize/base/kustomization.yaml by adding the following snippet to the end of the file:
configMapGenerator:
- name: k8s-demo-app-config
  files:
  - application.yaml
Now if you run kustomize build you will see that a ConfigMap resource is produced:
$ kustomize build kustomize/base
apiVersion: v1
data:
  application.yaml: |-
    logging:
      level:
        org:
          springframework: INFO
kind: ConfigMap
metadata:
  name: k8s-demo-app-config-fcc4c2fmcd
By default kustomize generates a random name suffix for the ConfigMap. Kustomize will take care of reconciling this wherever the ConfigMap is referenced in other places (i.e. in volumes). It does this to force a change to the Deployment and in turn force the app to be restarted by Kubernetes. This isn't always what you want, for instance if the ConfigMap and the Deployment are not in the same Kustomization. If you want to omit the random suffix, you can set behavior=merge (or replace) in the configMapGenerator.
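As a sketch, an overlay that reuses a ConfigMap defined in its base could declare the generator like this (it assumes k8s-demo-app-config already exists in the base being overlaid; it is not needed for this workshop):
configMapGenerator:
- name: k8s-demo-app-config
  behavior: merge
  files:
  - application.yaml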
Next modify deployment.yaml in kustomize/base to have Kubernetes create a volume for that ConfigMap and mount it in the container:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: localhost:5000/apps/demo
        name: k8s-demo-app
        ...
        volumeMounts:
        - name: config-volume
          mountPath: /workspace/config
      volumes:
      - name: config-volume
        configMap:
          name: k8s-demo-app-config
In the deployment.yaml we are creating a volume named config-volume from the ConfigMap named k8s-demo-app-config, and mounting config-volume within the container at the path /workspace/config. Spring Boot automatically looks in ./config for application configuration and, if present, will use it (because the app is running in /workspace).
$ skaffold dev --port-forward
Everything should deploy as normal, and you will see the generated ConfigMap:
$ kubectl get configmap
NAME DATA AGE
k8s-demo-app-config-fcc4c2fmcd 1 18s
Try changing logging.level.org.springframework in kustomize/base/application.yaml from INFO to DEBUG; Skaffold will automatically create a new ConfigMap and restart the pod. Be sure to kill the skaffold process before continuing.
Kubernetes gives every service a DNS entry, so services can call each other by name. If we had another app called k8s-workshop-name-service deployed, we could make a request to it from the k8s-demo-app just by making an HTTP request to http://k8s-workshop-name-service.
To deploy that second app alongside ours, open kustomize/base/kustomization.yaml and add a new resource pointing to the new app's kustomize directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
- https://github.com/ryanjbaxter/k8s-spring-workshop/name-service/kustomize/base
configMapGenerator:
- name: k8s-demo-app-config
  files:
  - application.yaml
Now modify the hello method of K8sDemoAppApplication.java to make a request to the new service:
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@RestController
public class K8sDemoAppApplication {

    private RestTemplate rest = new RestTemplateBuilder().build();

    public static void main(String[] args) {
        SpringApplication.run(K8sDemoAppApplication.class, args);
    }

    @GetMapping("/")
    public String hello() {
        String name = rest.getForObject("http://k8s-workshop-name-service", String.class);
        return "Hola " + name;
    }
}
The hostname in the URL matches the name of the new service as defined in its service.yaml file. Now run Skaffold again:
$ skaffold dev --port-forward
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/k8s-demo-app-5b957cf66d-w7r9d 1/1 Running 0 172m
pod/k8s-workshop-name-service-79475f456d-4lqgl 1/1 Running 0 173m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/k8s-demo-app LoadBalancer 10.100.200.102 35.238.231.79 80:30068/TCP 173m
service/k8s-workshop-name-service ClusterIP 10.100.200.16 <none> 80/TCP 173m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/k8s-demo-app 1/1 1 1 173m
deployment.apps/k8s-workshop-name-service 1/1 1 1 173m
NAME DESIRED CURRENT READY AGE
replicaset.apps/k8s-demo-app-5b957cf66d 1 1 1 172m
replicaset.apps/k8s-demo-app-fd497cdfd 0 0 0 173m
replicaset.apps/k8s-workshop-name-service-79475f456d 1 1 1 173m
Because of the --port-forward flag, Skaffold will forward two ports:
Port forwarding service/k8s-demo-app in namespace user1, remote port 80 -> address 127.0.0.1 port 4503
Port forwarding service/k8s-workshop-name-service in namespace user1, remote port 80 -> address 127.0.0.1 port 4504
Test the name service directly:
$ curl localhost:4504; echo
John
Hitting the service multiple times will return a different name.
Then test the demo app, which now calls the name service:
$ curl localhost:4503; echo
Hola John
Making multiple requests should result in different names coming from the name-service.
As a final example, deploy the Spring PetClinic application, backed by MySQL, using kustomize:
$ kustomize build https://github.com/dsyer/docker-services/layers/samples/petclinic?ref=HEAD | kubectl apply -f -
$ kubectl port-forward service/petclinic-app --address 0.0.0.0 8080:80
The above kustomize build command may take some time to complete.
Visit http://localhost:8080 (or click the "App" tab in the lab) and you should see the "Welcome" page.
Here is the kustomization.yaml that you just deployed:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
- ../../mysql
namePrefix: petclinic-
transformers:
- ../../mysql/transformer
- ../../actuator
- ../../prometheus
images:
- name: dsyer/template
  newName: dsyer/petclinic
configMapGenerator:
- name: env-config
  behavior: merge
  literals:
  - SPRING_CONFIG_LOCATION=classpath:/,file:///config/bindings/mysql/meta/
  - MANAGEMENT_ENDPOINTS_WEB_BASEPATH=/actuator
  - DATABASE=mysql
The ../../* paths are all relative to this file. Clone the repository to look at those: git clone https://github.com/dsyer/docker-services and look at layers/samples.
base: a generic Deployment and Service with a Pod listening on port 8080.
mysql: a local MySQL Deployment and Service. It needs a PersistentVolume, so it only works on Kubernetes clusters that have a default volume provider.
transformers: patches to the basic application deployment. The patches are generic and could be shared by multiple different applications.
env-config: the base layer uses this ConfigMap to expose environment variables for the application container. These entries are used to adapt the PetClinic to the Kubernetes environment.