Spring on Kubernetes!
Workshop Materials: https://hackmd.io/@ryanjbaxter/spring-on-k8s-workshop
Ryan Baxter, Spring Cloud Engineer, VMware
Dave Syer, Spring Engineer, VMware
What You Will Do
- Create a basic Spring Boot app
- Build a Docker image for the app
- Push the app to a Docker repo
- Create deployment and service descriptors for Kubernetes
- Deploy and run the app on Kubernetes
- External configuration and service discovery
- Deploy the Spring PetClinic App with MySQL
Prerequisites
Everyone will need:
- Knowledge of Spring and Kubernetes (we will not be giving an introduction to either)
If you are following these notes from an event, all the prerequisites will be provided in the Lab. You only need to worry about these if you are going to work through the lab on your own.
- How you configure `kubectl` depends on how you are doing this workshop
Doing The Workshop On Your Own
- If you are doing this workshop on your own you will need your own Kubernetes cluster and a Docker repo that the cluster can access
- Docker Desktop and Docker Hub - Docker Desktop allows you to easily set up a local Kubernetes cluster (Mac, Windows). This in combination with Docker Hub should allow you to easily run through this workshop.
- Hosted Kubernetes Clusters and Repos - Various cloud providers such as Google and Amazon offer options for running Kubernetes clusters and repos in the cloud. You will need to follow instructions from the cloud provider to provision the cluster and repo, as well as configuring `kubectl` to work with these clusters.
Doing The Workshop in Strigo
- Log in to Strigo with the link and access code provided by .
- Configure `kubectl` by running this command in the terminal:

```
$ kind-setup
Cluster already active: kind
Setting up kubeconfig
```
- Run the below command to verify `kubectl` is configured correctly

NOTE: it might take a minute or so after the VM launches to get the Kubernetes API server up and running, so your first few attempts at using `kubectl` may be very slow or fail. After that it should be responsive.
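For example (this command is just one way to check; any `kubectl` call will do):

```
$ kubectl get nodes
```

If the cluster is up, the `kind` node should be listed with status `Ready`.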
Create a Spring Boot App
In the Lab:
- Run these commands in your terminal (please copy them verbatim to make the rest of the lab run smoothly)
- Open the IDE using the "IDE" button at the top of the lab - it might be obscured by the "Call for Assistance" button.
Working on your own:
- Click here to download a zip of the Spring Boot app
- Unzip the project to your desired workspace and open in your favorite IDE
Add A RestController
Modify `K8sDemoApplication.java` and add a `@RestController`

Be sure to add the `@RestController` annotation and not just the `@GetMapping`
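A minimal sketch of the class (the package name and greeting text are assumptions based on the rest of the lab):

```java
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class K8sDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(K8sDemoApplication.class, args);
    }

    // Handles GET requests to the root path
    @GetMapping("/")
    public String hello() {
        return "Hello World";
    }
}
```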
Run The App
In a terminal window run
The app will start on port 8080
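If you generated the project from Spring Initializr, the Maven wrapper can run the app (this command is an assumption; use your build tool of choice):

```
$ ./mvnw spring-boot:run
```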
Test The App
Make an HTTP request to http://localhost:8080 in another terminal
Test Spring Boot Actuator
Spring Boot Actuator adds several other endpoints to our app
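For example (paths assume the default Actuator base path):

```
$ curl http://localhost:8080/actuator
$ curl http://localhost:8080/actuator/health
```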
Be sure to stop the Java process before continuing, or else you might get port binding issues since Java is using port 8080
Containerize The App
The first step in running the app on Kubernetes is producing a container image for the app that we can then deploy to Kubernetes
Building A Container
- Spring Boot 2.3.x can build a container for you without the need for any additional plugins or files
- To do this use the Spring Boot build plugin goal `build-image`
- Running `docker images` will allow you to see the built container
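Assuming the Maven wrapper, building and then inspecting the image might look like:

```
$ ./mvnw spring-boot:build-image
$ docker images
```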
Run The Container
Test The App Responds
Be sure to stop the Docker container before continuing. You can stop the container and remove it by running `$ docker rm -f k8s-demo-app`
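A sketch of running and testing the container (the image name and tag are assumptions based on the app name used elsewhere in this lab):

```
$ docker run -d --name k8s-demo-app -p 8080:8080 k8s-demo-app:0.0.1-SNAPSHOT
$ curl http://localhost:8080
$ docker rm -f k8s-demo-app
```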
Putting The Container In A Registry
Run The Build And Deploy The Container
- You should be able to run the Maven build and push the container to the local container registry
- You can now see the image in the registry
Deploying To Kubernetes
- With our container built and pushed to a registry you can now run this container on Kubernetes
Deployment Descriptor
- Kubernetes uses YAML files to provide a way of describing how the app will be deployed to the platform
- You can write these by hand using the Kubernetes documentation as a reference
- Or you can have Kubernetes generate it for you using `kubectl`
- The `--dry-run` flag allows us to generate the YAML without actually deploying anything to Kubernetes
- The resulting `deployment.yaml` should look similar to this
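A sketch of what the generated `deployment.yaml` might contain (the image name is an assumption; yours will include your registry prefix):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-demo-app
  template:
    metadata:
      labels:
        app: k8s-demo-app
    spec:
      containers:
      - image: k8s-demo-app
        name: k8s-demo-app
```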
Service Descriptor
- A service acts as a load balancer for the pod created by the deployment descriptor
- If we want to be able to access the pods we need to create a service for them
- The resulting `service.yaml` should look similar to this
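A sketch of the `service.yaml` (the service port 80 forwarding to the container's 8080 is an assumption consistent with the port-forwarding examples later in the lab):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: k8s-demo-app
```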
Apply The Deployment and Service YAML
- The deployment and service descriptors have been created in the `/k8s` directory
- Apply these to get everything running
- If you have `watch` installed you can watch as the pods and services get created
- In a separate terminal window run

`watch` is a useful command line tool that you can install on Linux and OSX. All it does is continuously execute the command you pass it. You can just run the `kubectl` command specified after the `watch` command, but the output will be static as opposed to updating constantly.
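For example (assuming the descriptors live in `./k8s` as above):

```
$ kubectl apply -f ./k8s

$ watch -n 1 kubectl get all
```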
Testing The App
- The service is assigned a cluster IP, which is only accessible from inside the cluster
- To access the app we can use `kubectl port-forward`
- Now we can `curl` localhost:8080 and it will be forwarded to the service in the cluster
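A sketch (this assumes the service exposes port 80; adjust to match your `service.yaml`):

```
$ kubectl port-forward service/k8s-demo-app 8080:80

$ curl http://localhost:8080
```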
Congrats you have deployed your first app to Kubernetes 🎉
Be sure to stop the `kubectl port-forward` process before moving on
Exposing The Service
NOTE: `LoadBalancer` features are platform specific. The visibility of your app after changing the service type might depend a lot on where it is deployed (e.g. per cloud provider).
- If we want to expose the service publicly we can change the service type to `LoadBalancer`
- Open `k8s/service.yaml` and change `ClusterIP` to `LoadBalancer`
- Now apply the updated `service.yaml`
Testing The Public LoadBalancer
- In a Cloud environment (Google, Amazon, Azure etc.), Kubernetes will assign the service an external IP
- It may take a minute or so for Kubernetes to assign the service an external IP; until it is assigned you might see `<pending>` in the `EXTERNAL-IP` column
- For a local cluster we need to manually set the external IP address to the IP address of the Kubernetes node (the Docker container running Kind, in this case):

The `-w` option of `kubectl` lets you watch a single Kubernetes resource.
Best Practices
Liveness and Readiness Probes
- Kubernetes uses two probes to determine if the app is ready to accept traffic and whether the app is alive
- If the readiness probe does not return a `200` no traffic will be routed to it
- If the liveness probe does not return a `200` Kubernetes will restart the Pod
- Spring Boot has endpoints from the Actuator module that fit nicely into these use cases
- The `/health/readiness` endpoint indicates whether the application is ready to receive traffic; this fits with the readiness probe
- The `/health/liveness` endpoint indicates whether the internal state of the application is valid; we can use this to make sure the application is "alive"

The `/health/readiness` and `/health/liveness` endpoints are only available in Spring Boot 2.3.x. The `/health` and `/info` endpoints are reasonable starting points in earlier versions.
Enable The Actuator Probe Endpoints
Open `application.properties` in `/src/main/resources` and add the following properties
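A sketch of the property that enables the probe health groups (this is the Spring Boot 2.3.x property name; verify against your Boot version, as later versions renamed it):

```properties
management.health.probes.enabled=true
```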
Add The Readiness Probe
Add The Liveness Probe
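The two probes in `deployment.yaml` might be sketched like this (the paths assume the default `/actuator` base path):

```yaml
readinessProbe:
  httpGet:
    port: 8080
    path: /actuator/health/readiness
livenessProbe:
  httpGet:
    port: 8080
    path: /actuator/health/liveness
```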
Graceful Shutdown
- Due to the asynchronous way Kubernetes shuts down applications, there is a period of time when requests can be sent to the application while it is being terminated.
- To deal with this we can configure a pre-stop sleep to allow enough time for requests to stop being routed to the application before it is terminated.
- Add a `preStop` command to the `podSpec` of your `deployment.yaml`
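For example (the 10 second sleep and image name are illustrative values; tune the sleep to your environment):

```yaml
spec:
  containers:
  - name: k8s-demo-app
    image: k8s-demo-app
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]
```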
Handling In Flight Requests
- Our application could also be handling requests when it receives the notification that it needs to shut down.
- In order for us to finish processing those requests before the application shuts down, we can configure a "grace period" in our Spring Boot application.
- Open `application.properties` in `/src/main/resources` and add
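The property in question is the graceful shutdown setting:

```properties
server.shutdown=graceful
```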
There is also a `spring.lifecycle.timeout-per-shutdown-phase` (default 30s). `server.shutdown` is only available beginning in Spring Boot 2.3.x
Update The Container & Apply The Updated Deployment YAML
Let's update the `pom.xml` to configure the image name explicitly:
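A sketch of the plugin configuration (the image name is an assumption):

```xml
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <image>
      <name>k8s-demo-app</name>
    </image>
  </configuration>
</plugin>
```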
Then we can build and push the changes and re-deploy:
- If you use `watch -n 1 kubectl get all` to see all the Kubernetes resources you will be able to see this happen in real time
Cleaning Up
- Before we move on to the next section let's clean up everything we deployed
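Assuming the resources were applied from the `./k8s` directory, they can be removed the same way:

```
$ kubectl delete -f ./k8s
```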
Skaffold
- Skaffold is a command line tool that facilitates continuous development for Kubernetes applications
- Simplifies the development process by combining multiple steps into one easy command
- Provides the building blocks for a CI/CD process
- Make sure you have Skaffold installed before continuing
Adding Skaffold YAML
- Skaffold is configured using…you guessed it…another YAML file
- We create a YAML file, `skaffold.yaml`, in the root of the project
The `builder` is the same one used by Spring Boot when it builds a container from the build plugins (you would see it logged on the console when you build the image). Instead of the `buildpacks` builder you could use the `custom` one (with the same `dependencies`):
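A sketch of such a `skaffold.yaml` (the `apiVersion`, image name, and builder are assumptions; use the builder that Spring Boot logs when it builds your image):

```yaml
apiVersion: skaffold/v2beta5
kind: Config
build:
  artifacts:
  - image: k8s-demo-app
    buildpacks:
      builder: gcr.io/paketo-buildpacks/builder:base
deploy:
  kubectl:
    manifests:
    - k8s/*
```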
Development With Skaffold
- Skaffold makes some enhancements to our development workflow when using Kubernetes
- Skaffold will
- Build the app and create the container (buildpacks)
- Push the container to the registry (Docker)
- Apply the deployment and service YAMLs
- Stream the logs from the Pod to your terminal
- Automatically set up port forwarding
Testing Everything Out
- If you are `watch`ing your Kubernetes resources you will see the same resources created as before
- When running `skaffold dev --port-forward` you will see a line in your console that looks like
- In this case port `4503` will be forwarded to port `80` of the service
Make Changes To The Controller
- Skaffold is watching the project files for changes
- Open `K8sDemoApplication.java` and change the `hello` method to return `Hola World`
- Once you save the file you will notice Skaffold rebuild and redeploy everything with the new change
Cleaning Everything Up
- Once we are done, if we kill the `skaffold` process, Skaffold will remove the deployment and service resources. Just hit `CTRL-C` in the terminal where `skaffold` is running.
Debugging With Skaffold
- Skaffold also makes it easy to attach a debugger to the container running in Kubernetes
- The `debug` command results in two ports being forwarded
- The http port, `4503` in the above example
- The remote debug port, `5005` in the above example
- You can then setup the remote debug configuration in your IDE to attach to the process and set breakpoints just like you would if the app was running locally
- If you set a breakpoint where we return `Hola World` from the `hello` method in `K8sDemoApplication.java` and then issue our `curl` command to hit the endpoint, you should be able to step through the code

Setting Up A Remote Debug Configuration In VS Code
There are two options that work in VS Code locally as well as in the IDE in the lab. The first and simplest is Kubernetes specific, and that is:
- Browse to a running Pod in the Kubernetes view
- Right click on it and select "attach"
- Type in the port number in the control panel that pops up (5005 is used by Skaffold by default).

The other option is to set up a launcher that attaches to the remote process. This will work for any remote process (doesn't have to be running in Kubernetes).
- On the left hand side of the IDE tab, click the Run/Debug icon
- Click the `create launch.json file` link

- The IDE will create a default launch configuration for the current file and for `K8sDemoAppApplication`
- Add another configuration for remote debugging
- Now select the `Debug (Attached)` option from the drop down and click the Run button
- This should attach the debugger to the remote port

Be sure to detach the debugger and kill the `skaffold` process before continuing
Kustomize
- Kustomize is another tool in our Kubernetes toolbox that allows us to customize deployments to different environments
- We can start with a base set of resources and then apply customizations on top of those
- Features
- Allows easier deployments to different environments/providers
- Allows you to keep all the common properties in one place
- Generate configuration for specific environments
- No templates, no placeholder spaghetti, no environment variable overload
Getting Started With Kustomize
- Create a new directory in the root of our project called `kustomize/base`
- Move the `deployment.yaml` and `service.yaml` from the `k8s` directory into `kustomize/base`
- Delete the `k8s` directory
kustomization.yaml
- In `kustomize/base` create a new file called `kustomization.yaml` and add the following to it
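A minimal `kustomization.yaml` for the base directory might be:

```yaml
resources:
- deployment.yaml
- service.yaml
```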
NOTE: Optionally, you can now remove all the labels and annotations in the metadata of both objects and specs inside objects. Kustomize adds default values that link a service to a deployment. If there is only one of each in your manifest then it will pick something sensible.
Customizing Our Deployment
- Let's imagine we want to deploy our app to a QA environment, but in this environment we want to have two instances of our app running
- Create a new directory called `qa` under the `kustomize` directory
- Create a new file in `kustomize/qa` called `update-replicas.yaml`; this is where we will provide customizations for our QA environment
- Add the following content to `kustomize/qa/update-replicas.yaml`
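A sketch of `update-replicas.yaml` (only the fields needed to identify the deployment plus the property we want to change):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app
spec:
  replicas: 2
```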
- Create a new file called `kustomization.yaml` in `kustomize/qa` and add the following to it
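A sketch of the QA `kustomization.yaml` (the `bases`/`patchesStrategicMerge` fields match older Kustomize versions; newer ones prefer `resources` and `patches`):

```yaml
bases:
- ../base
patchesStrategicMerge:
- update-replicas.yaml
```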
- Here we tell Kustomize that we want to patch the resources from the `base` directory with the `update-replicas.yaml` file
- Notice that in `update-replicas.yaml` we are just updating the properties we care about, in this case the replicas
Running Kustomize
- This is our base deployment and service resources
- Notice when we build the QA customization that the replicas property is updated to `2`
Piping Kustomize Into Kubectl
- We can pipe the output from `kustomize` into `kubectl` in order to use the generated YAML to deploy the app to Kubernetes
- If you are watching the pods in your Kubernetes namespace you will now see two pods created instead of one
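For example, deploying the QA customization:

```
$ kustomize build kustomize/qa | kubectl apply -f -
```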
Our service `k8s-demo-app` will load balance requests between these two pods
Clean Up
- Before continuing clean up your Kubernetes environment
Using Kustomize With Skaffold
- Currently our Skaffold configuration uses `kubectl` to deploy our artifacts, but we can change that to use `kustomize`
- Change your `skaffold.yaml` to the following
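A sketch of the relevant parts of the updated `skaffold.yaml` (the profile name `qa` is an assumption):

```yaml
deploy:
  kustomize:
    paths:
    - kustomize/base
profiles:
- name: qa
  deploy:
    kustomize:
      paths:
      - kustomize/qa
```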
- Notice now the `deploy` property has been changed from `kubectl` to now use Kustomize
- Also notice that we have a new profiles section allowing us to deploy our QA configuration using Skaffold
Testing Skaffold + Kustomize
- If you run your normal `skaffold` commands it will use the deployment configuration from `kustomize/base`
- If you want to test out the QA deployment run the following command to activate the QA profile
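Skaffold profiles are activated with the `-p` flag, so the command might look like:

```
$ skaffold dev -p qa --port-forward
```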
Be sure to kill the `skaffold` process before continuing
Externalized Configuration
- One of the 12 factors for cloud native apps is to externalize configuration
- Kubernetes provides support for externalizing configuration via config maps and secrets
- We can create a config map or secret easily using `kubectl`
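For example, a config map holding a logging level could be created like this (the name `log-level` matches the config map referenced later in this section; the environment variable name relies on Spring Boot's relaxed binding to `logging.level.org.springframework`):

```
$ kubectl create configmap log-level --from-literal=LOGGING_LEVEL_ORG_SPRINGFRAMEWORK=DEBUG
```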
Using Config Maps In Our Apps
- There are a number of ways to consume the data from config maps in our apps
- Perhaps the easiest is to use the data as environment variables
- To do this we need to change our `deployment.yaml` in `kustomize/base`
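The container spec might be changed like this (the image name is an assumption):

```yaml
containers:
- name: k8s-demo-app
  image: k8s-demo-app
  envFrom:
  - configMapRef:
      name: log-level
```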
- Add the `envFrom` properties above which reference our config map `log-level`
- Update the deployment by running `skaffold dev` (so we can stream the logs)
- If everything worked correctly you should see much more verbose logging in your console
Removing The Config Map and Reverting The Deployment
- Before continuing let's remove our config map and revert the changes we made to `deployment.yaml`
- To delete the config map run the following command
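Assuming the config map was named `log-level` as above:

```
$ kubectl delete configmap log-level
```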
- In `kustomize/base/deployment.yaml` remove the `envFrom` properties we added
- Next we will use Kustomize to make generating config maps easier
Config Maps and Spring Boot Application Configuration
- In Spring Boot we usually place our configuration values in application properties or YAML
- Config Maps in Kubernetes can be populated with values from files, like properties or YAML files
- We can do this via `kubectl`
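For example (the file and config map names are illustrative):

```
$ kubectl create configmap k8s-demo-app-config --from-file=application.yaml
```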
No need to execute the above command, it is just an example, the following sections will show a better way
- We can then mount this config map as a volume in our container at a directory Spring Boot knows about and Spring Boot will automatically recognize the file and use it
Creating A Config Map With Kustomize
- Kustomize offers a way of generating config maps and secrets as part of our customizations
- Create a file called `application.yaml` in `kustomize/base` and add the following content
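A sketch of the file, seeded with the logging level that the lab changes later:

```yaml
logging:
  level:
    org:
      springframework: INFO
```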
- We can now tell Kustomize to generate a config map from this file, in `kustomize/base/kustomization.yaml`, by adding the following snippet to the end of the file
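The generator snippet might look like this (the config map name matches the one referenced by the deployment later in this section):

```yaml
configMapGenerator:
- name: k8s-demo-app-config
  files:
  - application.yaml
```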
- If you now run `$ kustomize build` you will see a config map resource is produced
By default `kustomize` generates a random name suffix for the `ConfigMap`. Kustomize will take care of reconciling this when the `ConfigMap` is referenced in other places (i.e. in volumes). It does this to force a change to the `Deployment` and in turn force the app to be restarted by Kubernetes. This isn't always what you want, for instance if the `ConfigMap` and the `Deployment` are not in the same `Kustomization`. If you want to omit the random suffix, you can set `behavior=merge` (or `replace`) in the `configMapGenerator`.
- Now edit `deployment.yaml` in `kustomize/base` to have Kubernetes create a volume for that config map and mount that volume in the container
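A sketch of the relevant parts of the pod spec (the image name is an assumption; the volume and config map names follow the rest of this section):

```yaml
spec:
  containers:
  - name: k8s-demo-app
    image: k8s-demo-app
    volumeMounts:
    - name: config-volume
      mountPath: /workspace/config
  volumes:
  - name: config-volume
    configMap:
      name: k8s-demo-app-config
```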
- In the above `deployment.yaml` we are creating a volume named `config-volume` from the config map named `k8s-demo-app-config`
- In the container we are mounting the volume named `config-volume` within the container at the path `/workspace/config`
- Spring Boot automatically looks in `./config` for application configuration and if present will use it (because the app is running in `/workspace`)
Testing Our New Deployment
- If you run `$ skaffold dev --port-forward` everything should deploy as normal
- Check that the config map was generated
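For example:

```
$ kubectl get configmaps
```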
- Skaffold is watching our files for changes; go ahead and change `logging.level.org.springframework` from `INFO` to `DEBUG` and Skaffold will automatically create a new config map and restart the pod
- You should see a lot more logging in your terminal once the new pod starts
Be sure to kill the `skaffold` process before continuing
Service Discovery
- Kubernetes makes it easy to make requests to other services
- Each service has a DNS entry visible in the containers of the other services, allowing you to make requests to a service using its service name
- For example, if there is a service called `k8s-workshop-name-service` deployed, we could make a request from the `k8s-demo-app` just by making an HTTP request to http://k8s-workshop-name-service
Deploying Another App
- In order to save time we will use an existing app that returns a random name
- The container for this service resides in Docker Hub (a public container registry)
- To make things easier we placed a Kustomization file in the GitHub repo that we can reference from our own Kustomization file to deploy the app to our cluster
Modify kustomization.yaml
- In your k8s-demo-app's `kustomize/base/kustomization.yaml` add a new resource pointing to the new app's `kustomize` directory
Making A Request To The Service
- Modify the `hello` method of `K8sDemoApplication.java` to make a request to the new service
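One way to sketch this is with a `RestTemplate` (the direct field initialization is a simplification; you might inject a `RestTemplateBuilder` instead):

```java
private final RestTemplate rest = new RestTemplate();

@GetMapping("/")
public String hello() {
    // The service name resolves via Kubernetes DNS
    String name = rest.getForObject("http://k8s-workshop-name-service", String.class);
    return "Hello " + name;
}
```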
Testing The App
- Deploy the apps using Skaffold
- This should deploy both the k8s-demo-app and the name-service app
- Because we deployed two services and supplied the `--port-forward` flag Skaffold will forward two ports
Hitting the service multiple times will return a different name
- Test the k8s-demo-app which should now make a request to the name-service
Making multiple requests should result in different names coming from the name-service
- Stop the Skaffold process to clean everything up before moving to the next step
Running The PetClinic App
- The PetClinic app is a popular demo app which leverages several Spring technologies
- Spring Data (using MySQL)
- Spring Caching
- Spring WebMVC

Deploying PetClinic
- We have a Kustomization that we can use to easily get it up and running
The above `kustomize build` command may take some time to complete
- Head to http://localhost:8080 (or click the "App" tab in the lab) and you should see the "Welcome" page
- To use the app you can go to "Find Owners", add yourself, and add your pets
- All this data will be stored in the MySQL database
Dissecting PetClinic
- Here's the `kustomization.yaml` that you just deployed:
- The relative paths `../../*` are all relative to this file. Clone the repository to look at those: `git clone https://github.com/dsyer/docker-services` and look at `layers/samples`.
- Important features:
  - `base`: a generic `Deployment` and `Service` with a `Pod` listening on port 8080
  - `mysql`: a local MySQL `Deployment` and `Service`. Needs a `PersistentVolume` so only works on Kubernetes clusters that have a default volume provider
  - `transformers`: patches to the basic application deployment. The patches are generic and could be shared by multiple different applications.
  - `env-config`: the base layer uses this `ConfigMap` to expose environment variables for the application container. These entries are used to adapt the PetClinic to the Kubernetes environment.