Securing the Kubernetes cluster
When running Kubernetes clusters, it is important to understand and follow the best practices that keep them secure.
Security for the Kubernetes cluster
Your role as an administrator will vary depending on the type of Kubernetes cluster you are running. In a managed cluster, some aspects of cluster security, such as the security of the control plane components and operating system patches, are handled by the cloud provider. This means that, as an administrator, you are responsible for securing only part of the cluster.
The code running in the containers and the images used to spin up those containers are still fully managed by you, so you should apply appropriate security controls to them. In an on-premises Kubernetes cluster, you are fully responsible for securing the whole cluster. This includes hardening the master and worker nodes, securely configuring administrative interfaces, running containers with secure configurations, ensuring that the applications being deployed are free of vulnerabilities, and so on. In essence, you are responsible for securing every component of the cluster.
In addition to the security controls, we will also need to establish appropriate monitoring controls in the cluster to be able to detect any malicious activities.
Security inside the cluster
As discussed in the intro to Kubernetes article, a Kubernetes cluster is made up of several components: some run on the master nodes and others on the worker nodes. Every component in the cluster, including the nodes themselves, must be secured to achieve better security for the cluster.
Let us discuss some of the common security best practices to consider inside a cluster.
The control plane is installed on the master node of a cluster, and protecting it is essential to the security of the cluster. By default, Kubernetes provides a reasonable level of security for the control plane components. For example, only the API server can communicate directly with components such as etcd. However, administrators should be careful when changing these defaults, and additional hardening may be required in some cases.
For instance, the API server can be exposed for debugging purposes, which enables any user to talk to it without authentication or authorization when interacting with the cluster. This is done by modifying the kube-apiserver.yaml file in the /etc/kubernetes/manifests/ directory. Such changes should be avoided, as anonymous access to the API server can lead to full cluster compromise.
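As a sketch of what the secure defaults look like, the following is a fragment of a kubeadm-style kube-apiserver.yaml static pod manifest (the exact flag set varies by cluster and version); the point is that anonymous access stays disabled and an authorization mode is enforced:

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout).
# Weakening these flags (e.g. enabling anonymous auth together with a
# permissive authorization mode) would let unauthenticated users talk to
# the API server.
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false            # reject unauthenticated requests
    - --authorization-mode=Node,RBAC    # authorize every request
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
```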
Similarly, the other control plane components, such as the scheduler, the controller manager, and etcd, must be protected by binding them to localhost and enforcing certificate-based authentication. The control plane components should also be regularly updated to the latest version.
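These settings can be sketched as fragments of the corresponding static pod manifests under /etc/kubernetes/manifests/ (paths follow the kubeadm layout, and the localhost-only etcd listener assumes a single-node control plane):

```yaml
# kube-scheduler.yaml and kube-controller-manager.yaml:
# serve health/metrics endpoints only on the loopback interface.
    - --bind-address=127.0.0.1

# etcd.yaml: require TLS client certificates and avoid exposing the
# client port beyond localhost where the topology allows it.
    - --client-cert-auth=true
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --listen-client-urls=https://127.0.0.1:2379
```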
Worker nodes are where the application workloads are typically run. A compromised node can lead to a full cluster compromise. The nodes’ host operating system must be fully patched and appropriately hardened.
The kubelet is one of the key components of a worker node. Other components in the cluster interact with the kubelet through its API. The kubelet API can be misconfigured to be accessible from any machine on the network with anonymous access, which can lead to information disclosure as well as remote command execution on the pods running on the affected node. Such changes to the default configuration must be avoided.
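A hardened kubelet configuration can be sketched as follows, assuming the common file location /var/lib/kubelet/config.yaml; these fields disable anonymous access and delegate authorization decisions to the API server:

```yaml
# KubeletConfiguration fragment: lock down the kubelet API.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false    # reject unauthenticated requests to the kubelet API
  webhook:
    enabled: true     # authenticate callers against the API server
authorization:
  mode: Webhook       # authorize each request instead of AlwaysAllow
readOnlyPort: 0       # disable the unauthenticated read-only port (10255)
```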
A pod is the smallest deployable unit in Kubernetes, and one or more containers run inside a pod.
One of the commonly seen security issues with pods is an overly permissive service account mounted into them. By default, the pod's service account token is mounted into the pod, so it is usually accessible once the pod is compromised. If this service account is overly permissive, for example holding cluster-admin privileges, the attacker can achieve full cluster compromise. It is therefore recommended to use least-privilege access wherever possible.
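A least-privilege setup can be sketched with the RBAC manifests below (the names app-sa and app-reader are hypothetical): a dedicated service account bound to a narrowly scoped Role instead of cluster-admin, with token automount disabled for workloads that do not need the API at all:

```yaml
# A dedicated service account with token automount disabled by default.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: default
automountServiceAccountToken: false
---
# A Role limited to reading ConfigMaps in a single namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
---
# Bind the narrow Role (not cluster-admin) to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-reader
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: default
```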
In addition, all pods sit on a flat network by default, and any pod can communicate with any other pod in the cluster. It is therefore recommended to use network policies to restrict unwanted communications.
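A common starting point is a default-deny policy per namespace (the namespace here is illustrative): it selects every pod and allows no ingress or egress, so required traffic must then be re-enabled with more specific policies. Note that enforcement depends on the cluster's network plugin supporting NetworkPolicy.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}   # empty selector: applies to all pods in the namespace
  policyTypes:      # no ingress/egress rules listed, so all traffic is denied
  - Ingress
  - Egress
```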
All the usual container security misconfigurations apply in Kubernetes environments too. When a container that is part of an application workload is compromised, an attacker may be able to abuse such misconfigurations to gain access to the underlying worker node, and possibly the master node.
As an example, a container started as privileged can lead to container escape and unauthorized access to the underlying worker node. Security hygiene must therefore be maintained when running containers in Kubernetes clusters.
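A hardened container spec can be sketched as follows (the pod name and image are illustrative placeholders): the securityContext explicitly disables privileged mode and privilege escalation, drops all Linux capabilities, and runs the process as a non-root user, assuming the image supports it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                    # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0 # illustrative image
    securityContext:
      privileged: false                 # never share the host's device access
      allowPrivilegeEscalation: false   # block setuid-style escalation
      runAsNonRoot: true                # image must define a non-root user
      readOnlyRootFilesystem: true      # assumes the app writes only to volumes
      capabilities:
        drop: ["ALL"]                   # drop every Linux capability
```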
Ensuring a Kubernetes cluster is secure
A Kubernetes cluster is made up of several components, and the security of each one is crucial to protecting the cluster. A simple misconfiguration in a single component can lead to full cluster compromise.
So, it is recommended to appropriately harden every component in the cluster.