Deciding when to implement security for Kubernetes is much like doing so in any other field of technology. If you’re a Systems Administrator or Infrastructure Engineer, you must understand certain platforms and systems before securing them. For example, if you’re running workloads on ESXi, you must first understand ESXi to properly secure it. If you’re a developer, you have to first understand programming, computer science concepts, and application architecture before you can properly secure code with the absolute best practices.
All fields of focus, including Kubernetes, have prerequisites to fully understand how to properly secure the environment.
In this blog post, you’ll learn about what you should know prior to thinking about implementing security practices.
Understanding The Cluster Components
The most critical place to start, just as it would be for a Systems Administrator or Infrastructure Engineer, is the cluster itself. A Kubernetes cluster has a wide range of components across both the control plane and the worker nodes.
Below is a full list with high-level explanations, as complete explanations could take up an entire book chapter on their own.
For the control plane, you have:
- API server: How you communicate with Kubernetes and how Kubernetes resources communicate with the cluster itself.
- etcd (cluster store): The database for Kubernetes. It stores all stateful data about the cluster.
- Controller Manager: Every resource type has a controller (Deployments, Pods, Services, etc.). Controllers work to ensure that the current state of the cluster matches the desired state.
- Scheduler: Assigns Pods to specific worker nodes based on each node’s resource consumption and availability.
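To make the controller’s job concrete, here is a minimal, hypothetical sketch in Python of the reconcile pattern a controller follows (the real controller manager is written in Go and is far more involved; the function and action names here are illustrative only):

```python
# Toy illustration of the controller reconcile pattern:
# compare desired state to current state and act on the difference.

def reconcile(desired_replicas: int, current_replicas: int) -> list[str]:
    """Return the actions needed to drive current state toward desired state."""
    if current_replicas < desired_replicas:
        return ["create-pod"] * (desired_replicas - current_replicas)
    if current_replicas > desired_replicas:
        return ["delete-pod"] * (current_replicas - desired_replicas)
    return []  # already converged; nothing to do

print(reconcile(desired_replicas=3, current_replicas=1))  # ['create-pod', 'create-pod']
```

A real controller runs this loop continuously, watching the API server for changes rather than being called once.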
For the worker node, you have:
- Kubelet: The agent that runs on each node. It registers the node with the cluster and watches the API server for new work, such as a Pod that needs to be deployed to the node.
- Container runtime: The software that actually runs containers inside Pods in Kubernetes (for example, containerd or CRI-O).
- DNS: In the Kubernetes sense, DNS maps Kubernetes Service names to IP addresses. DNS inside Kubernetes is typically provided by CoreDNS.
- Kube-proxy: Handles the internal networking of Kubernetes, routing Service traffic to Pods on each node.
Without understanding each of the core components of Kubernetes, especially how they all communicate and operate with each other, you cannot properly secure them.
For example, many cloud-based Kubernetes services give the control plane a public IP address by default. That means your control plane is, out of the box, exposed to the internet. If a malicious entity can find the cluster on the internet, they can attempt brute-force logins or denial-of-service attacks against the API server, or try to gain entry to the Kubernetes cluster where your entire application lives.
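As a quick illustration (not an official tool), Python’s standard `ipaddress` module can classify whether an API server endpoint address is even publicly routable, which is a useful first sanity check when auditing a cluster:

```python
import ipaddress

def is_publicly_routable(endpoint_ip: str) -> bool:
    """True if the address is globally routable (i.e., reachable from the internet)."""
    addr = ipaddress.ip_address(endpoint_ip)
    return addr.is_global

# A private RFC 1918 address vs. a well-known public address.
print(is_publicly_routable("10.0.0.10"))  # False
print(is_publicly_routable("8.8.8.8"))    # True
```

If your control plane endpoint comes back as publicly routable, that’s your cue to look into private endpoints or IP allowlists for the API server.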
If you notice, a lot of the pieces that you must understand are all pieces that have existed for many years, way before Kubernetes. Infrastructure, operating systems, and virtual machines have needed security since the beginning and will continue to need security. APIs have always needed security. Databases have always needed security.
The only difference is the layers listed above are Kubernetes related, but they aren’t “Kubernetes specific”.
Understanding The Network
Networking in Kubernetes, much like networking for anything else in tech, is the core communication method for:
- Worker nodes
- Control planes
Without networking set up, Kubernetes wouldn’t work. Without a network plugin installed after configuring the control plane, none of the Kubernetes nodes would reach a “Ready” state, as they need networking and communication methods to be properly deployed.
Inside Kubernetes, there are two core networking components: kube-proxy and CoreDNS.
Kube-proxy is the method by which Pods, Services, and other Kubernetes resources talk to each other and send data back and forth.
The Kubernetes cluster has its own CIDR range, and Pods receive IP addresses from the pool of available addresses in that range. The CIDR range is defined by whoever, or whatever (for example, an automation system), creates the Kubernetes cluster. This, in turn, becomes the method by which Pods communicate with each other.
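The address math here is ordinary networking. A small sketch using Python’s standard `ipaddress` module, assuming a Pod CIDR of 10.244.0.0/16 (a common default in some installers, but your cluster’s range will differ):

```python
import ipaddress

# Assumed Pod CIDR for illustration; your cluster's range will differ.
pod_cidr = ipaddress.ip_network("10.244.0.0/16")

# Pods receive addresses from the available pool inside this range.
pool = pod_cidr.hosts()
first_pods = [str(next(pool)) for _ in range(3)]
print(first_pods)               # ['10.244.0.1', '10.244.0.2', '10.244.0.3']
print(pod_cidr.num_addresses)   # 65536 addresses in a /16
```

Knowing which range your Pods live in matters for security, too: firewall rules and network policies are written against exactly these CIDRs.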
Kubernetes Services communicate with each other via IP addresses and DNS names. Each Service receives a ClusterIP and can expose dedicated ports, which internal Kubernetes resources use to reach it, while kube-proxy routes that traffic to the backing Pods. Services also have dedicated DNS names, which is why, when outside entities communicate with an app running on Kubernetes, the communication is done via a Service.
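Those Service DNS names follow a predictable pattern: `<service>.<namespace>.svc.<cluster-domain>`. A minimal sketch, assuming the default cluster domain `cluster.local` and a hypothetical Service name:

```python
def service_dns_name(service: str, namespace: str,
                     cluster_domain: str = "cluster.local") -> str:
    """Build the fully qualified DNS name CoreDNS resolves for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# Hypothetical Service "checkout" in the "payments" namespace.
print(service_dns_name("checkout", "payments"))  # checkout.payments.svc.cluster.local
```

This predictability is also why understanding DNS matters for security: any Pod that can resolve and reach these names can probe every Service in the cluster unless network policies say otherwise.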
This is all possible via the Container Network Interface (CNI), which is the available plugin standard for Kubernetes networking.
If you’re wondering about the actual servers themselves that run control planes and worker nodes - they communicate with each other outside of the kube-proxy. The communication for the servers occurs on the actual environment network. For example, if you’re running Kubernetes clusters in a data center, the servers that are running Kubernetes are talking to each other over the networking switch on the LAN. If your Kubernetes clusters are spanning across multiple data centers, traffic is being routed via a router or a firewall/router combination and packets are being sent to the network switches, which in turn send to the virtual machines running the Kubernetes cluster.
If you’ve worked with networking before, you’ll notice that everything explained here is networking-specific with a touch of “Kubernetes specific”. Meaning, 99% of networking in Kubernetes is general networking, which you must understand to properly secure the environment.
Understanding How Containers Work
Beyond the cluster, the infrastructure, and the networking, you need to understand how containers work.
Containers, by definition, are virtualized operating systems. Not to be confused with system virtualization, like ESXi or Hyper-V.
The gist is that system virtualization gave engineers a beautiful thing: they no longer had to dedicate an entire bare-metal server to one application, which in turn freed up a ton of capacity in each server. Instead of having one application per server, you could have five, ten, twenty, or however many applications could fit on the server safely, without risking the server crashing due to too much memory and CPU consumption.
Containers took this concept a step further. Instead of virtualizing the hardware/bare metal, containers are virtualizing the operating system.
It works like this…
You have a server, maybe running Linux. Then, you have a container runtime running on Linux. Once the container runtime is running on Linux, you can start to deploy containers, which are lightweight versions of an operating system. Lightweight in this case means a container literally only has what it needs to run the application. For example, some container images don’t even have the ability to run `curl` or `wget`. They only have what’s needed.
Speaking of container images, a container image is the “golden image” of a container. You can then take that image and deploy containers from it into Pods across your cluster.
Pods, in Kubernetes, run one or more containers (additional containers in a Pod are often called sidecar containers). Containers in a Pod share the same IP address, so when you’re communicating with a Pod, you’re communicating with a container running inside of the Pod.
The key piece to understanding containers is to know that a container is a lightweight version of an operating system that’s used to run a specific piece of an application, which is where microservices come into play.
When it comes to container security, you must ensure that:
- The application running inside of the container isn’t vulnerable.
- The container image only has exactly what it needs to run the app inside of the container image and nothing more.
- The Kubernetes configuration for the container has the proper securityContext settings to minimize or eliminate vulnerabilities such as privilege escalation or host file system and network access.
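As one sketch of that last point, a Pod spec might carry a restrictive `securityContext` like the following. The field names are standard Kubernetes API fields; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # placeholder name
spec:
  containers:
    - name: app
      image: example/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true                # refuse to run as root
        allowPrivilegeEscalation: false   # block privilege escalation
        readOnlyRootFilesystem: true      # no writes to the container filesystem
        capabilities:
          drop: ["ALL"]                   # drop all Linux capabilities
```

Settings like these won’t fit every workload as-is, but they’re a reasonable starting point to loosen deliberately rather than a default to tighten later.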
Understanding the overall complexity of not only Kubernetes but of networking and infrastructure is key to figuring out how to properly secure a Kubernetes cluster. Without understanding the underlying layers that make up Kubernetes, securing it will be out of reach. Without truly understanding networking and infrastructure at a deeper level, understanding the underlying Kubernetes layers will be out of reach. It’s a trickle effect from understanding operating systems, to understanding architecture, to understanding networking. Kubernetes may be “newish”, but how Kubernetes is architected and configured is not new.
Figuring out the security implications and architecture of Kubernetes isn’t easy, but it’s an absolute must if you plan on securing your environment.
The moral of the story? Understand how Kubernetes works under the hood, and if you don’t already know infrastructure, learn it.
Learn more at www.ksoc.com.