Getting Started with Kubernetes Audit Logging
Regardless of where your Kubernetes cluster is running, whether that’s in the cloud with a Managed Kubernetes Service, on-prem with a tool like kubeadm, or as a raw Kubernetes cluster, you must ensure that what’s happening inside the cluster is visible to you and to the appropriate parties (like the security team).
Kubernetes has the built-in ability to audit every action that occurs against its API, and in this blog post, you’ll learn all about it.
What Is Audit Logging?
Per the kubernetes.io documentation, audit logging “provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster. The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself”.
What this essentially means is the audit logger can capture every single piece of information that occurs on the Kubernetes API. If an engineer runs kubectl get pods, the audit logger can pick it up. If a new service account is created, the audit logger can pick it up.
The goal of audit logging is to understand:
- What happened
- When it happened
- Who initiated it
- On what resource it happened
- Where the request came from
- What the destination was
Audit logging, however, is not turned on by default. Before the cluster will record any audit logs, you must set up an audit policy configuration that tells the API server what to capture.
Example Policy Configuration
Let’s take a look at two example policy configurations.
The first is rather verbose and contains several rules, including:
- Full request and response logging for pod create, update, patch, and delete operations
- Metadata-level logging for token reviews and request/response logging for subject access reviews
- Rules that explicitly exclude things the engineer may not want to consume, such as health checks, metrics, and version endpoints
This type of policy gets extremely granular, which may be exactly what an environment needs. The good thing about a Kubernetes audit policy is that you can be as specific as you like about what you do and don’t want to collect data for. The policy can always be updated, too, so if you decide not to consume logs for one rule now but want to change that later, you absolutely can.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
  nonResourceURLs:
  - /healthz*
  - /logs
  - /metrics
  - /swagger*
  - /version
- level: Metadata
  omitStages:
  - RequestReceived
  resources:
  - group: authentication.k8s.io
    resources:
    - tokenreviews
- level: RequestResponse
  omitStages:
  - RequestReceived
  resources:
  - group: authorization.k8s.io
    resources:
    - subjectaccessreviews
- level: RequestResponse
  omitStages:
  - RequestReceived
  resources:
  - group: ''
    resources:
    - pods
  verbs:
  - create
  - patch
  - update
  - delete
- level: Metadata
  omitStages:
  - RequestReceived
The next audit policy is far simpler to read, yet far broader in scope. Even though the YAML is shorter, the policy below says “log everything at the Metadata level”. Since metadata is recorded for every single request to the API, there are going to be a lot of logs consumed here.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
Both of these work just fine, but it’ll ultimately be up to the security team and the platform team to figure out just how many logs they want to consume and what’s truly needed.
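To make “consuming logs” concrete, here’s a rough sketch of a single audit event as the log backend would write it (one JSON object per line, pretty-printed here for readability). The field names are the standard audit.k8s.io/v1 Event fields, but the user, IP address, timestamps, and auditID below are made up for illustration.
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "4fbf7b3c-2f0a-4d7e-9a6b-1c2d3e4f5a6b",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/default/pods",
  "verb": "list",
  "user": {
    "username": "jane@example.com",
    "groups": ["system:authenticated"]
  },
  "sourceIPs": ["10.0.0.15"],
  "objectRef": {
    "resource": "pods",
    "namespace": "default",
    "apiVersion": "v1"
  },
  "responseStatus": { "code": 200 },
  "requestReceivedTimestamp": "2023-01-01T12:00:00.000000Z",
  "stageTimestamp": "2023-01-01T12:00:00.002000Z"
}
Each request typically produces an event at more than one stage, which is why the example policies above use omitStages to drop the RequestReceived stage and cut the volume roughly in half.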
Checking Out Audit Logs
In many Managed Kubernetes Service offerings, you can look at audit logs right after the Kubernetes cluster is created. On-prem, a bit more configuration is needed to make that happen.
Typically on-prem, you must configure and enable audit logging when you install the Kubernetes cluster.
Below is an example command you would run to point the Kubernetes cluster at the audit policy and specify where the logs should be stored.
kube-apiserver --audit-log-path=/var/log/kubernetes/apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policies/name_of_policy.yaml
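If the cluster was built with kubeadm, those flags usually end up on the API server’s static Pod via the ClusterConfiguration. The sketch below is one way to wire that up, not the only one; the rotation values are arbitrary examples, and the extraVolumes are there so the policy file and log directory are visible inside the kube-apiserver container.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    audit-policy-file: /etc/kubernetes/audit-policies/name_of_policy.yaml
    audit-log-path: /var/log/kubernetes/apiserver/audit.log
    audit-log-maxage: "30"     # days to keep rotated audit log files
    audit-log-maxbackup: "10"  # number of rotated files to retain
    audit-log-maxsize: "100"   # size in MB before the log file is rotated
  extraVolumes:
  - name: audit-policies
    hostPath: /etc/kubernetes/audit-policies
    mountPath: /etc/kubernetes/audit-policies
    readOnly: true
  - name: audit-logs
    hostPath: /var/log/kubernetes/apiserver
    mountPath: /var/log/kubernetes/apiserver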
If you wanted to enable audit logging on Minikube, for example, it would look something like this:
minikube start \
--extra-config=apiserver.audit-policy-file=/etc/audit.yaml \
--extra-config=apiserver.audit-log-path=-
The Minikube command tells Minikube to:
- Start the cluster
- Use the audit.yaml policy file
- Write the audit events to the API server’s standard output (that’s what the - value for --audit-log-path means)
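Because the dash sends audit events to the API server’s standard output rather than a file, you can read them straight from the kube-apiserver Pod’s logs once the cluster is up. A minimal sketch, assuming the default minikube profile (so the Pod is named kube-apiserver-minikube):
kubectl logs -n kube-system kube-apiserver-minikube | grep audit.k8s.io/v1
Keep in mind that the policy file path you pass has to be readable from inside the kube-apiserver container, so it needs to sit somewhere minikube mounts into that Pod.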
Because there are many different ways to configure a Kubernetes cluster, there isn’t an exact method/command to run. It’s going to vary based on where you’re installing a Kubernetes cluster. However, the few things that you’ll definitely need are:
- An audit policy
- A place to store the logs
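That “place to store the logs” doesn’t have to be a local file, either. The API server also supports a webhook backend (the --audit-webhook-config-file flag) that ships audit events to an external collector. The webhook configuration is a kubeconfig-format file; below is a hedged sketch in which the collector URL is a placeholder you’d replace with your own logging endpoint.
apiVersion: v1
kind: Config
clusters:
- name: audit-collector
  cluster:
    # Placeholder endpoint; point this at your own collector's HTTPS endpoint
    server: https://logs.example.com/audit
contexts:
- name: webhook
  context:
    cluster: audit-collector
    user: ""
current-context: webhook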
Thinking about audit logs from a cloud perspective, the good thing is that a lot of the heavy lifting is done for you.
If you’re running a Kubernetes cluster in Azure using Azure Kubernetes Service (AKS), the control plane generates audit logs for you automatically, and once the cluster’s diagnostic settings forward them to a Log Analytics workspace, you can start querying audit logs right away.
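Assuming the diagnostic settings send the kube-audit category to a Log Analytics workspace, you can query the audit events from the Azure CLI. This is only a sketch: the workspace ID is a placeholder, and the table and category names assume the logs land in the default AzureDiagnostics table rather than resource-specific tables.
az monitor log-analytics query \
  --workspace "<workspace-id>" \
  --analytics-query "AzureDiagnostics | where Category == 'kube-audit' | take 10"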
On AWS, if you’re using EKS, it’s a bit different. When you create an EKS cluster, you have to manually enable the audit log type under the cluster’s control plane logging settings. That isn’t necessarily a bad thing, but you should definitely turn the option on.
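And if the cluster already exists, you don’t have to recreate it; the audit log type can be flipped on afterward. A hedged example with the AWS CLI, where the cluster name and region are placeholders (the events then show up in CloudWatch Logs under the /aws/eks/<cluster-name>/cluster log group):
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'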
Remember, even if you aren’t managing the control plane, you should still understand what’s happening under the hood. Just because you aren’t managing the control plane doesn’t mean attackers can’t breach your system.
Wrapping Up
Audit logging doesn’t appear to be a topic at the forefront of everyone’s mind; engineers are primarily concerned with getting the Kubernetes cluster up and running. Moving forward, practitioners must shift their focus to a more security-centric approach, and one of the first and lowest-effort steps an engineer can take toward it is turning on audit logs.