Basic Kubernetes Security: A Hands-On Approach

In a continuation of the previous article, we explore how to implement these techniques in practice. Specifically, we will cover workload separation, authentication, and other hardening techniques. Each section presents an example followed by a short explanation of what it does and why you should do it. We assume that the reader has a basic understanding of Kubernetes topics such as pods, service accounts, and secrets.

Read-Only File Systems

Read-Only Root File System

spec:
  containers:
    - command: ["sleep"]
      args: ["999"]
      image: ubuntu:latest
      name: web
      securityContext:
        readOnlyRootFilesystem: true

This ensures that the container's root file system is read-only. However, if you want to give a program an area it can write to, you can mount a writable volume:

Read-Only File System with Mounted Volume

spec:
  containers:
    - command: ["sleep"]
      args: ["999"]
      image: ubuntu:latest
      name: web
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - mountPath: /path/to/write
          name: vol-1
  volumes:
    - emptyDir: {}
      name: vol-1

Programs can then write to this mounted path for the duration of the container's lifespan.
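For reference, the snippets above start at spec: and cannot be applied on their own. A complete Pod manifest combining both settings might look like the following sketch (the pod name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo # placeholder name
spec:
  containers:
    - command: ["sleep"]
      args: ["999"]
      image: ubuntu:latest
      name: web
      securityContext:
        readOnlyRootFilesystem: true # root file system is read-only
      volumeMounts:
        - mountPath: /path/to/write # the one writable location
          name: vol-1
  volumes:
    - emptyDir: {} # scratch space tied to the pod's lifetime
      name: vol-1
```

After applying this, a write anywhere on the root file system (for example, touch /tmp/x via kubectl exec) should fail, while writes under /path/to/write succeed.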

Network Segmentation

Another layer of security you can add to your Kubernetes workload is controlling which pods can talk to each other. To accomplish this, you can use the built-in NetworkPolicy objects. Note that enforcement depends on the networking (CNI) plugin your cluster uses; some plugins do not support NetworkPolicies at all. For now, we will stick to built-in Kubernetes objects.

We will first start at the namespace level. For example, if you had your API servers in a namespace called api and a front-end server running in a namespace called frontend, you can create those as follows: kubectl create namespace api and kubectl create namespace frontend. We can achieve the same result with a file:

Namespace file

apiVersion: v1
kind: Namespace
metadata:
  name: api # or frontend

and then running kubectl apply -f <namespace.yaml>. Now that we have our two namespaces, we can utilize a NetworkPolicy to restrict communications:

Network Policy Based on Namespace

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-deny-all-except-api
  namespace: frontend # Applies to the frontend namespace
spec:
  podSelector: {}  # Selects all pods in the frontend namespace
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic from pods within the same frontend namespace
  - from:
    - namespaceSelector:
        matchLabels:
          # kubernetes.io/metadata.name is set automatically on every
          # namespace (Kubernetes v1.21+), so no manual labeling is needed
          kubernetes.io/metadata.name: frontend
  egress:
  # Allow traffic to pods within the same frontend namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
  # Allow traffic to the api namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: api
  # Allow DNS resolution (no "to" clause means any destination on these ports)
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
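To get the expected end-to-end behavior, the api namespace needs its own policy as well. A minimal sketch of the corresponding api-side policy (the policy name is an assumption) might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend # assumed name
  namespace: api
spec:
  podSelector: {} # Selects all pods in the api namespace
  policyTypes:
  - Ingress
  ingress:
  # Allow ingress from pods in the frontend namespace
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
```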

Note that a NetworkPolicy only governs traffic for the pods it selects, so you would have to apply a similar set of rules on the api side as well to get the expected behavior. You are also able to blanket-deny ingress or egress across a namespace:

Deny All Ingress

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

However, that is not all. Say you need more fine-grained access control, such as restricting pod-to-pod communication; you can use labels and NetworkPolicies to accomplish that. Assume there are two pods, labeled app=frontend and app=backend. Below, we create the specification that allows a frontend pod to communicate with a backend pod. You would do something similar in the inverse direction too.

Pod-to-pod Communication

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pod-to-pod-communication
  namespace: your-namespace
spec:
  podSelector:
    matchLabels:
      app: frontend  # This policy applies to pods with label app=frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend  # Allow ingress from pods with label app=backend
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend  # Allow egress to pods with label app=backend

Finally, if you wanted to add one more layer, you can also restrict based on port number, allowing communication only on certain ports:

Network Policies with Ports

ingress:
- from:
  - podSelector:
      matchLabels:
        app: backend
  ports:
  - protocol: TCP
    port: 8080

A Primer into Roles

Roles can get fairly complicated, with functionality extending far beyond the traditional Kubernetes primitives. In this article, we only give a brief overview of Roles with a simple example.

Roles in Kubernetes can be broken down into two main categories (with definitions coming from the official documentation):

Role: always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in.

ClusterRole: by contrast, a non-namespaced resource that can grant permissions cluster-wide.

In this tutorial, we will only cover Roles. Let us first create a basic role that allows anyone assuming that role to read secrets inside of Kubernetes:

Secret Reader Role

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

In order for a role to be assumed by an entity, we need to create something known as a RoleBinding (or a ClusterRoleBinding):

Role Binding

apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "test-user" to read secrets in the "default" namespace.
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: default
subjects:
- kind: User
  name: test-user # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: secret-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io

A RoleBinding is a Kubernetes resource that connects (or “binds”) a Role or ClusterRole to specific users, groups, or service accounts within a particular namespace. In the example above, we bind test-user to the secret-reader role, granting it permission to read secrets.

However, RoleBindings are not limited to users; you can bind them to workloads as well. Let us assume we have a pod that needs to read a secret. We can allow the pod to do so by first creating a service account:

Secret Reader Service Account

apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-reader-sa
  namespace: default

We can then create a RoleBinding for the service account:

Role Binding for Service Account

apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows the service account "secret-reader-sa" to read secrets in the "default" namespace.
kind: RoleBinding
metadata:
  name: read-secrets-sa # distinct from the user binding above
  namespace: default
subjects:
- kind: ServiceAccount
  name: secret-reader-sa # "name" is case sensitive
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: secret-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io

Just like that, any pod running under this service account is able to read secrets in the default namespace.
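A pod assumes the service account via the serviceAccountName field in its spec. A minimal sketch (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-consumer # placeholder name
  namespace: default
spec:
  serviceAccountName: secret-reader-sa # the service account bound above
  containers:
    - name: app
      image: ubuntu:latest
      command: ["sleep"]
      args: ["999"]
```

Processes in this pod can then authenticate to the API server using the service account token mounted at /var/run/secrets/kubernetes.io/serviceaccount and perform get requests on secrets in the default namespace.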

Conclusion

In this article, we covered topics such as creating read-only file systems in pods, segmenting networks, and basic roles. While this is only a brief overview, there is much more to securing and hardening Kubernetes workloads.