Understanding a Container's Attack Surface in Kubernetes
Kubernetes is a complex orchestration tool that requires multiple teams to keep it stable and usable. In this section, I want to outline the core features an average developer will run into and explain the security significance of each. We will move through a typical deployment YAML file using our simple service application and set it up as if we were handing it off to an operations team. The outline is as follows:
- Environment variables
- Stateful or stateless application
- Ports or connections
In future security lessons, we will explore more Kubernetes security objects and build complete hardened applications that can be deployed and monitored.
The Kubernetes Attack Surface
Kubernetes is different from running an application in a typical containerized environment. Since Kubernetes is designed as an orchestration tool, every issue you have with your container/pod/deployment becomes an issue at scale and an issue for the other workloads in the environment.
Please take a look at the graphic below. It does a good job outlining the core objects in Kubernetes that will need to be hardened and properly configured.
Kubernetes Namespaces provide scoping for cluster objects, allowing fine-grained cluster object management. Kubernetes Role-based Access Control (RBAC) rules for most resource types apply at the namespace level. Controls like Kubernetes Network Policies and many add-on tools and frameworks like service meshes are often scoped to the namespace level.
Make sure to scope your application to a single namespace. This configuration gives you a much better opportunity to implement strong security controls: because network policies and RBAC roles can be applied and finely tuned at the namespace level, security scales with the cluster. Avoid using the default namespace in any cluster other than a development cluster. Every application deserves its own namespace and should not be deployed into default simply because it is convenient.
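As a minimal sketch, a dedicated namespace for our application could be declared like this. The name `simpleservice` matches the deployment we build below; adjust it to your own naming conventions:

```yaml
# Dedicated namespace for the simple service application.
# Scoping the app here lets RBAC roles and network policies
# target it without affecting other workloads in the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: simpleservice
  labels:
    app: simpleservice
```

With the namespace in place, RBAC RoleBindings and NetworkPolicy objects created inside it apply only to this application's resources.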
Since we have been acting as if we will hand off our application to an operations team, let’s keep that theme going and declare all of this information in the deployment YAML.
!!! Note We will make all of the changes to the YAML in the last stage. There is no need to copy or paste anywhere at this time.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleservice-3.9
  namespace: simpleservice
  labels:
    app: simpleservice
    owner: $YOURLABEL # <- Add an identifier
```
It is also important to add identifying characteristics. This not only helps operations teams but also helps security teams track deployments of interest, supporting security incident reports and their information gathering.
Next, we are going to list out the container specs, starting with the image that we want to configure. In this case, we want to pull our updated simple service application into our Kubernetes cluster from a verified hub location.
!!! Note We still need to push our image (Don't worry, we will.)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleservice-3.9
  namespace: simpleservice
  labels:
    app: simpleservice
    owner: $YOURLABEL # <- Add an identifier
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simpleservice
  template:
    metadata:
      labels:
        app: simpleservice
    spec:
      containers:
        - name: simpleservice-main
          image: <YOURREPOSITORY>:<YOURTAG>
```
Your organization or team should have a defined tag structure and a secure, verifiable registry. This allows your administrators to disable the cluster's ability to pull images from open repositories.
Otherwise, attackers may try to pull preconfigured malicious containers into your Kubernetes clusters. By declaring where the resources you need are located, the security and operations teams can block all other registries.
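As an illustration, a pod spec can be pinned to a private, verified registry using an image pull secret. The registry hostname and secret name below are hypothetical placeholders; your cluster administrator would provision the real ones:

```yaml
# Hypothetical example: pull only from an internal, verified registry.
spec:
  containers:
    - name: simpleservice-main
      # A fully qualified registry host makes the image source explicit
      # and lets admission controls block unapproved registries.
      image: registry.internal.example.com/simpleservice:1.0.0
  imagePullSecrets:
    # Credentials for the private registry, stored as a Kubernetes Secret.
    - name: internal-registry-credentials
```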
Managing environment variables is tricky.
While it is helpful to put them in the Kubernetes YAML as a declarative way of managing them, this is annoying to alter throughout the pipeline and runs the risk of storing valuable information in plain text. It is much more secure to consume environment variables as secrets at the start of the container. This can be done using a secret store service like Hashicorp Vault or Kubernetes Secrets.
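As a brief sketch of the Kubernetes Secrets approach (the variable name, secret name, and key below are hypothetical), an environment variable can be populated from a Secret rather than hard-coded in the deployment YAML:

```yaml
# Hypothetical example: read DB_PASSWORD from a Kubernetes Secret
# instead of storing the value in plain text in this file.
spec:
  containers:
    - name: simpleservice-main
      image: <YOURREPOSITORY>:<YOURTAG>
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: simpleservice-secrets # Secret created separately
              key: db-password            # key within that Secret
```

The plain-text value never appears in the deployment manifest; only a reference to the Secret does.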
For now, we do not need to alter any of the four environment variables from our simple service example so that we can move on. In the future, we will showcase how to mount an environment variable into your containers/pods properly.
Understanding your access control settings is vital to the health and safety of your application and your Kubernetes environment. We should also always follow the principle of least privilege when sharing a vast container environment with other applications.
We want to avoid a situation where a malicious actor can exploit our application. An example of this is if our application has admin permissions on the host that we did not need. To stop this from being exploited, we should utilize the pod security context and ensure that we drop all Linux capabilities that we do not need.
In our example, we do not require any Linux capabilities, so we can drop all of them and set privilege escalation to false.
So far, we have the following:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleservice-3.9
  namespace: simpleservice
  labels:
    app: simpleservice
    owner: $YOURLABEL # <- Add an identifier
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simpleservice
  template:
    metadata:
      labels:
        app: simpleservice
    spec:
      containers:
        - name: simpleservice-main
          image: $YOURREPOSITORY:$YOURTAG
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
```
We have disallowed privilege escalation and dropped all Linux capabilities for the containers our deployment will spawn.