Traditional methods of software security are a poor fit for Kubernetes: a renewed set of security practices is required to make it less vulnerable.
What’s different about Kubernetes security?
This article walks through several key ideas that comprise software security and highlights why they’re a poor fit for Kubernetes-based infrastructure. The second half discusses kube-idiomatic approaches to security and ideas about reducing vulnerabilities.
But first, some history (because it repeats itself).
Securing computing devices is an age-old problem. It reinvented itself in the era of networked devices, and things have never been the same. The large-scale proliferation of the internet introduced several new dimensions to the problem, and solutions have adapted well. Each time the technology community has changed the operating paradigm, the security community has risen to the challenge.
Virtual machines are considered a rather important milestone in the history of computing – not just for the sheer tectonic shift they introduced, but for how they changed the way computers and computing were thought about. They introduced an abstraction that produced analogues to physical computing devices, and the abstraction extended to every component: compute, storage, and networking. The security challenge at the time became rather complex, because how do you secure something virtual?
The answer lay in rethinking attack surfaces, adapting system-level security, and using numerous other innovative means to safeguard virtualized infrastructure – which the security community accomplished with great success. The same kind of paradigm shift is now being observed with the introduction and planet-scale adoption of containers and, consequently, Kubernetes.
Large parts of the application stack are abstracted through the use of containers. Traditional application architecture – typically labeled “monolithic” – worked efficiently at lower scales of operation. Its limitations became very apparent as applications grew in scope and their operations became increasingly concurrent. Tight coupling between components was symptomatic of this architecture, which did not promote code reuse, allowed bloat, and did not provide sufficient agility. The community then shifted toward the microservices architecture. Deploying applications using containers was a natural fit, because they could all share a common denominator for their technology stack.
The use of containers (and Kubernetes) significantly alters the principles of application development and deployment. Kubernetes applies a high-level abstraction to the entire lifecycle of an application. A workload, the Kube-equivalent of a live application, is managed entirely from creation to shutdown (including subsequent restarts). Corrective actions, such as restarting unresponsive nodes and crashed pods, happen automatically under the Kubernetes orchestrator.
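As a concrete illustration of this automatic corrective action, a Deployment with a liveness probe tells Kubernetes how to detect and restart an unresponsive container. A minimal sketch (the name, image, and port here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # hypothetical workload name
spec:
  replicas: 3                      # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0 # placeholder image
        livenessProbe:             # failing probes trigger automatic restarts
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

If the probe fails repeatedly, the kubelet restarts the container with no operator involvement – the corrective behavior described above.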
With Kubernetes in place, security teams are left with limited visibility into the impact each change has. Commits, which are pushed automatically through a CI/CD system, help engineering teams achieve the velocity they need, but at the cost of being able to introduce change management and compliance at each stage.
The next area where Kubernetes breaks traditional models of security is the distributed and ephemeral nature of the workloads themselves. Kubernetes, which orchestrates containers using nodes, pods, and clusters, makes it impossible to maintain a fixed, one-to-one mapping between workloads, services, and client requests.
Dynamic scheduling is a defining attribute of Kubernetes-based systems. There is no static provisioning or fixed file-system partitioning to anchor security controls to, which makes designing secure systems more complex.
Solving the problem
The solution – to the problems illustrated above and several others that arise in the Kubernetes space – can be segmented into two primary areas: build time and runtime.
The stage where immutable container images – which bundle host OS components, libraries, and the actual workloads – are built is classified as “build” time. There are several ways to bake security best practices in during build time.
The first principle of improving security at this stage is hardening the host OS. Admission controls are also an important piece of the puzzle, as is limiting which service endpoints are exposed. An increasingly popular tool/technique for achieving build-time security is the use of buildpacks, which allow these fundamental security policies and best practices to be implemented once and then consumed repeatedly.
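One built-in form of admission control is Kubernetes' Pod Security admission, which can reject any pod that does not meet a hardened profile. A minimal sketch (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments   # hypothetical namespace
  labels:
    # Reject any pod in this namespace that does not meet
    # the "restricted" Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```

With this label in place, pods that run as root, request privileged mode, or skip required security contexts are refused at admission time rather than discovered at runtime.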
Open-source supply chain threats can have a major impact when deploying applications to Kubernetes. Because the stack is largely open source, it can be prone to vulnerabilities at any level – as has been stated previously. Software Bills of Materials (SBOMs) play a major role in providing the transparency needed to secure the supply chains that feed applications operating in production.
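To make the idea concrete, an SBOM is simply a machine-readable inventory of what went into an image. A minimal fragment in the CycloneDX JSON format might look like this (the component listed is purely illustrative, since JSON does not permit comments):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:generic/openssl@3.0.13"
    }
  ]
}
```

When a vulnerability is disclosed in a listed component, teams can query their SBOMs to find every affected image instead of rescanning the entire fleet.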
Kubernetes employs numerous credentials and certificates (collectively termed secrets) to facilitate secure communication between its components. At regular intervals, these secrets are replaced with new ones, and the individual components are required to use them. Securing these secrets and scheduling their rotation are an important part of securing Kubernetes deployments.
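At the application level, the same mechanism is exposed through the Secret resource. A minimal sketch (the names and values are placeholders; real deployments would typically rotate these through an external secrets manager rather than committing them to source control):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical secret name
type: Opaque
stringData:                  # stringData is base64-encoded into data on creation
  username: app
  password: change-me        # placeholder; never store real values in manifests
```

Pods consume the secret as an environment variable or mounted file, so rotating the Secret object rotates the credential everywhere it is referenced.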
Runtime security refers to the set of practices employed when workloads are operational in production (or in staging). These practices span hosts, workloads, the network interfaces that connect these components, and the services that interact with each other. They concern themselves with vulnerabilities that may arise while the applications are running.
There are several areas which provide the necessary hooks to improve runtime security when working with Kubernetes. Here are some of the best approaches to bolstering runtime security:
- Isolation of workloads provides a foundation for the self-healing nature of Kubernetes and helps prevent problems in an affected workload from crossing over to others.
- Kubernetes has a robust RBAC API that can be used to regulate and control access to all kinds of resources. Policies that can be adjusted programmatically provide an excellent model for adding checks as vulnerabilities are identified.
- Due to the CRD-based architecture, services can be (re)configured as needed. In addition, carefully controlling direct pod connections, node ports, and advertised service IPs can greatly improve the security of Kubernetes-based services. Increasingly, eBPF-based security is emerging as the gold standard for securing services.
- Activity logs for DNS, Kubernetes activity, and application traffic are the three main sources of logs that capture security-relevant events. Auditing and compliance programs – for emerging companies and enterprises alike – that use this data as their basis help ensure heightened security for the system.
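The first two items above map directly onto Kubernetes resources: a NetworkPolicy that isolates a namespace's pods by default, and an RBAC Role that grants only read access to them. A minimal sketch (all names are hypothetical):

```yaml
# Deny all ingress to pods in the namespace unless another policy allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments        # hypothetical namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
---
# Grant read-only access to pods, and nothing else
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
- apiGroups: [""]            # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

Starting from deny-all and least-privilege, and then opening only what each workload needs, is the programmatically adjustable policy model described above.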
Are there standards governing Kubernetes security?
Several strategies for securing Kubernetes originate from publicly available frameworks and models. The Microsoft team has an excellent resource that can serve as a starting point for investigating the right Kubernetes strategies to use for any team.
The National Institute of Standards and Technology (NIST) publishes a framework for container security (SP 800-190). It is a focused publication meant to manage risk around application containers and container management. It also lays out security best practices for container images, host operating systems, container registries, container orchestrators (such as Kubernetes), and other components of the stack.
Another popular security framework is the MITRE ATT&CK Framework; ATT&CK stands for Adversarial Tactics, Techniques & Common Knowledge. An important reason for its popularity is that it curates the tactics and techniques commonly used in cyber attacks, which teams can use both to prepare their infrastructure against those attacks and to drive continuous, automated security testing.
There are also Kubernetes-specific security frameworks. One example is the Kubernetes Benchmark published by the Center for Internet Security (CIS). It offers detailed guidelines for hardening clusters, along with the settings that help secure cluster components through correct configuration. The benchmark also prescribes compliance-based alerting when used in enterprise-wide contexts.
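Several of the CIS Benchmark checks concern the kubelet. A KubeletConfiguration fragment along these lines – a sketch of commonly recommended settings, not the complete benchmark – disables anonymous access and the unauthenticated read-only port:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false     # do not serve unauthenticated requests
  webhook:
    enabled: true      # delegate authentication to the API server
authorization:
  mode: Webhook        # avoid AlwaysAllow authorization
readOnlyPort: 0        # disable the unauthenticated read-only port
```

Tools that audit clusters against the benchmark flag deviations from settings like these, which is what enables the compliance-based alerting mentioned above.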
Another notable standard applied across the Kubernetes ecosystem is PCI DSS (Payment Card Industry Data Security Standard) compliance. Although it is meant for a single vertical, the technical and operational requirements it prescribes are applicable to any Kubernetes distribution. This standard places emphasis on data protection and privacy, and recommends a security posture that spans the entire container lifecycle.
The evolving nature of cloud security
Cloud-based applications are susceptible to infiltration. Application developers and platform operators have no choice but to stay alert and remain focused on keeping their software secure. So much so that securing cloud native application stacks has become a discipline of its own: DevSecOps.
Kubernetes security is built on a model of securing in layers – specifically: cloud infrastructure, clusters, containers, and code. The underlying approach is defense in depth, which complements this layered model and ultimately provides redundancy against exploits.