Containers are just processes: The illusion of namespace security
In the early days of commercial open source, major vendors cast doubt on its security, claiming transparency was a flaw. In fact, that openness fueled strong communities and faster security improvements, making OSS often more secure than proprietary code.
Today, a new kind of misinformation has emerged, the opposite of that FUD (fear, uncertainty, and doubt): it downplays real open source security risks that should raise concern.
The biggest security fallacy today is that Linux namespaces are security boundaries.
From single servers to shared kernels
How did we get here?
Security boundaries have been in major flux ever since the evolution from single servers to clusters of Linux machines sharing workloads. Kubernetes has become the de facto cloud operating model, and modern approaches to platform engineering are patterned around application instances sharing underlying infrastructure (aka, “multi-tenancy”).
But the security challenges at play today pre-date Kubernetes by decades.
In the 90s, when the internet was new, hosting websites was one of the first big use cases for distributed systems. CGI scripts – a standard interface for web servers to run external programs – quickly became the extreme use case for distributed systems security.
Running arbitrary code in shared hosting environments was immediately recognized as a security anti-pattern that exposed whole new classes of risk. That led hosting providers to invest in technologies like Linux-VServer and FreeBSD jails, novel approaches to reducing the access those CGI scripts had to host systems.
In a way, that’s where containers began. The industry wasn’t thinking about these technologies in terms of software delivery yet, but the underlying concept of using namespaces to “jail” processes inside chroots comes from this day-one security problem: web hosts running CGI scripts and attempting to create isolation boundaries.
With the launch of EC2 in 2006 and the first big milestones toward cloud-native computing, interest in containers and jailed workloads waned a bit: the market favored machine images, and virtual machines and appliance-style application delivery became standard fare for x86, three-tier architectures.
When Docker arrived, it disrupted the state of the art in application delivery, giving developer productivity a major boost and removing the friction of developers having to “ask IT” for compute resources by popularizing DevOps patterns.
But with the move to Docker as the software delivery method, the industry returned to the original multi-tenancy security problem the web hosts faced back in the 90s. The question of how to let applications and developers share resources without stepping on each other’s toes was no longer restricted to website hosts; it now applied to the modern, containerized cloud paradigm that effectively runs the entire internet and critical business systems today.
Namespaces were never meant to be security boundaries
With the rise of containerization, namespaces became the primary mechanism for providing process and resource isolation. Namespaces create the illusion of separation by limiting what a process can see or access, such as its view of the file system, networking stack, or user IDs.
This illusion, while effective for resource management and lightweight multi-tenancy, has often been misinterpreted as a security boundary, leading to a dangerous overestimation of how secure containerized environments truly are.
In reality, namespaces in Linux are simply mechanisms to partition kernel resources for processes. They do not enforce true security separation. All containers share the same underlying Linux kernel, and while namespaces can limit access to certain kernel resources, they do not eliminate the possibility of processes escaping their isolated context. For example, vulnerabilities in the kernel can be exploited by a process within a namespace to affect other containers or even the host itself.
Namespaces are a convenience abstraction for developers, not a fortress wall. Containers, being merely processes with different namespaces, inherit this weakness; assuming they are isolated by a security boundary is both technically incorrect and dangerously misleading.
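The claim that containers are just processes can be checked against the kernel’s own bookkeeping. The sketch below (Linux-only; it reads the standard `/proc/<pid>/ns` interface) lists the namespaces attached to the current process. A “containerized” process is simply one whose symlinks here point at different namespace inode numbers than the host’s; the kernel underneath is the same.

```python
# Sketch (Linux-only): inspect the namespaces attached to the current process.
# Each entry in /proc/self/ns is a symlink whose target encodes the namespace
# type and its inode number, e.g. "pid:[4026531836]". Two processes "in the
# same container" are just processes that share these inode numbers.
import os

ns_dir = "/proc/self/ns"
for entry in sorted(os.listdir(ns_dir)):
    target = os.readlink(os.path.join(ns_dir, entry))
    print(f"{entry:10s} -> {target}")
```

Run the same snippet inside a container and on the host: the entries differ only in inode number, which is the entire extent of the “isolation” a namespace provides.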
When all containers run on a single kernel, any vulnerability in that kernel becomes a single point of failure for the entire system. This is like giving multiple tenants keys to different rooms in the same house while every internal door can be forced open.
You can’t paper over the problem with virtualization
We’ve come full circle from the FUD era of early open source, where security skepticism was used to try to stifle adoption, to a present day where vendors are papering over security’s hardest problem.
Most container security strategies today build walls on top of an already-cracked foundation. Solving isolation at the namespace or syscall filtering layer is like patching drywall over a structural flaw. What’s needed is a fundamental rethinking. We need an approach that addresses the issue at the lowest layer of the stack.
It has become fashionable for security vendors to use the term “virtual” as a marketing catch-all suggesting security isolation, whether referring to virtual machines or so-called “virtual workloads” inside containers. But the semantics of virtualization cannot substitute for its architectural guarantees.
A virtual machine provides strong isolation because it abstracts the entire hardware stack, including the kernel. Containers, by contrast, are just processes running on the same host kernel – no matter how many namespace wrappers are layered on. Labeling something “virtual” doesn’t make it secure, especially when it rides atop a shared kernel.
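The shared-kernel point is directly observable. A minimal sketch (Unix/Linux, using only the standard `os.uname()` call): the kernel release a process sees comes from the one kernel it runs on, and no namespace changes it.

```python
# Minimal sketch (Unix/Linux): the kernel release reported to a process comes
# from the single shared kernel, not from any namespace. Run this inside a
# container and on its host: both print the same string, because namespaces
# partition kernel resources but never virtualize the kernel itself.
import os

print(os.uname().release)
```

By contrast, the same call inside a true virtual machine reports the guest’s own kernel, because the VM abstracts the entire hardware stack, kernel included.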
True isolation is the most important architecture battle you need to win for your organization’s multi-tenancy security future. If you’ve been among those sleepwalkers who believe namespace-based security isolation is real, it’s time to wake up.