Blind spots and how to see them: Observability in a serverless environment

Companies embracing DevOps and cloud to fuel digital transformation are increasingly turning to serverless computing, also known as ‘functions-as-a-service’ (FaaS), to shift resource-intensive operational duties away from developers to cloud providers. According to the Cloud Native Computing Foundation, the use of serverless technology is surging, up 22 percent since December 2017, with 26 percent of organizations planning to deploy within the next 12 to 18 months to maximize operational efficiencies and enable application developers to focus on their core job functions – writing code.

Yet relinquishing infrastructure control to the provider creates a new set of risks for both development and security teams, including several major blind spots that traditional security toolsets are not able to capture:

Ownership confusion

Many organizations run serverless-based applications in conjunction with other types of workloads, like containers or virtual machines. Each added element introduces a new layer of complexity to the environment. Additionally, since serverless functions are constantly processing data flowing from numerous sources – APIs, cloud storage and message queues, to name just a few – organizations can quickly lose track of who is responsible for securing which of these many moving parts. And as input sources and data streams multiply and the environment becomes more complicated by the day, so too does the overall attack surface.

In a serverless environment, while the infrastructure attack surface is reduced, the application attack surface remains as vulnerable as applications deployed on your own VMs or containers. Yet because the technology is still fairly new, many development and security teams do not fully understand the unique security risks serverless architectures present – let alone how to adequately control and prevent them.

Over-privileged functions and users abound

In serverless environments, each application is composed of many discrete functions. Each of these functions requires a certain level of access to do its job. All too often, however, functions are assigned full permissions so as not to slow down workflow. This introduces significant security risk, as unauthenticated internal users and outside attackers may be able to compromise functions with elevated access, manipulate application flow and take unauthorized actions. Establishing function-level segmentation with strong identity and access management (IAM) policies is critical.
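The permission check described above can be automated. The sketch below is illustrative, not a real tool: it walks IAM-style policy documents (plain dicts in the standard policy JSON shape) and flags statements that allow wildcard actions or resources, the classic mark of an over-privileged function role. The function and policy names are hypothetical.

```python
# Illustrative sketch: flag over-privileged statements in IAM-style
# policy documents. Policies are plain dicts in the standard IAM JSON
# shape; overprivileged_statements() is a hypothetical helper, not a
# real library API.

def overprivileged_statements(policy):
    """Return Allow statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if stmt.get("Effect") == "Allow" and (
            any(a == "*" or a.endswith(":*") for a in actions)
            or "*" in resources
        ):
            flagged.append(stmt)
    return flagged

# A function that only reads one table should be granted exactly that:
scoped = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}
# ...rather than the all-too-common blanket grant:
broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

print(len(overprivileged_statements(scoped)))  # 0
print(len(overprivileged_statements(broad)))   # 1
```

A check like this can run in CI so an over-broad role never reaches deployment.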

If the serverless environment requires access to a virtual private cloud (VPC), it’s also important to enforce least privilege principles to ensure users have the minimal level of access necessary to perform their intended functions. A set-it-and-forget-it approach is sure to fail. Once these security policies are solidly in place, organizations must continuously monitor functions as they are deployed to quickly identify suspicious inbound or outbound traffic and other anomalies, protecting against advanced attacks that bypass traditional protection layers.
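One simple form of the continuous traffic monitoring described above is a per-function egress baseline. The sketch below is a minimal illustration, assuming a hard-coded baseline; in practice the allowed destinations would be learned from observed traffic (for example, VPC flow logs), and all names here are hypothetical.

```python
# Sketch of baseline-driven egress monitoring for deployed functions.
# The baseline dict and function names are illustrative placeholders;
# a real system would build baselines from observed network telemetry.

EGRESS_BASELINE = {
    "checkout-fn": {"payments.internal:443", "orders-db.internal:5432"},
}

def flag_egress(function_name, destination):
    """Return True when a function contacts a destination outside its baseline."""
    allowed = EGRESS_BASELINE.get(function_name, set())
    return destination not in allowed

print(flag_egress("checkout-fn", "payments.internal:443"))  # False
print(flag_egress("checkout-fn", "198.51.100.7:4444"))      # True
```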


Exposed secrets

Most applications require secrets – API keys, access credentials, tokens, passwords, etc. It’s a common (and dangerous) practice for developers to simply store these secrets and access keys in plain-text configuration files or in environment variables. This is low-hanging fruit for savvy attackers. To avoid these risks and stay in compliance, credentials used by function code should be held in memory and accessed through a secret store. If a function does require a long-lived secret, that secret should be stored encrypted. The cloud provider’s key management service can be leveraged to manage, maintain and retrieve these secrets automatically.

An incomplete picture

Since serverless is typically only part of an organization’s unique cloud strategy, security teams often struggle to maintain a full and accurate view of their security posture across their public and private cloud footprint – from serverless and containers to third-party services. That’s because each workload provider follows its own security frameworks, making it nearly impossible for organizations to manage and control each piece of the puzzle together. In such dynamic and disparate environments, organizations need a more practical, uniform and automated way to enforce and manage security policies and efficiently control various cloud-native services, infrastructure and environments.

The third-party problem

Serverless functions often rely on third-party services and software, such as APIs, open-source packages and libraries. Without an intelligent, automated way to discover, continuously scrutinize and control these third-party services, organizations open the door to potential vulnerabilities that can pave the way for exploitation and data loss.
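Automated dependency screening of the kind described above can be reduced to a lookup against an advisory feed. The sketch below uses a made-up advisory dict purely for illustration; a real pipeline would pull from a vulnerability database such as OSV or a vendor feed, and all package names here are fictitious.

```python
# Sketch of automated third-party dependency screening.
# KNOWN_VULNERABLE is fabricated example data, not a real advisory feed;
# in practice this would be populated from a vulnerability database.

KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},  # illustrative only
}

def vulnerable_dependencies(dependencies):
    """Return (name, version) pairs that match a known advisory."""
    return [
        (name, version)
        for name, version in dependencies.items()
        if version in KNOWN_VULNERABLE.get(name, set())
    ]

deps = {"examplelib": "1.0.1", "otherlib": "2.3.4"}
print(vulnerable_dependencies(deps))  # [('examplelib', '1.0.1')]
```

Run at build time, a check like this blocks a function from deploying with a dependency that carries a known advisory.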

Legacy and shared security tools have limits

Legacy security tools designed for data centers compound this serverless security and observability dilemma. Traditional firewall and endpoint protection tools, and even cloud security groups, lack the app-awareness, fine-grained controls and advanced anomaly detection mechanisms needed to detect and prevent advanced attacks. Further, cloud providers offer limited threat detection coverage since they are blind to network-based attacks such as DNS exfiltration, spoofing and lateral movement. As such, enterprises need the extra layer of network protection not currently made available by the leading providers such as AWS, Google and Azure.
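To make the DNS exfiltration example concrete, here is one common heuristic for spotting DNS tunnelling: flag queries whose leftmost label is unusually long or high-entropy, since exfiltrated data encoded into subdomains tends to look random. This is a simplified sketch with illustrative thresholds, not a tuned detector.

```python
# Sketch of a DNS exfiltration heuristic: encoded data smuggled into
# subdomain labels tends to be long and high-entropy. Thresholds below
# are illustrative, not tuned production values.
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    return -sum(
        (n / len(s)) * math.log2(n / len(s)) for n in counts.values()
    )

def suspicious_dns(query, max_label_len=40, entropy_threshold=4.0):
    """Flag queries with an oversized or high-entropy leftmost label."""
    label = query.split(".")[0]
    return len(label) > max_label_len or shannon_entropy(label) > entropy_threshold

print(suspicious_dns("www.example.com"))                         # False
print(suspicious_dns("a9f3k2qz8x7c1v5b4n6m0d.attacker.example"))  # True
```

A heuristic like this runs at the network layer, which is precisely the visibility the article argues provider-native tooling lacks.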

While it’s tempting to equate serverless with less security responsibility for your organization, the shared responsibility model still holds true. But this doesn’t mean that organizations must trade speed and agility for security. By following best practices for securing serverless environments and utilizing cloud-native tools that simplify and unify cloud operations protection, organizations can have it all as they continue their digital transformation journey with confidence.
