Rethinking AppSec: How DevOps, containers, and serverless are changing the rules
Application security is changing fast. In this Help Net Security interview, Loris Gutic, Global CISO at Bright, talks about what it takes to keep up. Gutic explains how DevOps, containers, and serverless tools are shaping security, and shares views on the biggest risks, important controls, and why AI must be used carefully.
How has your approach to application security evolved with the rise of DevOps, containers, and serverless architectures?
The tech industry has come a long way from the legacy approach to application security, where security was more of a checkpoint, a box to tick at the end of the development cycle, often superficially. Today, application security is widely recognized as something intrinsically tied to the development lifecycle, from the beginning through to runtime itself.
Thankfully, due to innovation in this particular domain, we have moved away from the notion that static security gates can provide sufficient protection for very dynamic systems. Instead, automation, continuous validation, and context-aware testing with actionable recommendations, and even automatic remediation, are ingrained in the process.
Application security and developers have not always been on friendly terms, but practice shows that innovative security solutions are bridging the gaps, bringing developers and security closer together in a seamless fashion, with security no longer being a hurdle in developers’ daily work. Quite the contrary – security is nested in CI/CD pipelines, it’s accessible, non-obstructive, and it’s gone beyond scanning for waves and waves of false-positive vulnerabilities. It’s become, and is poised to remain, about empowering developers to fix issues early, in context, and without affecting delivery velocity.
As for containers, they extend the attack surface to the entire ecosystem – misconfigured images, exposed registries, insecure base layers all represent risks we need to be mindful of and make certain are covered with appropriate security measures.
With serverless, you’re no longer dealing with long-lived servers or predictable runtime behavior. Instead, you’re working with short-lived functions triggered by events, often running in unpredictable sequences. This makes it much harder to apply traditional security tools that expect stable infrastructure. What’s needed instead is a dynamic approach, one that understands how these functions behave in real time, within the context of the application’s actual workflows.
Simply put, the application is no longer just about the code. It’s an intricate system of interconnected parts, and it’s insufficient to address only the code; the entire environment needs to be taken into account: the way it runs and the way it’s deployed. That’s where business logic testing, often overlooked, can contribute significantly to the overall effort of maintaining a secure application development process.
What specific security controls do you consider non-negotiable for application environments, regardless of stack or infrastructure?
Well, naturally, there are baselines irrespective of the architecture. Without placing a particular emphasis on the order of importance, we could examine the following:
1. Authentication and authorization hygiene need to be established, respected, implemented, and monitored. Every component, every user, every service should operate on the least-privilege principle, with strong secrets management in place and with strict elimination of hard-coded credentials from codebases (see the first sketch after this list). The latter in particular mirrors the Miranda warning – anything you say (or in this case, hard-code) can and will be used against you in the course of a merciless cyberattack.
2. Input validation and output encoding are also permanent considerations. Regardless of the stack, SQLi, XSS, and related issues are ever-present, which is why every application should ensure data sanitization is embedded as close to the source as possible (see the second sketch after this list).
3. Next, CI/CD pipelines inherently carry a risk of compromise that needs to be prevented. Therefore, signing artifacts, controlling build environments, and maintaining a strict change control process are indispensable.
4. Runtime observability is also a critical area that requires attention. Whether it concerns API traffic, function-level execution traces, or anomaly detection, we must ensure the ability to detect out-of-the-ordinary behavior in production environments. Monitoring and analyzing logs and findings can prove very tiresome when paired with an excess of false-positive alerts, and thus end up insufficiently regarded; that’s where proper configuration can reduce alert fatigue and maintain vigilance at the appropriate level.
5. Last, but most certainly not least, automated testing and validation in pre-production and production must go beyond the superficial, to include dynamic, logic, and behavioral testing under realistic conditions. The risk of being limited to static and superficial testing is quite significant, which is why it’s of grave importance to implement automated testing mechanisms that are, at the same time, efficient, actionable, informative, and developer-friendly.
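As a minimal illustration of the secrets-management point in item 1, the following Python sketch pulls a credential from the environment (populated by a secrets manager at deploy time) rather than hard-coding it. The variable names and connection details are hypothetical, not drawn from the interview.

```python
import os

# Hypothetical names: ORDERS_DB_PASSWORD is assumed to be injected by the
# platform's secrets manager at deploy time, never committed to the repo.
DB_PASSWORD = os.environ["ORDERS_DB_PASSWORD"]


def connection_string() -> str:
    # The credential lives only in the secret store; rotating it is an
    # operational change, not a code change that lands in version control.
    return f"postgresql://orders_app:{DB_PASSWORD}@db.internal:5432/orders"
```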
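And for the input validation and output encoding point in item 2, a minimal sketch using only the Python standard library: parameterized queries keep user input out of the SQL string, and HTML escaping at render time blunts reflected or stored XSS. The table and function names are illustrative assumptions.

```python
import sqlite3
from html import escape


def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the input is bound as data, never concatenated
    # into the SQL text, which closes off classic SQL injection.
    cur = conn.execute(
        "SELECT id, display_name FROM users WHERE username = ?", (username,)
    )
    return cur.fetchone()


def render_greeting(display_name: str) -> str:
    # Output encoding at the point of rendering neutralizes XSS payloads
    # that may have slipped into stored data.
    return f"<p>Welcome back, {escape(display_name)}!</p>"
```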
What are the most critical threats you’re seeing in application environments? Are they more supply chain-oriented, identity-focused, or something else?
Our industry has gone beyond the threats of isolated bugs, vulnerabilities, or misconfigurations. The issues we’re seeing are systemic. Yes, the supply chain has become quite the dominant vector, whether through compromised third-party packages, malicious dependencies, or tainted CI/CD environments; attackers are targeting the trust model itself, and with no small amount of success.
Another considerable battleground is identity. With the reliance on distributed microservices, each component acts as both client and server, so misconfigured identity providers or weak token validation logic make room for lateral movement and exponentially more attack opportunities. Without naming names, there are plenty of cases illustrating how breaches can occur through token forgery or authorization header manipulation.
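To make the token validation point concrete, here is a minimal sketch using the PyJWT library; the audience, issuer, and key handling are assumptions for illustration, not a description of any particular breached system.

```python
import jwt  # PyJWT


def validate_service_token(token: str, public_key: str) -> dict:
    # Verify signature, algorithm, audience, issuer, and expiry explicitly.
    # Skipping the audience/issuer checks, or trusting whatever algorithm the
    # token header names, is the kind of weak validation that enables forgery
    # and lateral movement between services.
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],                   # pin the algorithm
        audience="orders-service",              # hypothetical audience
        issuer="https://idp.example.internal",  # hypothetical issuer
        options={"require": ["exp", "aud", "iss"]},
    )
```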
Additional headaches are exposed APIs and shadow services. Developers create new endpoints, and due to the fast pace of the process, those endpoints can easily escape scrutiny, which further emphasizes the importance of continuous discovery and dynamic testing that will “catch” them and ensure they’re covered by the secure development process.
When building a security strategy for application environments, what guiding frameworks or models do you prioritize?
We prioritize blending NIST 800-53, OWASP ASVS, and MITRE ATT&CK for enterprise-level threat modeling. For secure SDLC integration, we draw heavily from OWASP SAMM to evaluate maturity across development, testing, and deployment practices.
That said, it’s important to abandon the legacy perspective of treating security as a mere checkpoint at the end of the process. Thankfully, that approach is already being largely abandoned, so the focus is on shifting left and doing it smartly. Shifting left by itself is not sufficient – it’s about having the right controls at the right depth and ensuring they provide actionable, contextual feedback.
Security never ends. It’s never completed. There is no point in creating any final milestones – there is no finality. There is only monitoring, watchfulness, vigilance, and adapting to the circumstances. And, often overlooked – resilience. Issues will arise, no matter the framework. One of the questions that companies need to ask themselves is, “Is our environment resilient enough to weather the issues?”
Are you leveraging AI or ML tools to secure application environments? If so, in what capacity?
Yes, but not recklessly or blindly, and we do it with strict guardrails. There’s no question that AI and ML have utility in contributing to security operations, but they are not, and under no circumstances should they be, autonomous decision-makers. AI should enhance precision and efficiency, not introduce opacity or risk. That’s why we’ve built (and will soon be marketing!) a proprietary approach to AI-enhanced development workflows that are fully embedded in our broader security governance.
Specifically, we use AI to assist in three main areas:
- Behavioral baselining and anomaly detection – AI models help identify deviations in application behavior across runtime environments, which can flag abuse patterns like credential stuffing, broken object-level authorization, or logic misuses that are typically hard to codify in static rules (a simplified illustration follows after this list).
- Contextual risk scoring – Our internal tools leverage AI to parse commit messages, changelogs, and even test coverage deltas to prioritize which changes may pose higher risk. This feeds directly into how and where we apply targeted security validation.
- Developer efficiency augmentation – We offer in-context remediation guidance that draws on historical patterns from previous vulnerabilities fixed within our environments. This is not just some generic “copilot” tooling; it’s context-aware and aligned with our security standards.
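For the behavioral baselining bullet above, here is a deliberately simplified Python sketch of the underlying idea: compare a current measurement against a recent baseline and flag sharp deviations. It is an illustrative stand-in, not the production model; the failed-login example and the threshold value are assumptions.

```python
from statistics import mean, stdev


def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag a sample that deviates sharply from its recent baseline (z-score test)."""
    if len(history) < 10:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold


# Example: a sudden burst of failed logins against one account stands out
# against a quiet baseline, the shape of a credential-stuffing attempt.
baseline = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3]   # failed logins per minute
print(is_anomalous(baseline, 180.0))        # True
```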
However, we maintain strict control and visibility over any AI-driven or AI-assisted activity. Every output – whether it’s a suggested fix, a risk score, or an anomaly alert – passes through defined review and validation steps. No AI-generated result is acted upon automatically or without traceability.
We also apply rigorous internal gating policies for all AI-enhanced development efforts. This includes code provenance checks, reproducibility requirements, and human-in-the-loop controls before anything reaches production. This approach gives us the best of both worlds: the speed and breadth of AI, and the assurance of human oversight grounded in real-world attack scenarios. It should never be about blindly automating trust – it’s about engineering it in a very intentional, very articulate manner, while keeping in mind AI limitations, as well as the necessity of human supervision.