DevOps and security: How to make disjointed security and DevOps teams work effectively

As organizations build their “software factories,” leveraging the latest DevOps organizational models and CI/CD techniques to get applications out quickly, they still find that contention comes from within. DevOps teams, a combination of application developers and business operations resources, are most often found within the IT organization, though this varies from company to company.

Business owners “contract for these resources” in order to get a working application that meets their business requirements. Modern tool sets and application development techniques speed development and make it possible to iterate rapidly, so applications can be brought into test and production quickly and improved with each release.

The problem with the DevOps methodology has been that InfoSec is typically not part of the DevOps team, exposing the risk that this critical function is left out of application development entirely. This happened in the ICS industry, where several industrial control applications were deployed without being properly secured. In just one example, a large European industrial control systems vendor was the victim of a zero-day malware attack that opened the door to remote-access Trojans, one manifestation of which shut down a Middle Eastern power plant.

At the root of the problem was that the security team was not aware that new applications were being deployed, and vulnerabilities were found the hard way: by customers out in the field.

In reaction to this, the industry is abuzz with the need for DevSecOps, in which security teams are brought into the fold and made part of the ops process. On the surface, this sounds good, yet issues arise because of the traditional difference in focus among the constituencies of these mash-up teams.

It often comes down to how these team members are wired. DevOps teams need to be creative (right brain) to develop applications that can sustain the business. Security, on the other hand, tends to be analytical and driven by processes and procedures (left brain). The two groups have quite different business objectives and approaches, and this polarity makes it difficult to work synergistically.

The mission defines the role and the personality. DevOps designs and builds applications to meet business requirements; the team embraces risk for the benefit brought by new functionality. Conversely, security pros have the role of securing the business and seek to minimize or eliminate risk. This is the fundamental issue that puts DevOps and security at odds.

The challenge, as Forrester points out in its “Cloud Workload Security Solutions Report,” is that the security team still needs to test and verify all applications in a more traditional gated process. For instance, each quick iteration may have to be checked for vulnerabilities and policy compliance before it can be run against production data. This slows the process down, especially if issues are found. This is where the real organizational stress test comes: do the business owners setting the application requirements properly emphasize security and risk minimization as a requirement, or just getting the application out? If they don't, bad things can happen that cost a lot more to fix after the fact, as the example above shows.
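To make the gate idea concrete, here is a minimal sketch, in Python, of the kind of check that blocks a build from touching production data until its vulnerability findings and policy checks clear a threshold. The finding format, policy names and severity threshold are illustrative assumptions, not any particular vendor's output.

```python
# Hypothetical promotion gate: block a build from touching production data
# until its vulnerability findings and policy checks pass. The data
# structures here are illustrative, not tied to any specific scanner.
from dataclasses import dataclass

@dataclass
class Finding:
    identifier: str   # e.g. a CVE or rule ID
    severity: str     # "low", "medium", "high", "critical"

def gate_passes(findings: list[Finding],
                policy_checks: dict[str, bool],
                max_severity: str = "medium") -> bool:
    """Return True only if no finding exceeds the allowed severity
    and every required policy check has passed."""
    order = ["low", "medium", "high", "critical"]
    allowed = order.index(max_severity)
    too_severe = [f for f in findings if order.index(f.severity) > allowed]
    failed_policies = [name for name, ok in policy_checks.items() if not ok]
    return not too_severe and not failed_policies

if __name__ == "__main__":
    findings = [Finding("CVE-XXXX-0001", "high")]
    checks = {"data-classification-reviewed": True,
              "segmentation-rules-defined": False}
    print("Promote to production data:", gate_passes(findings, checks))
```

A gate like this is exactly where the slowdown (and the organizational stress) shows up when findings come back.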

What’s the answer?

How do we get an effective and efficient software factory running, one that turns out applications on time and on budget while also minimizing security risk?

First, some proper perspective: truly solving this likely requires more than an intervention by human resources and a “kumbaya” session to build an effective, secure DevOps “machine.” Infusing security into DevOps processes presents significant challenges. For instance, security pros must keep pace with the fast-moving DevOps approach when setting up the security configuration of physical, virtual and container-based systems, no matter where they are spawned, before integration work can safely take place. They must then apply the appropriate security policies for separating and segmenting each of these systems so that only the proper connectivity takes place. This keeps new, potentially vulnerable applications from punching holes in the security fabric of the network as they are integrated for testing under load, where they could unwittingly provide inappropriate access to production data.

Once proper processes and policies are defined, there's still the pressure to get the work done quickly and as effectively as possible. This argues for automating these security processes so that they can be performed quickly without being subject to the possibility of human error.

Yet where and how this gets done is often a challenge. Do we try to do this work within the application? Do we do some of it independently in the network? Or is it some combination? And how do these processes become automated to set the proper protections?

One popular answer is to shift left and have developers take on building security protections into the applications they produce. Gartner referenced the need for secure DevOps in sessions at its “Security and Risk Management Summit”; while the specifics were scarce, the implication was that to better secure their code, developers need to be given the proper tools. Right now this is being answered with automated or triggered vulnerability scans, but the need is much more in-depth than that. That's just one step out of four.

Four steps to proper application security throughout its life span

1. Verify the application composition comes from trusted sources.
2. Secure the code from vulnerabilities – as built, as deployed and throughout its life.
3. Govern what connects to the application and what it can connect to, especially in its development phase.
4. Secure the application and the data it produces, as well as the data it accesses.

Verifying the composition of an application comes down to controlling what source code libraries can be used when building. For example, adding nginx to the application is great, as its function set is already well proven in the industry and it's as simple as wiring it in with the other functions that make up a working application. However, you want to be very sure that it's a sanctioned version, not the latest unverified version found out on GitHub.
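As a simple illustration, a build step can compare the components a build pulls in against an allowlist of sanctioned versions. The Python sketch below assumes a made-up manifest format and allowlist; the point is simply that anything unsanctioned gets flagged before it ships.

```python
# Illustrative composition check: compare the components a build declares
# against an allowlist of sanctioned versions. The manifest format and the
# allowlist contents are hypothetical examples.
APPROVED_COMPONENTS = {
    "nginx": {"1.24.0"},      # only the sanctioned build, not the latest from GitHub
    "openssl": {"3.0.13"},
}

def check_composition(manifest: dict[str, str]) -> list[str]:
    """Return a list of violations for components that are not sanctioned."""
    violations = []
    for name, version in manifest.items():
        approved = APPROVED_COMPONENTS.get(name)
        if approved is None:
            violations.append(f"{name} is not an approved component")
        elif version not in approved:
            violations.append(f"{name} {version} is not a sanctioned version")
    return violations

if __name__ == "__main__":
    build_manifest = {"nginx": "1.25.1", "openssl": "3.0.13"}
    for problem in check_composition(build_manifest):
        print("BLOCK:", problem)
```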

Securing code from vulnerabilities ranges from a set of processes performed by the security team and/or developers to tying in software routines that run within the application and perform these checks on a recurring, automated basis.
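For the automated, recurring flavor, a scheduled job can wrap whatever scanner the organization has standardized on. The sketch below is a Python outline with a placeholder command (vuln-scanner is hypothetical, not a real tool); substitute the actual tooling and wire failures into your alerting or ticketing system.

```python
# Illustrative recurring vulnerability check. The scanner command is a
# placeholder; substitute the tool your organization actually uses.
import subprocess
import time

SCAN_COMMAND = ["vuln-scanner", "--target", "my-application"]  # hypothetical CLI
SCAN_INTERVAL_SECONDS = 24 * 60 * 60                           # once a day

def run_scan() -> bool:
    """Run the scanner and report whether it completed without findings."""
    result = subprocess.run(SCAN_COMMAND, capture_output=True, text=True)
    if result.returncode != 0:
        print("Scan reported findings or failed:")
        print(result.stdout or result.stderr)
        return False
    print("Scan clean.")
    return True

if __name__ == "__main__":
    while True:
        run_scan()            # in a real setup, alert or open a ticket on failure
        time.sleep(SCAN_INTERVAL_SECONDS)
```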

Governing application access and connections is about applying policy. First, it covers what types and levels of authentication are allowed; ideally this is handled within the application or in conjunction with third-party directories and possibly multi-factor authentication applications, so that only verified humans gain access to the application. Governing application-level connections determines what in the network should be allowed to send data to the application and/or its host in the first place. Often known as micro-segmentation, this can be done at the underlying host level or out in the network.
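To make the micro-segmentation idea concrete, here is a small Python sketch that checks whether a given source is allowed to reach an application on a given port. The flow tuples are invented examples, and in practice this policy is enforced in host firewalls, security groups or the network fabric rather than in application code.

```python
# Illustrative micro-segmentation policy: only listed flows may reach the
# application. The flow tuples are made-up examples; real enforcement lives
# in host firewalls, security groups or network fabric rules.
from ipaddress import ip_address, ip_network

# (allowed source network, destination service, destination port)
ALLOWED_FLOWS = [
    (ip_network("10.10.1.0/24"), "orders-app", 8443),   # app tier -> orders service
    (ip_network("10.10.2.0/24"), "orders-db", 5432),    # orders service -> database
]

def flow_permitted(src_ip: str, dest_service: str, dest_port: int) -> bool:
    """Return True if the source is allowed to reach the service on that port."""
    src = ip_address(src_ip)
    return any(
        src in net and dest_service == svc and dest_port == port
        for net, svc, port in ALLOWED_FLOWS
    )

if __name__ == "__main__":
    print(flow_permitted("10.10.1.15", "orders-app", 8443))  # True
    print(flow_permitted("192.168.0.7", "orders-db", 5432))  # False: not segmented in
```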

Securing the application, including its output, protects it from threats that have gotten into the network, including those that have infiltrated the underlying systems. It also protects the data that is sent to storage and backup systems. Another dimension is properly encrypting data output in motion, as well as at rest, according to specified policies. Likewise, the application itself may need access to protected data, and should only access that data under the proper conditions specified by a policy.
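As one concrete example on the at-rest side, an application can encrypt its output before handing it to storage. The sketch below uses Python's third-party cryptography package as one reasonable option; in a real deployment the key would come from a key-management service and policy would dictate which data gets this treatment.

```python
# Illustrative encryption of application output before it goes to storage.
# Uses the third-party "cryptography" package; in practice the key would come
# from a key-management service, not be generated next to the data.
from cryptography.fernet import Fernet

def encrypt_output(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt application output so it is protected at rest."""
    return Fernet(key).encrypt(plaintext)

def decrypt_output(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt previously stored output for authorized use."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()          # placeholder for a KMS-managed key
    record = b'{"order_id": 42, "card_last4": "1234"}'
    stored = encrypt_output(record, key)
    print(stored != record)                        # True: only ciphertext reaches storage
    print(decrypt_output(stored, key) == record)   # True: authorized reads still work
```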

To be clear, if protected data needs to be accessed, the application needs proper access rights and proper crypto functions, either built in or provided to it by the IT or network infrastructure, again depending on how and where the application is running and where it's accessing data from. This is critical throughout the application's life span, yet these considerations are often absent during development simply because they are inconvenient. If steps 1 through 3 have not been fully enabled and performed, this is where the application may be at its most vulnerable.

Typically, all four steps are required and applied at least to some extent; the degree should be determined by the level of risk the corporation sets for a given application and its use case. That sounds like a lot for developers to figure out, so let's make it easier on them.

Factory implementation

Now that we have determined the proper protections and policies that need to be enforced, we can turn back to our “software factory” with the final set of security requirements to be fulfilled. As with any proper factory, automation is critical to executing these steps in the most effective and efficient manner. Ironically, this is also the answer to creating harmony between application developers and InfoSec teams. For instance, in the case of protecting the application and/or the data it produces, if developers build in configurable levels of protection, including libraries that can run crypto, or at the very least a means to connect to a crypto service on the host or in a local hardware offload function, the work of the development team is done. They have provided the proper security functions and the instrumentation to control them.
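One way to picture that hand-off is a small, configurable protection hook: developers ship the interface and a couple of backends, and the configuration that picks one is left to InfoSec. The Python sketch below is purely illustrative, and every name in it is hypothetical.

```python
# Illustrative "instrumentation" handed from developers to InfoSec: the app
# ships pluggable protection backends, and a config chosen later by the
# security team decides which one is active. All names here are hypothetical.
from typing import Protocol

class DataProtector(Protocol):
    def protect(self, payload: bytes) -> bytes: ...

class NoOpProtector:
    """Development default: pass data through unchanged."""
    def protect(self, payload: bytes) -> bytes:
        return payload

class LibraryCryptoProtector:
    """Placeholder for an in-process crypto library or host crypto service."""
    def protect(self, payload: bytes) -> bytes:
        return b"ENCRYPTED:" + payload[::-1]   # stand-in only, not real crypto

def build_protector(config: dict) -> DataProtector:
    """InfoSec-controlled configuration selects the protection level."""
    backends = {"none": NoOpProtector, "library": LibraryCryptoProtector}
    return backends[config.get("protection", "none")]()

if __name__ == "__main__":
    protector = build_protector({"protection": "library"})  # set by InfoSec, not devs
    print(protector.protect(b"sensitive output"))
```

The value is in the split: the developers' work ends at providing the hook, and the security team decides, per use case, which protection actually runs.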

With the groundwork laid, the InfoSec team can later come in to provision and properly configure these functions at any time, depending on the use case. The same goes for controlling connections to the application. Mechanisms that control what data these applications can access, and under what conditions, can be added either to the applications themselves or to the devices and networks on which they live.

Aggravation and angst in our software factory can be relieved: developers have a well-defined set of work and can add what amounts to a few additional libraries to their applications, while InfoSec teams can independently assess risk and properly configure the applications, their access, their connections and the services they require throughout the application's lifespan, in all its use cases.

So we see it's not all about layering security processes onto our factory's operations and then attempting to do the work as quickly and flawlessly as possible. It's about making it easier for developers to add security functions to applications as they are built, and allowing security teams to come in as the application goes live and set the proper configurations according to the organization's policies.
