How healthcare CISOs can automate cloud security controls

Cloud environments provide many benefits, chief among them scalability and resilience. Those qualities exist because of automation and how straightforward it is to leverage automation to enhance a cloud environment.


That ease has drawbacks (anyone who has received a surprise bill from their cloud provider knows them acutely), but automation can also be leveraged for great economies of scale. One place where automation is a huge boon is securing the cloud environment itself.

This article will outline some of the ways CISOs in the healthcare sector can automate cloud security controls and integrate those controls into standard deployment cycles. The underlying theme for all this advice: keep things standard and uniform.

Pick a framework

Consistency is key because it provides predictability; unpredictable changes can disrupt your production environment.

There are many cloud security frameworks and best practices. The Cloud Security Alliance (CSA), the National Institute of Standards and Technology (NIST), and HITRUST are just a few of the organizations that offer them.

The best practices and recommendations are largely the same across them, but how you achieve them may be more or less prescriptive depending on the framework.

The important consideration here is to pick a framework and stick to it. Halfheartedly implementing a framework and then switching to another can add unnecessary complexity and internal confusion and conflict.

Treat infrastructure and policy as code

Treating assets as code makes for uniform and auditable environments. Security settings can be configured and validated based on framework best practices. Exceptions can be noted and documented (if desirable) or remediated (if undesirable).

In the case of treating infrastructure as code, native and third-party cloud management platforms enable users to templatize security configuration for infrastructure and store those templates for easy use every time a new environment needs to be stood up.
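Here is a minimal sketch of that kind of reuse, assuming an AWS environment managed with boto3. The baseline values below are illustrative only; a real template would encode the controls from your chosen framework.

```python
# Sketch: deploy a security group from a stored, framework-aligned baseline.
# Assumes AWS credentials are already configured for boto3.
import boto3

BASELINE_INGRESS = [  # hypothetical "approved ports" baseline
    {
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "internal HTTPS only"}],
    },
]

def deploy_baseline_security_group(vpc_id: str, name: str) -> str:
    """Create a security group from the stored baseline template and return its ID."""
    ec2 = boto3.client("ec2")
    group = ec2.create_security_group(
        GroupName=name,
        Description="Framework-aligned baseline security group",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=BASELINE_INGRESS,
    )
    return group["GroupId"]
```

Because the template lives in version control rather than in someone's head, every new environment gets the same controls, and any drift from the template is visible in a diff.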

This becomes even more straightforward in containerized environments, where container settings can be configured in a centralized management platform, allowing for uniform, secure deployment every time a container is created.

Policy as code operates in much the same way. Templatizing and reusing policies for network configurations, access provisioning, and authorization means networking and user accounts are deployed consistently and predictably.
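A simple policy-as-code check can be as small as the sketch below, assuming configurations are stored as plain dictionaries in a version-controlled repository. The field names ("public_ingress", "encryption_at_rest", "mfa_required") are illustrative only.

```python
# Sketch: evaluate a proposed configuration against a stored policy baseline.
POLICY = {
    "public_ingress": False,       # no internet-facing ingress by default
    "encryption_at_rest": True,    # required for PHI-bearing assets
    "mfa_required": True,
}

def violations(config: dict) -> list[str]:
    """Return the policy keys a proposed configuration fails to satisfy."""
    return [key for key, required in POLICY.items()
            if config.get(key) != required]

proposed = {"public_ingress": True, "encryption_at_rest": True, "mfa_required": True}
print(violations(proposed))  # ['public_ingress'] -> block or flag the deployment
```

Run in a deployment pipeline, a check like this turns policy from a document into a gate.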

What’s more: creating centralized repositories of management settings for infrastructure, containers, and policy allows for auditing those repositories. Not only are configurations auditable and consistent, but access to those repositories can also be managed consistently and audited. If a change is made, whether intentional or not, the security team can see and validate it. If that change is unexpected, someone can take action.

Automate asset tagging

Asset tagging is a critical feature for preserving information governance and identifying the sensitivity—and therefore risk—of a cloud asset. Security teams can collect information or take action based on tags. Tagging can be automated either through an infrastructure-as-code approach or by using cloud-provided compliance policies.
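As a minimal sketch, again assuming AWS and boto3, a standard tag set can be applied at creation time. The schema below (DataClass, Owner, Environment) is hypothetical; align it with your information-governance policy.

```python
# Sketch: apply a standard governance tag set to newly created EC2 instances.
import boto3

STANDARD_TAGS = [
    {"Key": "DataClass", "Value": "phi"},
    {"Key": "Owner", "Value": "clinical-apps"},
    {"Key": "Environment", "Value": "prod"},
]

def tag_new_instances(instance_ids: list[str]) -> None:
    """Tag instances as part of the same pipeline that creates them."""
    ec2 = boto3.client("ec2")
    ec2.create_tags(Resources=instance_ids, Tags=STANDARD_TAGS)
```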

If you’re layering Kubernetes, Docker, or some other containerization technology on your cloud platform infrastructure, you should employ labeling to tag those ephemeral assets with metadata consistent with your cloud tagging schema.
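For Kubernetes specifically, a sketch using the official kubernetes Python client might look like the following; the label keys simply mirror the hypothetical cloud tag schema above in lowercase form.

```python
# Sketch: patch a pod with governance labels that match the cloud tag schema.
from kubernetes import client, config

def label_pod(namespace: str, pod_name: str) -> None:
    """Apply data-classification labels to an existing pod."""
    config.load_kube_config()  # or load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    body = {"metadata": {"labels": {"data-class": "phi", "owner": "clinical-apps"}}}
    v1.patch_namespaced_pod(name=pod_name, namespace=namespace, body=body)
```

In practice you would bake these labels into your deployment manifests rather than patching after the fact, so every ephemeral container carries them from the moment it starts.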

Automating tagging can pay dividends down the road. For example, if you know certain assets contain PHI, prescribing data, or other sensitive information, you can leverage automated tags to flag those assets at creation and avoid rogue data sets. Note that you will always need some manual intervention, but automated asset tagging will reduce that substantially.

Automate security infrastructure deployment

You may want to take different actions depending on what tags are assigned to an asset. That might include collecting more or less telemetry to feed into your SIEM or other centralized monitoring infrastructure. If the asset is a virtualized server, that might include deploying detection and response tooling and other supporting agents, if appropriate.
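Conceptually, the deployment decision becomes a pure function of the asset's tags. The tag values and action names in this sketch are illustrative; in a real pipeline each string would map to a deployment step.

```python
# Sketch: decide which security tooling to attach based on an asset's tags.
def security_actions(tags: dict) -> list[str]:
    """Return the security deployment steps appropriate for a new asset."""
    actions = ["forward-baseline-logs"]           # everything feeds the SIEM
    if tags.get("DataClass") == "phi":
        actions.append("forward-full-telemetry")  # richer logging for PHI assets
    if tags.get("AssetType") == "virtual-server":
        actions.append("install-edr-agent")       # EDR only where an OS exists
    return actions

print(security_actions({"DataClass": "phi", "AssetType": "virtual-server"}))
```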

There are numerous considerations for how to configure and deploy security infrastructure and what data to collect. One major consideration is cost, especially if your centralized aggregation infrastructure sits outside your cloud environment and you pay egress charges on that log data. Those costs can add up fast. On the upside, performance shouldn’t be heavily impacted.

Vulnerability scanning

It should go without saying that vulnerability scanning needs to happen and it needs to happen for all assets. If you don’t automate the inclusion of new assets to vulnerability management tools, then you may be missing large swaths of your environment. That leaves you exposed to potential attack and compromise.
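Enrollment can ride the same creation pipeline as tagging. In this sketch, `scanner_api.add_target` is hypothetical; most commercial scanners expose a comparable REST endpoint or SDK method for registering targets.

```python
# Sketch: register every newly created asset with the vulnerability scanner.
def enroll_in_scanning(asset_id: str, ip_address: str, tags: dict, scanner_api) -> None:
    """Add the asset to a scan group chosen by its data classification."""
    scanner_api.add_target(  # hypothetical scanner SDK call
        target=ip_address,
        name=asset_id,
        group="continuous" if tags.get("DataClass") == "phi" else "weekly",
    )
```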

As with infrastructure deployment, cost is a major factor here. Because many vulnerability scanners price based on the number of IP addresses or containers scanned, spinning up many new servers or containers at once to handle high-volume traffic may result in unforeseen expenses, especially if your vendor is unwilling to work with you on temporary scaling.

Treating infrastructure as code is a critical way to avoid those costs. If some of your templates don’t need continuous scanning, don’t deploy it for them. If they do, you can monitor those specific pieces of infrastructure and model cost on an ongoing basis.

Security orchestration, automation, and response

SOAR is, in effect, security remediation as code. It differs in some material respects, but SOAR provides a repository of templatized remediation actions: when an event happens, predictable actions can be applied to it, even on a case-by-case basis.

If your cloud environment is built on a foundation of consistent deployment, then SOAR should be incredibly predictable and highly tunable. Consistency in the cloud environment means more granularity for a security baseline. The more granular the baseline, the more obvious the deviations from it, and the more actionable (and automatable) the security response.

There may be some production systems for which manual intervention is needed before acting. Electronic medical record and imaging servers come to mind as candidates for enhanced scrutiny: they are also the systems most noticeably affected by adverse changes to uptime, which is another reason to require human review. If tagged appropriately, it should be straightforward to identify and exclude those assets from playbook automation.
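A minimal sketch of that kind of playbook gate follows; the tag keys and action names are illustrative, and a real playbook would invoke your SOAR platform's actions rather than return strings.

```python
# Sketch: route an alert to an automated action or to a human, based on tags.
MANUAL_REVIEW_TAGS = {"emr", "imaging"}  # systems that warrant human sign-off

def triage(event: dict) -> str:
    """Choose a response action for an event, deferring to humans where tagged."""
    asset_tags = set(event.get("asset_tags", []))
    if asset_tags & MANUAL_REVIEW_TAGS:
        return "open-ticket-for-manual-review"   # never auto-act on EMR/imaging
    if event.get("type") == "unexpected-config-change":
        return "revert-to-template"              # restore the known-good IaC state
    return "quarantine-and-notify"

print(triage({"type": "unexpected-config-change", "asset_tags": ["emr"]}))
```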

Conclusion

Automating cloud security controls can be straightforward if the effort is planned, strategic, and consistent. That planning and consistency will pay dividends in modeling risk, cost, and actionability. Failing to plan appropriately can lead to unnecessary increases in risk and cost, in some cases very significant ones.
