The rapid move toward virtualization and cloud infrastructure is delivering vast benefits for many organizations. In fact, Gartner has estimated that by 2016, 80% of server workloads will be virtualized. The reasons are clear: better availability, an improved return on hardware investments, and stronger SLAs.
And while many companies continue their quest to convert their own datacenters into true self-service private or hybrid clouds, the growth of public cloud is also undeniable. For companies, the public cloud beckons with unprecedented agility and responsiveness. For users, the ease of spinning up an environment for a pilot project in a public cloud in a matter of minutes is compelling – especially when compared to month-long wait times many experience when requesting internal server resources from IT.
Yet as research firm Forrester pointed out, “customers initially adopted cloud services to raise business agility at an efficient cost, but increasingly seek to provide new functions for mobile users and modernize their applications portfolios. But concerns about security, integration, performance, and cost models remain.”
Why is cloud security different?
Virtualized infrastructure is the foundation of any cloud—public or private—and virtual workloads need different security. Traditional datacenters had natural air gaps, with a set of applications dedicated to each server, a defined administrator for each application, and a defined perimeter around the datacenter. A virtualized datacenter is different.
By nature, a virtual machine is just a set of files, which makes it very easy to copy, suspend, and re-instantiate it on any other piece of hardware. This dramatically increases the ease with which someone could either accidentally or maliciously cause application or datacenter downtime, or steal or expose sensitive or confidential data. Further, in a hybrid or public cloud model the definition of “perimeter” changes drastically. Applications and data are no longer physically segmented or contained.
Private and public clouds introduce new concerns around infrastructure security, application and data mobility, and availability and uptime.
The cool thing about virtualization is that you get better hardware utilization by “floating” applications on a hypervisor. The scary thing about virtualization is that it becomes possible to compromise the hypervisor, which can impact every application running above it. Also, those who manage the virtual infrastructure (or anyone who compromises their credentials) have far-reaching privileges, unless the right controls are in place.
Consider the case of Code Spaces, a technology company leveraging Amazon’s AWS Infrastructure as a Service cloud to host its applications. An attacker was able to hack into the Code Spaces management console in AWS and delete literally every virtual server, putting the company out of business.
For organizations that want to virtualize sensitive or mission-critical applications, there are technologies like Intel Trusted Execution Technology (TXT) that can establish a chain of trust all the way from the chipset up through the hypervisor, ensuring that applications launch only on a trusted platform.
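TXT’s internals are beyond the scope of this piece, but the underlying idea—a measured launch in which each boot stage is hashed into a tamper-evident chain before the next stage runs—can be sketched in a few lines of Python. This is a conceptual illustration only; the stage names and values are invented, not TXT’s actual measurements:

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """Extend a measurement register with the hash of the next boot component,
    following the TPM-style 'extend' pattern: new = H(old || H(component))."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# Measure each stage of the (hypothetical) boot chain, in order.
register = b"\x00" * 32  # measurement registers start zeroed at power-on
for stage in [b"firmware-v1.2", b"bootloader-v3", b"hypervisor-v6.0"]:
    register = extend(register, stage)

known_good = register  # recorded when the platform was provisioned

# If any stage is tampered with, every subsequent measurement changes,
# so the final value no longer matches and the launch is refused.
tampered = b"\x00" * 32
for stage in [b"firmware-v1.2", b"evil-bootloader", b"hypervisor-v6.0"]:
    tampered = extend(tampered, stage)

print(register == known_good)   # trusted platform: measurements match
print(tampered == known_good)   # compromised stage: mismatch, launch refused
```

Because each measurement folds in every prior one, an attacker cannot swap out a boot component without the final value diverging from the known-good baseline.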
The second concern that must be addressed is virtual machine mobility. As you think about the applications you want to virtualize, consider the implications if a virtual machine were copied, or accidentally backed up or replicated to a server outside your datacenter. Would you risk exposing proprietary company data? Is your organization subject to regulations or mandates that require personally identifiable data to be kept inside country or regional boundaries?
Leveraging firewalls, boundary controls, and other technologies, it is possible to re-create the segmentation typically lost with virtualization. For example, a government agency could define policies to ensure that the resources associated with Mission A never cross paths (or administrators) with those of Mission B. Or an organization that is required to comply with the Payment Card Industry Data Security Standard (PCI-DSS) should be able to ensure that applications and PCI data are contained to hardware tagged for this purpose. Traditionally, organizations have simply not virtualized these types of applications in order to reduce PCI scope. But now it is possible to do so, as long as you have the proper controls in place.
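The kind of containment policy described above often boils down to a simple rule: a workload may only be placed on a host that carries every tag the workload requires. The sketch below is hypothetical (the tag names and function are illustrative, not any particular platform’s API), but it captures the check a policy engine would enforce:

```python
def placement_allowed(vm_tags: set, host_tags: set) -> bool:
    """A VM may be placed on a host only if the host carries every
    tag the VM requires (e.g. 'pci' for PCI-scoped workloads)."""
    return vm_tags <= host_tags  # subset check: all required tags present

# Illustrative inventory: one PCI-tagged host, one general-purpose host.
pci_host = {"pci", "us-east"}
general_host = {"us-east"}
card_processing_vm = {"pci"}

print(placement_allowed(card_processing_vm, pci_host))      # allowed
print(placement_allowed(card_processing_vm, general_host))  # blocked
```

In practice the same subset check can extend to administrator scoping (Mission A operators never touch Mission B resources) by tagging identities as well as hardware.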
If you’re using the public cloud, make sure you understand the service level agreements with your cloud service provider (CSP). CSPs will often replicate virtual machines in the cloud to ensure availability and make sure they maintain their SLAs. Ask them how they are making sure that your apps and data stay where they belong. CSPs should be caretakers, but you ultimately own (and are responsible for) your applications and data.
It’s also critical to consider data privacy. The cloud concentrates both applications and data, and therefore if attackers get in, they can reach a treasure trove. Encryption is a proven method to ensure that data remains private, even in the event that someone manages to break through access controls or gain privileged user access. And make sure your company retains control of the encryption keys, not your cloud service provider. This can also be of value when you wish to change providers or terminate a contract with a CSP. If the data is encrypted, you can be sure that you’re not leaving any sensitive data behind that might be copied from storage devices or other backup systems. Further, encryption can ease the cost, burden and brand damage associated with notification in the unfortunate event you do have a breach, as 48 of the 50 US states have safe harbor clauses in their disclosure laws.
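The value of retaining your own keys can be shown with a toy example. The sketch below uses a one-time pad purely to illustrate key custody—real deployments would use a vetted cipher such as AES through a proper library—but the principle holds: if the key never leaves your control, every ciphertext copy the provider retains (replicas, backups, decommissioned disks) is unreadable, and destroying the key effectively “shreds” all of them at once.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR each byte of data with the key. Illustrative
    only -- production systems should use AES via a vetted library."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"sensitive customer record"
key = secrets.token_bytes(len(plaintext))  # stays on-premises, never with the CSP
ciphertext = xor(plaintext, key)           # only the ciphertext goes to the cloud

print(xor(ciphertext, key) == plaintext)   # key holder can always recover the data
# Without the key, ciphertext left behind on the CSP's storage or backup
# systems is indistinguishable from random noise.
```

This is also why key custody matters when changing providers: delete the key and you can walk away without worrying about what copies remain.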
Availability and uptime
Hanlon’s Razor states: “Never attribute to malice that which can be adequately explained by stupidity.”
The reality is that basic human error accounts for a significant percentage of datacenter downtime. With virtualization, it’s far easier for simple errors to have far-reaching impact. For example, a virtual machine can be suspended or deleted with a mouse click. If that VM is running your credit card processing system, the implications (and costs) can be enormous. IT organizations consistently seek to ensure availability, and for cloud service providers, uptime is mission-critical.
In addition to the basics of hiring good people and keeping their training current, there are other ways to improve datacenter uptime. Consider implementing controls that can prevent virtual machines from being accidentally or deliberately moved to lower-performance hardware.
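Such a guardrail can be as simple as a pre-migration check that compares the performance tier a workload requires against the tier of the destination host. The tier names and function below are hypothetical, meant only to show the shape of the control:

```python
# Hypothetical performance tiers, ranked from lowest to highest.
TIERS = {"bronze": 0, "silver": 1, "gold": 2}

def migration_allowed(vm_required_tier: str, dest_host_tier: str) -> bool:
    """Refuse any move that would land a VM on a host below its
    required performance tier."""
    return TIERS[dest_host_tier] >= TIERS[vm_required_tier]

print(migration_allowed("gold", "gold"))    # allowed: same tier
print(migration_allowed("gold", "bronze"))  # blocked: destination too weak
```

Wiring a check like this into the approval path for migrations turns a one-click mistake into a policy violation that never executes.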
Why it matters
With the cost of breaches growing every year, and regulations governing companies that handle sensitive data multiplying, most IT security organizations have reached a fork in the road. They can either make the right investments in technology, people, and policy to continue down a secure path to the cloud, or they can maintain the status quo and risk becoming the next headline.