Remember the first time you drove a car on your own, and the kick you got from the sensation of sheer speed? Unfortunately, you also had to learn the mundane stuff, like how to turn, stop and reverse safely. The same is true in organizations that deploy virtualization.
Compared with deploying physical servers and apps, virtualization is like driving a new sports car. It’s so easy to move quickly. But just like when you were learning to drive, you’ve got to find out how to do it safely. It’s easy to be seduced by the performance and ease of virtual machines (VMs), and overlook the more mundane aspects – like security.
Analyst firm Gartner has predicted that in the coming year, 60% of VMs will be less secure than the equivalent physical servers. This makes VMs a target of choice for malware and hacking exploits. So how do you go about narrowing the security gap between virtual and physical deployments, and what techniques will help mitigate the risks?
The first step is to be realistic about the actual security risks to VMs. There’s a lot of theorising and discussion of potential risks, such as new types of malware that target hypervisors, or other vulnerabilities. Certainly, malware attacks specifically targeting VMs will appear as usage continues to grow.
However, the more pressing issue is ensuring your virtual environment is designed and built as robustly as your physical network. Remember that we’ve all had to go through 15 years of painful experience in securing servers and data against constantly evolving threats, and in developing robust network architectures that provide the right security framework. It’s vital that this isn’t overlooked when deploying a virtualized environment. In fact, this hard-won security knowledge stands you in very good stead when it comes to securing the virtual world.
This means going back to basics: looking at which applications are being moved from physical servers to VMs, and auditing which VMs may already be in use in the organization. It’s easy to get carried away with the performance benefits and overlook the fact that the applications running on the VMs need to be segregated – for example, a public e-commerce application (which may have sat in the organization’s demilitarized zone) and an internal CRM system. You wouldn’t want these apps to have a physical link between their servers without firewalling, and the same applies to VMs running on a single server. They need segregating too, to maintain security.
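To make that concrete, inter-VM traffic on a shared host can be filtered just as it would be between physical network segments. Here is a minimal sketch using iptables on a Linux host, assuming the hypervisor bridges each security zone onto its own interface – the bridge names, address ranges and port below are hypothetical, not from any particular product:

```shell
# Hypothetical bridges: br-dmz carries the public e-commerce VM,
# br-internal carries the internal CRM VM.

# Default-deny any traffic forwarded between the two zones.
iptables -A FORWARD -i br-dmz -o br-internal -j DROP
iptables -A FORWARD -i br-internal -o br-dmz -j DROP

# Then permit only the specific flows the applications require,
# e.g. the e-commerce front end reaching an internal API on TCP 8443.
iptables -I FORWARD -i br-dmz -o br-internal -p tcp --dport 8443 -j ACCEPT
```

The design point is the same as in the physical world: deny by default between zones, then open named, audited exceptions – rather than letting co-resident VMs talk freely just because they happen to share hardware.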
In the same way, if you were running a mission-critical app on a traditional server, you’d ensure the operating system was hardened, updated with the latest patches, and running an up-to-date anti-malware suite. Ensure you do the same for every VM you run, too. And once you’ve started to deploy VMs, you need to unify the VM security – the updating, patching, anti-malware, firewalling and so on – under the same security management console with which you control the physical network. After all, one of the major headaches for IT teams now is managing what they already have. Do you really want to add another layer of complication to that?
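The principle of applying one baseline to every guest can be sketched in a few lines of shell. This assumes a Debian-style guest OS, SSH access, and a ClamAV service; the VM names, account and DRY_RUN guard are all hypothetical illustration, not a reference implementation:

```shell
# Apply the same hardening baseline to every VM, exactly as on
# physical hosts. DRY_RUN=1 prints the commands instead of running
# them, so the loop can be reviewed before use.
DRY_RUN=1

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

for vm in shop-vm crm-vm; do
  # Patch the guest OS (Debian-style package manager assumed).
  run ssh "admin@$vm" "sudo apt-get update && sudo apt-get -y upgrade"
  # Confirm the anti-malware service is actually running.
  run ssh "admin@$vm" "sudo systemctl is-active clamav-daemon"
done
```

In practice this job belongs in the same patch-management and monitoring tooling that already covers the physical estate – the point is that VMs go through the identical process, not a parallel, ad-hoc one.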
In conclusion, those who don’t learn from history are condemned to repeat it. The key security lessons of the last 15 years are still valid and relevant in a virtualized world. So by starting with the basics, you’ll have a robust platform for the next generation of computing apps.