11 Points to Consider When Virtualizing Security
Virtualized computing resources can save organizations money, but the security implications must be front and center in any discussion
As the name suggests, virtualization creates a virtual rather than physical version of something, such as operating systems, servers, storage devices, or network resources. Virtualized technologies are separated from the underlying hardware on which they run, meaning they do not need the full use of the hardware; several different virtualized technologies, therefore, can run on the same equipment leading to greater efficiency and flexibility for system operators.
For security, virtualized environments present both opportunities and challenges. For example, centralized storage prevents data loss if a device is lost, and if an attack occurs, only one operating system may be affected, provided proper isolation is in place. On the other hand, consolidating many workloads into large security zones can create a significant attack surface and numerous entry points into a system.
In this article, we present 11 discussion points regarding security and virtualization.
A sandbox is a tightly controlled environment in which software can be run. As part of a virtualization system, sandboxing isolates programs from one another and prevents malicious code from damaging or snooping on the rest of the system. Code running in a sandbox has restricted permissions, allowing tighter control over security when new programs are introduced.
Even if you do not operate a fully virtualized environment, sandboxing is a technique that can improve overall system security.
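The restricted-permissions idea can be sketched in a few lines. The whitelist below is a toy illustration of the principle only, not a real security boundary: production sandboxes rely on OS- or hypervisor-level isolation (namespaces, seccomp, separate VMs), and language-level restrictions like this one are known to be escapable.

```python
# Toy illustration of the restricted-permissions idea behind sandboxing.
# NOTE: this is NOT a real security boundary -- production sandboxes use
# OS- or hypervisor-level isolation, not interpreter-level restrictions.

ALLOWED = {"abs": abs, "min": min, "max": max, "len": len}  # explicit whitelist

def run_sandboxed(expression: str):
    """Evaluate an expression with no builtins except an explicit whitelist."""
    # An empty __builtins__ denies direct access to open(), __import__(), etc.
    return eval(expression, {"__builtins__": {}}, dict(ALLOWED))

print(run_sandboxed("max(3, 7)"))        # permitted: 7
try:
    run_sandboxed("open('/etc/passwd')") # denied: open is not whitelisted
except NameError as exc:
    print("blocked:", exc)
```

The key design point carries over to real sandboxes: deny by default, and grant each capability explicitly.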
If you use virtualization software to partition a physical server into a number of smaller units, you end up with several virtual servers. Each virtual server can run its own operating system and programs, meaning you do not need a separate physical server for each of your systems.
While the efficiencies gained from virtual servers are attractive, several risks must also be addressed. These include the security of offline and dormant virtual machines (VMs), control over your virtual network, VM sprawl (easily created VMs brought into use months after the latest security patch was issued), and sensitive data held on a VM.
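A simple fleet audit can catch the sprawl risk described above: VMs that are dormant, or that have drifted past the patch window. The record fields and the 90-day window below are illustrative assumptions, not a real inventory schema.

```python
# Sketch: flag dormant or unpatched VMs -- a basic guard against VM sprawl.
# The VM records and the 90-day patch window are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VM:
    name: str
    powered_on: bool
    last_patched: date

def audit(vms, today, max_patch_age=timedelta(days=90)):
    """Return names of VMs that are offline or overdue for patching."""
    return [vm.name for vm in vms
            if not vm.powered_on or today - vm.last_patched > max_patch_age]

fleet = [
    VM("web01", True, date(2024, 5, 1)),
    VM("test-old", False, date(2023, 11, 2)),  # dormant: created, then forgotten
    VM("db01", True, date(2024, 1, 2)),        # online but unpatched for months
]
print(audit(fleet, today=date(2024, 6, 1)))    # ['test-old', 'db01']
```

Running such a check on a schedule turns "easily created, easily forgotten" VMs into an explicit remediation list.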
Desktop virtualization separates your desktop from the computer you happen to be working on. That is, you can log on to another computer and you will still see your own desktop, since it is stored on a centralized, or remote, server and not any particular physical machine.
Mobility and bring-your-own-device (BYOD) initiatives are becoming more popular. They carry inherent risks, but virtualized desktops mitigate some of these by consolidating file and program storage behind a centralized security layer. In addition, compliance may be easier to manage.
Desktop virtualization offers several advantages but issues with managing the endpoints (where people access the network/system) must be addressed. Organizations should start with a network protection strategy that deploys certain standards when granting access to devices. Centralized controls and safeguards on the devices themselves are also required.
Endpoint security requirements often include checking for an approved operating system, an up-to-date virtual private network (VPN) client, and current anti-virus software. Organizations must also note, however, that software is not a cure-all. Management and resourcing changes may be required, such as security personnel dedicated to prevention and detection across all endpoint technologies.
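The admission checks just listed can be expressed as a simple gate function. The field names and baseline values here are assumptions for illustration; a real deployment would pull them from a device-management or network-access-control system.

```python
# Sketch of a network-access check: admit an endpoint only if it meets a
# baseline. Field names and baseline values are illustrative assumptions.

APPROVED_OS = {"windows-11", "macos-14", "ubuntu-22.04"}

def admit(endpoint: dict) -> bool:
    """Grant network access only to endpoints meeting the baseline."""
    return (endpoint.get("os") in APPROVED_OS
            and endpoint.get("vpn_client_current", False)
            and endpoint.get("antivirus_updated", False))

laptop = {"os": "ubuntu-22.04", "vpn_client_current": True, "antivirus_updated": True}
byod_phone = {"os": "android-9", "vpn_client_current": False, "antivirus_updated": True}
print(admit(laptop), admit(byod_phone))  # True False
```

Note the default-deny posture: a device missing any attribute fails the check rather than slipping through.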
Virtual storage is simply storage created in a virtual environment, usually associated with virtual servers. To the user, the entire virtual machine acts like a fully-fledged computer, complete with an operating system and storage drive of its own. Commodity hardware can be used to provide enterprise-level storage via appropriate virtualization software.
Storage virtualization also helps administrators perform the tasks of backup, archiving, and recovery more easily, and in less time, by disguising the actual complexity of a storage area network. However, issues that must be considered include: storage provider’s compliance with industry standards, data not stored under physical control of owners, and increased frequency in denial-of-service attacks.
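The "disguised complexity" idea can be sketched as a thin layer that presents one logical volume while spreading blocks across several commodity backing devices. In-memory dicts stand in for real disks here; this is purely illustrative, not a storage implementation.

```python
# Sketch: a storage-virtualization layer presenting one logical volume while
# striping blocks across commodity backing devices (dicts stand in for disks).

class VirtualVolume:
    def __init__(self, backing_devices: int):
        self.devices = [dict() for _ in range(backing_devices)]

    def _locate(self, block: int):
        # Round-robin placement: the caller never needs to know which
        # physical device holds a given block.
        return self.devices[block % len(self.devices)]

    def write(self, block: int, data: bytes):
        self._locate(block)[block] = data

    def read(self, block: int) -> bytes:
        return self._locate(block)[block]

vol = VirtualVolume(backing_devices=3)
vol.write(7, b"payload")   # lands on device 7 % 3 == 1
print(vol.read(7))         # the virtual address is all the user ever sees
```

The indirection is exactly what makes backup and migration easier for administrators, and exactly why owners must trust controls they cannot physically see.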
Renting just enough virtual resources is more cost-effective than buying a big, bulky physical server, right? This used to be the standard business model, in which labor for the organization's IT needs was essentially outsourced. Less on-site equipment meant less manual and technical maintenance.
While these savings still hold true in many circumstances, the previously straightforward decision to virtualize computing has recently become somewhat cloudy. Driving up costs are business growth (more virtual resources required), distributed servers (moving from large, mid-range VMs to low-cost distributed ones), and VM sprawl (servers provisioned too easily, leading to over-provision and lack of control).
The impact of a security breach must also be considered when weighing traditional against virtual environments. A 2015 report from Kaspersky Lab suggested that recovery costs were roughly twice as high for breaches involving virtual infrastructure as for their traditional counterparts.
Temporal or performance isolation among VMs refers to isolating the behavior of multiple VMs, despite them running on the same physical host and sharing a set of physical resources. Issues can arise when, for example, a VM with a temporary peak in resource requirements disturbs its neighboring VM, causing a drop in performance. The knock-on effects would be uncertain and security mechanisms relying on either VM may be affected.
It is desirable then for the performance of any virtualized resource to be as stable and predictable as possible, especially for critical security tasks. Adequate isolation can achieve this state and various techniques are possible, mostly involving different scheduling procedures at processor, network, and even disk levels.
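One of the simplest scheduling techniques alluded to above is capping: each VM is granted at most a fixed share of the host, so a neighbor's spike cannot starve it. The share and demand figures below are illustrative assumptions.

```python
# Sketch: capping each VM's CPU share so one tenant's spike cannot starve
# its neighbours. Capacity, demands, and caps are illustrative numbers.

def allocate(capacity: float, demands: dict, caps: dict) -> dict:
    """Give each VM min(demand, cap); caps enforce performance isolation."""
    grants = {}
    remaining = capacity
    for vm, demand in demands.items():
        grant = min(demand, caps[vm], remaining)
        grants[vm] = grant
        remaining -= grant
    return grants

demands = {"vm-a": 9.0, "vm-b": 2.0}   # vm-a is spiking
caps    = {"vm-a": 4.0, "vm-b": 4.0}   # each VM capped at half the host
print(allocate(8.0, demands, caps))    # {'vm-a': 4.0, 'vm-b': 2.0}
```

Without the cap, vm-a's spike would consume the whole host; with it, vm-b's performance, and any security mechanism relying on it, stays predictable.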
If you wish to move from the physical server world to a virtual environment, here are a few initial planning pointers:
(a) Hardware: is your existing hardware sufficient to work with the latest virtual resources?
(b) Capacity: more isolation between environments increases security, but this also consumes more resource space – how much isolation do you need?
(c) Software: Microsoft, VMware, Red Hat, servers, desktops, applications – take your pick! Identify what you need and discuss options with an experienced IT engineer
(d) Overload: be realistic about what your virtualized infrastructure can handle – while a single server can theoretically run dozens of VMs, this is not an ideal scenario
(e) Plan, review, plan – re-read this article in a month’s time and update your virtualization plans accordingly and in line with the likely impacts (positive and negative) on overall security
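For point (d), a rough back-of-the-envelope capacity check helps keep expectations realistic. The overcommit ratio and VM sizes below are assumptions for illustration; real planning should be based on measured workloads.

```python
# Sketch: rough capacity check for the overload point above -- how many VMs
# can a host realistically carry? Ratio and sizes are assumptions.

def max_vms(host_cores, host_ram_gb, vm_cores, vm_ram_gb, cpu_overcommit=2.0):
    """Conservative VM count: CPU may be overcommitted, RAM should not be."""
    by_cpu = int(host_cores * cpu_overcommit // vm_cores)
    by_ram = int(host_ram_gb // vm_ram_gb)
    return min(by_cpu, by_ram)

# A 16-core, 96 GB host running 2-vCPU / 8 GB VMs:
print(max_vms(16, 96, 2, 8))  # 12 -- bounded by RAM, not the theoretical CPU count
```

The point mirrors the article's warning: "dozens of VMs per server" is a theoretical ceiling, and memory is usually the binding constraint long before CPU.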
Technical Case Study: Telecom Industry
The telecom industry is fast embracing virtualization and the following points relate to the impact on security in this field.
In the telecom industry, network slicing means dividing mobile networks into smaller parts, thereby reducing the number of access points (nodes) that need to be monitored at any particular time for security issues. This ‘divide and conquer’ approach allows more accurate anomaly detection since you’re dealing with smaller, virtualized network slices.
Network slicing is similar to Virtual Private Networks (VPNs), and allows traffic associated with certain users or programs to be isolated from other network traffic. Security isolation at the network access point can be achieved via cryptography, which is specified for all current mobile network generations. Bandwidth requirements must also be factored in when operating virtualized slices and their access nodes.
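The isolation-by-slice idea can be sketched as a classifier that maps each traffic flow to exactly one slice, refusing anything it cannot place. The slice names, application classes, and bandwidth budgets are illustrative assumptions.

```python
# Sketch: routing traffic into isolated network slices by application class,
# each with its own bandwidth budget. Names and budgets are assumptions.

SLICES = {
    "iot":       {"apps": {"sensor", "meter"}, "bandwidth_mbps": 10},
    "broadband": {"apps": {"video", "web"},    "bandwidth_mbps": 500},
}

def slice_for(app: str) -> str:
    """Map a flow to its slice; unknown apps are rejected, never mixed in."""
    for name, slice_cfg in SLICES.items():
        if app in slice_cfg["apps"]:
            return name
    raise ValueError(f"no slice admits {app!r}")

print(slice_for("video"))   # 'broadband'
print(slice_for("sensor"))  # 'iot'
```

Because each flow lands in exactly one small slice, anomaly detection and monitoring operate on a narrower, more predictable traffic profile.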
Sharing resources and isolation capabilities across network slices is critical for multi-tenant scenarios. The concept of sharing threat and vulnerability information is equally important. In addition to supporting the design and provisioning of end-to-end security services, these capabilities underpin a cyber-threat management framework and a threat-intelligence function that leverages tenant-side capabilities (e.g. security monitoring and threat/vulnerability identification). This leads to remediation actions derived from a shared, consolidated knowledge base.
A security service operator that creates different network slices for the various security services of its tenants can also offer centralized cross-tenant (or cross-slice) analytics and prediction services.
At the application level, many data centers have introduced the concept of micro-segmentation. Whereas traditional security models regulate incoming and outgoing traffic at the edge of the data center (north-south), micro-segmentation aims to remove this single-point-of-failure by monitoring traffic inside the data center (east-west).
Compared to network slices, micro-segments can provide more fine-grained isolation, specific access controls, and stricter security policies. Software-defined networks will additionally define flow control policies at highly-granular levels such as the session, user, device and application level.
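A minimal sketch of the east-west idea: every flow between micro-segments is denied unless an explicit rule allows it. The segment labels and rules below are illustrative assumptions, not a real data-center policy.

```python
# Sketch: default-deny east-west policy between micro-segments.
# Segment labels and the rule set are illustrative assumptions.

ALLOW = {("web", "app"), ("app", "db")}   # the only permitted east-west flows

def permitted(src_segment: str, dst_segment: str) -> bool:
    """Default deny: traffic passes only if an explicit rule allows it."""
    return (src_segment, dst_segment) in ALLOW

print(permitted("web", "app"))  # True  -- explicitly allowed
print(permitted("web", "db"))   # False -- no rule, so denied by default
```

Contrast this with the north-south model: a workload compromised inside the data center can still only reach the segments its rules name, not everything behind the perimeter.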
Virtual machine security can mean securing many things: images of virtual machines on a host, access to the administration of virtual machines, software inside virtual machines, up-to-date patches for software inside a VM, responding to compromises of software inside virtual machines, etc.
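One concrete item from that checklist, securing VM images, can be sketched as an integrity check: record a known-good hash at build time and refuse to launch an image that no longer matches. The image bytes here are a placeholder.

```python
# Sketch: verifying a VM image against a known-good hash before booting it --
# one piece of the broader VM-security checklist. Image contents are dummy.
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """SHA-256 digest of an image, recorded as the trusted baseline."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify_image(image_bytes: bytes, trusted_digest: str) -> bool:
    """Refuse to launch an image whose hash differs from the baseline."""
    return image_fingerprint(image_bytes) == trusted_digest

golden = b"...vm disk image contents..."
baseline = image_fingerprint(golden)          # recorded at build time
print(verify_image(golden, baseline))         # True
print(verify_image(golden + b"!", baseline))  # False -- image was altered
```

In practice the baseline digests would live in a protected inventory, so that tampering with a stored image is caught before the VM ever runs.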
As with any new technology, the first thing to remember for security and virtualization is, “Do no harm.” If you are moving certain aspects of your computing requirements to a virtualized environment via a gradual approach, the question to ask at each stage is, “How will this affect the overall security of my organization?”