Virtualization security in cloud computing

June 21, 2012 by Karthik

2011 ended with the popularization of an idea: bringing VMs (virtual machines) onto the cloud. Recent years have seen great advancements in both cloud computing and virtualization. On one hand there is the ability to pool various resources to provide software-as-a-service, infrastructure-as-a-service and platform-as-a-service; at its most basic, this is what describes cloud computing. On the other hand, we have virtual machines that provide agility, flexibility, and scalability to cloud resources by allowing vendors to copy, move, and manipulate their VMs at will. The term virtual machine essentially describes partitioning the resources of a single physical computer into several virtual computers within it. VMware and VirtualBox are commonly used virtualization systems on desktops. Cloud computing effectively means many computers presenting themselves as a single computing environment. Naturally, a cloud would run many virtualized systems to maximize its resources.

Keeping this information in mind, we can now look into the security issues that arise within a cloud-computing scenario. As more and more organizations follow the "Into the Cloud" concept, malicious hackers keep finding ways to get their hands on valuable information by manipulating safeguards and breaching the security layers (if any) of cloud environments. One issue is that the cloud-computing scenario is not as transparent as it claims to be. The service user has no clue about how his information is processed and stored, nor can he directly control the flow of data/information storage and processing. The service provider, in turn, is usually not aware of the details of the services running in his or her environment. Thus, possible attacks on the cloud-computing environment can be classified into:

  • Resource attacks: These kinds of attacks include manipulating the available resources into mounting a large-scale botnet attack. These attacks target either cloud providers or service providers.
  • Data attacks: These kinds of attacks include unauthorized modification of sensitive data at nodes, or performing configuration changes to enable a sniffing attack via a specific device, etc. These attacks are focused on cloud providers, service providers, and also on service users.
  • Denial-of-service attacks: Creating a new virtual machine is not a difficult task, so creating rogue VMs and allocating huge amounts of space to them can lead to a denial-of-service condition for service providers when they opt to create a new VM on the cloud. This kind of attack is generally called virtual machine sprawling.
  • Backdoor: Another threat on a virtual environment empowered by cloud computing is the use of backdoor VMs that leak sensitive information and can destroy data privacy.
  • Snapshot theft: Having virtual machines would indirectly allow anyone with access to the host disk files of a VM to take a snapshot or make an illegal copy of the whole system. This can lead to corporate espionage and piracy of legitimate products.

With so many obvious security issues (and a lot more can be added to the list), we need to enumerate some steps that can be used to secure virtualization in cloud computing.

The most neglected aspect of any organization is its physical security. An advanced social engineer can take advantage of the weak physical-security policies an organization has put in place. Thus, it's important to have a consistent, context-aware security policy when it comes to controlling access to a data center. Traffic between the virtual machines needs to be monitored closely using at least a few standard monitoring tools.

After thoroughly enhancing physical security, it's time to check security on the inside. A well-configured gateway should be able to enforce security when any virtual machine is reconfigured, migrated, or added. This will help prevent VM sprawl and rogue VMs. Another approach that might help enhance internal security is the use of third-party validation checks, performed in accordance with security standards.
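As a rough illustration of the kind of gateway enforcement described above, the sketch below validates a VM change request against an approved-image list and a per-tenant quota before allowing it. The image IDs, quota, and request format are hypothetical and not tied to any particular cloud product.

```python
# Hypothetical gateway-side check run whenever a VM is created, migrated,
# or reconfigured. The approved-image list and quota are illustrative only.
APPROVED_IMAGES = {"base-linux-2012", "base-windows-2012"}   # placeholder IDs
MAX_VMS_PER_TENANT = 20

def allow_vm_change(request, current_vm_count):
    """Return (allowed, reason) for a VM create/migrate/reconfigure request."""
    if request["image_id"] not in APPROVED_IMAGES:
        return False, "image not on the approved list (possible rogue VM)"
    if request["action"] == "create" and current_vm_count >= MAX_VMS_PER_TENANT:
        return False, "tenant quota reached (possible VM sprawl)"
    return True, "change permitted"

# Example: a creation request for an unapproved image is rejected.
print(allow_vm_change({"action": "create", "image_id": "unknown-img"}, 5))
```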

[Figure: service provider and cloud provider bound by a Service Level Agreement; service users pay per use of cloud instances]

In the above figure, we see that the service provider and cloud provider work together and are bound by a service level agreement. The cloud is used to run various instances, whereas the service end users pay for each instance of use. The following section explains an approach that can be used to check the integrity of virtual systems running inside the cloud.

Checking virtual systems for integrity increases the capability to monitor and secure environments. One of the primary focuses of this integrity check should be seamless integration with existing virtual systems such as VMware and VirtualBox. This would lead to file-integrity checking and increased protection against data loss within VMs. Combining agentless anti-malware, intrusion detection, and prevention in one single virtual appliance (unlike isolated point security solutions) would contribute greatly towards VM integrity checks, while greatly reducing operational overhead and adding zero footprint.
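A minimal sketch of the file-integrity side of such a check, assuming a simple SHA-256 baseline of selected VM disk and configuration files, is shown below; the paths and baseline file name are placeholders.

```python
import hashlib, json, os

def sha256_of(path, chunk=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def build_baseline(paths, baseline_file="baseline.json"):
    """Store current digests of the monitored files as the trusted baseline."""
    baseline = {p: sha256_of(p) for p in paths if os.path.isfile(p)}
    with open(baseline_file, "w") as f:
        json.dump(baseline, f, indent=2)

def check_integrity(baseline_file="baseline.json"):
    """Return the files whose digest no longer matches the baseline."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    return [p for p, digest in baseline.items()
            if not os.path.isfile(p) or sha256_of(p) != digest]
```

In practice the baseline itself would be stored outside the guest, so a compromised VM cannot simply rewrite it.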

A server on a cloud may be used to deploy web applications, and in this scenario an OWASP Top 10 vulnerability check will have to be performed. Data on the cloud should be encrypted with suitable encryption and data-protection algorithms. Using these algorithms, we can check the integrity of the user profile or system profile trying to access disk files on the VMs. Profiles lacking security protections can be considered infected by malware. Working with a ratio of one user to one machine would also greatly reduce risks in virtual computing platforms. To enhance security even more, after a particular environment is used, it's best to sanitize the system (reload it) and destroy all residual data. Using incoming IP addresses to determine scope on Windows-based machines, and using SSH configuration settings on Linux machines, will help maintain a secure one-to-one connection.
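As one concrete (and hedged) example of applying such encryption, the snippet below uses Fernet symmetric encryption from the third-party Python cryptography package to protect data before it is stored on the cloud; key management is deliberately omitted, and the data shown is a placeholder.

```python
from cryptography.fernet import Fernet

# Key generation and storage would be handled by a proper key-management
# service in practice; generating the key inline is only for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"sensitive profile data destined for cloud storage"
token = cipher.encrypt(plaintext)     # store `token` on the cloud

recovered = cipher.decrypt(token)     # decryption also verifies integrity:
assert recovered == plaintext         # a tampered token raises InvalidToken
```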

Lightweight Directory Access Protocol (LDAP) and cloud computing

LDAP is, as the name suggests, a lightweight version of DAP (directory access protocol). It helps locate organizations, individuals, and other files or resources over the network. In a cloud environment, manual tasks are automated using a concept known as virtual system patterns, which enable fast and repeatable deployment of systems. Having dedicated LDAP servers is not typically necessary, but LDAP services have to be considered when designing an efficient virtual system pattern. Extending LDAP servers to cloud management allows existing security policies to carry over to the cloud infrastructure, and also allows users to remotely manage and operate within the infrastructure.
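For illustration, the sketch below uses the Python ldap3 library to bind to a directory and look up user entries, the kind of lookup a cloud-management layer would delegate to an LDAP service; the server address, bind DN, credentials, and base DN are hypothetical.

```python
from ldap3 import Server, Connection, ALL, SUBTREE

# Hypothetical directory endpoint and credentials, for illustration only.
server = Server("ldaps://ldap.example.com", get_info=ALL)
conn = Connection(server, "cn=cloud-admin,dc=example,dc=com",
                  "secret-password", auto_bind=True)

# Find person entries under the organization and read a few attributes.
conn.search(search_base="dc=example,dc=com",
            search_filter="(objectClass=person)",
            search_scope=SUBTREE,
            attributes=["cn", "mail", "memberOf"])

for entry in conn.entries:
    print(entry.cn, entry.mail)
conn.unbind()
```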

Various security aspects to be considered (a small access-control sketch follows the list):

  • Granular access control
  • Role-based access control
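
Neither aspect is specified further in the article, so the sketch below is only a toy illustration of how granular, role-based checks might look in practice; the roles, actions, and the production-scoping rule are all invented.

```python
# Minimal sketch of granular, role-based access control. Roles, actions,
# and the "production" scoping rule are invented purely for illustration.
ROLE_PERMISSIONS = {
    "vm-operator": {"vm:start", "vm:stop"},
    "vm-admin":    {"vm:start", "vm:stop", "vm:create", "vm:delete"},
    "auditor":     {"vm:read-logs"},
}

def is_allowed(user_roles, action, resource_tags):
    """Grant the action only if some role permits it and the resource is in scope."""
    permitted = any(action in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)
    in_scope = "production" not in resource_tags or "vm-admin" in user_roles
    return permitted and in_scope

print(is_allowed({"vm-operator"}, "vm:create", {"staging"}))     # False
print(is_allowed({"vm-admin"}, "vm:delete", {"production"}))     # True
```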

The directory synchronization client (DSC) is a client-resident application. Only one instance of the DSC can run at a time; multiple instances may lead to inconsistencies in the data being updated. If any user is added or removed, the DSC updates the information on its next scheduled update. Clients then have the option to merge data from multiple DSCs and synchronize. For web security, clients don't need to register separately if they are on the network, provided that the DSC used is set up for NTLM identification and IDs.

A host-side architecture for securing virtualization in a cloud environment:

The security model prescribed here is a purely host-side architecture that can be placed in a cloud system "as is" without changing any aspect of the cloud. The system assumes that the attacker, in whatever form, is located within the guest VM. It is also asynchronous in nature and therefore easier to hide from an attacker: asynchronicity prevents timing-analysis attacks from detecting the system. The model assumes that the host system is trustworthy. When a guest system is placed in the network, it's susceptible to various kinds of attacks like viruses, code injections (in terms of web applications), and buffer overflows. Other lesser-known attacks on clouds include DoS, keystroke analysis, and traffic-rate estimation. In addition, an exploitation framework like Metasploit can easily attack a buffer-overflow vulnerability and compromise the entire environment.

The above approach basically monitors key components. It takes into account the fact that the key attacks would be on the kernel and middleware; thus, integrity checks are in place for these modules. Overall, the system checks for any malicious modifications to kernel components. The design takes into consideration attacks from outside the cloud as well as from sibling virtual machines. In the above figure, the dotted lines stand for monitoring data and the red lines symbolize malicious data. The system is totally transparent to the guest VMs, as it is a host-integrated architecture.

The implementation of this system basically starts with attaching a few modules to the hosts. The following are the modules along with their functions:

Interceptor: The first module that all host traffic will encounter. The interceptor doesn't block any traffic, so the presence of a third-party security system shouldn't be detected by an attacker; thus, the attacker's activities can be logged in more detail. This feature also allows the system to be made more intelligent. This module takes responsibility for monitoring suspicious guest activities and also plays a role in replacing/restoring the affected modules in the case of an attack.

Warning Recorder: The result of the interceptor's analysis is sent directly to this module. Here a warning pool is created for security checks, and the warnings generated are prioritized for future reference.

Evaluator and hasher: This module performs security checks based on the priorities of the warning pool created by the warning recorder. An increased number of warnings leads to a security alert.

Actuator: The actuator makes the final decision on whether to issue a security alert or not. This is done after receiving confirmation from the evaluator, hasher, and warning recorder.
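To make the interaction between these modules more concrete, here is a highly simplified, hypothetical sketch of the warning-pool flow: the interceptor records prioritized warnings, the evaluator weighs them, and the actuator raises an alert once a threshold is crossed. The weights and threshold are invented for illustration and are not part of the original design.

```python
import heapq

class WarningPool:
    """Priority queue of warnings recorded by the interceptor."""
    def __init__(self):
        self._heap = []

    def record(self, priority, message):
        # Lower number means higher priority, matching heapq ordering.
        heapq.heappush(self._heap, (priority, message))

    def drain(self):
        while self._heap:
            yield heapq.heappop(self._heap)

def evaluate_and_act(pool, alert_threshold=5):
    """Evaluator/actuator sketch: weigh warnings by priority, alert on threshold."""
    score = 0
    for priority, message in pool.drain():
        score += max(1, 4 - priority)          # higher priority -> larger weight
        if score >= alert_threshold:
            print("SECURITY ALERT:", message)  # actuator issues the alert
            return True
    return False

pool = WarningPool()
pool.record(1, "kernel module checksum mismatch")
pool.record(3, "unusual guest memory growth")
pool.record(1, "hidden structure found in kernel scheduler")
evaluate_and_act(pool)
```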

This system performs an analysis of memory footprints and checks for both abnormal memory usage and connection attempts. This kind of detection of malicious activity is called anomaly-based detection. Once any system is compromised, the malware tries to infect other systems in the network until the entire unit is owned by the attacker. Targets of this type of attack also include command-and-control servers, as in the case of botnets. In either case, there is an increase in memory activity and connection attempts originating from a single point in the environment.
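A hedged sketch of what such anomaly-based detection could look like is shown below: a per-VM metric (here, outbound connection attempts per minute) is compared against its recent history, and values several standard deviations above the mean are flagged. The history values and the three-sigma threshold are illustrative assumptions.

```python
import statistics

def is_anomalous(sample, history, threshold=3.0):
    """Flag a sample more than `threshold` standard deviations above the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0    # avoid division by zero
    return (sample - mean) / stdev > threshold

# Illustrative per-VM history of outbound connection attempts per minute.
conn_history = [12, 9, 15, 11, 10, 13, 12, 14]
print(is_anomalous(480, conn_history))   # True  -> possible botnet-style fan-out
print(is_anomalous(16, conn_history))    # False -> within normal variation
```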

Another key strategy used by attackers is to hide processes from the process list. An attacker performs a dynamic data attack that hides the processes he is using from being displayed on the system. The modules of this protection system perform periodic checks of the kernel scheduler; by scanning the scheduler, they can detect such hidden structures, thereby nullifying the attack.
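On a Linux host, one rough user-space approximation of this check is to compare the PIDs visible under /proc with those reported by ps; a PID present in /proc but missing from the ps listing is a candidate hidden process. The kernel-scheduler scan described above would be more robust, so treat this only as a simplified illustration.

```python
import os, subprocess

def pids_from_proc():
    """PIDs visible as numeric directories under /proc."""
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

def pids_from_ps():
    """PIDs reported by the ps tool."""
    out = subprocess.run(["ps", "-e", "-o", "pid="], capture_output=True, text=True)
    return {int(line) for line in out.stdout.split()}

def candidate_hidden_pids():
    # Processes that exist in /proc but were not listed by ps.
    # Short-lived processes can cause false positives; rescan to confirm.
    return pids_from_proc() - pids_from_ps()

print(candidate_hidden_pids())
```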

Current Implementation:

This approach has been followed by two of the main open-source cloud distributions, namely Eucalyptus and OpenECP. In both implementations, the system remains transparent to the guest VM and the modules are generally attached to the key components of the architecture.

Performance Evaluation:

The system claims to add virtually no CPU overhead (as it's asynchronous) but has shown a few complex behaviors on I/O operations. This characteristic is attributed to the constant file-integrity checks and analysis done by the warning recorder.

In this article, we have seen a novel architecture design that aims to secure virtualization in cloud environments. The architecture is purely host-integrated and remains transparent to the guest VMs. The system also assumes the trustworthiness of the host and assumes that attacks originate from the guests. As the security rule of thumb goes: anything and everything can be penetrated with time and patience. But an intelligent security consultant can make things difficult for an attacker by integrating systems so transparently that they remain invisible, and it takes hackers considerable time to detect them under normal scenarios.

Karthik

Karthik is a cybersecurity researcher at Infosec Institute and works as a researcher for the Cyber Security and Privacy Foundation (a non-profit organization) in India. He takes a deep interest in information security as a whole, and is particularly interested in VA/PT and serving the cause of national security.