
NetFlow Data Collection in Cloud Systems

June 12, 2017 by Frank Siemons

The value of NetFlow

Within a network, connectivity is everything, but within a secure network, visibility is just as important. NetFlow data provides that visibility. NetFlow is a Cisco proprietary technology that collects metadata generated by the traffic (flows) within a network. This metadata is invaluable for capacity planning and equally important for availability monitoring, security monitoring and forensic analysis. It can, for instance, show a network engineer where the bottlenecks in a network are, or reveal how a user was infected with malware and how compromised data was exfiltrated from an organization. Some NetFlow analysis tools can also alert on configured thresholds and anomalies, which allows activity such as DDoS attacks or port scans to be monitored with a relatively low-cost solution.
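As an illustration of the kind of threshold-based alerting mentioned above, the sketch below flags sources that contact an unusually large number of distinct destination ports, a simple port-scan heuristic. The flow records, the threshold value and the function name are all hypothetical and not taken from any specific NetFlow product.

```python
from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, src_port, dst_port, proto, bytes).
# One host sweeps 200 ports on a target; another makes a single HTTPS connection.
flows = [
    ("10.0.0.5", "10.0.0.9", 51000 + p, p, "TCP", 60) for p in range(1, 201)
] + [("10.0.0.7", "10.0.0.9", 52000, 443, "TCP", 1500)]

PORT_SCAN_THRESHOLD = 100  # distinct (target, port) pairs per source; illustrative value


def detect_port_scans(flows, threshold=PORT_SCAN_THRESHOLD):
    """Flag sources touching an unusually large number of distinct ports."""
    targets_per_src = defaultdict(set)
    for src, dst, sport, dport, proto, nbytes in flows:
        targets_per_src[src].add((dst, dport))
    return [src for src, targets in targets_per_src.items() if len(targets) >= threshold]


print(detect_port_scans(flows))  # → ['10.0.0.5']
```

A real tool would of course baseline normal behavior per host rather than use one static threshold, but the principle is the same: the five-tuple metadata alone is enough to spot this pattern.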

What is collected?

The bare minimum values NetFlow collects are source IP, destination IP, source port, destination port and the protocol used (UDP, TCP, ICMP). Together, these five values are called the five-tuple. More advanced versions of NetFlow, such as Cisco's Flexible NetFlow, can also collect data such as TCP flags, interface identifiers and even parts of the actual packets. The data fields to be collected and stored are fully customizable.
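The five-tuple is the natural key for aggregating per-flow counters. A minimal sketch (the class and field names are illustrative, not part of any NetFlow API):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FiveTuple:
    """The minimum flow key NetFlow records: the classic five-tuple."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str  # e.g. "TCP", "UDP", "ICMP"


flow = FiveTuple("192.0.2.10", "198.51.100.20", 49152, 443, "TCP")

# Frozen dataclasses are hashable, so a five-tuple can key a dictionary
# that accumulates packet and byte counters per flow.
counters = {flow: {"packets": 12, "bytes": 9400}}
```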

NetFlow is only one step short of full packet capture. A full packet capture solution is significantly harder to implement at scale and is usually far too expensive for most organizations to deploy and maintain. Some security use cases require both, though. Anomaly detection, for instance, works much better with NetFlow data than with full packet capture data, while full packet capture data is far more valuable in a full forensic investigation of malware or a data leak. Depending on the sector and jurisdiction, some organizations may simply have little choice due to legal and compliance requirements.

Of course, privacy is also a much greater concern with full packet capture than with the collection of network traffic metadata.

Challenges within the cloud

NetFlow data is sent as UDP messages from compatible devices, such as switches, routers and firewalls, to a NetFlow collector. In a traditional physical network full of compatible switches and routers, this is an easy process. It is much harder to collect this data within a cloud platform, where servers are virtual machines connected via the cloud provider's implementation of a transparent virtual network. If there is no NetFlow-compatible switch between two virtual cloud servers, how can NetFlow data be collected? Even though the servers are virtual, there is still much value in collecting the metadata of the traffic between them. An attacker who has breached the cloud network perimeter, for instance via an SQL injection attack, and has gained full access to a database server might use another server to exfiltrate the obtained data. Such a spike in traffic between two internal servers is an anomaly that should be picked up by monitoring the network traffic, whether the systems are inside a cloud platform or a traditional on-premises data center.
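To make the export mechanism concrete: NetFlow v5, the most widely deployed legacy version, uses a fixed binary layout of a 24-byte header followed by 48-byte flow records, sent as a single UDP datagram. The sketch below parses that documented layout; the function name and the selection of output fields are my own choices, and a real collector would wrap this in a UDP socket loop (commonly bound to port 2055).

```python
import struct

# NetFlow v5 fixed layout: 24-byte header, then up to 30 records of 48 bytes.
V5_HEADER = struct.Struct("!HHIIIIBBH")                 # version .. sampling_interval
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")   # srcaddr .. pad2


def parse_v5(datagram: bytes):
    """Parse one NetFlow v5 export datagram into a list of flow dicts."""
    version, count = V5_HEADER.unpack_from(datagram, 0)[:2]
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    flows = []
    for i in range(count):
        f = V5_RECORD.unpack_from(datagram, V5_HEADER.size + i * V5_RECORD.size)
        flows.append({
            "src_ip": ".".join(map(str, f[0])),   # srcaddr: 4 raw bytes
            "dst_ip": ".".join(map(str, f[1])),   # dstaddr
            "packets": f[5],                      # dPkts
            "bytes": f[6],                        # dOctets
            "src_port": f[9],
            "dst_port": f[10],
            "protocol": f[13],                    # IP protocol number (6 = TCP)
        })
    return flows
```

This fixed format is exactly why cloud collection is awkward: the datagrams originate on switches and routers, and without those devices in the path there is nothing to emit them.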

How to make this work

Simply put, if two virtual servers communicate directly with each other, those servers are the only points where the traffic metadata is visible. This is why Cisco has created an agent that is installed directly on the cloud servers. These agents collect the flow data and send it, via a cloud concentrator, to cloud-based or on-premises monitoring software, where it can be ingested as normal NetFlow data.

Cloud providers such as Amazon have come up with a range of solutions as well, such as Amazon's VPC Flow Logs: NetFlow-like metadata collected in an AWS Virtual Private Cloud (VPC) environment and made available to the customer for storage and analysis. There are some limitations to these solutions, however. Some values available in NetFlow might be missing from this metadata, and some analysis and monitoring software might not support the data format out of the box.
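As a sketch of what this alternative format looks like: a default (version 2) VPC Flow Log record is a space-separated line whose field order is documented by AWS. The parser below assumes that default field order; the function name and the sample values are illustrative.

```python
# Field order of the default (version 2) VPC Flow Log record format.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

NUMERIC = ("srcport", "dstport", "protocol", "packets", "bytes", "start", "end")


def parse_flow_log_line(line: str) -> dict:
    """Split one default-format VPC Flow Log line into named fields."""
    record = dict(zip(FIELDS, line.split()))
    for key in NUMERIC:
        if record.get(key, "-") != "-":  # fields are "-" for NODATA/SKIPDATA records
            record[key] = int(record[key])
    return record


sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_log_line(sample)
```

Note that the five-tuple is all there (protocol as an IP protocol number, 6 for TCP), which is what makes these logs usable as a NetFlow substitute once the tooling understands the format.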

Probably the best solution, if the cloud platform allows for it, is to configure NetFlow on virtual switches and virtual routers. This is not always an option, because most larger cloud platforms do not support customer-managed internal network devices. If the platform does support, for instance, a virtual switch or router, it is simply a matter of configuring a traditional NetFlow infrastructure around these devices.

Other Network Traffic Metadata collection technologies

This article discusses NetFlow, but there are other technologies as well. The main one is IPFIX, the IETF standard derived from NetFlow. Although the data formats differ somewhat and each has limitations in the fields it supports, the principles of metadata collection and storage are very much the same, and the same cloud-platform challenges and solutions apply. IPFIX does have a compatibility advantage, however, because it is not a Cisco proprietary technology.


Network metadata can provide much value to an organization, both from an availability and from a security point of view. The challenge with collecting it in a cloud environment is that the technology was designed around routers and switches, which are not available there. The current solutions seem mostly workarounds for an issue that might not have a proper fix: traditional network monitoring is hard to do when there is no traditional network and there are no traditional network devices. There is a point to be made that an entirely new approach or product is needed, especially as cloud adoption keeps increasing. The solution of deploying a Cisco server agent for NetFlow collection, or a third-party agent for IPFIX collection, seems to be going in that direction. It does not require any changes to the cloud infrastructure, provides full visibility and (other than licensing) is very low-cost to deploy.


Frank Siemons is an Australian security researcher at InfoSec Institute. His track record consists of many years of systems and security administration, both in Europe and in Australia. He holds many certifications, such as CISSP, and has a Master's degree in InfoSys Security from Charles Sturt University. He has a true passion for anything related to pentesting and vulnerability assessment and can be found on Twitter as @franksiemons.