Common Continuous Monitoring (CM) Challenges

November 21, 2018 by Dan Virgillito

Continuous monitoring (CM) is a crucial step for organizations seeking to detect and mitigate the security events that may result in breaches. It offers detailed, up-to-date compliance and network status insights in the form of real-time reporting that can be used to identify inconsistencies in internal controls, information security violations or unexpected changes in how systems are being operated. In an ideal world, organizations could simply deploy a CM solution, log the machine-generated data for analysis and wait for the red flags of cyber-intrusions to show up.

In the real world, however, implementing a CM solution can be a complex process, especially at organizations that have multiple networks and systems running across several geographically-distributed sites. That’s because large and complex IT environments need CM to not just say what happened (as an organization can discover by analyzing log files), but also to offer visibility into the context of what happened. This means there are some challenges connected to implementing a CM solution.

Here’s a look at some of those common challenges faced by such organizations and the measures they can take to overcome them.

Keeping Tabs on Endpoint Activity

Endpoints have always been challenging to track, even before CM solutions existed, because of their transient nature. Internal and external stakeholders can introduce a new endpoint whenever they want, for example by connecting to a neighboring company’s network. Moreover, endpoints aren’t limited to desktop PCs; they include Wi-Fi access points, printers, smartphones and even wearables such as Google Glass. Unless an organization’s CM solution can track newly-created and existing endpoints at all times, oversights are easy.

To overcome this challenge, organizations need to take a hybrid approach to continuous monitoring. Pairing passive, real-time monitoring with an always-on active scanner provides both clear visibility into vulnerable endpoints and detection of newly-created assets. When leveraging both an always-on and a passive scanner, you also gain the ability to detect the network activity of unknown devices that aren’t present in the official asset list or provisioned on the MDM (master data management) platform. This is how an enterprise can build a more modern security infrastructure for endpoint activity.
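The reconciliation step described above can be sketched in a few lines: compare what the scanners actually observe against the official asset list and flag anything unknown. The device records below are invented for illustration; in practice they would come from the scanner’s and MDM platform’s APIs.

```python
# Sketch: reconciling scanner observations with the official asset list to
# flag unknown endpoints. All identifiers here are hypothetical.

# Official asset inventory, keyed by MAC address.
known_assets = {
    "aa:bb:cc:00:00:01": "desktop-finance-01",
    "aa:bb:cc:00:00:02": "printer-floor2",
}

# Devices seen by the always-on active scanner and passive monitoring.
observed = [
    {"mac": "aa:bb:cc:00:00:01", "ip": "10.0.0.11", "source": "active-scan"},
    {"mac": "aa:bb:cc:00:00:02", "ip": "10.0.0.23", "source": "passive"},
    {"mac": "de:ad:be:ef:00:99", "ip": "10.0.0.77", "source": "passive"},
]

def find_unknown_devices(observed_devices, inventory):
    """Return observed devices whose MAC is not in the asset inventory."""
    return [d for d in observed_devices if d["mac"] not in inventory]

for device in find_unknown_devices(observed, known_assets):
    print(f"ALERT: unknown endpoint {device['mac']} at {device['ip']} "
          f"(seen via {device['source']})")
```

Note that the passive monitor is what catches the unknown device here: it never authenticated, so an active scan keyed off the asset list alone would miss it.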

Making Sense of Overlapping and Conflicting Data

Another big challenge for CM advocates is data integration. Many organizations use various tools that touch the endpoint subsystem; for example, they may use code analyzers to identify software flaws, threat intelligence software to detect vulnerabilities in networks and network access controllers to trace devices. The CM solution therefore has to piece together datasets from multiple, disparate systems, which often leads to overlapping and/or conflicting data. Immature tools may also scan devices inaccurately, so the generated reports may be questionable.

The SCAP (Security Content Automation Protocol) standard may help companies alleviate some of these issues, but it has its own complexities. As most analysts know, leveraging standards doesn’t always lead to interoperability because of differing interpretations, variations in the versions of supported tools and so on.

Applying MDM techniques can help enterprises address some of these challenges. For instance, cross-references and other capabilities can be used to determine a master identifier for each device, along with all the additional identifiers used by different sensors, such as IP (Internet Protocol) address, MAC address and server name. However, several different system reports will need to be analyzed to ensure that the data is clean, complete and of high quality.
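A minimal version of such a cross-reference is a lookup table that maps every identifier a sensor might report back to one master device ID, so findings from disparate tools can be merged per device. The identifiers and findings below are made up for the sake of the sketch.

```python
# Sketch: MDM-style cross-referencing. Any identifier a sensor uses (IP,
# MAC, hostname) resolves to a single master device ID. Data is illustrative.
from collections import defaultdict

# Cross-reference table: known identifier -> master device ID.
xref = {
    "10.0.0.11": "DEV-001",
    "aa:bb:cc:00:00:01": "DEV-001",
    "web-frontend-1": "DEV-001",
    "10.0.0.23": "DEV-002",
}

# Findings from different tools, each keyed by that tool's preferred identifier.
findings = [
    {"id": "10.0.0.11", "tool": "code-analyzer", "issue": "outdated TLS library"},
    {"id": "web-frontend-1", "tool": "threat-intel", "issue": "known CVE match"},
    {"id": "10.0.0.23", "tool": "nac", "issue": "unauthorized VLAN access"},
]

def merge_by_master_id(findings, xref):
    """Group findings under their master device ID; unknown IDs are flagged."""
    merged = defaultdict(list)
    for f in findings:
        master = xref.get(f["id"], "UNRESOLVED")
        merged[master].append((f["tool"], f["issue"]))
    return dict(merged)

merged = merge_by_master_id(findings, xref)
# DEV-001 now carries findings from both the code analyzer and threat intel,
# even though each tool addressed the device by a different identifier.
```

The `UNRESOLVED` bucket is the practical payoff: it surfaces exactly the dirty or incomplete identifier data the paragraph above warns about.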

Beyond that, firms can implement a trust model that defines which tools, sensors and endpoints to trust for which kinds of findings. Reports from authenticated, always-on scanners, for example, could be labeled as more credible than reports generated by network-based monitoring solutions.
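One simple way to encode such a trust model is to rank sources and, when two of them disagree about the same device, keep the finding from the higher-ranked source. The ranks and report data below are assumptions for illustration only.

```python
# Sketch of a trust model: when sources conflict, prefer the more trusted one.
# Trust ranks are illustrative assumptions, not a standard scale.
TRUST_RANK = {
    "authenticated-scanner": 3,  # credentialed, always-on scans
    "endpoint-agent": 2,
    "network-monitor": 1,        # passive, unauthenticated observation
}

def resolve_conflict(reports):
    """Pick the report from the highest-trust source; unknown sources rank 0."""
    return max(reports, key=lambda r: TRUST_RANK.get(r["source"], 0))

# Two sources disagree about the same device's operating system.
reports = [
    {"source": "network-monitor", "device": "DEV-001", "os": "Windows 7"},
    {"source": "authenticated-scanner", "device": "DEV-001", "os": "Windows 10"},
]

best = resolve_conflict(reports)
print(best["os"])  # prints "Windows 10": the authenticated scanner wins
```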

Addressing Non-Integrated Security Control Mechanisms

This complexity stems from the previous challenge. Many companies develop a false sense of security because they fulfill technical standards and have gone on an infosecurity deployment spree in every control area. However, the threat landscape keeps shifting even as they burden their infrastructure with these bolted-on defenses. Heterogeneous data centers only add to the complication.

Hence, organizations have no option but to tweak and configure their CM to meet requirements that cannot be addressed with a one-size-fits-all mentality. It is also practically impossible for security analysts to monitor 100 TB of disparate data in a single day. They can no longer afford to sift through various logs one by one, and therefore need a solution that fuses all of those data generators together.

The key is a SIEM (security information and event management) system that can ingest data from a range of sources and streamline security controls in a heterogeneous data center ecosystem. SIEM in conjunction with CM also enables companies to correlate activities across different hosts, see threats categorized into distinct stages, and then reconstruct the chain of events to determine whether a threat can still be mitigated or it’s too late. In other words, while a continuous monitoring system might detect part of a security breach and flag a specific event, the SIEM system will correlate the threat vectors across all of the events.
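The cross-host correlation idea can be illustrated with a toy pass that links events from different hosts sharing the same indicator (here, a source IP) inside a time window. The events and the ten-minute window are invented; a real SIEM applies far richer correlation rules.

```python
# Sketch: toy SIEM-style correlation. Events from different hosts that share
# a source IP within a time window get fused into one alert. Data is invented.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"host": "web-01", "src_ip": "203.0.113.5", "time": datetime(2018, 11, 21, 9, 0)},
    {"host": "db-01",  "src_ip": "203.0.113.5", "time": datetime(2018, 11, 21, 9, 4)},
    {"host": "web-02", "src_ip": "198.51.100.9", "time": datetime(2018, 11, 21, 9, 5)},
]

def correlate(events, window=timedelta(minutes=10)):
    """Flag source IPs that touch multiple hosts within the window --
    a pattern each host's monitor would miss on its own."""
    by_ip = defaultdict(list)
    for e in events:
        by_ip[e["src_ip"]].append(e)
    alerts = []
    for ip, evts in by_ip.items():
        hosts = {e["host"] for e in evts}
        times = [e["time"] for e in evts]
        if len(hosts) > 1 and max(times) - min(times) <= window:
            alerts.append({"src_ip": ip, "hosts": sorted(hosts)})
    return alerts

alerts = correlate(events)
# 203.0.113.5 touched web-01 and db-01 four minutes apart -> one fused alert;
# the single event on web-02 stays below the correlation threshold.
```

Each host-level monitor here sees only one event apiece; only the correlation layer sees that the same address is moving across hosts, which is precisely the gap the paragraph above describes.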

Meeting Compliance Goals

How “continuous” can a CM solution be? To establish their ideal frequency, organizations should consider the complexity and size of their infrastructure, the manual and automated processes deployed, the sensitivity of the information being secured and the overall resources available for addressing compliance-related deviations within an acceptable time frame. CM could mean testing once a week, once a month or even once a day.

Regardless of frequency, companies need to understand how that frequency will impact the personnel responsible for compliance. Though automated assessments on an hourly basis, for instance, seem like an advantageous way to shut down any threat vector, in practice they can become unmanageable: compliance disruptions demand constant management, not to mention the reprioritization of resources needed to address violations adequately.

For organizations embarking on a continuous monitoring journey, the best way to tackle compliance challenges is to begin with an achievable CM frequency. This allows them to detect and address compliance-related deviations in a timely manner, in line with their financial capability and available resources. Assessments of cloud environments can be automated, letting cloud security and IT teams streamline their efforts: less time on researching, identifying and prioritizing violations and misconfigurations, and more on remediation. As a result, an organization gains a more viable way of both shortening response times and meeting compliance goals.
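An automated assessment of this kind boils down to running a set of compliance rules over resource inventory at whatever frequency the organization has chosen, and emitting only the deviations for remediation. The resource records and rules below are hypothetical placeholders for whatever a cloud provider’s inventory API actually returns.

```python
# Sketch: an automated compliance check a team might schedule at its chosen
# CM frequency. Resources and rules are hypothetical examples.

resources = [
    {"name": "bucket-logs", "encrypted": True, "public": False},
    {"name": "bucket-uploads", "encrypted": False, "public": True},
]

# Each rule pairs a description with a predicate returning True when compliant.
rules = [
    ("storage must be encrypted at rest", lambda r: r["encrypted"]),
    ("storage must not be publicly accessible", lambda r: not r["public"]),
]

def assess(resources, rules):
    """Return (resource, violated-rule) pairs -- the remediation worklist."""
    return [(r["name"], desc)
            for r in resources
            for desc, check in rules
            if not check(r)]

deviations = assess(resources, rules)
for name, desc in deviations:
    print(f"DEVIATION: {name}: {desc}")
```

Because the output is already a prioritized worklist of deviations rather than raw logs, the team spends its time on remediation instead of identification, which is the trade-off the paragraph above describes.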

The Bottom Line

The holy grail of context-aware continuous monitoring is beginning to emerge with today’s information security solutions. And that’s a good thing. However, organizations will also need to keep the associated challenges top of mind as they rely on CM to provide actionable intelligence which they can use to mitigate threats.

