Introduction

Threat hunting is a proactive and iterative approach to detecting threats. Because it is carried out by a human analyst, it falls under the active defense category of cybersecurity, even though it relies heavily on automation and machine assistance. The analyst's main task is to determine the initial threat to hunt and how that type of malicious activity will be found within the environment. We refer to this challenge as the hypothesis. In this article, we will discuss the different types of hunting hypotheses and how they can be combined to enable an effective hunt.

What are the most popular threat-hunting methodologies and hypotheses?

Threat hunters develop hypotheses by carrying out careful observations. These could be as simple as noticing that a particular event "just doesn't seem right," or something more complicated, such as a supposition about ongoing threat-actor activity within the environment based on a combination of external threat intelligence and past experience with the actor.

Threat hunters will experience success through platforms that enable them to generate hypotheses while simultaneously reducing any barriers that may hinder testing of the hypotheses. That may be done by, for example, providing ready access to the data and tools needed to perform the tests.

We shall explore the most popular hypotheses and outline how and when to formulate them. They include:

Intelligence-Driven Hypotheses

The understanding of adversary tactics, techniques and procedures (TTPs) through the use of indicators of compromise (IOCs) has led to the concept of intelligence-driven defense. Hunters make use of this intelligence as a basis for the questions that lead them to the formulation of the hypothesis.

For example, consider an adversary that conducts phishing campaigns. If the source of the infrastructure used for the attack is determined to be in Canada, then this may be documented in the form of an IOC. These IOCs could then lead the hunter to form a hypothesis that may result in the threat being found within the defender's environment. For instance, the hunter might examine incoming email logs for messages originating from IP addresses that match the documented infrastructure.
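A search like this can be scripted once the IOCs are documented. The sketch below is a minimal illustration of matching mail-log entries against IOC networks; the log format, field names and the IP range (a reserved documentation range) are all hypothetical placeholders, not a real feed or schema.

```python
import csv
import io
from ipaddress import ip_address, ip_network

# Hypothetical IOC list: networks attributed to the phishing infrastructure.
# 203.0.113.0/24 is a reserved documentation range used as a placeholder.
IOC_NETWORKS = [ip_network("203.0.113.0/24")]

def find_ioc_matches(mail_log_csv: str) -> list:
    """Return mail-log rows whose source IP falls inside an IOC network."""
    matches = []
    for row in csv.DictReader(io.StringIO(mail_log_csv)):
        src = ip_address(row["src_ip"])
        if any(src in net for net in IOC_NETWORKS):
            matches.append(row)
    return matches

# Illustrative two-row log: one IOC hit, one benign sender.
log = "src_ip,subject\n203.0.113.45,Invoice attached\n198.51.100.7,Weekly report\n"
print(find_ioc_matches(log))
```

In practice the same logic would run as a query inside a SIEM rather than a standalone script, but the shape of the test is the same: the hypothesis names a concrete observable, and the tooling either confirms or refutes it.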

IOC searches may not always lead directly to generating a hypothesis. They may, however, result in the discovery of alerts and log entries that the hunter can then prioritize for investigation. The hunter may then begin to ask questions about the data and the kind of adversary activity they might represent. IOCs provide various pieces of information, including:

  • The locations where defenders might be able to find the IOCs within the organization
  • The methods of encryption or obfuscation being employed by the adversary
  • The overlap (if any) between command-and-control servers and multiple intrusions or campaigns
  • Ways in which the adversary is acquiring command-and-control servers and what that says about the adversary’s sophistication

It is also important for hunters to be aware of the source of IOCs. IOCs should originate from trusted sources and be tied to a specific phase of the kill chain. For instance, IOCs related to the reconnaissance phase will allow hunters to generate hypotheses that differ significantly from those generated from IOCs related to, say, exploitation.

Hunters must note that relying entirely on IOCs may quickly overload them with low-quality matches. This is because many threat data feeds today lack the context to make them true indicators. Even though these may yield a few true positives, the many false positives end up wasting a great deal of hunting time.
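One way to keep feed noise out of the hunt is to triage indicators before building hypotheses on them, keeping only those that come from a trusted source, map to a kill-chain phase and carry some context. The sketch below is a simplified illustration of that filter; the `Indicator` fields, the trusted-source names and the feed entries are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    value: str
    kind: str                     # e.g. "ip", "domain", "hash"
    source: str                   # feed or team that produced the IOC
    kill_chain_phase: str = ""    # e.g. "reconnaissance", "delivery"
    context: dict = field(default_factory=dict)  # campaign, actor, etc.

# Hypothetical allow-list of feeds the team has vetted.
TRUSTED_SOURCES = {"internal-ir", "vendor-x"}

def worth_hunting(ioc: Indicator) -> bool:
    """Keep only indicators with a trusted source, a kill-chain phase
    and at least some surrounding context."""
    return (ioc.source in TRUSTED_SOURCES
            and bool(ioc.kill_chain_phase)
            and len(ioc.context) > 0)

feed = [
    Indicator("198.51.100.7", "ip", "open-feed"),  # bare IP, no context
    Indicator("evil.example", "domain", "vendor-x", "delivery",
              {"campaign": "example-phish"}),
]
print([i.value for i in feed if worth_hunting(i)])
```

The bare open-feed IP is dropped while the contextualized domain survives, which mirrors the point above: context, not volume, is what turns feed data into a usable indicator.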

Hunters should think of hunting as more than a static process. Many of the hypotheses formulated during a hunt can be used later, even if there is not enough time to fully explore them initially.

Finally, good intelligence-driven hypothesis generation takes into consideration assessments of the geopolitical and threat landscapes and seeks to combine low-confidence alerts and indicators with additional information that will help determine their usefulness. Intelligence-driven hypotheses can lead to some of the quickest discoveries in an environment. However, hunters must still have a solid understanding of the environment in which they operate.

Awareness-Driven Hypotheses

Threat hunters need to understand their environment and be able to identify when it changes in some significant way. Having this situational awareness allows hunters to create hypotheses about the type of adversary activity that could occur within their environments.

Hunters focus on the most important assets and information, identified through what is commonly known as a Crown Jewels Analysis (CJA). The hunter can then begin asking questions that lead to hypotheses about what an adversary might be looking for upon entry into the organization's network. The hunter then narrows down the most important data to collect in the environment, along with the locations where that data resides.

Organizations preparing for a CJA need to:

  • Identify the organization’s core missions
  • Map the mission to the assets and information upon which it relies
  • Discover and document the resources on the network
  • Construct attack graphs. Here the emphasis will be on determining the dependencies on other systems or information, analyzing potential attack paths for the assets and interconnections, and rating any potential vulnerabilities according to severity
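The attack-graph step above can be sketched very simply: model systems as nodes, model "an attacker on A can reach B" as an edge, then enumerate the paths that end at a crown-jewel asset. The graph below is entirely hypothetical (the system names are invented), and a real CJA would also weight edges by vulnerability severity, but the path enumeration is the core idea.

```python
# Hypothetical attack graph: an edge A -> B means an attacker who
# controls A can plausibly reach B. "rd-database" is the crown jewel.
ATTACK_GRAPH = {
    "workstation": ["file-server", "mail-server"],
    "mail-server": ["domain-controller"],
    "file-server": ["domain-controller"],
    "domain-controller": ["rd-database"],
}

def attack_paths(graph, start, target, path=None):
    """Enumerate simple (loop-free) paths from an entry point to a target asset."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid revisiting nodes
            paths.extend(attack_paths(graph, nxt, target, path))
    return paths

for p in attack_paths(ATTACK_GRAPH, "workstation", "rd-database"):
    print(" -> ".join(p))
```

Each printed path is a candidate hypothesis: "if the adversary landed on a workstation, we should see lateral movement through the file server or mail server toward the domain controller," which tells the hunter exactly which logs to prioritize collecting.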

This analysis allows hunters to prioritize their most tempting targets by generating hypotheses about the threats that could impact the organization the most. It is also important for hunters to be open-minded and avoid hypotheses that cannot lead to successful hunts.

Hunters should also keep abreast of rapidly changing infrastructure, software and vulnerabilities by taking advantage of automation (especially in dashboarding), reporting and risk scoring. Manually observing and documenting all the data flows and assets in an environment is prohibitively time-consuming.

Finally, it is important for hunters to understand that situational awareness should not be limited to technical aspects only; rather, people, process and business are also critical parts of an organization’s threat landscape. Not accounting for these factors makes defense more difficult.

Analytics-Driven Hypotheses

Hunters must be mindful of biases and other poor analytic habits that might lead them to prejudge a situation. Say, for instance, that a hunter previously worked in a government setting focused on a certain kind of threat. The hunter may find that this domain expertise has introduced biases that steer hypothesis formulation toward the threats previously faced. Unchecked, bias can also lead to defensive attitudes about sharing threat data, and thus to poor analytical conclusions. Hunters may find themselves focusing on a threat long after it has ceased to be active in the environment.

Hunters often rely on models and analytical frameworks to help structure data to reveal patterns despite their biases. An example is the Diamond Model of Intrusion Analysis, which requires hunters to structure the data they find into four categories: adversary, infrastructure, capability and victim. Models allow hunters to structure data for analysis; however, they are not a perfect approach to every situation. It is therefore important for hunters to understand the limitations of their domain expertise and how to defeat cognitive biases.
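Structuring events along the Diamond Model's four vertices also makes pivoting mechanical: once every event records its adversary, infrastructure, capability and victim, grouping on any one vertex links otherwise separate observations. The sketch below illustrates an infrastructure pivot; the event data (IP, file names, host names) is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DiamondEvent:
    """One intrusion event structured along the Diamond Model's four vertices."""
    adversary: str       # attributed actor, or "unknown"
    infrastructure: str  # e.g. a C2 IP or domain
    capability: str      # e.g. a malware sample or tool
    victim: str          # affected host or user

# Hypothetical events from two different hosts.
events = [
    DiamondEvent("unknown", "203.0.113.45", "phishing-doc.docm", "finance-ws-01"),
    DiamondEvent("unknown", "203.0.113.45", "beacon.exe", "rd-srv-02"),
]

# Pivot on shared infrastructure: the common C2 address ties the two
# events, with different capabilities and victims, into one intrusion set.
by_infra = {}
for e in events:
    by_infra.setdefault(e.infrastructure, []).append(e.victim)
print(by_infra)
```

The structure does the analytic work the prose describes: the hunter did not need a prior belief that the two hosts were related; the shared vertex surfaced the link, which is exactly how a model can cut against individual bias.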

What methodologies are the most effective?

In order to develop the most effective hypothesis, the hunter should combine the three different types. Intelligence combined with situational awareness and the domain expertise of the hunter will result in hypotheses that are more likely to be successful at discovering threats in the environment. This process should be guided by formal models such as the Hunting Maturity Model (HMM).

How does the maturity of hypotheses compare?

It is important to note that not all hypotheses are good hypotheses. Let's consider an example comparing the maturity of hypotheses between a novice hunter and a more experienced one.

 

  • Novice hunter (Hunter A)

A hunter identifies an IOC alert on a new file that has run on the domain controller in one of the organization's business units. The hunter generates the hypothesis that this new file will be found on domain controllers in other business units as well, and sets out to test each domain controller independently.

  • Expert hunter (Hunter B)

In contrast, a more experienced hunter also knows from the Crown Jewels Analysis that the research and development network's data is the most important to the organization. From intelligence reporting, the hunter also knows a new threat group has been stealing proprietary research information from similar organizations, and that the group is known to use malware similar to that found on the domain controller. This hunter, therefore, generates a hypothesis that the IOC is one of multiple files the adversary is using and that sensitive research documents are the adversary's goal and will likely be exfiltrated off the network via encrypted communications.

In order for the hunter to formulate a good hypothesis, technology that supports the process of answering questions is required. A good hypothesis must be testable; hypotheses that cannot be tested are not grounded in reality. This also means that hunters should reevaluate how they are generating and prioritizing hypotheses. Organizations should ensure that adequate tools exist for the data that is available for testing. If there are no tools available, or the organization lacks the appropriate skills, then these are issues that should be resolved as soon as possible.

Do tools enable the hunter?

Hunters should demand automation when acquiring threat-hunting tools and solutions. One way to determine whether a problem exists within an organization's security architecture is to check the available tools against the formulated hypotheses. The tools must be able to answer the questions a hypothesis poses; otherwise there is a technology issue that needs to be addressed. If a question cannot be answered because the appropriate data is not available, there is a collection issue that needs immediate attention. If the hunters are unable to formulate a hypothesis to test at all, they may be exhibiting the effects of bias or inexperience. A good training resource for hunters, maintained by other hunters, is The ThreatHunting Project.
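The tool-versus-hypothesis check above can be run as a simple audit: list each hypothesis's required data sources and compare them against what the organization can actually query. The sketch below is a toy illustration of that gap analysis; the hypothesis names, data sources and tool names are all invented for the example.

```python
# Hypothetical inventory: data source -> tool that can query it.
AVAILABLE = {
    "email-logs": "SIEM",
    "dns-logs": "SIEM",
}

# Hypothetical hypotheses and the data each one needs to be tested.
HYPOTHESES = {
    "phishing from IOC network": ["email-logs"],
    "exfil over encrypted channels": ["netflow", "tls-metadata"],
}

def collection_gaps(hypotheses, available):
    """Map each hypothesis to the data sources it needs but cannot query."""
    return {name: [d for d in needs if d not in available]
            for name, needs in hypotheses.items()}

for name, missing in collection_gaps(HYPOTHESES, AVAILABLE).items():
    status = "ready to test" if not missing else f"collection gap: {missing}"
    print(f"{name}: {status}")
```

A hypothesis with an empty gap list is testable today; a non-empty list is exactly the "collection issue that needs immediate attention" described above, and it doubles as a prioritized shopping list for new telemetry.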

Automation empowers hunters to establish a repeatable and sustainable process within the organization. Technology also helps lower the barriers that keep organizations from hunting today: there are simply not enough analysts with deep domain expertise to counter all the threats observed. Equipping threat hunters with the appropriate platforms bolsters both intelligence-driven and awareness-driven hypotheses. Through this process, threat hunters will also become better analysts, gaining valuable domain expertise over time. Successful hunting trips help build more successful hunters.

Conclusion

In this article, we have discussed the most popular threat-hunting methodologies and how they compare. Hunters have a lot to bear in mind when conducting hunts, and these concerns have been reviewed as well. We also discussed the role of threat-hunting tools, emphasizing that hunters must not rely entirely on them. Rather, an effective hunt must involve tools that allow for automation, giving the hunter the additional space to create stronger hypotheses.

Sources

The ThreatHunting Project

Threat Hunting for Dummies, Peter H. Gregory. Accessed June 29, 2018, https://www.afcea.org/signal/resources/


Section Guide

Lester
Obbayi

View more articles from Lester

