Introduction

Eric Stevens, vice president of engineering and principal architect at ProtectWise, gave the Cyber Work with Infosec podcast an insider view of the use of artificial intelligence (AI) in cybersecurity. Eric talked us through his own experience with AI, as well as some of the findings of ProtectWise’s latest report on AI in cybersecurity, titled “The State of AI in Cybersecurity.” The research behind the report was carried out by Osterman Research.

Here are some of the key notes from Eric Stevens’ interview.

Where does AI fit in the cybersecurity industry?

AI has become a very broad term, and there is a lot of marketing confusion around it. When people think of AI, they tend to think of machine learning and deep learning, but chatbots can also come under the heading of AI even though they don’t use automated learning. Because of this overlap, the research allowed respondents some leeway in how they defined AI.

Using AI to enable humans is key, and identifying anomalous behavior is one such use. For example, AI can examine data that it would be a privacy violation for a human to look at.
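
The report doesn’t describe any particular algorithm, but as a minimal sketch of this kind of machine-assisted anomaly detection, the example below uses scikit-learn’s IsolationForest over network-flow features. Everything here, from the feature names to the traffic distributions, is illustrative rather than anything ProtectWise describes.

```python
# Minimal anomaly-detection sketch; not ProtectWise's implementation.
# The per-connection features below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical features per connection: [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 4_000, 10],
                            size=(1_000, 3))

# Train on traffic assumed to be mostly benign; no human reads the raw records.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new connections: 1 means normal, -1 flags an anomaly for analyst review.
new_connections = np.array([
    [5_200, 21_000, 28],    # looks like ordinary traffic
    [900_000, 500, 3_600],  # huge upload, very long session: exfiltration-like
])
print(model.predict(new_connections))  # expected: [ 1 -1]
```

The design point is that the model, not a human, reads the raw records; an analyst only sees the connections flagged for review.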

The automation of cybersecurity investigation and response ultimately saves security analysts time.

Who is using AI security products?

Companies of all sizes are using AI in a cybersecurity setting. However, the research ProtectWise commissioned looked only at organizations with a thousand employees or more.

An interesting finding from the research was that small security teams are as likely to use AI-based security tools as their larger counterparts. The deciding factor in the choice to move to an AI-based system is alert volume, likely because of the need to handle large volumes of alerts efficiently.

Companies, on the whole, see the executive class driving the move to AI. This may be because leading the charge, and not missing opportunities, is seen as an executive function. There is also an element of needing to cover all bases and to show confidence that the company can get value out of these tools.

The report found that 73% of organizations have implemented some level of AI, and 60% believe AI makes for faster investigation of alerts.

The report identified that in 55% of organizations, executives were the strongest advocates for the use of AI.

Should security analysts worry about redundancy?

The research found that most security analysts are not worried about redundancy. The state of the art in cybersecurity is still about the power of humans, and more pressing concerns, such as the cybersecurity skills shortage, take precedence. AI replacing jobs simply isn’t a concern at this point, as AI is about automating low-level processes; there is still some way to go before AI tools are in any position to replace jobs in the cybersecurity industry.

The report found that 54% of respondents said that AI results are inaccurate, which underlines how far these tools still have to go.

How can AI be used as a security threat?

One of the biggest concerns, and one of the most difficult issues to control in the security ecosystem, is the absence of controls over adversarial machine learning. If an attacker can synthesize data that poisons the training process, they can get the algorithm to give the wrong answers; the result is false negatives or false positives.
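
The interview doesn’t go into the mechanics, but label flipping is one simple, well-known form of training-data poisoning. The toy sketch below, using synthetic data and scikit-learn, shows how relabeling a share of malicious training samples as benign teaches a classifier to miss attacks (false negatives); all names and numbers are illustrative.

```python
# Toy label-flipping poisoning sketch; all data here is synthetic and illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic "benign (0) vs. malicious (1)" detection data.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def attack_recall(train_labels):
    """Share of real attacks the detector still catches."""
    clf = LogisticRegression(max_iter=1_000).fit(X_train, train_labels)
    return recall_score(y_test, clf.predict(X_test))

# The attacker poisons the training set: 60% of malicious samples are
# relabeled as benign, teaching the model to wave attacks through.
rng = np.random.default_rng(seed=0)
flip = (y_train == 1) & (rng.random(len(y_train)) < 0.60)
y_poisoned = np.where(flip, 0, y_train)

print(f"attack recall, clean labels:    {attack_recall(y_train):.2f}")
print(f"attack recall, poisoned labels: {attack_recall(y_poisoned):.2f}")  # lower
```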

If an attacker can convince a firewall that a harmless event represents an attack, the system can be shut down, causing major disruption. A human operator might be better able to take a measured view when making such a decision.

What are the limitations of AI-based security tools?

False positives are a big issue. Over half of the respondents in the report felt that AI-based rules set too high a bar, and the resulting false positives caused problems. In addition, two-thirds felt that no AI solutions on the market offer controls for zero-day threats.

One of the most difficult issues in using AI tools was interpreting the results. Respondents often found it difficult to reason through results that come with no explanation, which made them hard to act on.
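
No specific explainability technique is named in the report; as a minimal sketch of what an actionable explanation could look like, the example below ranks the per-feature contributions of a linear model for a single flagged event. The feature names and data are hypothetical.

```python
# Sketch of a simple, human-readable explanation for one alert, using the
# per-feature contributions of a linear model; names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["bytes_out", "failed_logins", "rare_port", "off_hours"]

# Toy training data: benign (0) vs. suspicious (1) events.
rng = np.random.default_rng(seed=0)
X = rng.random((500, 4))
y = (X @ np.array([0.5, 2.0, 1.5, 1.0]) + rng.normal(0, 0.2, 500) > 2.5).astype(int)

clf = LogisticRegression(max_iter=1_000).fit(X, y)

# One event that just triggered an alert.
event = np.array([0.2, 0.9, 0.8, 0.1])
contributions = clf.coef_[0] * event  # each feature's pull toward "suspicious"

# Show the analyst *why* the model fired, strongest signal first.
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>14}: {c:+.2f}")
```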

The learning curve of AI

Cyber Work asked Eric if the deficiencies in AI cybersecurity tools will level out over time.

The answer was that AI tools are likely to improve over time as larger datasets become available. To improve them further, security analysts must be given better controls to set thresholds on false positives and false negatives.
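
The report doesn’t say what form such controls should take. As a minimal sketch of the underlying trade-off, the example below tunes a single alert threshold over synthetic detector scores, showing how raising it suppresses false positives at the cost of missed attacks.

```python
# Sketch of an analyst-tunable alert threshold; scores and labels are synthetic.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical detector outputs: benign events score low, attacks score high,
# with enough overlap that no threshold is perfect.
benign_scores = rng.normal(0.30, 0.15, size=10_000)
attack_scores = rng.normal(0.70, 0.15, size=100)

def alert_counts(threshold):
    false_positives = int((benign_scores >= threshold).sum())
    false_negatives = int((attack_scores < threshold).sum())
    return false_positives, false_negatives

# Raising the bar suppresses noisy alerts but starts missing real attacks.
for threshold in (0.4, 0.5, 0.6, 0.7):
    fp, fn = alert_counts(threshold)
    print(f"threshold {threshold:.1f}: {fp:5d} false positives, {fn:3d} missed attacks")
```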

The number one complaint in the report is alert volume, so it seems that the original problem that leads companies to turn to AI is not yet resolved.

Future of AI-based security

Data is the new oil, and companies with large datasets will do a better job of using AI-based security. The false positive issue also needs to reach manageable levels; setting a threshold for acceptable failure rates is easier when the problem is well understood. Otherwise it is hard to apply AI, because judging the end result becomes a subjective analysis. Ultimately, however, you cannot be successful with AI if you don’t have the data.

Will AI always be subservient to the security analyst?

The dirty secret is that, behind closed doors, companies are looking to replace humans with machines to improve profits. Some job functions are more susceptible to this than others, and it is not unique to AI; the fourth industrial revolution has this element to it.

Where job functions can be replaced by AI, the people affected can usually find better options, and corporate profitability will inevitably drive this forward.

Cyber Work asked Eric: Which skill sets are recommended to avoid the “rise of the machines”?

Eric responded that AI is about enabling analysts. Security analysts can feel safe for a while to come, especially if the transition is slow, and so far it has been. Natural solutions are likely to present themselves. In the security industry, humans are being empowered rather than replaced.

Other uses of AI outside of cybersecurity

The offensive use of AI may seem fantastical, but attackers will be using it within five years. We will see it in spearphishing: cybercriminals will use AI to create very believable phishing campaigns.

As more data becomes public-facing, AI will be able to mine it to identify an organization’s weak points and vulnerabilities.

AI is also entering our personal lives through consumer products like Amazon Echo, Google Home and so on. This generates privacy concerns.

What is ProtectWise working on?

ProtectWise is a network detection and response company that offers cloud-based solutions.

The platform offers rich investigation and forensic tools for both known and unknown attacks, and its response tools allow for pattern recognition. ProtectWise believes in improving the way security analysts interact with data.

Currently, ProtectWise is working on building datasets. They have a significant amount already, but more data allows for further improvement in detection. They want deep data to improve the ability to detect the unknown rigorously and scientifically.

They also realize that traditional solutions need to work in tandem with AI; ProtectWise brings traditional and AI solutions together synergistically.

You can sign up for a test drive on the ProtectWise website.

Watch our interview with Eric Stevens here.


Sources

  1. The State of AI in Cybersecurity, ProtectWise
  2. The Current State of Artificial Intelligence in Cybersecurity, Infosec (YouTube)
  3. How to become a Security Analyst, Infosec
  4. Request Demo of The ProtectWise Grid™, ProtectWise