Is AI the cybersecurity skills shortage silver bullet?
As cyberattacks continue to garner attention and cybercriminals grow more brazen, it is no surprise that organizations feel increasing pressure to bolster their cybersecurity programs.
Protecting an organization’s network and improving its defenses relies on the efforts and experience of security professionals, who learn from past attacks and work to strengthen defenses against what lies ahead. However, the number of skilled cybersecurity professionals is well below the number of job vacancies. According to one study, nearly three out of five cybersecurity professionals consider the labor shortage to be either “serious” or “very bad.” This matches a 2019 U.S. Bureau of Labor Statistics outlook for information security analysts, which projected that demand would grow by 31% over the coming decade. CyberSeek calculated that there were over 300,000 unfilled cybersecurity roles in the United States, and it rates the supply of existing cybersecurity workers relative to job openings as “very low.”
Tired of being at a disadvantage, some organizations are beginning to rely on artificial intelligence (AI) and machine learning (ML) as at least a supplement to, if not a leading part of, their cybersecurity posture. According to security firm Capgemini, 80% of the companies it surveyed are looking to AI to help identify threats and thwart attacks.
What does this growing trend mean for the larger cybersecurity industry and, more importantly, an individual organization’s level of preparedness for a cyberattack? Could AI be the silver bullet to the cybersecurity skills shortage?
Evaluating automated tools’ role in cybersecurity
With organizations such as SlashNext and Osterman Research claiming, “The strategy of hiring increasing numbers of cybersecurity professionals with the right skill set is dead,” it is obvious why organizations are increasingly turning toward AI and ML. But how organizations should reallocate the roles and methods they use for cybersecurity will be a key question going forward. To answer it, they need to understand the technology’s current strengths and shortcomings so they can make an informed decision.
Here are just a few of the key arguments for and against AI and ML as that “silver bullet.”
Where AI can help overcome the cybersecurity skills shortage
According to one Cisco study, over two-fifths of respondents report that they have settled into a “response-only mode,” in which they can only react to attacks and perform remediation rather than handle proactive defensive tasks. With this as the reality for many organizations, it is easy to see why AI and ML are seen as ways to overcome the skills shortage. A 2017 MIT Sloan Management Review study found that 20% of organizations have already implemented AI systems or processes, and that 70% of those organizations’ executives see AI playing a “significant” role in their operations for at least the next five years.
It is easy to see why: AI and ML can work around the clock, constantly monitoring for deviations from the norm and acting automatically according to their “training.” They can also orchestrate across systems, initiating follow-on actions to consolidate data, generate an alert, close down a port or make other “decisions” within their scope. While security professionals have physical and time limits, AI and ML can perform at a consistently high level. With a looming and seemingly never-ending cybersecurity skills shortfall, combined with a limited skills pipeline, AI is a legitimate option.
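The orchestration pattern described above, in which a detection score triggers a chain of follow-on actions, can be sketched in a few lines. Every name, action label and threshold here is a hypothetical illustration, not a real product’s API:

```python
# Minimal sketch of automated security orchestration: a detector's score
# on an event triggers follow-on actions (consolidate data, raise an
# alert, close a port). All names and thresholds are invented.

def orchestrate(event: dict, score: float, threshold: float = 0.8) -> list:
    """Return the list of follow-on actions for a scored event."""
    actions = ["consolidate_logs"]          # always gather related data first
    if score >= threshold:
        actions.append("raise_alert")       # notify the on-call analyst
        if event.get("direction") == "inbound":
            # "Decision" within scope: block the offending inbound port.
            actions.append("close_port:%s" % event.get("port"))
    return actions

# A high-scoring inbound event triggers the full response chain.
print(orchestrate({"direction": "inbound", "port": 3389}, score=0.93))
# -> ['consolidate_logs', 'raise_alert', 'close_port:3389']
```

In practice the action strings would map to playbook steps in a SOAR platform; the point is that the chain runs without waiting for a human, which is exactly what gives AI-driven tooling its around-the-clock coverage.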
AI is seen as a way to better use the time and energy of the cybersecurity staff organizations already have, while also using the strengths of AI to reduce the attack surface and monitor systems on their behalf. AI and ML can already apply advanced algorithms to complement their human partners with quick data consolidation and fast, accurate analysis, drastically reducing the effort the same tasks require when performed manually. AI can track and crunch more data, across more variables, far more quickly.
This functionality is already in place in network monitoring tools, advanced firewalls, data loss prevention tools and more, both on-site and in the cloud. These repetitive yet time-intensive tasks play to the strengths of AI while freeing up cybersecurity professionals to handle more managerial, strategic and one-time tasks. Some AI tools integrated into incident detection systems can help identify new malware and phishing campaigns based on the unique or abnormal behaviors these threats exhibit on a network, drawing on logs from many systems simultaneously and in near real time. And once something is flagged, these AI-enabled tools can be instructed to automatically take actions such as cutting off a connection, isolating a threat or beginning the early stages of incident remediation.
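The behavior-based flagging described above often boils down to comparing current activity against a learned baseline. A minimal statistical sketch, using an invented per-host request rate and a simple z-score cutoff (real products use far richer models):

```python
import statistics

# Toy behavior-based anomaly check: flag a host whose current event rate
# deviates from its historical baseline by more than `z_cutoff` standard
# deviations. All numbers are invented for illustration.

def is_anomalous(history: list, current: float, z_cutoff: float = 3.0) -> bool:
    """True if `current` is more than z_cutoff sigmas from the baseline mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(current - mean) > z_cutoff * stdev

baseline = [102, 98, 110, 95, 101, 99, 104, 97]   # requests/min on quiet days
print(is_anomalous(baseline, 105))   # ordinary traffic -> False
print(is_anomalous(baseline, 480))   # sudden spike     -> True
```

A flagged result like the spike above is what would feed the automated responses mentioned in the text, such as cutting a connection or opening an incident ticket.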
Where AI falls short as a security solution
One of the ways to underscore the risks involved in relying too much on AI and ML can be found in a May 2018 New York Times report. Researchers in the United States and China were able to successfully take over AI systems developed by Amazon, Google and Apple to perform unintended tasks, all without the knowledge of the systems’ owners or developers. Could this already be occurring with other AI systems? What else could an AI system be commanded to do, especially if it has specialized privileges and access?
It is not a huge leap to think of a cybercriminal taking the same steps to target an AI-enabled cybersecurity tool or system and using it to their advantage, or even to attack other organizations without the owners knowing. Sadly, this is not theoretical: More than 90% of security professionals surveyed in the United States and Japan are aware of the possibility of cybercriminals using AI to perform attacks. This is because, as The New York Times article points out, for all of AI’s benefits, there are also very substantial risks to overcome.
Most of this risk stems from how AI works at a fundamental level: AI and ML systems are “trained” to behave in a certain way based on the commands and data they are supplied, and they build on these causes and effects with additional data and “experience” as they work. This “learning” behavior can be manipulated by cybercriminals, causing AI to overlook or fail to respond to certain malicious behaviors, unbeknownst to the security professionals the AI supports, or even to aid in its own exploitation unless thwarted by other network defenses.
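This kind of training-data manipulation, commonly called data poisoning, can be shown with a toy example. Here a naive detector derives its alert threshold from “benign” training scores; an attacker who slips high-scoring samples labeled benign into the training set raises that threshold so a later attack goes unflagged. All numbers are invented:

```python
import statistics

# Toy data-poisoning illustration: the detector alerts on anything well
# above the typical benign score it saw during training.

def train_threshold(benign_scores: list) -> float:
    """Set the alert cutoff from the 'benign' training distribution."""
    return statistics.mean(benign_scores) + 2 * statistics.stdev(benign_scores)

clean = [0.10, 0.12, 0.08, 0.11, 0.09, 0.10]
poisoned = clean + [0.85, 0.90, 0.88]            # attacker-supplied "benign" data

attack_score = 0.80
print(attack_score > train_threshold(clean))     # True: attack detected
print(attack_score > train_threshold(poisoned))  # False: attack slips through
```

Real poisoning attacks against production ML systems are subtler, but the mechanism is the same: the model faithfully learns whatever its data teaches it, malicious or not.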
Another risk lies in the scalability and commercialization of AI. Just as other security tools and systems have known and shared vulnerabilities, the algorithms and behaviors of AI tools can also be shared on the dark web. Once an exploit is in place, malicious behavior can go unnoticed by the AI tool until it is patched or manually evaluated, which is yet another manual task.
However, manually evaluating the inner workings of AI, and distinguishing what is supposed to be there from what may be malicious, can be difficult without specialized training. The algorithms, decision trees and criteria that control AI may not be transparent to an untrained eye. This leaves security professionals back at square one, where potential violations could go unflagged or, even if the AI identifies them, the reasoning behind the flag could be hard to interpret, leading to more questions. Could this ultimately lessen an organization’s trust in AI as a way to improve the security of its systems?
The bottom line: Encouraging a blended solution
The cybersecurity skills shortage is not going away anytime soon, and the role AI is going to play in filling this need is only going to increase. It is important to emphasize the need for organizations to find a balanced, blended solution that matches their needs, the current and future threat landscape, and their existing capabilities.
Leveraging AI technology can automate basic security tasks such as those already performed by self-service solutions and identity and access management software. Taking tasks like resetting passwords, routing and prioritizing tickets, and monitoring traffic and data for anomalies off a security professional’s plate can give them an estimated 20% more time in their day. The same is true for other highly repetitive tasks full of manual data review, such as reviewing network alerts or logs. AI-enabled security tools can use other enterprise data to more quickly and accurately highlight priority issues. Facts like these are hard to ignore when resources are already spread thin.
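Ticket routing and prioritization, one of the repetitive tasks mentioned above, can be as simple as rule-based triage before any ML is involved. A minimal sketch with invented keywords, queues and priorities:

```python
# Toy automated ticket triage: route by keyword, with a human-review
# default. Keywords, queue names and priorities are all hypothetical.

RULES = [
    ("password reset", "self_service", "low"),
    ("phishing",       "soc_triage",   "high"),
    ("malware",        "soc_triage",   "high"),
]

def route_ticket(subject: str):
    """Return (queue, priority) for a ticket based on its subject line."""
    text = subject.lower()
    for keyword, queue, priority in RULES:
        if keyword in text:
            return queue, priority
    return "helpdesk", "medium"    # no match: default to human review

print(route_ticket("Password reset request for jdoe"))  # ('self_service', 'low')
print(route_ticket("Suspected phishing email"))         # ('soc_triage', 'high')
```

An ML-based classifier would replace the keyword table with learned categories, but the payoff is the same: low-value tickets never reach an analyst’s queue.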
But before organizations get too excited about leaning heavily on these tools, they should proceed with caution and continue to follow the security and safety protocols many already have in place and that best practice advises. For example, “defense in depth” remains essential: use multiple systems to double-check one another’s work, perform regular manual audits of AI tools and the data they use, and create processes in which key decisions involve at least two people or processes guided by different policies and instructions. Combined, these protocols can ease and strengthen the transition toward using and trusting AI as a partner in cybersecurity.
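The two-independent-decision-makers advice above can be captured in a small control: an automated block happens only when two independently configured detectors agree, while a single detection routes to human review. The detector inputs and action names here are hypothetical:

```python
# Sketch of a defense-in-depth response control: automate only on
# agreement between two independent detectors; otherwise keep a human
# in the loop. Action labels are invented for illustration.

def decide(detector_a: bool, detector_b: bool) -> str:
    """Two-person-rule analogue for automated response."""
    if detector_a and detector_b:
        return "auto_block"        # both independent checks agree
    if detector_a or detector_b:
        return "alert_for_review"  # disagreement: escalate to an analyst
    return "allow"

print(decide(True, True))    # auto_block
print(decide(True, False))   # alert_for_review
print(decide(False, False))  # allow
```

The design choice is deliberate: a single detector can be poisoned or exploited as described earlier, but compromising two systems with different policies and training data is considerably harder.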
While AI and ML may not be the cybersecurity skills shortage silver bullet just yet, there is still hope for the future and how technology can aid security professionals to do their jobs more effectively and comprehensively.
For example, beginning in 2014, the United States Defense Advanced Research Projects Agency (DARPA) ran its Cyber Grand Challenge, which brought together security professionals, hackers and researchers to develop more automated methods of identifying security vulnerabilities and flaws, and to use AI to develop and deploy fixes in real time. While still early, this sort of investment is key to developing the next evolution of AI in the cybersecurity domain.
Similarly, organizations have begun to think about how people should interact with and govern AI and ML in the workplace. The consulting firm BCG has developed some key questions that organizations should use to develop policies or guidelines for how they use and monitor AI and ML in their enterprise:
- Have we made conscious decisions about who (or what) can decide and control which capabilities? Did we assign AI systems an appropriate responsibility matrix entry? Do we constrain AI to decision support or expert systems, or do we let AI programs make decisions on their own (and if so, which ones)?
- Do we have the appropriate governance policies and an agreed code of conduct that specify which of our processes or activities are off-limits for AI for security reasons?
- When using AI in conjunction with decisions on cyber-physical systems, do we have appropriate ethical, process, technical and legal safeguards in place? Do we have compensating controls? How do we test them?
While the cybersecurity skills shortage still seems daunting, the role that AI and ML can play, and have already begun to play, in filling that gap has a bright future. It will come down to organizations not only consciously and precisely using the power of these tools to their advantage, but also understanding the tools’ weaknesses so they can adjust their defensive posture accordingly and develop policies and guidelines to mitigate them.