Artificial Intelligence (AI) – Machine Learning (ML) – Deep Learning (DL)
Artificial intelligence as a term dates back to the 1950s, but it has come into prominence only recently, owing to the rapid advance of science and technology over the last decade or so.
AI is software (or a robot) that can display a cognitive, learning, and decision-making process akin to a real human's. AI is something of a contradiction: it strives to make man-made machines and software solve simple problems, most of which are mundane human tasks, while at the same time constantly pushing the technology barrier beyond human capability. With this point in mind, let us look at the three types of AI:
- Artificial Narrow Intelligence – as of this writing, and as far as I am aware, this is the only AI that humanity has achieved. These AIs are good at performing a single task, for example, playing chess or image recognition. Google's translation engine is a product based on ANI.
- Artificial General Intelligence – that is human-level intelligence, and it would be a giant leap for the robots, and hopefully for humans, too. Nevertheless, AGI is a very difficult state for computers to achieve, because they must acquire abstract thinking skills and be able to come up with creative ideas.
- Artificial Super Intelligence – the phase in which AI is far more creative and intelligent than any human brain, perhaps more than the brainpower of all human beings combined. While the distance between ANI and AGI may be huge, the majority of scientists consider that the transition period between AGI and ASI would be very short – an almost instant emergence of the world's most advanced species.
In light of the fact that robots can now compose classical music, the philosopher of music Stephen Davies, of the University of Auckland in New Zealand, concludes:
“People said computers would not be able to show the same original thinking, as opposed to crunching random calculations. However, now it is hard to see the difference between people and computers concerning creativity in chess. Music, too, is rule-governed in a way that should make it easily simulated.”
Source: “Iamus, classical music’s computer composer, live from Malaga” by Philip Ball
Whether the development of full AI will be beneficial to Homo sapiens is a matter of great dispute. That is not the focal point of this writing, however.
Thanks to machine learning (ML), an important subset of AI, computers can recognize patterns in data and learn new things like a human being. AI algorithms make use of ML to sift through enormous amounts of data and adapt over time. In this way, they can reveal previously unknown facts through a thorough examination of all relevant information sources. Deep learning (DL) is a subdomain of machine learning. It is essentially another, more advanced level of ML based on a layered structure of algorithms dubbed an artificial neural network, which is something like the technical equivalent of the human brain's biological neural network.
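To make the "layered structure of algorithms" concrete, here is a minimal sketch of a feedforward neural network. The layer sizes, random weights, and activation functions are illustrative assumptions, not a model described in the text:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # input layer (4 features) -> hidden layer (3 "neurons")
W2 = rng.normal(size=3)       # hidden layer -> single output score

def forward(x):
    """Propagate a signal through the layers, loosely mimicking how
    signals pass between neurons in a biological neural network."""
    hidden = relu(x @ W1)        # first layer transforms the raw features
    return sigmoid(hidden @ W2)  # second layer squashes the result to 0..1

score = forward(np.array([0.5, -1.2, 3.0, 0.1]))
print(score)  # a value strictly between 0 and 1
```

Training would consist of adjusting `W1` and `W2` against labeled examples; stacking many such layers is what makes the learning "deep."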
An interesting amalgam of technical and public-policy researchers from distinguished universities published a 98-page study on AI in February 2018. There was no consensus among the paper's authors on whether the massive automation that will result from ML products would cause widespread unemployment or other social dislocations.
It is estimated that there will be 3.5 million unfilled positions in the cybersecurity sector by 2021. Another study, conducted by the Center for Cyber Safety and Education and (ISC)2, forecast a 1.8 million shortfall among cybersecurity professionals by 2022. The existing workforce will have to work longer or harder – or both – to compensate for so many unoccupied positions, but even now IT professionals work an average of 52 hours a week.
AI may reduce these long hours. Given that the technology is already successfully used to detect simple attacks and threats, it will further help security personnel to focus on more complex cases only. In this regard, Steve Grobman, CTO at McAfee, thinks that AI will reduce the need for corporations to have big armies of cybersecurity staffers rather than making the human cybersecurity profession extinct.
The renowned security expert Bruce Schneier seems to believe in the ability of autonomous security systems to provide something that is now missing in the realm of IT security: a new (artificially generated) perspective. He also seems to be an optimist concerning the anticipated labor-market disruption that AI is expected to cause:
“We’re a long way off AI from making humans redundant in cybersecurity, but there’s more interest in [using AI for] human augmentation, which is making people smarter. You still need people defending you. Good systems use people and technology together.”
Source: “AI will transform information security, but it won’t happen overnight” by Doug Drinkwater
With his Neural Lace project, Elon Musk (the founder and CEO of Tesla and SpaceX) is a distinguished figure in the field of human augmentation. His invention aims to bridge the future gap between a human and a full-fledged autonomous AI.
Training the AI
AI algorithms are not independent, self-sufficient, self-aware, or self-taught. Since the AI technology is yet to mature, it is no panacea at the moment against some sophisticated cyber-attacks. Moreover, AI is in its infancy, and like every infant, it needs some tutoring (but also parenting and supervision) to grow to its full potential and learn how to behave reasonably and respectfully toward humans and other forms of existence. Therefore, AI can live up to its true potential only if there is a trained team of human analysts by its side.
A skilled workforce is needed to oversee this emerging technology, as well. ML tends to produce a significant number of false positives, which inevitably increases the security team's workload; after a while, though, it starts to make a clear distinction between normal and unusual activities. One weak point in the whole ML scheme is the fact that AI draws its intelligence from the data it analyzes. Thus, if the data is corrupt, we cannot expect brilliance from the computer.
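The false-positive problem above can be sketched in a few lines: a naive anomaly detector flags any event whose score exceeds a threshold, and only after a baseline of normal activity has been learned does the alert volume drop. All scores and thresholds here are invented for illustration:

```python
import statistics

normal_scores = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.12]  # routine activity
events = [0.11, 0.14, 0.95, 0.10, 0.88]                      # two real anomalies

def flag(events, threshold):
    """Return every event whose anomaly score exceeds the threshold."""
    return [e for e in events if e > threshold]

# Early on, with no learned baseline, a low guessed threshold fires often:
print(flag(events, threshold=0.12))    # flags a harmless 0.14 along with the real anomalies

# After observing normal traffic, derive the threshold from the data instead
# (mean plus three standard deviations of the learned baseline):
mean = statistics.mean(normal_scores)
learned = mean + 3 * statistics.pstdev(normal_scores)
print(flag(events, threshold=learned))  # only the true anomalies remain
```

The same logic explains the warning about corrupt data: if `normal_scores` were poisoned with attacker-controlled samples, the learned threshold would drift and real anomalies would slip through.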
Defenses: Manual vs. Automatic
According to the CEO of Coalfire, a company that specializes in cyber risk management and compliance services, "[t]he future will see self-healing and self-defending networks, which can leverage AI to take steps to fight and defend the network."
Source: "AI's Future in Cybersecurity" by Pedro Hernandez
To put it concisely, a manual process simply cannot reliably spot anomalies in cloud environments consisting of thousands of transient containers, each of which may generate thousands of events per minute.
"Unfortunately, a lot of organizations still depend on manual process — this will have to change if systems are going to remain secure in the future," says Martin Ford, futurist and author of 'Rise of the Robots.'
Source: “I, for one, welcome our new chatbot overlords” by Evgeny Chereshnev
The majority of machine learning solutions are trained in a manner similar to an anti-virus program, with malware samples being compared against signatures in a database. Unfortunately, because this equates AI with advanced anti-virus software, it represents no evolution at all.
According to George Avetisov, CEO and co-founder of the biometric security company HYPR, companies finally have to let go of the rules-based engines they have relied on for decades. What makes machine learning technology so strong is its ability to learn about threats in real time, and thus adapt even to threats that did not previously exist (so-called zero-day threats).
Magical Defensive Capabilities of AI
Learning capabilities based on neural networks are the AI's way of conducting entity and pattern recognition for intrusion detection or digital forensics purposes. Best of all is the lightning speed at which these cutting-edge solutions can classify entities and events, as well as analyze the malicious behavior behind cyber intrusions. By identifying and prioritizing vulnerabilities for preemptive protection, AI could make risk and threat assessment easier. This would certainly allow for thorough network traffic inspection and, eventually, blocking harmful traffic right on the spot.
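The "identifying and prioritizing vulnerabilities" idea can be illustrated with a toy ranking: weight each finding's severity by how exposed the affected asset is, and patch the riskiest first. The field names, weights, and sample records are assumptions made up for this sketch:

```python
vulns = [
    {"id": "VULN-1", "severity": 9.8, "asset_exposure": 1.0},  # internet-facing host
    {"id": "VULN-2", "severity": 7.5, "asset_exposure": 0.2},  # internal-only system
    {"id": "VULN-3", "severity": 5.0, "asset_exposure": 0.9},  # widely reachable service
]

def risk(v):
    # Combine raw severity with the asset's exposure to get a triage score.
    return v["severity"] * v["asset_exposure"]

prioritized = sorted(vulns, key=risk, reverse=True)
print([v["id"] for v in prioritized])  # → ['VULN-1', 'VULN-3', 'VULN-2']
```

Note how the moderate-severity but highly exposed VULN-3 outranks the nominally more severe VULN-2; this context-weighting is what an ML-driven assessment would learn rather than hard-code.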
The ability to detect abnormal patterns could prove useful in various areas: encrypted web traffic, IoT, and cloud environments, to name a few. Encryption is great security-wise, but it can be a conundrum for companies that desperately seek to protect their information assets, because attackers try to conceal their command-and-control activity within the cloud of encrypted data. In fact, Cisco inspected malware samples over a year and revealed more than a threefold increase in encrypted network communications. ML technology can study traffic patterns and learn over time to distinguish malicious traffic from legitimate traffic.
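A key point is that such classification can work on traffic metadata alone, without decrypting payloads. Here is a hedged sketch using a nearest-centroid rule over two made-up flow features (mean packet size and inter-arrival time); the sample flows and the method are illustrative assumptions, not Cisco's actual technique:

```python
import numpy as np

# Each flow: [mean packet size in bytes, mean inter-arrival time in ms]
legit_flows = np.array([[900.0, 40.0], [1100.0, 35.0], [950.0, 50.0]])
malicious_flows = np.array([[120.0, 500.0], [90.0, 620.0], [150.0, 480.0]])  # beacon-like C2

# "Learn" a profile for each class as the centroid of its training flows.
legit_centroid = legit_flows.mean(axis=0)
malicious_centroid = malicious_flows.mean(axis=0)

def classify(flow):
    """Label a flow by whichever learned traffic profile it sits closer to."""
    d_legit = np.linalg.norm(flow - legit_centroid)
    d_mal = np.linalg.norm(flow - malicious_centroid)
    return "malicious" if d_mal < d_legit else "legitimate"

print(classify(np.array([100.0, 550.0])))   # small, periodic flow -> malicious
print(classify(np.array([1000.0, 45.0])))   # bulky, bursty flow -> legitimate
```

A real deployment would use many more features and a trained model, but the principle is the same: the statistical shape of encrypted traffic betrays command-and-control behavior even when its content cannot be read.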
The cybersecurity areas in which AI is presently most applied are malware detection, phishing prevention, and blocking brute-force attacks. To avert phishing scams, for example, AI could analyze variables of importance such as location data, wording, grammar, semantics, and IP addresses.
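The phishing variables just listed can be turned into a simple score. In this sketch, the keyword list, weights, and 0.5 cut-off are invented for illustration; a real system would learn such features from labeled mail rather than hard-code them:

```python
import re

# Urgency-laden vocabulary is a classic phishing tell (illustrative list).
URGENT_WORDS = {"urgent", "verify", "suspended", "immediately", "password"}

def phishing_score(sender, subject, body):
    """Combine a few hand-picked wording and address signals into a 0..1 score."""
    score = 0.0
    text = (subject + " " + body).lower()
    # Wording/semantics: count pressure-inducing words.
    score += 0.2 * sum(w in text for w in URGENT_WORDS)
    # Raw IP addresses used in place of domain names inside links.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 0.4
    # Free-mail sender claiming to represent a bank or service.
    if sender.endswith(("@gmail.com", "@mail.ru")) and "bank" in text:
        score += 0.3
    return min(score, 1.0)

s = phishing_score(
    sender="support@gmail.com",
    subject="URGENT: verify your bank account",
    body="Your account is suspended. Click http://203.0.113.7/login immediately.",
)
print(s)  # 1.0 -> flag as likely phishing if above, say, 0.5
```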
People and AI Could Complete Each Other Instead of Competing with Each Other
Jacob Sendowski, senior product manager at automated threat management specialist Vectra, considers that there will be an excellent symbiosis between the two leading species of the future:
“AI-enabled security solutions will become integral components of the security team as they can birddog high-threat hosts to a human analyst team[.] Skilled human analysts will be critical in the incident response process, reviewing evidence and directing an investigation based on the indications that AI tools provide.”
Source: “AI’s Future in Cybersecurity” by Pedro Hernandez
With the help of AI, reaction time to security events will decrease, analytics teams will receive finer-grained data analysis, and the autonomous technology can use statistical models to predict or anticipate threat behavior.
Deployed in a security operations center (SOC), artificial intelligence solutions will help with checklist-driven activities normally entrusted to senior, experienced analysts who examine logs for indicators of compromise. Unlike the analysts, however, these solutions will not become tired or grow weary of repetitive security tasks.
A thorough evaluation of all security events is almost impossible from a human standpoint, since a security analyst can review on average only 10 to 20 critical security incidents per day, according to IBM. Neural networks in particular excel at monitoring a system's processes. ML could study the behavior and intent of even benign-looking threats.
The number and diversity of new malware strains that crop up every day is astounding – 250,000. Gartner made one very curious prediction: every new software product released by 2020 will contain some ML feature. Self-defending and self-healing networks are the future, and every major IT company has jumped on the AI bandwagon. On the other hand, AI can bring about unknown, and unknowable, problems.
The following paragraph serves as the climax of this short piece:
“The limitations of machine-based analysis have also emerged. While machines can detect known malware executables and simple unknown ones, they cannot analyze complex unknown malware files, which numbered almost 75,000 last week. Complex unknown files require expert human analysis.”
Source: “Comodo Threat Intelligence Lab Update – Cyber Monday” by Comodo
As the technology reporter Nick Ismail concluded, AI could be a good base on which human intelligence can build. In reality, pairing ML with human intelligence is what makes sense, at least in the short term. In the end, humans will oversee not only the AI but also the data on which AI bases its conclusions. Furthermore, "intuition" is a card that can be played only by human beings (disclaimer: this may change in the not-so-distant future, too). What a machine learning product may lack in the first stages of its development is a "humanized touch;" or "Just a little of that Human Touch," if we trust the Boss on what is important in life.
Rather than doing all the work (and thus dispensing with those unevolved creatures called humans), AI and automation will most likely work better as a counselor whose specialty is crunching big sets of data and presenting them as readily available threads of intelligence. This would undoubtedly support organizations in their day-to-day activities and also strengthen existing teams through the mysterious ways in which the most advanced technologies work.
We all know how devastating a security breach can be. Unfortunately, employee/third-party error or behavior is the root cause of the large majority of data breaches, and increased automation is something that may change things for the better.
More and more cybersecurity experts have recourse to automation to improve their job performance. Despite that fact, ESG research indicates that no more than 30% of the IT security professionals interviewed feel confident about their knowledge of ML and its practical application to information security analytics. With so many firms still not even performing regular patch management, AI remains an overly avant-garde concept for some people, regarded more as a luxury than as something of perceivable importance.
Let us not forget the never-ending arms race between the good and the bad guys. Since there is enough evidence that technologies such as deep learning neural networks have been employed by cybercriminals for quite some time, it is imperative that white hats employ such techniques as well if they do not want to lag behind in this crucial trend of the cyber arms race.
AI is not just distant-future hype; it is a product already in use by many businesses. The percentage of enterprises using AI, 38% in 2016, will grow to 62% by 2018, according to a Narrative Science survey. Forrester Research also predicts an astounding 300% year-over-year increase in AI investment.
Machine learning technologies can work either as helper apps or in a stand-alone fashion. One security researcher sees the initial role of AI as “an intelligent assistant.” The big question is: Will AI products continue to be content with their role of (extremely) “intelligent assistant” after they become very intelligent?
Maybe the biggest problem in the context of proper "communication" between humans and self-aware robots is the misinterpretation of commands by the latter, which, however, is ultimately a fault of the former. It is something that can occur not only in movies and sci-fi novels but also in real products such as Alexa, Google Assistant, Siri, and Cortana. Maybe the key to a bright future lies in mutual trust and good relations between the two parties throughout the entire development cycle of AI.
Armerding, T. (2017). Can AI and ML slay the healthcare ransomware dragon? Available at https://www.csoonline.com/article/3188917/application-security/can-ai-and-ml-slay-the-healthcare-ransomware-dragon.html (16/04/2018)
Ashford, W. (2018). AI a threat to cyber security, warns report. Available at http://www.computerweekly.com/news/252435434/AI-a-threat-to-cyber-security-warns-report (16/04/2018)
Ball, P. (2012). Iamus, classical music’s computer composer, live from Malaga. Available at https://www.theguardian.com/music/2012/jul/01/iamus-computer-composes-classical-music (16/04/2018)
Beltov, M. (2017). Artificial Intelligence Can Drive Ransomware Attacks. Available at https://www.informationsecuritybuzz.com/articles/artificial-intelligence-can-drive-ransomware-attacks/ (16/04/2018)
Brandon, J. (2016). How AI is stopping criminal hacking in real time. Available at https://www.csoonline.com/article/3163458/security/how-ai-is-stopping-criminal-hacking-in-real-time.html (16/04/2018)
Chereshnev, E. (2016). I, for one, welcome our new chatbot overlords. Available at https://www.kaspersky.com/blog/dangerous-chatbot/12847/ (16/04/2018)
Cosbie, J. (2016). Elon Musk Says Advanced A.I. Could “Take Down the Internet”. Available at https://www.inverse.com/article/23198-elon-musk-advanced-ai-take-down-internet (16/04/2018)
Drinkwater, D. (2017). AI will transform information security, but it won’t happen overnight. Available at https://www.csoonline.com/article/3184577/application-development/ai-will-transform-information-security-but-it-won-t-happen-overnight.html (16/04/2018)
Dvorsky, G. (2018). Image Manipulation Hack Fools Humans and Machines, Makes Me Think Nothing is Real Anymore. Available at https://gizmodo.com/image-manipulation-hack-fools-humans-and-machines-make-1823466223 (16/04/2018)
Hawking, S., Tegmark, M., Russell, S. and Wilczek, F. (2014). Transcending Complacency on Superintelligent Machines. Available at https://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html (16/04/2018)
Hernandez, P. (2018). How AI Is Redefining Cybersecurity. Available at https://www.esecurityplanet.com/network-security/how-ai-is-redefining-cybersecurity.html (16/04/2018)
Hernandez, P. (2018). AI’s Future in Cybersecurity. Available at https://www.esecurityplanet.com/network-security/ais-future-in-cybersecurity.html (16/04/2018)
Ismail, N. (2018). Using AI intelligently in cyber security. Available at http://www.information-age.com/using-ai-intelligently-cyber-security-123470173/ (16/04/2018)
Kh, R. (2017). How AI is the Future of Cybersecurity. Available at https://www.infosecurity-magazine.com/next-gen-infosec/ai-future-cybersecurity/ (16/04/2018)
Oltsik, J. (2018). Artificial intelligence and cybersecurity: The real deal. Available at https://www.csoonline.com/article/3250850/security/artificial-intelligence-and-cybersecurity-the-real-deal.html (16/04/2018)
Reuters (2018). How AI poses risks of misuse by hackers. Available at https://www.arnnet.com.au/article/633676/how-ai-poses-risks-misuse-by-hackers/ (16/04/2018)
Rosenbush, S. (2017). The Morning Download: First AI-Powered Cyberattacks Are Detected. Available at https://blogs.wsj.com/cio/2017/11/16/the-morning-download-first-ai-powered-cyberattacks-are-detected/ (16/04/2018)
Sears, A. (2018). Why to Use Artificial Intelligence in Your Cybersecurity Strategy. Available at https://blog.capterra.com/artificial-intelligence-in-cybersecurity/ (16/04/2018)
Spencer, L. (2018). The compelling case for AI in cyber security. Available at https://www.arnnet.com.au/article/633748/compelling-case-ai-cyber-security/ (16/04/2018)
Stanganelli, J. (2014). Artificial Intelligence: 3 Potential Attacks. Available at https://www.networkcomputing.com/wireless/artificial-intelligence-3-potential-attacks/237102029 (16/04/2018)
Thales Group (2018). Leveraging artificial intelligence to maximize critical infrastructure cybersecurity. Available at https://www.thalesgroup.com/en/worldwide/security/magazine/leveraging-artificial-intelligence-maximize-critical-infrastructure (16/04/2018)
Todros, M. (2018). Artificial Intelligence in Black and White. Available at https://www.recordedfuture.com/artificial-intelligence-information-security/ (16/04/2018)
Townsend, K. (2016). How Machine Learning Will Help Attackers. Available at https://www.securityweek.com/how-machine-learning-will-help-attackers (16/04/2018)
Townsend, K. (2017). Researchers Poison Machine Learning Engines. Available at https://www.securityweek.com/researchers-poison-machine-learning-engines (16/04/2018)
Walker, D. (2018). Mirai ‘Okiru’ botnet targets billions of ARC-based IoT devices. Available at http://www.itpro.co.uk/malware/30304/mirai-okiru-botnet-targets-billions-of-arc-based-iot-devices (16/04/2018)
Zhou, A. (2017). 5 Ways to Avoid Getting Hacked by Chatbots. Available at https://www.topbots.com/5-ways-avoid-getting-hacked-by-chatbots-bot-security-risks/ (16/04/2018)