Machine learning for social engineering
Social engineering is a form of communication that aims first to gain people's trust and then to prompt them to share sensitive information or take an action the attacker can exploit, such as opening an infected file or clicking a malicious link.
When it comes to social engineering, the goal is to hack humans, not computers. Social engineering scams have a lot in common with marketing automation technology, the main difference, of course, being that one is legitimate and the other is not.
Oftentimes social engineering can render even the best technological measures useless, and brute force isn't needed either. To take a real-life example, a lone man gained the trust of employees of an ABN AMRO bank branch in Belgium over the course of a year simply by being charming. One day the employees gave him access to the safe-deposit boxes, and he stole 28 million dollars' worth of diamonds.
Specifics related to ML tools for social engineering
Nowadays, thanks to technologies like artificial intelligence (AI), machine learning (ML) and the Internet of Things (IoT), tools can be built that gather public information on people for the sole purpose of spear phishing them, or even forge their voices to impersonate them, all done automatically and at scale. In fact, the most disturbing aspect of this ML-enhanced type of social engineering is perhaps that the whole process can be automated and, like a smart missile, highly targeted at the same time.
From a social engineer's point of view, ML models can be trained, for instance, to target specific types of files and home in on their metadata. Attackers can use type-specific ML classifiers to train their models; in doing so, the tools are honed to perform certain social engineering actions and can even learn to improve them. K-means, random forests and neural networks are a few of the clustering and classification methods that can be combined with NLP analysis of a victim's social network posts, for instance.
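As a minimal illustration of the clustering side, here is a toy K-means written from scratch over bag-of-words vectors (the sample posts and the tiny vocabulary are invented for the example; a real profiling pipeline would use proper NLP features and a library implementation). It groups a handful of public posts by topic, the kind of victim-profiling step an attacker could automate:

```python
import math
from collections import Counter

# Invented sample posts a profiler might scrape from a public timeline.
posts = [
    "invoice payment due friday",
    "payment invoice overdue reminder",
    "beach vacation flight booked",
    "vacation flight beach photos",
]

def vectorize(texts):
    """Turn each text into a bag-of-words count vector over a shared vocabulary."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    counts = [Counter(t.lower().split()) for t in texts]
    return [[c[w] for w in vocab] for c in counts]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k=2, iters=20):
    # Deterministic init: the first vector, then whichever vector lies farthest
    # from the centroids chosen so far (keeps the demo reproducible).
    centroids = [vectors[0]]
    while len(centroids) < k:
        centroids.append(max(vectors, key=lambda v: min(dist(v, c) for c in centroids)))
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist(v, centroids[j])) for v in vectors]
        for j in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

labels = kmeans(vectorize(posts))
# The two finance-themed posts land in one cluster, the two travel posts in the other.
```

The same handful of lines, pointed at thousands of scraped posts, is what makes this kind of interest profiling cheap to automate.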
ML is well acquainted with email spam, so to speak, as spam may have been one of the first avenues for cybercriminals to put this technology to nefarious use. The bad guys can save a lot of effort and time by letting a pre-trained neural network do the heavy lifting of crafting convincing junk emails, instead of writing spam texts manually.
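To see why generated text scales so easily, here is a deliberately simple sketch using a Markov chain rather than a real neural network (the seed sentences are invented phishing-style filler). It assembles new message variants from a small text sample, a crude stand-in for what trained language models do far more convincingly:

```python
import random
from collections import defaultdict

# Invented seed text in the style of a generic phishing lure.
seed_text = (
    "your account needs verification please confirm your account details "
    "please confirm your payment details your payment needs verification"
)

def build_chain(text):
    """Map each word to the list of words that follow it in the seed text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=42):
    rng = random.Random(seed)  # fixed seed keeps the demo reproducible
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

message = generate(build_chain(seed_text), "your", 8)
```

Every run recombines the seed phrases into a new plausible-sounding variant, which is the core trick behind machine-generated spam, only at toy scale.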
AI and ML can grasp general context and simulate natural writing styles. Algorithms can study how a person writes and talks, their habits, contacts and so on through information available on social networks and elsewhere. Attackers can also use ML-based image recognition: if they possess a picture of the victim, or at least know what he or she looks like, they can find the social media accounts associated with that person. Trustwave, for one, has a tool called Social Mapper that can perform all of this.
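The matching idea behind such tools can be hinted at with a crude sketch: comparing perceptual hashes of profile photos. The tiny 4x4 grayscale "images" below are invented, and a real system would use face recognition models rather than this simple average hash, but the principle of scoring image similarity is the same:

```python
def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the image's mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits; small distances suggest the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Invented tiny grayscale images (0-255); a real tool would downscale photos to 8x8.
photo = [[200, 200, 10, 10], [200, 200, 10, 10],
         [10, 10, 10, 10], [10, 10, 10, 10]]
same_photo_recompressed = [[190, 205, 15, 5], [198, 199, 12, 8],
                           [12, 9, 11, 10], [8, 13, 9, 12]]
different_photo = [[10, 10, 200, 200], [10, 10, 200, 200],
                   [200, 200, 200, 200], [10, 10, 10, 10]]

d_same = hamming(average_hash(photo), average_hash(same_photo_recompressed))
d_diff = hamming(average_hash(photo), average_hash(different_photo))
# d_same stays small even after recompression noise; d_diff is much larger.
```

The hash survives small pixel-level changes, which is why the same profile photo can be recognized across different sites and crops.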
Types of social engineering attacks that might benefit from ML
Phishing, baiting and pretexting are some examples of social engineering attacks that can work with ML, and email attachments, removable drives, drive-by web downloads and browser exploits are popular means for delivering these social engineering traps.
A famous case of social engineering is what happened to Ubiquiti Networks, an American networking technology company, in 2015, when cybercriminals posing as executives of the company asked employees via email to transfer large sums of money to bank accounts belonging to the cybercriminals. Being too helpful, a trait of the human psyche that was exploited in this situation, the employees took the social engineers for their superiors and willingly transferred 39.1 million dollars.
What is perhaps the scariest thing about AI is that, through constantly improving ML capabilities, it can learn to impersonate the distinctive personal traits of people you may know. For example, an algorithm can be trained to escalate vishing attacks by analyzing a person's voice patterns over a period of time and then recreating that voice in a very convincing fashion.
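Voice cloning itself requires heavy models, but the analysis step can be hinted at with a toy example: estimating a speaker's fundamental pitch from a waveform. Synthetic sine waves stand in for recordings here, and real systems extract far richer features than a zero-crossing pitch estimate, so treat this purely as a sketch of "measuring a voice's pattern":

```python
import math

def synth_voice(freq_hz, sample_rate=8000, seconds=0.5):
    """Stand-in for a recording: a pure sine wave at the given pitch."""
    n = int(sample_rate * seconds)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

def estimate_pitch(samples, sample_rate=8000):
    """Count positive-going zero crossings per second, roughly the fundamental frequency."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings / (len(samples) / sample_rate)

# Two "speakers" with different fundamental pitches.
low_voice = estimate_pitch(synth_voice(120))
high_voice = estimate_pitch(synth_voice(240))
```

Measured characteristics like these, collected over many samples, are what a cloning model learns to reproduce.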
This type of social engineering has already borne fruit: hackers managed to steal $243,000 by using AI-driven voice technology that allowed them to impersonate the chief executive of a UK-based energy firm.
(Twitter) Spear phishing bot / Honey trap
Apparently, bots either already can or soon will be able to hold automatically generated, believable conversations as software-generated personas that look completely realistic. A 'honey trap' is a kind of social engineering in which the attacker constructs the illusion of an attractive person to strike up an online romantic relationship with a victim and coax him or her into sharing sensitive information. The Ashley Madison data breach is a case study worth mentioning in this category, since the dating company used bots pretending to be real people, females in most cases, to seduce clients and ultimately convince them to pay subscription fees, among other things.
ML can also analyze which cybersecurity threats would seem most relevant to a targeted user in order to convince him or her to purchase malware protection software, a cybercrime that preys on the victim's shock, anxiety and perception of a threat. Every aspect of the COVID-19 crisis is being exploited by cybercriminals, for example, because these are fearful times for many people. In a press report on cybercrime during the first half of 2020, the FBI pointed to an enormous surge in cyberattacks: nearly as many complaints by May 28, 2020 as in the entire year 2019.
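A toy version of that targeting step might simply score which lure best matches a victim's public posts using cosine similarity between bag-of-words vectors (the lure templates and the victim's posts below are invented; a real attacker would use trained models over much more data):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented lure templates and invented public posts by the victim.
lures = {
    "covid": "urgent covid vaccine appointment health alert",
    "invoice": "overdue invoice payment account suspended",
    "delivery": "package delivery failed reschedule shipment",
}
victim_posts = "waiting on my package delivery all week shipment still delayed"

best_lure = max(lures, key=lambda k: cosine(bow(lures[k]), bow(victim_posts)))
# Picks the lure whose wording overlaps most with what the victim talks about.
```

Someone complaining publicly about a late package is most susceptible to a fake delivery notice, and that selection is trivially automatable.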
Other types of ML-based social engineering techniques are:
- Voice impersonation
- Speech recognition
- Facial recognition
- Deepfake creation
Evidently, ML can potentiate virtually every social engineering attack, since it allows criminals to collect information on companies, affiliates, third parties and employees easily and quickly. “We’re going to live in a world of AI-enabled smart attacks,” concluded HP Fellow and Chief Technologist Vali Ali for HP’s report “Cybersecurity Guide: Hackers and defenders harness design and machine learning”.
In fact, technologies like AI and ML are double-edged swords: they can serve cyberattack just as well as cyberdefense.
The ubiquitous proliferation of AI has made an impact on every industry, with cybersecurity being one of the most affected. Nevertheless, between the physical (hardware) and cyber (software) realms, there are always human beings. Regardless of how advanced the technical security measures a company has in place, they can all be reduced to nothing if the individuals who operate them are "hacked" instead. Therefore, building a cybersecurity culture is a must for every self-respecting organization, and the way to do it is through training and awareness programs, among other things.
ML cannot compensate for the originality, abstract thinking and intuition that stem naturally from the human thought process. Not yet, and maybe not anytime soon. In the context of cybersecurity, the human factor is likely not the problem; it is part of the solution.
Sources
7 ways in which cybercriminals use machine learning to hack your business, Gatefy
A comprehensive survey of AI-enabled phishing attacks detection techniques, Springer Link
Machine Learning for Cybercriminals 101, Medium
Experts Warn AI And Social Engineering Lead To New Digital Scams, SmartDataCollective
Machine learning and social engineering attacks, CSO
Machine learning vs. social engineering, Microsoft
Redefining the Approach to Cybersecurity, National Center for Biotechnology Information, U.S. National Library of Medicine
Security and Privacy considerations in Artificial Intelligence & Machine Learning — Part 5: When the attackers use AI., Medium
Social Engineering, Imperva
Social Engineering Prevention, Cynet
The Rise of Machine Learning and Social Engineering Attacks, Social-Engineer
Top 5 Social Engineering Techniques and How to Prevent Them, Exabeam
What does a world with automated social engineering look like?, Medium