
Cybersecurity and Artificial Intelligence: A Dangerous Mix

February 24, 2015 by Pierluigi Paganini

Artificial Intelligence would be the biggest event in human history

Artificial Intelligence explores the possibility of creating intelligent systems that can reason and think like human beings. Software and machines that think and interact with their peers to perform duties usually carried out by humans are no longer the trailer of a science fiction film; they could be a reality in a future not far away, and there are many questions about the possible consequences of a massive spread of Artificial Intelligence.

Scientists and experts are concerned about the possible implications of Artificial Intelligence. Elon Musk, Stephen Hawking and Bill Gates are just a few of the luminaries who have publicly expressed their concerns about the future of "thinking machines".


The potential benefits are huge, but it is quite impossible to predict the evolution of Artificial Intelligence. Part of the scientific community hopes that the future society will be able to eradicate war, disease, and poverty, all thanks to AI, but another wing speculates that a generation of droids could come to see human beings as a risk to their survival and, for this reason, declare war on humanity.


Stephen Hawking explained that success in the development of Artificial Intelligence would be the biggest event in human history, but that humanity should not underestimate the risks.

"So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong," wrote Hawking in The Independent."

In a recent Ask Me Anything thread on Reddit, Bill Gates argued that Artificial Intelligence will not represent a serious menace in the short term. He thinks that modern society will benefit from the advantages super-intelligent machines bring, and that this will allow the rapid diffusion of Artificial Intelligence into every industry, but Gates remarked that this diffusion will bring trouble in a few decades.

"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned," said Gates.

Tesla CEO Elon Musk at the MIT Aeronautics and Astronautics Department's Centennial Symposium

Musk has expressed his position on Artificial Intelligence. He considers it "our biggest existential threat", and he has also urged regulation of AI development.

"If I were to guess what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence," he said. "With artificial intelligence we are summoning the demon."

The physicist Louis Del Monte shares Musk's vision. He believes that Artificial Intelligence could create a new generation of machines that could soon become the dominant species, especially given the lack of any legislation governing the evolution of AI.

"Today there's no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you're going to see that the top species will no longer be humans, but machines." Said Del Monte ""By the end of this century," he continued, "most of the human race will have become cyborgs [part human, part tech or machine]. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species."

Not all the IT giants fear Artificial Intelligence. Larry Page, Google's CEO, takes a different view and considers the diffusion of intelligent machines to be positive:

"You can't wish these things away from happening."

Page maintains that the technology will create new job opportunities and will allow us to optimize the resources of planet Earth, with positive repercussions on the global economy.

Microsoft Research's chief Eric Horvitz, in an interview with the BBC, explained that he believes intelligent machines could achieve consciousness, but that he does not think they will pose a threat to human beings. Horvitz acknowledged that the IT giants are investing enormous effort in research on Artificial Intelligence, and he added that Microsoft is among them.

"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences," he said."I fundamentally don't think that's going to happen … I think that we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."

Artificial Intelligence in cybersecurity

One of the industries that could benefit most from the introduction of Artificial Intelligence is cybersecurity.

Intelligent machines could implement algorithms designed to identify cyber threats in real time and provide an instantaneous response.

Although the majority of security firms are already working on a new generation of automated systems, we are still far from creating a truly self-conscious entity.

The security community is aware that many problems cannot be solved with conventional methods and require the application of machine-learning algorithms.

Security firms are working on this new family of algorithms, which can help the systems that implement them identify threats missed by traditional security mechanisms.
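As a rough illustration of this approach, the sketch below trains an unsupervised anomaly detector on a handful of network-flow features and flags connections that deviate from the learned baseline. The feature set, sample values and contamination rate are illustrative assumptions, not a description of any specific vendor's product.

```python
# Hypothetical sketch: unsupervised anomaly detection on network-flow features.
# Feature names, sample values and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one network flow: [duration_s, bytes_sent, bytes_received, dst_port]
baseline_flows = np.array([
    [0.4, 1200,  5300, 443],
    [0.2,  800,  2100, 443],
    [1.1, 5000, 15000,  80],
    [0.3,  950,  3100, 443],
])

# Learn what "normal" traffic looks like from historical flows.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_flows)

# Score new flows in (near) real time; a prediction of -1 means "anomalous".
new_flows = np.array([
    [0.5, 1100, 4800, 443],        # resembles the baseline
    [30.0, 9_000_000, 200, 6667],  # long flow, huge upload, unusual port
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    if label == -1:
        print("ALERT: suspicious flow", flow.tolist())
```

In a real deployment the baseline would come from large volumes of historical traffic and the alert would feed an analyst queue or an automated response, but the principle is the same: the model learns normality and reports deviations that signature-based tools would miss.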

At the same time, the security community is witnessing a rapid proliferation of intelligent objects that are threatened by a growing number of increasingly complex cyber threats.

How can we protect such a huge number of devices that are always online and exchange an impressive amount of data with other systems? The answer could be Artificial Intelligence. Security firms could soon be able to design new intelligent machines that implement self-learning algorithms and technologies capable of countering cyber threats.

These algorithms could start from current knowledge to hypothesize countless attack scenarios and identify their occurrence in real time.

The real innovation is that these algorithms emulate the human brain, amplifying its capabilities through the instantaneous collaboration of a network of intelligent systems that are able to learn from their experience and, in the near future, to design new machine-learning algorithms.

Imagine an attack scenario in which billions of Internet of Things devices start to attack a target. The only way to operate in real time and mitigate such attacks on a large scale is to have automated machines that are able to detect the offensive at an early stage and apply the necessary countermeasures.

The principal problem in handling cyber threats is the speed of the attacks and the amount of data that must be analyzed to respond in real time.
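As a toy example of what such automation might look like, the sketch below compares the current request rate against a rolling baseline and triggers a mitigation action when traffic spikes far beyond it. The window size, threshold and response are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: flag a volumetric attack by comparing the current
# request rate against a rolling baseline. Window size and threshold are
# illustrative assumptions, not tuned values.
from collections import deque

class RateAnomalyDetector:
    def __init__(self, window=60, threshold=5.0):
        self.history = deque(maxlen=window)  # requests/sec over the last `window` seconds
        self.threshold = threshold           # multiple of the baseline that counts as an attack

    def observe(self, requests_per_second):
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(requests_per_second)
        # An anomalous spike returns True so the caller can trigger mitigation.
        return bool(baseline) and requests_per_second > self.threshold * baseline

detector = RateAnomalyDetector()
for second, rate in enumerate([100, 110, 95, 105, 4000, 9000]):
    if detector.observe(rate):
        print(f"t={second}s: rate {rate}/s far above baseline, activating rate limiting")
```

A production system would obviously need far richer features and a learned model rather than a fixed multiplier, but even this crude loop shows why the decision has to be automated: by the time a human reads the first alert, thousands of seconds of attack traffic have already arrived.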

It's clear that defense cannot be handled by humans without a considerable level of automation that, due to the dynamic nature of the threats, must evolve over time.

In this scenario, the application of artificial intelligence could represent a winning choice, because it could help respond dynamically to cyber threats by implementing learning capabilities in software.

The majority of governments today are working toward the development of a new generation of lethal autonomous weapons systems that can search for a specific target, assess its defensive measures and choose the best way to run a cyber attack based on the final purpose of the offensive, whether sabotage or cyber espionage.

This is the new frontier of information warfare: the asymmetric, instantaneous nature of these conflicts requires the adoption of autonomous systems that can respond to an attack at an early stage.

The principal problem is that information warfare still lacks a shared legal framework that establishes the rules of a cyber conflict.

What is a cyber weapon? When is the use of a cyber weapon justified, and what are the rules of engagement? What response is allowed in case of attack?

Consider the adoption of a system with artificial intelligence: who would be responsible if the response to a cyber attack is not proportionate to the offense?

Who will be accountable if the intelligent machines violate international law? The Geneva Conventions are unclear due to the grey area surrounding the attribution of human intervention in the decision process of AI systems. What is the limit of human control in the case of an instantaneous attack?

Another element to take into serious consideration when approaching Artificial Intelligence is that any application implementing AI is no different from any other software, and for this reason it could be affected by vulnerabilities. A cyber attack against AI systems could cause serious damage due to the nature of these systems, since AI algorithms are designed to make high-stakes decisions in real time. Consider, for example, an AI system used in finance: the risks associated with a cyber attack are very high, and an incident can cause economic damage resulting from significant decisions taken by the hacked computers before a human can notice and react.

The principal risks related to the robustness of systems implementing Artificial Intelligence algorithms are:

  • Verification: It is difficult to prove that such systems continue to correspond to their formal design requirements, due to their dynamic evolution. In many cases an AI system evolves over time as a result of its experience, and this evolution could make the system no longer compliant with its initial requirements.
  • Security: It is crucial to prevent threat actors from manipulating AI systems and the way they operate.
  • Validity: Ensure that the AI system maintains normal behavior that does not contradict the requirements defined in the design phase, even when the system operates in hostile conditions (e.g. during a cyber attack or when one of its modules fails); a minimal monitoring sketch follows this list.
  • Control: How to maintain human control over an AI system after it begins to operate, for example to change its requirements.
  • Reliability: How to guarantee the reliability of the predictions made by AI systems.
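To make the Validity and Reliability concerns more concrete, the fragment below sketches one simple safeguard: comparing a deployed system's recent decision distribution with the distribution accepted at validation time and escalating to a human operator when they diverge. The statistic, tolerance and synthetic data are illustrative assumptions only.

```python
# Hypothetical sketch: detect when a deployed AI system drifts away from the
# behavior observed at validation time. The tolerance is an illustrative assumption.
import numpy as np

def alert_rate(decisions):
    """Fraction of decisions in which the system raised an alert (1 = alert)."""
    return float(np.mean(decisions))

# Synthetic stand-ins for real decision logs.
validated = np.random.default_rng(0).binomial(1, 0.02, size=10_000)  # behavior accepted at sign-off
deployed  = np.random.default_rng(1).binomial(1, 0.20, size=10_000)  # behavior observed in production

drift = abs(alert_rate(deployed) - alert_rate(validated))
if drift > 0.05:  # tolerance chosen purely for illustration
    print(f"Drift of {drift:.2f} exceeds tolerance: route decisions to human review")
```

Checks of this kind do not prove that an evolving system still meets its design requirements, but they give operators an early, measurable signal that the system's behavior has moved away from what was validated.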

Recently, a team of researchers from the University of Oxford conducted an interesting study to evaluate the principal risks that threaten human civilization and could potentially destroy the human race. Their report points to 12 major risks that threaten to end human civilization, organized into the following four categories:

  • Current risks (e.g. climate change, nuclear war);
  • Exogenous risks (e.g. asteroid impacts);
  • Emerging risks (man-made) (e.g. synthetic biology, Artificial Intelligence);
  • Global policy risks (i.e. overall bad global governance).

As explained by the experts, while all 12 risks represent potentially infinite threats to the human race, the category of Emerging risks is considered the most dangerous due to the unpredictable evolution of the threat.

"Artificial Intelligence (AI) seems to be possessing huge potential to deliberately work towards extinction of the human race. Though, synthetic biology and nanotechnology along with AI could be possibly be an answer to many existing problems however if used in wrong way it could probably be the worst tool against humanity," states the study.

Intelligent systems are able to "perceive" the surrounding environment and act to maximize their chances of success. For this reason the "extreme intelligences … are difficult to control and would probably act to boost their own intelligence and acquire maximal resources for almost all initial Artificial Intelligence motivations."

In this scenario, human beings may compete for the same resources, triggering an unpredictable reaction from AI systems. AI systems will be adopted in critical processes; consider their application in the information warfare context. The risk that these solutions will go out of control is serious and could have severe consequences, including the extinction of the human race.

A look at the future to prevent malicious AI

The development and adoption of AI systems seem impossible to control, due to the enormous advantage that the paradigm could bring to every industry.

I would like to close this post by highlighting the observations made by the scientist Steve Omohundro, who wrote a paper that identifies three ways to address malicious AI systems:

  • To prevent harmful AI systems from being created in the first place. It is desirable that scientists be able to carefully program intelligent machines with a Hippocratic emphasis ("First, do no harm").
  • To detect malicious AI early in its life, before it acquires too many resources. Monitor the evolution of such systems over time by measuring the processes they implement and the resources they continuously consume.
  • To identify malicious AI after it has already acquired lots of resources. It is essential to maintain human control over the machine even after the AI system has acquired a significant amount of resources.

The lesson learned is:

"do not create conditions of competition for any resources between humans and machines ... will ever be possible?"

References

http://uk.businessinsider.com/bill-gates-artificial-intelligence-2015-1?r=US

http://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third/

http://www.businessinsider.com/stephen-hawking-on-artificial-intelligence-2014-5?IR=T

http://www.businessinsider.com/elon-musk-artificial-intelligence-mit-2014-10?IR=T

http://www.bbc.com/news/technology-31023741

http://www.businessinsider.com/louis-del-monte-interview-on-the-singularity-2014-7?IR=T

http://futureoflife.org/static/data/documents/research_priorities.pdf

https://ccdcoe.org/publications/2011proceedings/ArtificialIntelligenceInCyberDefense-Tyugu.pdf


http://globalchallenges.org/publications/globalrisks/about-the-project/

Pierluigi Paganini

Pierluigi is a member of the ENISA (European Union Agency for Network and Information Security) Threat Landscape Stakeholder Group, a member of the Cyber G7 Workgroup of the Italian Ministry of Foreign Affairs and International Cooperation, and Professor and Director of the Master in Cyber Security at Link Campus University. He is also a Security Evangelist, Security Analyst and Freelance Writer.

Editor-in-Chief at "Cyber Defense Magazine", Pierluigi is a cyber security expert with over 20 years of experience in the field and a Certified Ethical Hacker certified by EC-Council in London. A passion for writing and a strong belief that security is founded on sharing and awareness led Pierluigi to found the security blog "Security Affairs", recently named a Top National Security Resource for the US.

Pierluigi is a member of "The Hacker News" team and writes for major publications in the field such as Cyber War Zone, ICTTF, Infosec Island, Infosec Institute, The Hacker News Magazine and many other security magazines.