The Major Security Issues Surrounding Virtual Personal Assistants
As has been reviewed in past articles, the Smartphone has become a prime target for just about every type and kind of hack.
One of the best ways to do this is by installing rogue mobile applications onto a wireless device in such a covert way that even the end user will not be aware that it has been installed. In fact, these kinds of mobile applications are created in such a way that they look like the real thing.
So, even an end user who keeps careful track of what is being installed onto their Smartphone device can be fooled into thinking that the rogue mobile app is authentic and safe to use.
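One basic defense against spoofed installers, at least when an app is downloaded outside an official app store, is to verify the file's cryptographic hash against the checksum the legitimate developer publishes. The sketch below is a minimal illustration in Python; the byte string stands in for the downloaded installer's contents so the example is self-contained, and the "published" value is simply the SHA-256 of that stand-in data.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# In practice you would hash the downloaded installer file itself (reading it
# in chunks); here a tiny byte string stands in for the installer's bytes.
downloaded = b"hello"
published = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

if sha256_hex(downloaded) == published:
    print("checksum matches the developer's published value")
else:
    print("checksum mismatch: possible tampered or spoofed download")
```

A matching checksum only proves the file is the one the developer published; it does not help if the attacker also controls the page where the checksum is listed, which is why app-store code signing remains the stronger control.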
But, it is important to note that it is not just the Smartphone which is in the crosshairs of the Cyber attacker; all of the associated applications that come with it are also being targeted. One such application is the Virtual Personal Assistant. This is a tool which makes your everyday life more convenient by giving you a central place in which you can get your queries answered.
As it was illustrated in the last article, a typical example of this is Google Maps. Through the mobile app, you can either speak in or type into your Smartphone the destination that you want to go to.
Through the use of the Global Positioning System (GPS) technology, Google Maps can pinpoint the exact location that you are at, and from there, calculate the most optimal route to get to your destination.
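As a rough illustration of what the GPS layer provides, the straight-line ("as the crow flies") distance between two latitude/longitude fixes can be computed with the standard haversine formula. This is only a simplified sketch of the underlying geometry, with example coordinates chosen for illustration; actual routing in Google Maps is done over road-network graphs, not straight-line distance.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two GPS fixes, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Example: two points in midtown Manhattan, roughly 1 km apart
print(round(haversine_km(40.7484, -73.9857, 40.7580, -73.9855), 2))
```

A navigation app runs this kind of calculation continuously against the live GPS fix, which is exactly why the location trail it accumulates is so sensitive.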
The main engine which drives the Virtual Personal Assistant is Artificial Intelligence (also known as "AI"). Essentially, this is an extremely sophisticated software package which tries to mimic the human thought and decision-making process over a period of time.
Because of the incorporation of this technology, the Virtual Personal Assistant (also known as the "VPA") is fast becoming our very own assistant, with the main intention of helping us in every aspect of our daily lives.
For instance, based on the mannerisms and behavioral traits that the VPA has learned about a particular end user, it can assist with anything from recommending which meal you should cook to suggesting what your next travel destination should be.
There has been an explosion into the marketplace of many types and kinds of Virtual Personal Assistants, but the most prevalent ones are:
- Siri: designed for the iPhone and the iOS Operating System;
- Google Now: designed for the Android Operating System and any Smartphones which use it (such as Samsung);
- Cortana: designed for the Windows 10 Operating System and Windows Mobile devices.
But despite how much more convenient the Virtual Personal Assistant has become to use, just like any other piece of technology, it too is prone to its fair share of Security threats and vulnerabilities (as alluded to in the last article), which are the primary focal point of this article.
The Security Issues
The Privacy Rights/Recorded Conversations:
As was discussed in our last article, one of the primary areas of concern in using a Virtual Personal Assistant is that the conversations (or really, the queries) that you have with, for example, Siri are actually being recorded by Apple and stored on their servers for an indefinite period of time (experts estimate roughly 18 months, but this is not certain). In reality, nobody knows where these servers are located, adding even more to the mystery of how these recordings might subsequently be used.

But it is important to keep in mind that the conversations being recorded are those in which we are actively engaged. What about those instances in which we have not engaged actively with our Smartphone, and we assume that our Virtual Personal Assistant is not activated? Is there the distinct possibility that it could be covertly listening in on private conversations that we are having with others? The answer to this is a resounding "Yes." Given the sheer sophistication of the Neural Network and other Machine Learning tools embedded into them, it is quite likely that these private conversations are being picked up by the likes of Cortana or Siri and transmitted back to the servers, where they can be stored as well.

It is the implications of this which are most worrisome. For example, what if these private conversations are used for marketing purposes (such as creating targeted advertisements), or worse yet, used by a third party for malicious purposes (an example of this will be discussed later)?
Having conversations which turn into trails of evidence:
It should be kept in mind that although the level of sophistication of Virtual Personal Assistants is growing, the degree to which they are "learning" the behavioral patterns of the end users who use them is still primitive. For example, this applies to the actual context of the conversation being held, especially the words being spoken. So, if an end user attempts to ask Siri or Cortana a specific question and they are unable to answer the query, there is a good chance that the VPA misunderstood what was being asked. Remember, computers still cannot understand the contextual meaning of words at this point in time; thus, a misprocessed message by Siri or Cortana could be misconstrued as an actual criminal threat at a subsequent point in time, and be used as evidence in a court of law.
The use of the Cloud:
This particular Security threat goes back once again to the issue of recorded conversations, but more importantly, where they are stored. As mentioned, it is widely believed that they are stored at this point on the servers of the Vendors that make and support their brand of Virtual Personal Assistant. However, the physical location of these servers is not known.

Obviously, keeping these conversations on physical servers would, over time, become a cost factor for these companies. Therefore, storing them on Virtual Servers based in the Cloud would be the next logical step. But here is where the possible Security risks lie. Whether the recordings are stored in Amazon Web Services, Microsoft Azure, or the Apple iCloud, the "murkiness" of where these conversations reside grows even more. For example, the end user will not have any control over how or when their conversations with the VPA are deleted. It will be entirely up to the Cloud service providers to supply the Security mechanisms which safeguard the Virtual Servers on which the conversations reside. Thus, there is no guarantee that these recordings won't be hacked into, tampered with, or even accessed remotely by a malicious third party.

It is also quite likely that a recorded conversation could be misunderstood or even misinterpreted by an outside entity, such as a law enforcement agency, if the provider grants them access. And given how new Virtual Personal Assistants are in the marketplace, there is hardly any legal precedent which has been outlined to protect the end user under these circumstances. The Security risks with Virtual Personal Assistants will grow even more complex as they get further intertwined with the Internet of Things (IoT) (this will be a topic covered in a future article).
Using your Virtual Personal Assistant to do your shopping for you:
As has been described, the Virtual Personal Assistant is trying to be a part of our everyday lives, and in a way, even trying to be a "part of the family." Thus, the VPA that we use (whether it be Siri or Cortana) asks us many questions about what our personal preferences, interests, hobbies, etc. are. The primary purpose of this is to help ensure that we are given the most "holistic" experience as we are traveling, or even planning a social activity. For instance, if we are going from Point A to Point B, and we have mentioned to Siri or Cortana where our favorite restaurant is, the VPA will make every effort to find such an establishment within proximity to our route.

This type of experience is now starting to be extended to shopping for products and services online. As reviewed in previous articles, we now primarily use our Smartphones to engage in E-Commerce based activities. As a result, the Vendors of the Virtual Personal Assistants are now trying to make them actually do the shopping for us, at a predetermined point in time established by the end user. This simply means gone are the days when we have to log onto Amazon.com to manually select the products we want; Siri or Cortana will do that for us.

Although this will no doubt be of great convenience, there is once again a flip side to this, namely the Security risks which can be involved. For example, in order to initiate an automated shopping routine with Siri or Cortana, we will have to give them our credit card information, bank account information, or other types of financial information, such as PayPal credentials. These will obviously be stored by the VPA so that it can make the online purchases, but the question now arises: to whom will our financial information be made available, and will we receive notification of this?
Or worse yet, will our financial information be stored on the servers of the Vendor (whether physical or virtual) without our knowledge, in a way very similar to the conversations that we have with Siri or Cortana? What guarantees do we have that, as the VPA completes the checkout process at the Online Store, our financial information will be received only by the authorized merchant, and not by a malicious third party? Or for that matter, how do we even know that the Siri or Cortana VPA we are using is authentic in the first place? What if it is a malicious software application that has been spoofed to look like the real thing as it conducts our online shopping? Finally, if our financial information is indeed stored in the Cloud to be used for subsequent shopping trips by Siri or Cortana, what Security mechanisms will be put in place to safeguard it from direct Cyber-attacks? Obviously, these are questions which have to be answered completely before society will embrace the notion of having a VPA do our online shopping for us.
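One widely used safeguard for exactly this situation is payment tokenization: the assistant holds only an opaque, random token, while the real card number lives in a separately secured vault at the payment processor. The sketch below is a minimal, hypothetical illustration in Python; the `_vault` dictionary and function names are invented for the example, and in a real deployment the vault sits behind a hardened server-side API, never on the device or in the assistant's query logs.

```python
import secrets

# Hypothetical server-side "vault": maps opaque tokens to real card numbers.
# A leaked token reveals nothing about the card it refers to.
_vault: dict[str, str] = {}

def tokenize_card(card_number: str) -> str:
    """Exchange a raw card number for a random, non-reversible token."""
    token = secrets.token_hex(16)  # 32 hex chars, no relation to the card
    _vault[token] = card_number
    return token

def charge(token: str, amount_cents: int) -> bool:
    """Authorize a purchase using only the token, never the raw number."""
    if token not in _vault:
        return False               # revoked, expired, or fabricated token
    # ... the vault owner forwards the real card and amount to the network ...
    return True

tok = tokenize_card("4111111111111111")  # a standard test card number
print(charge(tok, 1999))                 # True: a valid token authorizes
print(charge("deadbeef" * 4, 1999))      # False: a guessed token is useless
```

The design choice here is that compromise of the assistant (or its cloud-stored history) yields only tokens, which the attacker cannot turn back into card numbers without also breaching the vault itself.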
In summary, this article has examined some of the major Security vulnerabilities surrounding the Virtual Personal Assistant. It should be noted that the bulk of these threats arise from asking the VPA normal queries. But these can be considered more as "hidden" Security threats rather than direct ones. For example, in the case of the recorded conversations, any attack may not be realized until a much later point in time.
In these cases, the recording may just reside in either a Physical or Virtual Server for an extended period until a Cyber attacker decides to attack it directly with whatever other malicious intentions he or she may have in mind (in this case, the recorded queries will serve as just an indirect "bonus").
In other words, it is still difficult at this point to quantify the level of these kinds of attacks against a Virtual Personal Assistant, as its usage is still so new to both individuals and businesses or corporations.
But, if a Virtual Personal Assistant is being used to automate the online shopping process for an end user, then the Security threats which are posed to it can be considered direct in nature. The primary reason for this is that any loss or damage incurred by the individual or business/corporation can be quantified in real financial terms.
But apart from these losses, any Security breaches occurring in these situations can lead to even graver consequences, as any information or data stolen here (such as credit card information hijacked from a Virtual Personal Assistant) can even be used to launch Identity Theft attacks.
Our next article will continue to examine the Security threats to and from Virtual Personal Assistants, both from technical and non-technical standpoints.