More Security Issues Surrounding Virtual Personal Assistants
Our last article reviewed some of the Security issues surrounding the use of the Virtual Personal Assistant. Some of the Security issues involve the following:
The recording of our conversations:
In this particular situation, the thinking is that the conversations we are having with Siri or Cortana are strictly between “us and them.” Although there is no direct proof as of yet, there is some fear that these conversations could be recorded by the vendors that develop their own brand of the VPA. Other issues could come into the limelight because of this as well, especially regarding where these conversations are being stored, and who has access to them.
Use of the Virtual Personal Assistant for online shopping:
As discussed previously, one of the main goals of the VPA is to eventually automate our shopping processes. In this regard, we will have to enter our credit card and/or banking information into Siri or Cortana. Although Microsoft and Apple are making efforts to protect this information, there is still fear that the information stored in Siri and Cortana could very well become a target of the Cyber attacker.
This article will continue to explore, in more detail, the other types of Security risks of the Virtual Personal Assistant.
The Other Security Issues
It is important to keep in mind that the Security issues surrounding Virtual Personal Assistants are both technical and non-technical in nature. But given how recently they have been adopted, one will find that most of the Security threats right now are non-technical in nature, meaning they have more of a social impact upon the end user or the business and/or corporation.
A good example of this is that the wireless vendors make their customers sign contracts to which, very often, nobody pays much attention. The same is true with Apple when customers sign up for their iPhone or iPad services. In this contract, Apple does state how it will store and even possibly use the recorded conversations:
“When you use Siri or Dictation, the things you say will be recorded and sent to Apple to convert what you say into text . . . by using Siri or Dictation, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data, to provide and improve Siri, Dictation, and other Apple products and services.” (SOURCE: 1)
Because of this sheer vagueness, many businesses and corporations are now viewing this as a huge Security risk to their own information and data, and as a result have banned the use of Siri by their employees. One such example of this is IBM, which recently banned its employees from using Siri for anything work related. The primary reasons cited for this drastic move are not only the murkiness of the language above, but also the ramifications of having conversations stored in a literal black box, without any knowledge of how they will be stored or disseminated: “The company worries that the spoken queries might be stored somewhere.” (SOURCE: 2)
The dangers of using Siri or Cortana with the Internet of Things (IoT):
As will be fully explored in another article, the Internet of Things is essentially a rather new concept in which we as individuals will be defined by how we interact with objects (in both the physical and virtual sense). One of the primary applications of the Internet of Things will be the “Smart Home,” and one of its components will once again be the Virtual Personal Assistant. Although the “Smart Home” has yet to fully reach the mainstream public, it has already become a hot target for the Cyber attacker. For example, there was recently a Malware attack known as “Mirai.” In this attack, poorly secured IoT devices (such as cameras, routers, and DVRs) were compromised to create a massive Botnet, which was then directed at a DNS provider known as “Dyn.” The end result of this attack is that it disrupted access to the websites of PayPal, Twitter, Reddit, and Netflix for a period of time, thus causing a lot of inconvenience to the end user.
The Risks of “VPA Unfiltering”:
As described in the last article, it is the hope of the major Virtual Personal Assistant vendors that their product will help automate our daily lifestyle, especially when it comes to online shopping. As the technology further evolves, Siri or Cortana will be able to pick the exact products we want, based on our previous purchasing behavior. This concept is known as “VPA Filtering.” But once retail companies start to fully realize just how powerful the Virtual Personal Assistant can be for pushing ads and other product solicitations, there is a strong fear that a “kickback” deal could be reached with the VPA vendors. For instance, the major retailers could offer the makers of Siri and Cortana a financial incentive, such as a percentage of the revenue from products advertised on customers’ Smartphones and purchased by the VPA, to do away with the mathematical algorithms which comprise VPA Filtering. If this were to become a reality, it would become known as “VPA Unfiltering.” Although this would be viewed as a major inconvenience and a sheer invasion of privacy, there are real Security risks associated with it as well. For example, without any filters in place, the threat of Adware attacks becomes greatly magnified, and Siri or Cortana could very well make unwanted purchases from unauthorized online vendors. As a result of the latter, another real threat is that financial information and data could easily and unknowingly be given away to a malicious third party.
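The difference between “VPA Filtering” and “VPA Unfiltering” can be illustrated with a minimal sketch. This is a purely hypothetical model, not any vendor’s actual implementation: the function names, data shapes, and the `min_purchases` threshold are all illustrative assumptions. It simply screens candidate ads against the end user’s purchase history, and shows how removing that screen lets everything through, including the risky offers.

```python
from collections import Counter

def vpa_filter(purchase_history, candidate_ads, min_purchases=2):
    """A stand-in for 'VPA Filtering': keep only ads for product
    categories the end user has actually bought from before."""
    counts = Counter(item["category"] for item in purchase_history)
    return [ad for ad in candidate_ads
            if counts[ad["category"]] >= min_purchases]

def vpa_unfilter(purchase_history, candidate_ads):
    """'VPA Unfiltering': the filter is removed entirely, so every
    sponsored ad reaches the user regardless of their history."""
    return list(candidate_ads)

history = [{"category": "books"}, {"category": "books"},
           {"category": "coffee"}]
ads = [{"category": "books", "item": "novel"},
       {"category": "adware", "item": "unknown vendor offer"}]

print(vpa_filter(history, ads))    # only the "books" ad survives
print(vpa_unfilter(history, ads))  # everything, including the risky ad
```

The point of the sketch is that the filter is the only thing standing between the end user and unsolicited, potentially malicious offers; a kickback deal that disables it would widen the attack surface in exactly this way.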
The remote control of Siri or Cortana:
As the technology stands now, the major Virtual Personal Assistants cannot discriminate between the voices of different end users. In other words, a person can easily talk into an iPhone or a Windows Mobile device that does not belong to them and still have an effective conversation with Siri or Cortana. Obviously, this is a huge Security risk, and the only way for them to “know” who the authorized end user is would be through a Biometric technology known as “Voice Recognition.” In conjunction with this, Siri or Cortana can hold a conversation from 10 feet away just as easily as from 10 inches away; there is no specific range within which a conversation must take place, as long as it is clearly audible. This too is a grave Security risk, which has in fact been proven by Security researchers at ANSSI, a French Government agency. Their simulated attack makes use of radio waves to covertly transmit voice commands to just about any brand of Smartphone that has Siri or Cortana installed on it. In this instance, the earbuds can literally be used as an antenna, in which the electromagnetic waves are converted into electrical signals. These appear to the iOS, Android, or Windows 10 Mobile Operating Systems as discernable audio coming straight from the end user’s microphone. The end result is that a Cyber attacker could very easily dictate commands to Siri or Cortana, without uttering even one spoken word, from a far and remote distance. Nor is the attack limited to a single device: any Smartphone within range of the transmitter could be targeted simultaneously, whether as few as 5 devices or as many as 100.
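The Voice Recognition defense described above can be sketched in simplified form. Real speaker verification systems derive a learned “voiceprint” embedding from audio; here the feature vectors are hand-made placeholders, and the function names and the 0.85 similarity threshold are illustrative assumptions, not any vendor’s actual mechanism. The idea is simply that a command is executed only if the speaker’s voiceprint is close enough to the enrolled end user’s.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two voiceprint feature vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def accept_command(enrolled_voiceprint, incoming_voiceprint, threshold=0.85):
    """Gate a voice command: act on it only if the incoming speaker
    matches the enrolled end user closely enough."""
    return cosine_similarity(enrolled_voiceprint, incoming_voiceprint) >= threshold

# Placeholder voiceprints for illustration only
owner = [0.9, 0.1, 0.4]
same_speaker = [0.88, 0.12, 0.42]
stranger = [0.1, 0.9, 0.2]

print(accept_command(owner, same_speaker))  # True
print(accept_command(owner, stranger))      # False
```

A gate of this kind would blunt both the borrowed-phone scenario and the ANSSI radio-wave attack, since an injected command would not carry the enrolled user’s voice characteristics.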
In summary, this article has continued the theme of examining the Security risks and threats which are not only posed by a Virtual Personal Assistant, but also posed to it. As noted earlier in the article, these vulnerabilities can be either technical or non-technical in nature.
One will discover that at the present time, most of the threats arise from the non-technical point of view. The primary reason for this is that using a Virtual Personal Assistant on a daily basis is still a rather new concept to most individuals and even businesses/corporations.
Thus, the ramifications of any threats from the technical viewpoint are still difficult to ascertain, and it will take time for any meaningful data to emerge. But regarding the non-technical threats, the biggest issue is how the recorded conversations with Siri or Cortana are being stored and, more importantly, how they could be used in the future.
Probably one of the best examples of this is its use in the judicial system. There have been some instances in which recorded conversations from a Virtual Personal Assistant have been used as forensic evidence in court proceedings.
The question then often arises as to whether these recordings can actually be used at all, because 1) there was no prior knowledge that a VPA was actually recording these conversations, and 2) there is no legal precedent at the current time for this kind of evidence.
But apart from using Siri or Cortana just on your iPhone or Android wireless devices, it is expected that they will be blended in and literally become a part of the objects we interact with on a daily basis. As reviewed before, this is known as the “Internet of Things.”
Our next article will examine certain techniques which an individual can use to further secure their Virtual Personal Assistant. Also, we will examine in more detail what the future trends will be for the VPA.