Privacy Concerns About Emotional Chatbots

February 17, 2018, by Daniel Dimov

Introduction

Artificial Intelligence (AI) is evolving at an enormous speed and is taking an increasingly substantial role in our everyday lives (think of the fully commercialized conversational assistants Siri and Alexa, as well as intelligent agents in the healthcare, automotive, and gaming industries).

Humanized, fully responsive machines that perceive and respond to human emotions, long a classical image of high-level machine intelligence, are no longer a futuristic fantasy. Researchers working in the field of AI are developing self-learning algorithms that not only engage in practical problem solving but also bond with their users and assist them in the more sophisticated psychological domain.


One form of such humanized technology is the emotional chatbot, which allows interactions between humans and machines to be carried out more naturally, imitating conversation with friends, partners, therapists, or family members. Emotional chatbots focus not only on developing conversations with users and fulfilling their requests but also on identifying and responding to users' emotions, thus increasing the level of engagement between users and machines.

However, despite their efficiency and potential for commercial deployment, emotional chatbots may also pose numerous risks, including ethical issues, information security threats, and privacy concerns. In this article, we focus only on the privacy concerns raised by emotionally intelligent chatbots.

How do emotional chatbots work?

Emotional chatbots, also referred to as "Emotional Chatting Machines" (ECMs), are chatbots that employ a certain level of human emotional intelligence (i.e., the capability to perceive, integrate, understand, and regulate emotions). While providing factually appropriate and relevant responses to users' requests, emotional chatbots are also equipped to perceive users' emotions and to express emotions, such as sadness, happiness, disgust, liking, and anger, in a consistent manner.

To achieve a certain level of artificial empathy, the Chinese researchers who recently developed a form of ECM use three mechanisms: (1) modeling the high-level abstraction of emotion expressions by embedding emotion categories; (2) capturing the change of implicit internal emotion states; and (3) using explicit emotion expressions with an external emotion vocabulary.
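As a rough illustration of the first mechanism, the sketch below conditions every decoding step on a learned vector for the desired emotion category. It is a minimal sketch in Python that assumes PyTorch is available; the class name, layer sizes, and emotion list are hypothetical and do not reproduce the actual ECM architecture of Zhou et al.

    # Minimal sketch of emotion-category embedding (assumes PyTorch is installed).
    # This illustrates the idea of conditioning a decoder on a desired emotion,
    # not the actual ECM architecture described by Zhou et al.
    import torch
    import torch.nn as nn

    EMOTIONS = ["neutral", "happy", "sad", "angry", "disgust", "liking"]

    class EmotionConditionedDecoder(nn.Module):
        def __init__(self, vocab_size, hidden_size=256, emotion_size=64):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, hidden_size)
            # One learned vector per emotion category (mechanism 1 above).
            self.emotion_emb = nn.Embedding(len(EMOTIONS), emotion_size)
            self.rnn = nn.GRU(hidden_size + emotion_size, hidden_size, batch_first=True)
            self.out = nn.Linear(hidden_size, vocab_size)

        def forward(self, tokens, emotion_id, state=None):
            # tokens: (batch, seq_len) token ids; emotion_id: (batch,) emotion ids
            words = self.word_emb(tokens)
            emo = self.emotion_emb(emotion_id).unsqueeze(1).expand(-1, tokens.size(1), -1)
            # Every decoding step sees both the current word and the target emotion.
            output, state = self.rnn(torch.cat([words, emo], dim=-1), state)
            return self.out(output), state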

In simple terms, to recognize emotions, such machines use self-learning algorithms that are trained to classify emotions, detect the relevant emotion in the content they process, and provide an answer reinforced with a relevant empathic statement. By way of illustration, the research demonstrates that, if a user tells a chatbot "Worst day ever. I arrived late because of the traffic," a neutral chatbot will reply "You were late," whereas an emotionally intelligent chatbot will respond "Sometimes life just sucks!" (disgust), "I am always here to support you" (liking), "Keep smiling! Things will get better" (happy), or "It's depressing" (sad).
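In code, this classify-then-respond pattern can be approximated in a few lines. The following Python sketch is a toy illustration only: the keyword-based emotion detector and the response templates (borrowed from the example above) are assumptions made for clarity, whereas a real ECM would rely on trained neural models for both steps.

    # Toy sketch of "detect the emotion, then add an empathic statement".
    # The keyword classifier is an assumption for illustration; a real ECM
    # would use a trained neural classifier and generator instead.
    EMPATHIC_REPLIES = {
        "sad": "It's depressing.",
        "happy": "Keep smiling! Things will get better.",
        "liking": "I am always here to support you.",
        "disgust": "Sometimes life just sucks!",
    }

    NEGATIVE_WORDS = {"worst", "late", "traffic", "awful", "terrible"}

    def detect_emotion(message: str) -> str:
        words = set(message.lower().replace(".", "").replace(",", "").split())
        return "sad" if words & NEGATIVE_WORDS else "neutral"

    def respond(message: str, preferred_emotion: str = "liking") -> str:
        factual = "You were late."  # placeholder for the neutral, factual reply
        if detect_emotion(message) == "neutral":
            return factual
        # Like the ECM example, the empathic addition is conditioned on a chosen
        # emotion category rather than generated from scratch.
        return f"{factual} {EMPATHIC_REPLIES[preferred_emotion]}"

    print(respond("Worst day ever. I arrived late because of the traffic"))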

The study also demonstrates that more than half of the people (61%) who participated in testing such algorithms prefer the emotional versions of chatbots to neutral ones. Based on such data, it can be concluded that the perception of emotion and a relevant response may have a positive effect on the deployment and acceptance of conversational agents.

Although researchers have not yet designed a machine capable of fully understanding human emotions, some attempts to introduce more emotionally sensitive technology have proved successful. For example, Replika, an emotional chatbot app, has been downloaded more than 2 million times since its launch in November 2017. The chatbot is designed as a digital "friend that is always there for you" and keeps one's memories. The therapeutic chatbot Woebot, another form of emotional chatbot, functions as a replacement for a mental health therapist: it observes and analyzes users' moods, helps them feel better, and provides psychological help based on cognitive behavioral therapy methods.

What should we be aware of?

To assess the privacy risks related to the use of emotional chatbots, it is important to note that a person's privacy is closely related (although not identical) to information security. Therefore, the most significant issue with chatbots is protecting the information that the user submits during a conversation and ensuring that no third party can access, read, or exploit that information in any way.

The information submitted through a chat is not only susceptible to financial or identity fraud but may also pose risks of a more sensitive nature that go beyond the collection and sale of users' personal data. Emotional engagement in the conversation and the level of empathy provided by the AI are factors that may encourage one to reveal larger volumes of private information, including health information, sexual orientation, and personal habits and circumstances.

To ensure the confidentiality and integrity of communication, chatbot providers usually use the following security methods: (1) identity authentication using login credentials (username and password); (2) two-factor authentication (identity verification through more than one means); (3) encryption (encoding messages so that they cannot be accessed or modified); and (4) self-destructing messages (messages containing sensitive data are destroyed after a certain period).
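As an illustration of methods (3) and (4), message encryption and a time limit on decryption (a simple form of self-destruction) can be combined as sketched below. The example assumes the third-party Python cryptography library and is not a description of how any particular chatbot provider implements these controls.

    # Minimal sketch: encrypt a chat message and enforce a time-to-live on decryption.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()  # in practice, stored in a key management system
    cipher = Fernet(key)

    # (3) Encryption: the message is unreadable without the key.
    token = cipher.encrypt(b"I was diagnosed with depression last year.")

    # (4) Self-destruction: refuse to decrypt messages older than a set TTL.
    MESSAGE_TTL_SECONDS = 60
    try:
        plaintext = cipher.decrypt(token, ttl=MESSAGE_TTL_SECONDS)
        print(plaintext.decode())
    except InvalidToken:
        print("Message expired or tampered with; contents are no longer accessible.")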

Any vulnerability in the security methods mentioned above may result in a leak of information, misuse of the AI system, or unauthorized access. Personal information about chatbot users obtained by third parties can be used in a variety of unlawful activities, including blackmail or making sensitive information publicly available.

What about the confidentiality of health information?

Since information transmitted through emotional chatbots may contain health-related information (e.g., disclosures of past illnesses and mental health issues), it is important to note that such chatbots should comply with laws requiring the implementation of appropriate information privacy and security measures. In the U.S., such measures are required by the Health Insurance Portability and Accountability Act (HIPAA). Entities covered by HIPAA must implement "appropriate safeguards to protect the privacy of personal health information" and limit disclosures of such information made without the data subject's consent. Furthermore, HIPAA contains data breach notification requirements and obliges entities that process health data to maintain "appropriate administrative, physical and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information."


Conclusion

Security is a property of the overall design of a system. However, it is not only the developers of emotionally intelligent chatbots and the commercial agents of the technology who should be interested in providing secure, trustworthy systems that employ the latest and most efficient security measures; users of such technology should also guard their privacy. Before volunteering any data to a digital AI friend, it is worth remembering that chatbots are trained by asking more and more questions, which may include requests for sensitive information. Such information may be used to infringe upon users' privacy. Thus, before using any emotional chatbot, it is worth checking whether the measures for the storage of, routing of, and access to conversational information are adequate. In addition, since a chatbot is not static software but a form of AI, the processes and interactions taking place through it should comply with the best medical and ethical standards.

References

  1. Armstrong, G., "Emotional Chatbots," Chatbots Magazine, 10 July 2017. Available at https://chatbotsmagazine.com/emotional-chatbots-9a25fd8aca85.
  2. Armstrong, K., "How Secure Are Chatbots?", Chatbots Magazine, 23 January 2017. Available at https://chatbotsmagazine.com/how-secure-are-chatbots-2a76f115618d.
  3. Replika. Available at https://replika.ai.
  4. Woebot. Available at https://woebot.io.
  5. Zhou, H. et al., "Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory," Cornell University Library, 14 September 2017. Available at https://arxiv.org/pdf/1704.01074.pdf.

    Co-Author

    Rasa Juzenaite works as a project manager at Dimov Internet Law Consulting (www.dimov.pro), a legal consultancy based in Belgium. She has a background in digital culture with a focus on digital humanities, social media, and digitization. Currently, she is pursuing an advanced Master's degree in IP & ICT Law.

Daniel Dimov

Dr. Daniel Dimov is the founder of Dimov Internet Law Consulting (www.dimov.pro), a legal consultancy based in Belgium. Daniel is a fellow of the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Society (ISOC). He did traineeships with the European Commission (Brussels), European Digital Rights (Brussels), and the Institute for EU and International Law "T.M.C. Asser Institute" (The Hague). Daniel received a Ph.D. in law from the Center for Law in the Information Society at Leiden University, the Netherlands. He has a Master's Degree in European law (the Netherlands), a Master's Degree in Bulgarian law (Bulgaria), and a certificate in Public International Law from The Hague Academy of International Law.