When: Thursday 22 November, 15.30-17.30 (followed by drinks)
This event is open to external attendees: to register, please send an email with your name and affiliation.

We cordially invite you to this lecture in the RPA Communication Lecture Series. This event will combine a special lecture by Dr. Huma Shah, Senior Lecturer and AI research scientist in the School of Computing, Electronics and Mathematics at Coventry University, with presentations by four RPA 2018 projects in the area of AI, Social Robots and Conversational Agents.

Keynote speaker: Dr. Huma Shah (Coventry University)

Dr. Shah holds a Ph.D. in ‘Deception-detection and Machine Intelligence in Practical Turing Tests’ and designed the three Turing test experiments detailed in the book “Turing’s Imitation Game: Conversations with the Unknown”. She organised the 2006 and 2008 Loebner Prizes for Artificial Intelligence and co-ordinated both the Turing100 project at Bletchley Park in Alan Turing’s centenary year (2012) and the Turing2014 Turing Test experiment at the Royal Society, London, in June 2014.

Lecture: Trust in AI, Social Robots and Conversational Agents

Does the fun of using conversational AI empower our trust in the convenience that personal digital assistants, such as Amazon’s Alexa, afford? Could conversational artificial intelligence be the next scandal in the digital economy, or could these technologies help to keep our identities safe, our personal data private and our trust maintained? Dr. Shah will address these and other important questions that arise as we develop more business and social AI agents, and robots with human language capability, to interact with us in shopping malls, airports, hospitals, and in our homes as information assistants, carers or companions, and as more businesses make their processes 5G-ready to stay competitive in the fourth industrial revolution.

RPA 2018 Project Presentations: AI, Social Robots and Conversational Agents

A selection of the RPA Communication 2018 projects associated with this theme will provide a status update on their investigations and, in particular, an overview of the methodological challenges and solutions they have been developing to advance communication research in this area. The following projects will present in this session:

  • Conversational Agents in Public Health: Causes, Content, and Contingencies of Chatbot Usage for STD Prevention, by Dr. Gert-Jan de Bruijn, Dr. Catherine Bolman (Open University Netherlands), and Erwin Fisser (Soa Aids Netherlands)

  • Interactive, Computer-Simulated Virtual Patient-based eLearning to Train Clinicians in Communication Skills, by Prof. Julia C. M. van Weert, Prof. Ellen M. A. Smets (AMC-UvA), Dr. Gert-Jan de Bruijn, and Dr. ir. Willem-Paul Brinkman (TU Delft)

  • Going Beyond One-shot Experiments: Chatbots as Regular and Personalized Interaction Partners, by Dr. Theo Araujo and Dr. Nadine Bol

  • Does Social Presence Affect Answers to Sensitive Questions? Comparing Face-to-face, Telepresence Robot, and Skype-based Survey Modes, by Dr. Alex Barco Martelo, Dr. Rinaldo Kühne, and Prof. Jochen Peter


Lecture abstract: Trust in AI, Social Robots and Conversational Agents

Demand for conversational Artificial Intelligence (CAI) is increasing, particularly for customer service solutions [1]. Recent advances in speech recognition technology have accelerated the embedding of personal digital assistants across a range of use cases in the digital economy, including artificial humans in e-retail and virtual nurses in healthcare. Early CAI was experienced through Joseph Weizenbaum’s natural language understanding system Eliza [2]. Eliza’s question-answer format, modelled on a psychotherapist, allowed humans to interact with the dialogue agent [3]. A similar format emerged in PARRY, Colby et al.’s 1971 text-based simulation of paranoid schizophrenia [4], which helped to train psychiatrists [5].

E-commerce introduced the question-answer paradigm through text-based avatars that augmented online keyword search. One well-known example was Anna, the virtual agent on Swedish furniture company Ikea’s website. Accessible 24/7, Anna produced a return on investment of 200%, increased sales from Ikea’s digital catalogue by 10%, and reduced Ikea’s call centre costs by 20% [6]. The use of digital assistants has grown since 2011, when Apple launched its Siri personal assistant on the iPhone 4S smartphone.

Does the fun of using conversational AI empower our trust in the convenience that personal digital assistants, such as Amazon’s Alexa, afford? Could conversational artificial intelligence be the next scandal in the digital economy, or could these technologies help to keep our identities safe, our personal data private and our trust maintained? These are important questions as we develop more business and social AI agents, and robots with human language capability, to interact with us in shopping malls, airports, hospitals, and in our homes as information assistants, carers or companions, and as more businesses make their processes 5G-ready to stay competitive in the fourth industrial revolution.

References
[1] Peart, A. (2018). Conversational AI is Becoming More Mainstream As Demand Increases. Datanami, 21 August 2018. Accessed from: https://www.datanami.com/2018/08/21/conversational-ai-is-becoming-more-mainstream-as-demand-increases/?platform=hootsuite
[2] Weizenbaum, J. (1966). ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine. Communications of the ACM, 9(1), 36-45.
[3] Shah, H., Warwick, K., Vallverdú, J., & Wu, D. (2016). Can Machines Talk? Comparison of Eliza with Modern Dialogue Systems. Computers in Human Behavior, 58, 278-295. doi:10.1016/j.chb.2016.01.004
[4] Colby, K. M., Weber, S., & Hilf, F. D. (1971). Artificial Paranoia. Artificial Intelligence, 2(1), 1-25.
[5] Heiser, J. F., Colby, K. M., Faught, W. S., & Parkison, R. C. (1979). Can Psychiatrists Distinguish a Computer Simulation of Paranoia from the Real Thing? The Limitations of Turing-like Tests as Measures of the Adequacy of Simulations. Journal of Psychiatric Research, 15(3), 149-162.
[6] Shah, H. (2005). Text-based Dialogical E-query Systems: Gimmick or Convenience? Invited talk at the Inaugural Colloquium on Conversational Systems, Digital World Research Centre, University of Surrey, 25 November.