Understanding users’ responses to disclosed vs. undisclosed customer service chatbots

by Nathalie Koubayová

@koubayova 

Thesis supervisor: Dr. Margot van der Goot 

 

With the recent rise of chatbots in e-commerce, conversational agents have entered our lives, often without people noticing. However, because of huge advancements in the humanisation of AI, it is now difficult for people to assess whether they are communicating with a chatbot or a human through a text-based interface. This raises ethical and privacy issues, as companies are not obliged to disclose the identity of their chatbots. Because disclosure at the start of the conversation negatively impacts purchase figures, with consumers perceiving conversational agents as less knowledgeable and empathetic than human agents (Luo, Tong, Fang, & Qu, 2019), many businesses prefer to keep their chatbots’ identity a secret. As it is likely that disclosure of a chatbot’s identity will be enacted into law in the European Union, inspired by the California Consumer Privacy Act (CCPA, 2021), my thesis aimed to reach a better understanding of the impacts of chatbot disclosure, using a mixed-methods approach. 

 

Methods

The study implemented a sequential mixed-methods design, with semi-structured qualitative interviews conducted in the first phase. The interviews aimed to develop a deeper understanding of users’ responses to the disclosed vs. undisclosed chatbot, but also to pre-test the measurements and the stimulus. The second phase consisted of a single-factor (disclosure vs. no disclosure) between-subjects online experiment.

Two versions of a customer care chatbot (disclosed vs. undisclosed), using human-like linguistic cues (i.e., language style) and an identity cue (i.e., the human name ‘Sara’), were created for the purpose of the study with the Conversational Agent Research Toolkit (CART; Araujo, 2020). In the disclosed condition, the chatbot introduced itself as: “Hi there. My name is Sara, a chatbot from Yummy to Eat”, whereas the undisclosed version said: “Hi there. My name is Sara from Yummy to Eat” (Figure 1). The word ‘chatbot’ was chosen to signal the disclosure because previous research (De Cicco, da Costa e Silva, & Palumbo, 2021; Luo et al., 2019; Mozafari et al., 2020) and existing companies (e.g., Zalando, easyJet, Lidl) use it to inform customers that they are interacting with an artificial customer care agent. A low-involvement product that would elicit peripheral-route processing (Elaboration Likelihood Model; Cacioppo, Petty, Kao, & Rodriguez, 1986) was picked for the fictitious scenario. As this study focused on customer care chatbots, a script about a food order made through a fictitious delivery company called Yummy to Eat was created; a fictitious brand was used to avoid confusion and priming effects. Respondents were asked a set of questions about their fictitious order, based on a scenario they had received beforehand, and were free to type their responses.
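To make the manipulation concrete, it can be reduced to a single scripted greeting per condition. The sketch below is hypothetical Python, not CART’s actual API; the condition labels and function name are illustrative assumptions.

    import random

    # The only manipulated element: the word 'chatbot' in the opening message.
    CONDITIONS = {
        "disclosed": "Hi there. My name is Sara, a chatbot from Yummy to Eat",
        "undisclosed": "Hi there. My name is Sara from Yummy to Eat",
    }

    def assign_condition() -> str:
        """Randomly assign a participant to one of the two between-subjects conditions."""
        return random.choice(list(CONDITIONS))

    condition = assign_condition()
    print(f"[{condition}] {CONDITIONS[condition]}")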

In the qualitative part of the study, the interviewees (N = 8) interacted with both versions of the chatbot, whereas in the quantitative part, the respondents (N = 194) were randomly assigned to either the undisclosed or the disclosed condition. After the interaction, three perceptions were measured: anthropomorphism, which is “the assignment of human traits and characteristics to computers” (Nass & Moon, 2000); social presence, “a psychological state in which virtual social actors are experienced as actual social actors in either sensory or non-sensory ways” (Lee, Jung, Kim, & Kim, 2006); and source orientation, meaning “who or what people think they are interacting with” (Guzman, 2019).
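For a two-condition between-subjects design like this, the core comparison is typically an independent-samples t-test per measured construct. The following is a minimal Python sketch with made-up placeholder scores, not the study’s actual data or analysis code.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Placeholder Likert-scale scores standing in for the 194 respondents'
    # ratings on one construct (e.g., anthropomorphism), split across conditions.
    disclosed = rng.normal(loc=4.1, scale=1.0, size=97)
    undisclosed = rng.normal(loc=4.2, scale=1.0, size=97)

    # Independent-samples t-test comparing the two conditions.
    t, p = stats.ttest_ind(disclosed, undisclosed)
    print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05 would mirror the reported null result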

 

Figure 1 – Chatbot’s introduction; undisclosed (top), disclosed (bottom)

Results

Contrary to expectations, the differences in users’ perceptions of the disclosed vs. undisclosed chatbot were subtle and not statistically significant. The results suggest that people attend to the chatbot’s other human-like cues rather than to the word ‘chatbot’. Furthermore, the findings indicate that the key factors guiding participants’ overall experience were the agent’s friendly tone and the extent to which it helped them: at the end of the interaction, the chatbot automatically offered a refund for the fictitious order, which all participants appreciated. Interestingly, the non-human embodiment of the agent was not of great importance. As one of the interviewees aptly stated: “As long as they give you a solution for your issue. I think it’s not that important (i.e., that the agent is not a human). So, I mean, she’s still helped me, even though it’s a machine.” These insights highlight the importance of incorporating disclosure into the ethical design of contemporary chatbots, as revealing the machine’s identity does not necessarily undermine users’ perceptions of the agent.

Therefore, businesses using chatbots should not shy away from being transparent about the technology they employ, and should comply with the law. Although not all participants in this study noticed the inclusion of the word ‘chatbot’, informing users about the machine status of the agent is a big step forward. Future research might test various types of disclosures to investigate whether people react differently to alternative formulations. Moreover, it would be interesting to explore the effects of receiving a refund vs. no refund on perceptions of customer care chatbots, because the offered monetary refund seemed to be the main determinant of the interaction’s success, which may have influenced the extent to which respondents perceived the chatbot as agreeable.

 

References

Araujo, T. (2020). Conversational Agent Research Toolkit: An alternative for creating and managing chatbots for experimental research. Computational Communication Research, 2(1), 35–51.

Cacioppo, J. T., Petty, R. E., Kao, C. F., & Rodriguez, R. (1986). Central and peripheral routes to persuasion: An individual difference perspective. Journal of Personality and Social Psychology, 51(5), 1032–1043. https://doi.org/10.1037/0022-3514.51.5.1032

California Consumer Privacy Act (CCPA). (2021, June 15). State of California – Department of Justice. Office of the Attorney General. Retrieved from https://oag.ca.gov/privacy/ccpa

De Cicco, R., da Costa e Silva, S. C. L., & Palumbo, R. (2021). Should a Chatbot Disclose Itself? Implications for an Online Conversational Retailer. In Chatbot Research and Design (Vol. 12604, pp. 3–15). Springer International Publishing. https://doi.org/10.1007/978-3-030-68288-0_1

Guzman, A. L. (2019). Voices in and of the machine: Source orientation toward mobile virtual assistants. Computers in Human Behavior, 90, 343–350. https://doi.org/10.1016/j.chb.2018.08.009

Lee, K. M., Jung, Y., Kim, J., & Kim, S. R. (2006). Are physically embodied social agents better than disembodied social agents? The effects of physical embodiment, tactile interaction, and people’s loneliness in human–robot interaction. International Journal of Human-Computer Studies, 64(10), 962–973. https://doi.org/10.1016/j.ijhcs.2006.05.002

Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937–947. https://doi.org/10.1287/mksc.2019.1192

Mozafari, N., Weiger, W. H., & Hammerschmidt, M. (2020). The chatbot disclosure dilemma: Desirable and undesirable effects of disclosing the non-human identity of chatbots. In Proceedings of the International Conference on Information Systems (September 2020).

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103.