Artificial intelligence has brought forth a great technological revolution. From enhanced healthcare to self-driving cars, breakthroughs in AI technology are gradually lifting us to heights of technological innovation that humanity had only seen in science fiction movies a few decades ago. But just like the plot of every great sci-fi story, the hope for a brighter future is overshadowed by the looming threat of an opposing force.
Advances in deep learning have also ushered in the emergence of deepfakes, which are believed to be a powerful new form of disinformation (Dobber, Metoui, Trilling, Helberger, & de Vreese, 2020). Scholars worry that rapidly improving and broadly accessible deepfake technology may empower malicious (political) actors to discredit their opponents and further their own agendas, which may challenge the legitimacy of democratic institutions (Bennett & Livingston, 2018) and depress the quality of public debate (Vaccari & Chadwick, 2020). Despite these concerns, only two studies to date have explored the credibility (Vaccari & Chadwick, 2020) and the attitudinal impact (Dobber et al., 2020) of political deepfakes.
Besides the threat of (deepfake) disinformation, another, comparably worrisome “detrimental force in democracy” (Nai & Maier, 2020, p. 1) that dominates contemporary (US) politics is negative campaigning, also referred to as attack politics. The literature on the effects of negative campaigning on evaluations of politicians has so far been highly inconsistent (Lau, Sigelman, & Rovner, 2007). However, recent findings suggest that this inconsistency may be explained by the fact that the impact of some forms of attack politics hinges on voters’ tolerance for negativity with regard to sponsor evaluations (Nai & Maier, 2020), or on trait Schadenfreude with regard to target evaluations (Nai & Otto, 2020).
Against this backdrop, I wondered whether attack politics conveyed via deepfakes represents a new, powerful, yet subtle form of disinformation that could be used to depress evaluations of the featured politician among segments of the population who are more sensitive to negativity. Attempting to kill two birds with one stone, I situated my study at the crossroads of disinformation and attack politics, aiming both to clarify the literature on negative campaigning and to further our knowledge of the effects of deepfakes on US citizens.
The first aim of the study was to uncover to what extent respondents believed that the event conveyed by the deepfake took place in real life (SQ1). Second, I expected that exposure to the deepfake uncivil character attack (vs. real-footage civil policy attack) would result in a backlash against the attack sponsor (H1). Third, I expected no differences between the two conditions on target evaluations (H2). Fourth, I expected that respondents who score low on tolerance for negativity would evaluate the sponsor more negatively when exposed to the deepfake (H3). Finally, I expected that respondents who score high on trait Schadenfreude would evaluate the target more negatively, whereas individuals who score low on trait Schadenfreude would evaluate the target more positively (H4).
To test these hypotheses, we (together with Dr. Nadia Metoui) produced a subtitled deepfake GIF using a pre-trained facial landmark detection model. The deepfake featured a harsh character attack ostensibly sponsored by Joe Biden against Donald Trump (Figure 1), which I compared to real footage (a GIF) of Biden ostensibly sponsoring a civil policy attack against Trump (Figure 2) in a survey-embedded experiment conducted on MTurk among US citizens (N = 271) in early December 2020, after the US Presidential Election. The majority of respondents reported being (or leaning) Democrat and being aware of deepfakes (77.5% deepfake aware).
Results revealed no significant differences between the two conditions with regard to sponsor and target evaluations. Likewise, neither tolerance for negativity nor trait Schadenfreude moderated the direct effect of experimental condition on sponsor and target evaluations. Participants most likely disregarded the stimuli when evaluating the politicians, either due to motivated reasoning or because they did not perceive the deepfake as realistic. Answers to an open question indicated that the perceived realism of deepfakes may depend on more than their quality, although quality is crucial. Besides quality issues, the congruence between the perceived personality of the featured politician and the attack message, as well as political knowledge, came up as reasons for not perceiving the deepfake as realistic. Moreover, the highly deepfake-aware respondents in the current study may have been more alert to deception than respondents in a previous study, who were swayed by a deepfake but reported being unaware of deepfakes (Dobber et al., 2020).
The current findings do not indicate that a deepfake GIF of a high-profile politician can sway a highly deepfake-aware segment of the US population, but we cannot rule out the possibility that deepfake negativity may be deeply dangerous. The good news is that familiarity with deepfakes may foster sensitivity to deception, although my thesis does not offer quantitative evidence for this notion. Future research should assess the role of deepfake awareness and political knowledge in sensitivity to (deepfake) deception. Other avenues for future research include comparing lower-quality with higher-quality deepfakes to identify the threshold at which deepfakes become realistic enough to alter attitudes. Likewise, comparing different attack sponsors conveying messages of varying levels of (in)civility would bring us closer to understanding the role of the perceived prominence of the featured politician, political knowledge, and different tastes for negativity.
Special thanks to Dr. Nadia Metoui!
Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139.
Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2020). Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? The International Journal of Press/Politics, 1940161220944364. https://doi.org/10.1177/1940161220944364
Lau, R. R., Sigelman, L., & Rovner, I. B. (2007). The Effects of Negative Political Campaigns: A Meta-Analytic Reassessment. The Journal of Politics, 69(4). https://doi.org/10.1111/j.1468-2508.2007.00618.x
Nai, A., & Maier, J. (2020). Is Negative Campaigning a Matter of Taste? Political Attacks, Incivility, and the Moderating Role of Individual Differences. American Politics Research, 1532673X20965548. https://doi.org/10.1177/1532673X20965548
Nai, A., & Otto, L. P. (2020). When they go low, we gloat: How trait and state Schadenfreude moderate the perception and effect of negative political messages. Journal of Media Psychology: Theories, Methods, and Applications.
Vaccari, C., & Chadwick, A. (2020). Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1), 2056305120903408. https://doi.org/10.1177/2056305120903408
The Digital Communication Methods Lab is an initiative of the Research Priority Area Communication, at the University of Amsterdam.