The Research Priority Area Communication and its Digital Communication Methods Lab are happy to announce that four proposals have been selected for the fourth edition of the digicomlab Thesis Funding Grants. These grants provide financial support for theoretically relevant and digitally innovative (research) master’s theses written at the University of Amsterdam’s Graduate School of Communication in semester 1 of the 2020-2021 academic year.
Deepfake Negativity: Political attacks, disinformation and the moderating role of personality traits on evaluations of politicians
Monika Simon @si_moni_ka
Deepfakes are artificial intelligence (AI)-enabled doctored videos that strongly resemble real footage. While scholars worry that deepfakes represent a powerful new form of disinformation (Dobber et al., 2020), one that may enable malicious actors to poison the public debate and interfere with democratic elections (Vaccari & Chadwick, 2020), only two studies to date have explored the credibility and attitudinal impact of political deepfakes. To fill this gap in the literature, the present study employs a self-produced deepfake of a politician, created with a pre-trained deep learning model that uses 2D expression swapping to modify the politician’s lip shape frame by frame.
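To make that production step concrete: the proposal only states that a pre-trained model adjusts the lip shape frame by frame, so the sketch below illustrates that kind of per-frame video pipeline with a hypothetical `swap_lip_shape` model call; it is not the study’s actual tooling.

```python
# Minimal sketch of frame-by-frame lip-shape replacement. The model object and its
# swap_lip_shape() method are hypothetical placeholders for the pre-trained network.
import cv2


def build_deepfake(source_video: str, output_video: str, model) -> None:
    """Read the source clip frame by frame, modify the lip region, write the result."""
    cap = cv2.VideoCapture(source_video)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_video, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Hypothetical call: the pre-trained model returns the frame with the
        # politician's lip shape adjusted to match the scripted attack audio.
        writer.write(model.swap_lip_shape(frame))

    cap.release()
    writer.release()
```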
Synthesizing literature on online disinformation, negativity bias, and negative campaigning, the study will uncover how voters’ political attitudes are affected by exposure to a deepfake featuring attack politics, which has become a prominent characteristic of contemporary US politics. Using an online experiment among US citizens, the credibility and attitudinal impact of a self-produced deepfake featuring an uncivil character attack sponsored by a Democrat and targeting a Republican will be compared to real footage featuring a policy attack by the same sponsor against the same target. As initial evidence suggests that the effectiveness of negative campaigning “may be a matter of taste” (Nai & Maier, 2020), the present study also examines the interaction between negativity and personality traits, which may help clarify the so-far inconsistent findings on negative campaigning. In sum, the present study will advance knowledge on the attitudinal impact of negative campaigning and disinformation by employing an AI-powered, self-produced political deepfake.
Acquiring Political Knowledge through Meme Exposure on Facebook: An Eye-tracking Experiment
Julia Dalibor
Political knowledge is a central concept in understanding the mechanisms that promote political participation. Citizens with high levels of political knowledge engage in behavior that contributes to the well-functioning of democracies: They hold more stable opinions and are more likely to translate them into consistent voting behavior (Kleinberg & Lau, 2019). While prior research has investigated learning processes in the context of digital media platforms in general (e.g., Bode, 2016; Boukes, 2019), the effects of specific digital information types on political learning are still understudied.
One information type that might be particularly promising in capturing the audience’s attention, and thereby easing political learning, is the meme. Memes are a unique mix of visual, textual, and humoristic elements and are prevalent in most people’s newsfeeds. Not only are memes humoristic, they are also user-generated; two factors that have previously been associated with high levels of audience attention (Boukes, 2019; Ye et al., 2011). But does the sum of these content characteristics indeed result in higher attention levels compared to other visual information types? And does this, in turn, lead to higher levels of political knowledge?
This study attempts to answer these questions by conducting an eye-tracking experiment. The technology provides an important advantage over self-assessed attention measures as it allows for an ecologically more valid measure of visual attention. In an attempt to widen our understanding of political learning processes, this study aims to investigate how new forms of political information presentation shape knowledge in the citizenry.
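As an illustration of how such a visual-attention measure could be derived from eye-tracking output, the sketch below sums fixation durations inside an area of interest (AOI); the column names, file name, and AOI coordinates are assumptions, not details from the proposal.

```python
# Minimal sketch of computing dwell time on an area of interest from exported
# fixation data. Assumes each fixation row has x/y coordinates and a duration.
import pandas as pd


def dwell_time_in_aoi(fixations: pd.DataFrame, aoi: tuple[int, int, int, int]) -> float:
    """Sum fixation durations (ms) that fall inside a rectangular area of interest."""
    left, top, right, bottom = aoi
    inside = fixations["x"].between(left, right) & fixations["y"].between(top, bottom)
    return fixations.loc[inside, "duration_ms"].sum()


# Example: compare attention to a meme stimulus vs. another visual post format.
fixations = pd.read_csv("fixations_participant_01.csv")  # hypothetical export file
meme_dwell = dwell_time_in_aoi(fixations, aoi=(100, 200, 700, 650))
other_dwell = dwell_time_in_aoi(fixations, aoi=(100, 700, 700, 1150))
```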
Viral Violence: How police brutality and protest violence can influence individuals’ affective states, support for political outcomes, and social media behavior
Neil Fasching @neilfasching
People have a “negativity bias” when it comes to consuming news content, placing more weight and attention on negative information (Trussler & Soroka, 2014). Past research has found that negative news produces much stronger psychophysiological responses than positive news (Soroka & McAdams, 2015). Recently, one form of negative news has been spreading not only across mainstream media but also through social media and interpersonal contact: viral videos and images of police brutality and protest violence.
This study will help explain, at the individual level, why news, and violent content in particular, spreads through social media, and will investigate whether police and protest violence evokes greater outrage than violence that is not politically charged. It will also probe whether ideology crowds out the effect of violence, as outrage seems likely to be reduced when the violence is directed at people’s out-group rather than their in-group. Finally, the project will probe the effect of this violence on political outcomes, such as support for the police, as well as on social outcomes, such as willingness to share content on social media and the desire to view similar content.
To investigate these outcomes, a large survey experiment will be run. Custom photos and vignettes will be created, with police brutality and protest violence depicted in the experimental conditions and non-political physical or verbal altercations depicted in the control condition. The photos and vignettes will also manipulate group identity, indicating that the violence is directed at a group participants support (in-group condition) or a group they do not support (out-group condition).
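For illustration, the sketch below shows one way random assignment to these conditions could be implemented; treating the violence-type and group-identity factors as fully crossed, and the sample size, are assumptions, as the proposal does not specify the exact design.

```python
# Minimal sketch of random assignment to the experimental conditions described above.
import random

VIOLENCE_TYPES = ["police_brutality", "protest_violence", "non_political_control"]
TARGET_GROUPS = ["in_group", "out_group"]


def assign_condition(rng: random.Random) -> dict:
    """Draw one violence-type condition and one group-identity condition per participant."""
    return {
        "violence_type": rng.choice(VIOLENCE_TYPES),
        "target_group": rng.choice(TARGET_GROUPS),
    }


rng = random.Random(2020)  # fixed seed so the assignment is reproducible
conditions = [assign_condition(rng) for _ in range(1000)]  # e.g. 1,000 respondents
```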
Designing virtual reality experiences as a persuasive tool to promote pro-environmental behavior: The longitudinal effects of prompts added to a VR experience
Hana Hegyiova
Climate change is one of the most pressing environmental crises our society is facing. Given its rather ‘prolonged’ and statistical nature (Weber, 2006), its intangible consequences often feel distant to people and not solvable by any single individual (Ahn, Bailenson, & Park, 2014). Recently, interest in using virtual reality (VR) to make distant environmental problems perceivable has become widespread (Ahn, Bailenson, & Park, 2014), with attempts to provoke pro-environmental behavior (Bailey et al., 2015). However, most studies have focused mainly on vivid visualizations (Markowitz, Laha, Perone, Pea, & Bailenson, 2018), embodiment (Bailey et al., 2015), or simulating hard-to-experience situations (Chittaro & Zangrando, 2010), expecting attitudinal or behavioral change.
Such experiences are undoubtedly memorable but may lack information on concrete pro-environmental actions to take. This study will look at VR technology’s persuasive potential by applying persuasive technology (PT) principles through the behavior model for persuasive design (Fogg, 2003; Fogg, 2009). The author will examine whether a prompt embedded in the VR experience itself makes the technology more effective than an otherwise identical experience without a prompt.
Moreover, most existing research focuses on participants’ self-reported attitudes measured right after the experience. However, these short-term measures may reflect a kind of ‘wow effect’ that most people experience after using VR, and these initial effects may quickly wear off over time. Whether changes in attitudes or behavioral intentions are long-lasting will be examined with an experimental design that collects longitudinal data at two measurement points: first, directly after the experience, and then one month after the study.
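As a rough illustration of how the two waves could be contrasted, the sketch below compares scores from the two measurement points with a paired t-test; the variable names, example values, and the specific test are assumptions rather than the study’s planned analysis.

```python
# Minimal sketch of a two-wave (post-experience vs. one-month follow-up) comparison.
from scipy import stats

# Hypothetical pro-environmental intention scores, same participants in both lists.
post_experience = [5.1, 4.8, 6.0, 5.5, 4.9]
one_month_later = [4.7, 4.9, 5.2, 5.0, 4.6]

# Paired comparison: do the immediate effects persist a month after the study?
result = stats.ttest_rel(post_experience, one_month_later)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```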