My research interests focus on the complex interplay between social and emotional dynamics, algorithmic biases, and anti-democratic behavior on social media platforms. I am particularly interested in the role of emotions in the spread of misinformation, and in how these dynamics can be leveraged to design interventions that mitigate the spread of harmful content.

In my research, I use a combination of computational methods, including machine learning and modeling, to analyze large-scale social media data. I am also interested in the practical implications of using these methods, and in how we can ensure that our research is conducted in a reproducible – or at least transparent – manner.

I am currently working on the following projects:

EMOMIS

The spread of misinformation via social media contributes to a global threat to trust in science and democratic institutions, with consequences for public health and societal conflict. Emotions influence how we process information, suggesting a link between certain emotional states and misinformation spreading – especially in times of high uncertainty. The project aims to understand how emotions influence the tendency to believe and share inaccurate content, and to test intervention strategies that mitigate emotionally driven misinformation spreading. Using digital trace data, the research team will analyze patterns of emotional misinformation spreading on social media and conduct experimental studies testing the potential of individual emotion regulation interventions to reduce misinformation sharing. Finally, we will integrate results from the social media analyses and experimental studies in an agent-based model to identify the most promising interventions for reducing misinformation spreading in social networks, and to simulate how algorithmic filters for emotional information affect the spread of misinformation.
Duration: 1 Dec 2021 - ongoing
Involved researchers: Hannah Metzler (PI), Annie Waldherr (Co-PI), David Garcia (Co-PI), Apeksha Shetty, Jula Lühring
External website: Project website
Funding: Vienna Science and Technology Fund (WWTF)
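To give a flavor of what such an agent-based model can look like: the minimal sketch below spreads a false post through a random network of agents whose resharing probability rises with their emotional arousal, and compares an uncontrolled run against one with a dampening intervention. This is a toy illustration only – the network, the arousal mechanism, and every parameter are invented for this sketch and are not the EMOMIS model.

```python
import random

def simulate(n_agents=500, avg_degree=8, steps=20,
             base_share_prob=0.1, arousal_effect=3.0,
             regulation=0.0, seed=42):
    """Toy cascade model: an agent's chance of resharing a false post
    grows with its emotional arousal; `regulation` (0..1) dampens
    arousal, standing in for an emotion regulation intervention."""
    rng = random.Random(seed)
    # Random network: wire each agent to a few random partners.
    neighbours = [set() for _ in range(n_agents)]
    for i in range(n_agents):
        for _ in range(avg_degree // 2):
            j = rng.randrange(n_agents)
            if j != i:
                neighbours[i].add(j)
                neighbours[j].add(i)
    # Emotional arousal per agent, dampened by the intervention.
    arousal = [rng.random() * (1.0 - regulation) for _ in range(n_agents)]
    exposed = {0}   # agent 0 posts the false claim
    sharers = {0}
    for _ in range(steps):
        newly_sharing = set()
        for agent in sharers:
            for nb in neighbours[agent]:
                if nb in exposed:
                    continue
                exposed.add(nb)  # the neighbour sees the post once
                p = base_share_prob * (1.0 + arousal_effect * arousal[nb])
                if rng.random() < p:
                    newly_sharing.add(nb)
        sharers |= newly_sharing
    return len(exposed)

baseline = simulate(regulation=0.0)
regulated = simulate(regulation=0.7)
print(baseline, regulated)
```

Even this toy version shows why simulation is useful here: the effect of an individual-level intervention on collective reach depends on network structure and cascade dynamics, which are hard to observe directly on platforms.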

WHAT-IF

Our political information environment is rapidly evolving: the rise of disinformation and hate speech poses significant challenges to our democracies. WHAT-IF develops a digital twin of the political information environment to test the effects of interventions for policy making. Studying such interventions requires access to proprietary data and cooperation from platforms, and societies cannot serve as experimental playgrounds. This makes simulations an essential tool for exploring solutions ethically and effectively. Using methodologies such as Agent-Based Modeling and Large Language Models, the project tests the impact of policy and regulatory interventions on democratic citizenship. By collaborating with policymakers, citizens, and stakeholders, WHAT-IF delivers evidence-based insights to improve media regulation, enhance democratic discourse, and advance Computational Social Science methods.
Duration: 2025-2027
Involved researchers at the University of Vienna: Annie Waldherr (PI), Hajo Boomgaarden (PI), Aytalina Kulichkina, Jula Lühring
External website: https://what-if-horizon.eu/
Funding: Horizon Europe
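As a rough illustration of the digital-twin idea: the sketch below simulates agents browsing a ranked feed and compares disinformation reshares with and without a downranking intervention. The LLM call is stubbed out with a simple rule; the personas, decision rule, and all parameters are invented for this sketch and are not part of WHAT-IF.

```python
import random

def agent_decides_to_reshare(persona, post, rng):
    # Stub standing in for an LLM-as-agent call: a real digital twin
    # would prompt a Large Language Model with the persona description
    # and the post text, then parse its reshare decision.
    if post["disinfo"]:
        return rng.random() < (0.6 if persona == "credulous" else 0.1)
    return rng.random() < 0.3

def run_feed(downrank_disinfo, n_agents=200, n_posts=50,
             feed_length=20, seed=0):
    """Count reshares of flagged disinformation, with or without a
    hypothetical downranking intervention in the feed algorithm."""
    rng = random.Random(seed)
    posts = [{"id": i, "disinfo": rng.random() < 0.3} for i in range(n_posts)]
    personas = [rng.choice(["credulous", "skeptical"]) for _ in range(n_agents)]
    if downrank_disinfo:
        # Intervention: the feed ranks flagged posts last.
        posts = sorted(posts, key=lambda p: p["disinfo"])
    disinfo_reshares = 0
    for persona in personas:
        for post in posts[:feed_length]:  # each agent sees the top of the feed
            if post["disinfo"] and agent_decides_to_reshare(persona, post, rng):
                disinfo_reshares += 1
    return disinfo_reshares

without = run_feed(downrank_disinfo=False)
with_intervention = run_feed(downrank_disinfo=True)
print(without, with_intervention)
```

The point of the stub is the interface, not the rule: swapping the stub for actual LLM-driven personas lets the same harness test regulatory interventions, such as ranking changes, before any real-world deployment.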