Welcome! I am a Post-doctoral Associate in the Center for Social Media and Politics at New York University. I received my PhD in Political Science from the University of Washington (UW). I am also a Partner Research Fellow at the Siegel Family Endowment and a Fellow at the Political Economy Forum at UW. My research broadly focuses on the effect of technological change and supranational institutions on threats to liberal democracy (specifically fake news/misinformation, populism, and patronage). In addition, I have developed new methods in the field of political communication that remove obstacles to extracting information from enormous collections of electronic text and images that users encounter online. I have published applications of machine learning that unlock big and complex data, so that I and others can effectively research the effect of social media on political preferences and opinions. You can track my code and find replication files on GitHub. I have published articles in peer-reviewed journals such as Policy Studies Journal and Europe-Asia Studies and in popular outlets such as the Washington Post. I also serve on the Editorial Board of the Journal of Online Trust and Safety, run by the Stanford Internet Observatory.


Contact




Publications and Working Papers in Three Research Areas


Threats to Democracy

New Methods

European Union



Cracking Open the News Feed: Exploring What U.S. Facebook Users See and Share with Large-Scale Platform Data
(with Andrew Guess, Joshua Tucker, Jonathan Nagler, and Richard Bonneau)
Published at Journal of Quantitative Description: Digital Media
Article

Framing Parkland: A Social Media Approach To Studying Issue Frames And Their Impact
(with Andreu Casas, Nora Webb Williams, John Wilkerson, and Wesley Zuidema)
Published at Policy Studies Journal
Article

Principal-Agent Problems Within EU Funds: A Case Study of Patronage in Hungary
(with Beatrice Magistro)
Forthcoming at Europe-Asia Studies
Article


Abstract
In this study, we analyze for the first time newly available engagement data covering millions of web links shared on Facebook to describe how and by which categories of U.S. users different types of news are seen and shared on the platform. Using a combination of curated lists and supervised classifiers, we focus on articles from low-credibility news publishers, credible news sources, purveyors of clickbait, and political news. Our results support recent findings that older users share more fake news and that users prefer ideologically congenial misinformation. We also find that fake news articles related to politics are more popular among older Americans than other types, while the youngest users share relatively more articles with clickbait headlines. Across the platform, however, articles from credible news sources are shared over 5 times more often and viewed over 7 times more often than articles from low-credibility sources.

Abstract
Agenda setting and issue framing research investigates how frames impact public attention, policy decisions, and political outcomes. Social media sites, such as Twitter, provide opportunities to study framing dynamics in an important area of political discourse. We present a method for identifying frames in tweets and measuring their effectiveness. We use topic modeling combined with manual validation to identify recurrent problem frames and topics in thousands of tweets by gun rights and gun control groups following the shooting at Marjory Stoneman Douglas High School in Parkland, Florida. We find that each side used Twitter to advance competing policy narratives. Gun rights groups' narratives implied that more gun restrictions were not the solution. Their most effective frame focused on officials' failures to enforce existing laws. In contrast, gun control groups portrayed easy access to guns as the problem and emphasized the importance of mobilizing politically.

Abstract
European Union Funds have been linked to a high incidence of patronage/corruption despite substantial administrative and regulatory requirements and extensive domestic monitoring. We posit that this divergence between actual outcomes and preferred policies can be attributed to the co-optation of the auditing and monitoring processes by member state governments. We outline the importance of the auditing process and the flow of information to the European Commission using a delegation model, and then test our prediction in Hungary, where the process was co-opted. Using grant and contract award data from European Union Structural Funds, we find that the co-optation of the auditing process resulted in a significant rise in indicators of patronage/corruption in Hungary. Using these findings, we evaluate how current EU proposals would address the problems inherent in EU Funds and consider how the case of Hungary generalizes to other member states.


News credibility labels have limited but uneven effects on news diet quality and fail to reduce misperceptions
(with Andrew Guess, Joshua Tucker, Jonathan Nagler, and Richard Bonneau)
Working Paper

An Externally Valid Method for Assessing Belief in Popular Fake News
(with William Godel, Zeve Sanderson, Nate Persily, Joshua Tucker, Jonathan Nagler, and Richard Bonneau)
Working Paper

Breaking Up Is Hard To Do: Why The Eurozone Will Survive
(with James Caporaso)
Published at Economies
Article

Abstract
As the primary arena for viral misinformation shifts toward transnational threats such as the Covid-19 pandemic, the search continues for scalable, lasting countermeasures compatible with principles of transparency and free expression. To advance scientific understanding and inform future interventions, we conducted a randomized field experiment evaluating the impact of source credibility labels embedded in users' social feeds and search results pages. By combining representative surveys (N = 3,337) and digital trace data (N = 946) from a subset of respondents, we provide a rare ecologically valid test of such an intervention on both attitudes and behavior. On average across the sample, we are unable to detect changes in real-world consumption of news from low-quality sources after three weeks, and we can rule out even small effects on perceived accuracy of popular misinformation spread about the Black Lives Matter movement and Covid-19. However, we present suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation in our sample. We discuss the implications of our findings for our understanding of the determinants of news diets and for practical questions about designing interventions to counteract online misinformation.

Abstract
Current research measuring the level of belief in fake news and the types of people who are more likely to believe it has done so by asking respondents to evaluate out-of-date headlines and ledes that the researchers themselves have chosen. This method strays from how we measure and observe fake news consumption in the wild and could bias our understanding of belief in misinformation. To test whether integrating advances in research on the consumption of fake news into survey instruments changes our understanding of belief in fake news, we fielded three studies in which we repeatedly asked representative samples of Americans to evaluate popular full articles from non-credible and credible sources chosen by a pre-registered algorithm within 24-48 hours of their publication. By sourcing popular fake news articles without researcher selection and asking respondents to evaluate the full articles in the period when news consumers are exposed to them, we find that, on average, false or misleading articles are rated as true 33.2% of the time; moreover, approximately 90% of individuals coded at least one false or misleading article as true when given a set of four false or misleading articles. Strikingly, these results are much higher than statistics reported in previous studies.

Abstract
Since revelations of the Greek fiscal deficit in the fall of 2009, the breakup of the Economic and Monetary Union (EMU) has moved from unthinkable to plausible. The debate over the future of the EMU has become increasingly relevant, as numerous efforts to solve the Greek crisis have not been successful. Neither have basic competitiveness differences between countries in the core and periphery of the European Union been eliminated. Proposed solutions include development of a banking union, regulatory measures to monitor trade and capital imbalances, fiscal reforms on the part of countries in trouble, and centralized fiscal capacity on the part of the EMU itself to offset the liabilities of the indebted states. While the crisis seems to be contained, it is by no means solved. This leads to the question: “Will the euro survive?” We answer this question in the affirmative, but in doing so we argue that continuation of the EMU is different from the question of whether the EMU should have been created in the first place. Some reasons for continuation of the EMU were present at its creation; others have developed in a path-dependent way as the Eurozone has evolved. We illustrate these features of a currency zone and their implications for the future of the EMU.


Do Your Own Research? Searching for Additional Information Online About Misinformation Increases Belief in Misinformation
(with William Godel, Zeve Sanderson, Nate Persily, Joshua Tucker, Jonathan Nagler, and Richard Bonneau)
Working Paper


Automated Visual Clustering: A Technique for Image Corpus Exploration and Cost Reduction
(with Andreu Casas, Nora Webb Williams, and John Wilkerson)
Working Paper

The Question of German Imbalances within the Eurozone: Competitiveness versus Savings Explanations?
(with James Caporaso)
Published at International Trade, Politics, and Development
Article

Abstract
In an effort to reduce the spread of and belief in fake news, social media companies and civil society organizations have encouraged online news consumers to search online for more information about news they suspect may be false. This suggestion is quite prevalent, but we know little about its effectiveness. We test this intervention and surprisingly find that encouraging individuals to search for information to inform their evaluation of a false article's veracity increases the likelihood that they believe it. Supplementary evidence from web-tracking data and Google search results suggests that news consumers encounter news from low-quality sources when they research false articles, and this exposure increases belief in false articles.

Abstract
Scholars are increasingly using large-N image analysis to investigate contemporary political attitudes and behavior. We address three emerging needs of image scholarship. First, researchers may want to visually explore an image corpus to discern patterns before they begin assigning labels. Second, they may want to annotate images for the presence of complex theoretical mechanisms that cannot be easily assigned using existing automated methods. Third, they may be primarily interested in studying human annotation decisions. We demonstrate how unsupervised image clustering can help researchers address each of these needs when dealing with large, unbalanced image corpora. We illustrate this using a large corpus of images shared on Twitter.

Abstract
Two major competing explanations for trade imbalances exist in the literature: competitiveness and savings. Although often framed as competing explanations, this manuscript uses trade and national economic data to demonstrate that both partially explain trade imbalances within the Eurozone prior to the crisis and the trade re-balancing post-crisis. The complementary nature of the two explanations is mainly due to political and institutional factors that separate the European North from the South. These differences, described in the varieties of capitalism literature, help to explain variation in domestic savings rates and cross-country competitiveness as well as the ultimate underlying causes of trade imbalances across the globe.


Testing The Effect of Information on Discerning the Veracity of News in Real-Time
(with William Godel, Zeve Sanderson, Nate Persily, Joshua Tucker, Jonathan Nagler, and Richard Bonneau)
Working Paper

Learning Media Quality from Facebook Data
(with Tom Paskhalis, Cody Buntain, Zhanna Tereschenko, Jonathan Nagler, Richard Bonneau, and Joshua Tucker)
Working Paper

Abstract
Despite broad interest in curbing belief in fake news online, relatively little is known about the marginal effect of providing individuals with information about an article on their ability to correctly discern the veracity of news in real-time. To this end, we used a series of pre-registered experiments in two separate studies to test the marginal effect of three pieces of information about an article that have been the subject of broad scientific and popular interest: external information, source information, and information in the text of the article. This produced three important findings. First, source information increases belief in news articles from mainstream sources, but decreases belief in news articles from low-quality sources. This, for the most part, holds when both full articles and headlines/ledes are being evaluated. Second, we find that access to the full article, rather than just the headline/lede, improves the ability of an individual to correctly discern the veracity of news. Finally, external information (in our case, online research through a search engine) increases belief in both true and false/misleading news articles. Worryingly, the effect on false/misleading news is of a similar magnitude to the effect for true news.

Abstract
The production, consumption, and dissemination of online news is of growing interest among scholars studying democracy, but much difficulty lies in the study of media quality in a comparative perspective. Many problems plague cross-country studies, but studies of media credibility are particularly susceptible to issues such as varying political environments, language barriers, cultural contexts, and differing media regulation. This study leverages an original dataset published by Facebook through the Social Science One initiative to study the prevalence of unreliable online news in 27 countries in Europe. We use a supervised model (trained on US data) to predict the credibility of a given news domain based on users' feedback and behavior. We show that interactions with links to news websites on social media allow us to predict the credibility of news, and that a model that learns such relationships is portable across national contexts. Using this model, we find an East-West divide between countries in Europe, with a higher proportion of unreliable news in former socialist countries, as well as in the UK. Furthermore, we find that more recently registered news domains, and those registered outside of the country, are more likely to be predicted to be sources of unreliable information.


Moderating with the Mob: Evaluating the Efficacy of Real Time Crowdsourced Fact Checking
(with William Godel, Zeve Sanderson, Nate Persily, Joshua Tucker, Jonathan Nagler, and Richard Bonneau)
Working Paper

When Republicans See Red but Liberals Feel Blue: Why Labeler Characteristics Matter for Image Analysis
(with Andreu Casas, Nora Webb Williams, and John Wilkerson)
Working Paper

Abstract
Reducing the spread of false and misleading news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Recent research on the "wisdom of the crowds" suggests one possible solution: aggregating the evaluations of groups of ordinary users to assess the veracity of online information. Using a pre-registered research design, we investigate the effectiveness of crowdsourced fact checking in real time. We select popular news stories in real time and have them evaluated by both ordinary individuals and professional fact checkers within 72 hours of publication. Our data consist of 21,531 individual evaluations across 135 articles published between November 2019 and June 2020. Although we find that machine learning-based models (which use the crowd as input) perform significantly better than simple aggregation rules at identifying false news, our results suggest that neither approach can perform at the level of professional fact checkers.

Abstract
Image analysis studies, big or small, rely on human annotators. Human-generated labels are treated as ground truth for analysis and for the training of machine-learning algorithms. Annotators may label for image content (e.g. whether an image includes a protest) or for reactions to images (e.g. whether it evokes enthusiasm or sadness). In this research note we explore whether partisan identities impact how annotators see or react to the same images and, if so, how this affects results reported from common analyses. Using nearly 7,500 images from left-leaning social movements on Twitter, we find that partisans can disagree about what an image depicts and that they report very different emotional reactions to the same images. We also find significant differences in content and reactions between male and female annotators. Finally, we demonstrate why these systematic labeling differences matter by estimating the effects of content and reactions on retweets. We find that results reported from these models vary based on whose labels are included in the analysis.


How Language Shapes Belief in Misinformation: A Study Among Multilingual Speakers in Ukraine
(with Aaron Erlich, Jonathan Nagler, and Joshua Tucker)
Working Paper

An Automatic Framework to Continuously Monitor Multi-Platform Information Spread
(with Zhouhan Chen, Jen Rosiere Reynolds, Juliana Freire, Joshua Tucker, Jonathan Nagler, and Richard Bonneau)
Published at the Proceedings of the Workshop on Misinformation Integrity in Social Networks 2021 co-located with the 30th Web Conference (TheWebConf 2021)
Article
Research Tool


Abstract
Our cumulative knowledge about belief in misinformation and true news predominantly comes from surveying Americans about news (true and false) written in English from American media sources. However, the global media environment is complexly multilingual. Half of the global population uses two or more languages or dialects in their daily life and, therefore, likely consumes media, including misinformation, in multiple languages. As more multilingual speakers in a single media market consume news in different languages from different sources, it is imperative that we develop a more comprehensive understanding of how news consumers perceive news in different languages. We test a new theory about the proficiency and source effects of language by randomly assigning respondents in Ukraine to evaluate false and true news stories in either their dominant or less proficient language (Russian or Ukrainian) in the period when individuals are most likely to consume this news (directly after the publication of an article). Using this survey instrument, we answer two main research questions: (1) In areas in which most news is reported in two languages, are multilingual individuals less skeptical of misinformation produced in their less proficient language? (2) Does this proficiency effect disappear, or even reverse, if one's less proficient language is associated with a foreign disinformation campaign?

Abstract
Identifying and tracking the proliferation of misinformation, or fake news, poses unique challenges to academic researchers and online social networking platforms. Fake news increasingly traverses multiple platforms, posted on one platform and then re-shared on another, making it difficult to manually track the spread of individual messages. Also, the prevalence of fake news cannot be measured by a single indicator, but requires an ensemble of metrics that quantify information spread along multiple dimensions. To address these issues, we propose a framework, Information Tracer, that can (1) track the spread of news URLs over multiple platforms, (2) generate customizable metrics, and (3) enable investigators to compare, calibrate, and identify possible fake news stories. We implement a system that tracks URLs over Twitter, Facebook, and Reddit and operationalize three impact indicators (Total Interaction, Breakout Scale, and Coefficient of Traffic Manipulation) to quantify news spread patterns. We also demonstrate how our system can discover URLs whose spread patterns deviate from the norm, and how it can be used to coordinate human fact-checking of news domains. Our framework provides a readily usable solution for researchers to trace information across multiple platforms, to experiment with new indicators, and to discover low-quality news URLs in near real-time.


Innovation and Populism
Working Paper

Abstract
Innovation and technological progress have been partly blamed for the wave of populist success over the last decade, but the connection between technological progress and populism remains relatively unclear. In this manuscript, I utilize an exogenous shock to innovation to test the effect of innovation and technological progress on support for populist candidates and political parties in Poland, a country that has witnessed a rapid rise in both populism and technological progress over the last ten years. Drawing on an original geolocated dataset of 496,617 patents and railroad placement and destruction in Poland, I exploit exogenous differences in railroad destruction during the Second World War as an instrument for future innovation and find that innovation actually reduces support for populist political candidates and parties. To explain this effect, this paper tests two theoretical mechanisms through which innovation affects support for populism and finds that innovation likely suppresses support for populism by increasing the mobility of populations, rather than by increasing local human capital. These findings suggest that economic factors, such as innovation, do affect support for populism, but not in the direction previously supposed.