Catalog: Platforms, Algorithms, and Policy
How bad actors use social media to spread misinformation, and what can be done about it.
This page is one part of the Prism Anti-Misinformation Resources Catalog. See the Table of Contents to navigate to other categories of resources.
Information Problems Associated with Social Media
Post Post-Broadcast Democracy? News Exposure in the Age of Online Intermediaries (Sebastian Stier, Frank Mangold, Michael Scharkow, and Johannes Breuer via American Political Science Review)
This study combines the web browsing histories and survey responses of more than 7,000 participants from six major democracies to show that despite generally low levels of news use, using online intermediaries fosters exposure to nonpolitical and political news across countries and personal characteristics.
The spread of true and false news online (Soroush Vosoughi, Deb Roy, and Sinan Aral via Science)
Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. False news spreads more than the truth because humans, not robots, are more likely to spread it.
Comparing information diffusion mechanisms by matching on cascade size (Jonas L. Juul and Johan Ugander via PNAS)
Do some types of information spread faster, broader, or further than others? We demonstrate the essentiality of controlling for cascade sizes when studying structural differences between collections of cascades. We find that for false- and true-news cascades, the reported structural differences can almost entirely be explained by false-news cascades being larger. For videos, images, news, and petitions, structural differences persist when controlling for size.
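As a rough illustration of the matching idea described in this entry, the sketch below compares the average depth of two sets of hypothetical cascades, first without and then with size matching. The data, field names, and sampling procedure are illustrative assumptions, not the authors' code or data.

```python
import random
from collections import defaultdict
from statistics import mean

random.seed(42)

# Hypothetical cascades: each has a size (number of reshares) and a depth.
false_cascades = [{"size": random.randint(1, 500), "depth": random.randint(1, 12)}
                  for _ in range(2000)]
true_cascades = [{"size": random.randint(1, 200), "depth": random.randint(1, 8)}
                 for _ in range(2000)]

def naive_depth_gap(group_a, group_b):
    """Compare mean depth without controlling for cascade size."""
    return mean(c["depth"] for c in group_a) - mean(c["depth"] for c in group_b)

def size_matched_depth_gap(group_a, group_b):
    """Compare depth only between cascades of the same size."""
    depths_by_size = defaultdict(list)
    for c in group_b:
        depths_by_size[c["size"]].append(c["depth"])
    gaps = [c["depth"] - random.choice(depths_by_size[c["size"]])
            for c in group_a if depths_by_size[c["size"]]]
    return mean(gaps) if gaps else float("nan")

print("unmatched depth gap:   ", round(naive_depth_gap(false_cascades, true_cascades), 2))
print("size-matched depth gap:", round(size_matched_depth_gap(false_cascades, true_cascades), 2))
```

If the size-matched gap is much smaller than the unmatched gap, the apparent structural difference is largely a byproduct of one group simply having larger cascades.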
Beware online "filter bubbles" (Eli Pariser via TED-Ed 2011)
As web companies strive to tailor their services (including news and search results) to our personal tastes, there's a dangerous unintended consequence: We get trapped in a "filter bubble" and don't get exposed to information that could challenge or broaden our worldview.
Video: How social media filter bubbles work (CNN Business)
Ever feel like everyone on the internet thinks just like you do? It's likely because of a phenomenon called filter bubbles.
Neutral bots probe political bias on social media (Wen Chen, Diogo Pacheco, Kai-Cheng Yang & Filippo Menczer via Nature Communications)
We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their early connections.
Recommended Reading: Amazon’s algorithms, conspiracy theories and extremist literature (Elise Thomas via Institute for Strategic Dialogue)
The problems with algorithmic recommendation extend far beyond the social media platforms. At the core of this issue is the failure to consider what a system designed to upsell customers on fitness equipment or gardening tools would do when unleashed on products espousing conspiracy theories, disinformation or extreme views.
Ecosystem or Echo-System? Exploring Content Sharing across Alternative Media Domains (Kate Starbird, Ahmer Arif, Tom Wilson, Katherine Van Koevering, Katya Yefimova, and Daniel Scarnecchia via University of Washington)
Demonstrates the role of alternative newswire-like services in providing content for alternative media websites.
Why People Share Conspiracy Theories Even When They Know They Are Untrue (Eugen Dimant Ph.D. via Psychology Today)
Many people are willing to make tradeoffs between sharing accurate information and sharing information that will generate more social engagement. People are sensitive to the social feedback they receive on social platforms. Positive feedback for sharing conspiracy theories powerfully influences what people share subsequently.
“If This account is True, It is Most Enormously Wonderful”: Interestingness-If-True and the Sharing of True and False News (Sacha Altay, Emma de Araujo, and Hugo Mercier via Digital Journalism)
Participants were more willing to share news they found more interesting-if-true, as well as news they deemed more accurate. They deemed fake news less accurate but more interesting-if-true than true news, and were more likely to share true news than fake news.
“Interesting if true”: A factor that helps explain why people share misinformation (Nieman Lab)
A new study in Digital Journalism explores that hypothetical by introducing this concept of interestingness-if-true — the quality of how interesting a piece of news would be if it were true — and testing how it might be connected to other factors (such as the perceived accuracy of a news item) that help explain why people might share news online, true or otherwise.
Negative news dominates fast and slow brain responses and social judgments even after source credibility evaluation (Julia Baum and Rasha Abdel Rahman via NeuroImage)
Electrophysiological indexes of fast emotional and arousal-related brain responses, as well as correlates of slow evaluative processing were enhanced for persons associated with positive headline contents from trusted sources, but not when positive headlines stemmed from distrusted sources. In contrast, negative headlines dominated fast and slow brain responses unaffected by explicit source credibility evaluations.
Social Motives for Sharing Conspiracy Theories (Zhiying (Bella) Ren, Eugen Dimant, and Maurice E. Schweitzer via SSRN)
Recent work suggests that people share misinformation because they are inattentive. Across three preregistered studies (total N=1,560 Prolific workers), we show that people also knowingly share misinformation to advance social motives. We find that when making content sharing decisions, people make calculated tradeoffs between sharing accurate information and sharing information that generates more social engagement.
Birds of a feather are persuaded together: Perceived source credibility mediates the effect of political bias on misinformation susceptibility (Cecilie Steenbuch Traberg and Sander van der Linden via Personality and Individual Differences)
Source credibility mediated the effect of ideology on misinformation judgements. Political source similarity increased misinformation susceptibility. Political source incongruence increased resistance to believing facts. Liberals' news judgements were more affected by source slant than conservatives'. Both sides of the spectrum judged politically similar sources to be less slanted.
Why Don’t We Learn from Social Media? Studying Effects of and Mechanisms behind Social Media News Use on General Surveillance Political Knowledge (Patrick F. A. van Erkel and Peter Van Aelst via Political Communication)
Unlike following news via traditional media channels, citizens do not gain more political knowledge from following news on social media. There is evidence of a negative association between following the news on Facebook and political knowledge. This lack of learning is not due to a narrow, personalized news diet; rather, following news via social media increases feelings of information overload, which decreases what people actually learn, especially for citizens who combine social media news with other news sources.
The Web Centipede: Understanding How Web Communities Influence Each Other Through the Lens of Mainstream and Alternative News Sources (Savvas Zannettou, Tristan Caulfield, Emiliano De Cristofaro, Nicolas Kourtellis, Ilias Leontiadis, Michael Sirivianos, Gianluca Stringhini, and Jeremy Blackburn)
A study on how mainstream and alternative news flows between Twitter, Reddit, and 4chan. Alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that “fringe” communities often succeed in spreading alternative news to mainstream social networks and the greater Web.
How “engagement” makes you vulnerable to manipulation and misinformation on social media (Nieman Lab)
“The heart of the matter is the distinction between provoking a response and providing content people want.”
The Facebook Files, a Podcast Series (The Wall Street Journal)
This series offers new details about Facebook’s algorithm, how criminals use the platform for human trafficking, and how Instagram affects mental health. It also reveals that what Facebook has told the public often isn’t the full story.
Interactions with Potential Mis/Disinformation URLs Among U.S. Users on Facebook, 2017-2019 (Aydan Bailey, Theo Gregersen, and Franziska Roesner via University of Washington)
Facebook posts containing potential and known mis/disinformation URLs drew substantial user engagement. Older and more politically conservative U.S. Facebook users were more likely to be exposed to (and ultimately re-share) potential mis/disinformation, but those users who were exposed were roughly equally likely to click regardless of demographics.
Clickbait is Unreasonably Effective (Veritasium via YouTube)
The title and thumbnail play a huge role in a video's success or failure on YouTube.
How social learning amplifies moral outrage expression in online social networks (William J. Brady, Killian McLoughlin, Tuan N. Doan, and Molly J. Crockett via Science Advances)
Positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. Users conform their outrage expressions to the expressive norms of their social networks, suggesting norm learning also guides online outrage expressions. Norm learning overshadows reinforcement learning when normative information is readily observable: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage.
Hearing on “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation" (U.S. House of Representatives)
The Subcommittee on Communications and Technology and the Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce held a joint hearing on Thursday, March 25, 2021, at 12 p.m., via Cisco Webex.
Congressional Testimony: Dr. Joan Donovan at HPSCI hearing, “Misinformation, Conspiracy Theories, and Infodemics: Challenges and Opportunities for Stopping the Spread Online,” 15 October 2020 (Joan Donovan, PhD, Research Director at Harvard Kennedy School)
EXCERPT: “For years, tech companies argued there is no reason to stop the viral spread of misinformation and conspiracies because everyone has a right to share their own beliefs. Data suggests the public disagrees, as the majority of users surveyed claim the platforms are not doing enough to fight abuse and disinformation on their platforms.”
Congressional Testimony: Dr. Joan Donovan at Energy and Commerce Committee hearing, “Americans at Risk: Manipulation and Deception in the Digital Age,” 5 December 2019 (Joan Donovan, PhD, Research Director at Harvard Kennedy School)
EXCERPT: “Individuals and groups can quickly weaponize social media to cause others financial and physical injury… Specific features of online communication technologies need regulatory guardrails to prevent them from being used for manipulative purposes.”
Overcoming Obstacles to Exposing Disinformation Actors (EU Disinfo Lab)
Black-box decision-making prevents researchers from scrutinising platform activity. Inconsistencies prevail in how the largest platforms responded to our findings on disinformation networks, ranging from no response to partial action taken behind closed doors. Unequal enforcement continues across the EU based on language and region. Policy infringements go unanswered.
_______________
Bad Algorithms, Bad Actors, and Bad Ads
Hoaxy (Indiana University)
Visualize the spread of information on Twitter: live-search any Twitter content, search for Twitter links to low-credibility and fact-checking sources, and export the results as a CSV or JSON file containing tweet information.
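A minimal sketch of how an exported Hoaxy dataset might be analyzed. The file name (hoaxy_export.csv) and column names (from_user_id, tweet_type) are assumptions for illustration; the actual export schema may differ.

```python
import csv
from collections import Counter

# Hypothetical export file and column names -- check Hoaxy's actual schema.
with open("hoaxy_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Count which accounts are retweeted most often, a rough proxy for top spreaders.
top_spreaders = Counter(r["from_user_id"] for r in rows if r.get("tweet_type") == "retweet")
for user_id, count in top_spreaders.most_common(10):
    print(user_id, count)
```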
A Systematic Review on Fake News Themes Reported in Literature (Marié Hattingh, Machdel Matthee, Hanlie Smuts, Ilias Pappas, Yogesh K. Dwivedi, and Matti Mäntymäki via Responsible Design, Implementation and Use of Information and Communication Technology)
This literature review identifies possible reasons behind the spread of fake news: why individuals tend to share false information, and how fake news might be detected before it spreads.
Metrics & Transparency: Data and Datasets to Track Harms, Design, and Process on Social Media Platforms (Integrity Institute)
This guide explains a comprehensive set of transparency requirements to enable the public to understand the scale and cause of harms occurring on social media platforms and to validate that social media companies are using best practices in responsibly designing and building their platforms. It also provides the required baseline understanding of how content is distributed on platforms.
How Misinformation ‘Superspreaders’ Seed False Election Theories (New York Times)
Researchers have found that a small group of social media accounts is responsible for a disproportionate share of the false posts about voter fraud.
Bots in the Twittersphere (Pew Research Center)
An estimated two-thirds of tweeted links to popular websites are posted by automated accounts – not human beings.
Botometer (Indiana University)
Checks the activity of a Twitter account and gives it a score. Higher scores mean more bot-like activity.
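A minimal usage sketch based on the botometer-python package; the credentials below are placeholders, and the response fields may differ across API versions.

```python
import botometer  # pip install botometer

# Placeholder credentials: Botometer requires a RapidAPI key and Twitter app credentials.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account; higher scores indicate more bot-like activity.
result = bom.check_account("@example_account")
print(result.get("display_scores", result))
```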
The Media Manipulation Casebook (Joan Donovan, PhD)
Using the Life Cycle of Media Manipulation, each case study in this casebook features a chronological description of a media manipulation event, which is filtered along specific variables such as tactics, targets, mitigation, outcomes, and keywords.
Hearing on “Fanning the Flames: Disinformation and Extremism in the Media" (U.S. House of Representatives)
The Subcommittee on Communications and Technology of the Committee on Energy and Commerce held a hearing on Wednesday, February 24, 2021, at 12:30 p.m., via Cisco WebEx.
The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation (Computational Propaganda Project)
This inventory highlights the ways in which government agencies and political parties have used social media to spread political propaganda, pollute the digital information ecosystem, and suppress freedom of speech and freedom of the press. While the affordances of social media can serve to enhance the scale, scope, and precision of disinformation (Bradshaw and Howard 2018b), it is important to recognize that many of the issues at the heart of computational propaganda – polarization, distrust or the decline of democracy – have existed long before social media and even the Internet itself.
Political Advertising on Platforms in the United States: A Brief Primer (Bridget Barrett, Daniel Kreiss, Ashley Fox, and Tori Ekstrand via University of North Carolina at Chapel Hill)
This report documents the policies and advertising targeting capabilities of major, easily-accessible digital advertising platforms across over a dozen categories. It outlines five key takeaways and details what they mean for future US elections.
Citizen Browser (The Markup)
Coverage of social media, platforms, and advertising.
Platformer newsletter (via Substack)
News at the intersection of Silicon Valley and democracy.
Political advertising in the United States (Google)
Election ads in this report feature a current officeholder or candidate for an elected federal or state office, federal or state political party, or state ballot measure, initiative, or proposition that qualifies for the ballot in a state. The report also includes all ads from advertisers that completed the express notification process related to California candidates for elected office or California ballot measures.
Ad Library (Facebook)
A comprehensive, searchable collection of all ads currently running across Facebook apps and services, including Instagram.
Ad Library Report (Facebook)
Explore, filter, and download data for ads about social issues, elections, or politics. See overall spending totals, spending by specific advertisers, and spend data by geographic location.
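A minimal sketch of summarizing a downloaded Ad Library Report by advertiser. The file name and column headers used here are assumptions for illustration; verify them against the header row of the CSV you actually download.

```python
import csv
from collections import defaultdict

# Assumed file name and column headers -- verify against the downloaded report.
ADVERTISER_COL = "Page Name"
SPEND_COL = "Amount Spent (USD)"

totals = defaultdict(float)
with open("ad_library_report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        try:
            totals[row[ADVERTISER_COL]] += float(row[SPEND_COL].replace(",", ""))
        except (KeyError, ValueError):
            continue  # skip rows with missing or non-numeric spend values

# Print the ten largest spenders in the file.
for page, spend in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{page}: ${spend:,.0f}")
```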
NYU Ad Observatory (New York University)
[As of September 2021: “Facebook has effectively stalled the Ad Observatory project by suspending Facebook accounts of Cybersecurity for Democracy team members. Lawmakers, regulators, and civil society groups are stepping up to support this project.”]
Ads transparency (Twitter)
An archived version of Twitter’s Ads Transparency Center data covering all Political ads that ran between May 24, 2018, and November 22, 2019, and Issue ads that ran between August 8, 2018, and November 22, 2019.
Snap Political Ads Library (Snapchat)
Gives the public an opportunity to find out details about all political and advocacy advertising running on the Snap platform.
_______________
Countering Misinformation Online
Disrupting Disinformation (Center for Homeland Defense and Security)
In this webinar, technology and communications experts discuss the latest research in disinformation and actions local, state and federal leaders can take to counter the influence and impact of online and off-line disinformation.
This webinar is held in partnership with the Association of State and Territorial Health Officials (ASTHO), CHDS Alumni Association, International Association of Emergency Managers (IAEM), and the Naval Postgraduate School Alumni Association and Foundation.
Online Account Terminations/Content Removals and the Benefits of Internet Services Enforcing Their House Rules (Eric Goldman and Jess Miers via Journal of Free Speech Law)
This article reviews a dataset of U.S. judicial opinions involving Internet services’ user account terminations and content removals. The Internet services have prevailed in these lawsuits, which confirms their legal freedom to enforce their private editorial policies (“house rules”). Numerous regulators have proposed changing the legal status quo and restricting that editorial freedom. Instead of promoting free speech, that legal revision would counterproductively reduce the number of voices who get to speak online. As a result, laws imposing “must-carry” requirements on Internet services will exacerbate the problem they purport to solve.
Digital Policy Lab ’20 – Companion Papers (Institute for Strategic Dialogue)
Policy briefs and discussion papers for the first session of the DPL: Transparency, Data Access and Online Harms; National & International Models for Online Regulation; The EU Digital Services Act & the UK Online Safety Bill; The Liberal Democratic Internet – Five Models for a Digital Future; and Future Considerations for Online Regulation. The DPL is a new inter-governmental working group focused on charting the regulatory and policy path forward to prevent and counter disinformation, hate speech, extremism and terrorism online.
Cross-platform Information Operations: Mobilizing Narratives and Building Resilience Through Both ‘Big’ and ‘Alt’ Tech (Kate Starbird and Tom Wilson via University of Washington)
False content is produced, stored, and integrated into the Twitter conversation from networks of social media platforms. Underpinning these efforts is the work of resilience-building: the use of alternative (non-mainstream) platforms to counter perceived threats of ‘censorship’ by large, established social media platforms.
Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter (Shagun Jhaver, Christian Boylston, Diyi Yang, and Amy Bruckman)
We found that deplatforming significantly reduced the number of conversations about three individuals with sizable audiences on Twitter. The overall activity and toxicity levels of supporters declined after deplatforming. We contribute a methodological framework to systematically examine the effectiveness of moderation interventions and discuss broader implications of using deplatforming as a moderation strategy.
Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels (Manoel Horta Ribeiro, Shagun Jhaver, Savvas Zannettou, Jeremy Blackburn, Gianluca Stringhini, Emiliano de Cristofaro, and Robert West)
We analyze data from two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community.
Content moderation avoidance strategies (Rachel E. Moran, Kolina Koltai, Izzi Grasso, Joseph Schafer, and Connor Klentschy via University of Washington)
Vaccine-opposed communities circumvent the community guidelines and moderation features of social media platforms through lexical variation (such as using “V@cc1ne”), covering up potentially rule-triggering images and text, and using ephemeral platform features such as Instagram stories to spread vaccine misinformation. Actions taken by platforms to remove COVID-19 vaccine misinformation fail to counter the range of avoidance strategies vaccine-opposed groups deploy.
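As a rough illustration of why simple keyword filters miss lexical variants such as “V@cc1ne,” the sketch below applies a character-substitution normalization step before matching. The substitution map and matching logic are illustrative only, not any platform’s actual moderation system.

```python
import re

# Illustrative substitution map; real evasion vocabularies are larger and shift constantly.
SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "1": "i", "3": "e", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, undo common character swaps, and strip separators like v.a.c.c.i.n.e."""
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[\.\-_*]", "", text)

posts = [
    "Get the vaccine facts",
    "The V@cc1ne is a hoax",
    "v.a.c.c.i.n.e side effects they hide",
]
naive_hits = [p for p in posts if "vaccine" in p.lower()]
normalized_hits = [p for p in posts if "vaccine" in normalize(p)]
print(len(naive_hits), "posts flagged by naive keyword match")
print(len(normalized_hits), "posts flagged after normalization")
```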
Does the platform matter? Social media and COVID-19 conspiracy theory beliefs in 17 countries (Yannis Theocharis, Ana Cardenal, and Soyeon Jin via New Media & Society)
Twitter has a negative effect on conspiracy beliefs, whereas all other platforms under examination are found to have a positive effect.
The Bipartisan Case for Labeling as a Content Moderation Method: Findings from a National Survey (John Wihbey, Garrett Morrow, Myojung Chung, and Mike Peacey)
This study finds relatively strong, bipartisan support for the basic strategy and general goals of labeling.
Misinformation interventions are common, divisive, and poorly understood (Emily Saltz, Soubhik Barari, Claire Leibowicz, and Claire Wardle via Harvard Kennedy School Misinformation Review)
Most Americans are not well-informed about what kinds of systems, both algorithmic and human, are applying interventions online and attribute errors to biased judgment more than any other cause, across political parties. Support for interventions differs considerably by political party.
Intervening on Trust in Science to Reduce Belief in COVID-19 Misinformation and Increase COVID-19 Preventive Behavioral Intentions: Randomized Controlled Trial (Jon Agley, Yunyu Xiao, Esi E Thompson, Xiwei Chen, and Lilian Golzarri-Arroyo via Journal of Medical Internet Research)
Briefly viewing an infographic about science appeared to cause a small aggregate increase in trust in science, which may have, in turn, reduced the believability of COVID-19 misinformation. The effect sizes were small but commensurate with our 60-second, highly scalable intervention approach.
Fake news game confers psychological resistance against online misinformation (Jon Roozenbeek & Sander van der Linden via Palgrave Communications)
We provide initial evidence that people’s ability to spot and resist misinformation improves after gameplay, irrespective of education, age, political ideology, and cognitive style.
Scaling up fact-checking using the wisdom of crowds (Jennifer Allen, Antonio A. Arechar, Gordon Pennycook, and David G. Rand via Science Advances)
The average ratings of small, politically balanced crowds of laypeople 1) correlate with average fact-checker ratings as well as the fact-checkers’ ratings correlate with each other and 2) predict whether the majority of fact-checkers rated a headline as “true” with high accuracy. Cognitive reflection, political knowledge, and Democratic Party preference are positively related to agreement with fact-checkers, and identifying each headline’s publisher leads to a small increase in agreement with fact-checkers.
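A minimal sketch of the core comparison using synthetic data; the rating scale, crowd size, and noise levels are illustrative assumptions, not the study’s data, and statistics.correlation requires Python 3.10+.

```python
import random
from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

random.seed(0)
N_HEADLINES, CROWD_SIZE, N_FACT_CHECKERS = 200, 20, 3

# Synthetic per-headline "quality", plus noisier layperson and cleaner fact-checker ratings.
quality = [random.uniform(1, 7) for _ in range(N_HEADLINES)]
crowd_avg = [mean(q + random.gauss(0, 1.5) for _ in range(CROWD_SIZE)) for q in quality]
checker_avg = [mean(q + random.gauss(0, 0.5) for _ in range(N_FACT_CHECKERS)) for q in quality]

# How well does the averaged crowd rating track the averaged fact-checker rating?
print("crowd vs. fact-checker correlation:", round(correlation(crowd_avg, checker_avg), 2))
```

Averaging over even a modest, politically balanced crowd washes out individual noise, which is why the aggregated lay ratings can track professional fact-checkers so closely.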
_______________
Strategic Misinformation Policy
The Information Intervention Chain (Mike Caulfield via Hapgood)
There are four places where information interventions can be applied: Moderation/promotion, interface, individual, and social.
The Information Intervention Chain: Interface Layer Example (Mike Caulfield via Hapgood)
The work on each part of the Information Intervention Chain decreases the load on the layers below it, and helps cover some of the errors not caught in the layers above. This example illustrates this dependency.
Citizens Versus the Internet: Confronting Digital Challenges With Cognitive Tools (Anastasia Kozyreva, Stephan Lewandowsky, and Ralph Hertwig via Psychological Science in the Public Interest)
The online landscape holds multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. Benevolent choice architects working with regulators may curb the worst excesses of manipulative choice architectures, yet the strategic advantages, resources, and data remain with commercial players. One way to address some of this imbalance is with interventions that empower Internet users to gain some control over their digital environments, in part by boosting their information literacy and their cognitive resistance to manipulation.