How do we curb false news on social media without suppressing the circulation of true information? In our paper "Curtailing False News, Amplifying Truth" (with Emeric Henry, Théo Marquis, and Ekaterina Zhuravskaya), https://lnkd.in/dUwDwC6M, just accepted at Econometrica, we study which policies actually work and why.

Using large-scale randomized experiments during the 2022 and 2024 U.S. elections, we compare several widely discussed interventions that can slow the propagation of false news on social media:
• fact-checking
• confirmation clicks
• prompts to assess the content's veracity and partisanship
• a reminder about the spread of misinformation on social media

The key finding is actually very simple:
- A content-neutral reminder ("Please think carefully before you repost. Remember that there are a lot of false news circulating on social media.") is the most effective intervention.
- It reduces sharing of false news while preserving (and sometimes even increasing) sharing of true news.
- Fact-checking also reduces the reposting of false news, but at a cost: it discourages the sharing of true information as well.

Why does this happen? We combine the experiment with a structural model of sharing behavior. Our theory includes three motives for sharing:
(i) reputation for credibility (I want to convince my audience that I can distinguish false from true)
(ii) political persuasion (I want to move my audience in the direction of my political views)
(iii) political signalling (I want to communicate my political views to my audience)

Our model helps identify and quantify three mechanisms:
- updating beliefs (about the veracity and partisanship of the content)
- salience of reputation (will I look credible?)
- costs of engagement (frictions; the mental effort of reading, thinking, and reposting)

The evidence points clearly to one dominant channel: the most effective policies are those that make reputation concerns more salient, rather than those that try to "correct" beliefs.
Our analysis produces non-trivial policy implications:
• Light-touch interventions can outperform heavy-handed ones.
• Expensive fact-checking may be less effective than simple nudges and prompts.

We also show that digital literacy and behavioral nudges are complements, not substitutes. At a time when misinformation is a central policy concern, the message is encouraging: we don't necessarily need to restrict content; we can shift behavior with a reasonably simple intervention, somewhat similar to health warnings on cigarette packs or alcoholic beverage bottles.
Using Data to Combat Election Misinformation
Summary
Using data to combat election misinformation involves gathering and analyzing information to identify where and how false claims about elections spread online, then using those insights to counter them with accurate content. This approach combines research, monitoring, and proactive communication to protect public trust in democratic processes.
- Identify vulnerable audiences: Analyze online activity to discover which groups are most exposed to misleading election narratives and where these audiences spend their time.
- Anticipate and address gaps: Use data to spot areas with little reliable election information and develop targeted campaigns to fill those gaps before misinformation takes hold.
- Monitor and adapt: Track how false stories travel and adjust response strategies in real time, including using reminders, prebunking messages, or AI tools to prevent the spread of fake news.
Good news for democracy and countering misinformation: prebunking and credible-source corrections increase election credibility. Check out our new study published in Science Advances: https://lnkd.in/e5CPQ3fW

In this research project, we ran survey experiments with thousands of participants and focused on alleged "electoral fraud" in the United States and Brazil, given the many parallels between the two countries, including the false allegations made by Trump and Bolsonaro when each first sought reelection (2020 and 2022, respectively).

In both Brazil and the United States, credible sources and prebunking corrections increase electoral confidence and correct misperceptions about electoral fraud. In Brazil, prebunking was the most effective approach. These interventions almost always increased confidence in election results, both retrospectively and prospectively.

What now? Our research suggests that factual information educating people about electoral systems (e.g., explaining how they work and how their reliability is assessed) may be the primary way to protect democracy, prevent the spread of false allegations, and correct misperceptions.

Big thanks to the co-authors and study leaders, John Carey, Brian Fogarty, Brendan Nyhan & Jason Reifler. Centre for Media and Journalism Studies (RUG)
Dug deep into Russian IO this week and combed through this today. What makes the Doppelgänger campaign stand out isn't just its scale or ambition; it's the precision-level technical obfuscation and the strategic deployment of AI that make this an inflection point in how disinformation campaigns are designed and executed.

Key Technical Tactics:
- Multi-Stage Redirect Infrastructure: Doppelgänger uses first- and second-stage redirect domains (hosted with bulletproof Russian providers) that mask the final destination of influence content. These layers act as smokescreens for detection tools and make attribution significantly harder.
- Use of Generative AI: Fake news outlets like Election Watch are populated with AI-generated content that mirrors the tone and structure of real news but lacks the nuance of human authorship. The goal isn't just deception; it's scale, speed, and cost-efficiency.
- Thumbnail & Metadata Spoofing: Thumbnails and titles are set via HTML meta tags hosted on ephemeral domains like telegra.ph, obfuscating content previews on social media and further confusing detection heuristics.
- Keitaro TDS Deployment: Doppelgänger tracks engagement and campaign performance using the Keitaro Traffic Distribution System, allowing adaptive iteration and improved targeting. This is a clear sign that it is not a fire-and-forget operation but a persistent, metrics-driven campaign.
- Sockpuppet Networks: Over 2,000 inauthentic social accounts are being used to push content, with names styled to appear Western. These bots primarily engage via replies to hijack visibility in authentic threads, effectively mimicking organic engagement patterns.

The low cost of content production enables campaigns at a scale that was previously manpower-bound. These operations are modular and replicable: with shared infrastructure and common playbooks, the campaign could pivot to other narratives or regions with minimal overhead.

How We Counter It:
- Detonation-Based Detection: Static scans aren't enough. We need dynamic analysis of links and redirects, including JavaScript behavior and metadata manipulation, to trace infrastructure footprints.
- AI-Generated Content Identification: Platforms should integrate AI detection into upload pipelines, flagging content that shows signs of generative authorship or the linguistic repetition typical of LLMs.
- Domain Monitoring + Brand Spoof Alerts: Media organizations must monitor for typosquatted domains and impersonation efforts in real time, using anomaly-based detection rather than keyword filters.
- Narrative Prebunking Campaigns: Strategic comms teams should launch anticipatory counter-messaging, informed by threat intelligence, to inoculate audiences against likely disinformation spikes.

The Doppelgänger operation signals a future where every malign narrative is one prompt away from global reach. If we don't evolve our defense posture with equal agility, we'll find ourselves defending the past while adversaries manipulate the future.
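The metadata-spoofing tactic described above exploits the fact that social platforms build preview cards from a page's Open Graph meta tags (og:title, og:image), which can point at domains unrelated to the page that hosts the article. A minimal, stdlib-only sketch of one detection heuristic follows; the function name and the example domains are illustrative, not part of any real tool, and a production system would need fetching, JavaScript rendering, and redirect tracing on top of this:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class OpenGraphParser(HTMLParser):
    """Collect Open Graph <meta property="og:..."> tags from an HTML page."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property", "")
        if prop.startswith("og:") and "content" in a:
            self.og[prop] = a["content"]

def preview_mismatch(page_url, html):
    """Return og: properties whose content URL is hosted on a different
    domain than the page itself -- one signal of preview spoofing."""
    parser = OpenGraphParser()
    parser.feed(html)
    page_host = urlparse(page_url).netloc
    flags = {}
    for prop, content in parser.og.items():
        host = urlparse(content).netloc
        if host and host != page_host:  # non-URL values have no netloc
            flags[prop] = host
    return flags

# Illustrative example: a page whose preview image lives on an
# ephemeral third-party host (telegra.ph), as in the campaign above.
html = """
<html><head>
<meta property="og:title" content="Election Watch: Breaking Report">
<meta property="og:image" content="https://telegra.ph/file/abc123.jpg">
</head><body></body></html>
"""
print(preview_mismatch("https://example-news.com/story", html))
# {'og:image': 'telegra.ph'}
```

A cross-domain og:image is not proof of abuse on its own (legitimate sites use CDNs), so in practice a checker like this would score the mismatch against reputation data for the third-party host rather than block outright.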
OK – it's a bit of a day when I find myself reading Teen Vogue… and yet here we find the future generation of #journalism. How do we fight disinformation? Get the data to find where audiences are and how they spend their time, then locate the content vacuums that can be filled with accurate information to counter #misinformation narratives. Outstanding piece from Samuel Larreal on seeking out the young, male, #Latinx community where they are – digital gaming and sports spaces – and then developing campaigns to inform and engage that potential bloc.

Step 1) Define the Problem: A critical challenge for a voting demographic in the 2024 #elections. Campaigns micro-targeted subsections of the Latino population in specific geographies on relevant issues, using false or misleading data to draw a link between increased immigration and criminality, with malign narratives claiming that migrants exploit the benefits system (see: https://lnkd.in/gURCs9Gz). Examples of this #Disinformation problem:
1) A YouTube campaign ad in 2020 falsely claiming that the Venezuelan government supported Joe Biden. According to the nonprofit newsroom ProPublica, the video was watched more than 100,000 times in Florida in the nine days leading up to the 2020 presidential election. Venezuelan president Nicolás Maduro didn't endorse any of the candidates, but some voters may have believed what they saw.
2) Narratives that prey on Latino communities regarding:
• inflation (targeted at Argentinians and Venezuelans)
• abortion and reproductive rights (most Latin Americans are Catholic)
• electoral fraud or rigged elections (alluding to past examples in Honduras, Nicaragua, Ecuador, and other countries).

Step 2) Get the Data: Research identified a huge content vacuum, putting the young Latino male gaming population at risk from narratives peddled by the far right. The findings identify patterns in how gamers interact with each other online and point to mobilization steps that help gamers transcend the games they play: communicate in simple, informal, and casual ways, and meet them where they hang out. The Harmony Labs study (https://lnkd.in/g3QmXkpa) showed that young Latino men are less likely to see political content in their digital feeds, pinpointing that information gap.

Step 3) Design the Response: A campaign led by United We Dream Action (https://lnkd.in/gSBNAcdh) to reach Latinx gaming youth where they spend their hours on digital platforms.

A great example for the #InternationalDevelopment and #MediaDevelopment sectors: get the data – identify the target audience – define the problem – find the information gaps – fill those gaps with accurate content. #MediaLiteracy #FakeNews #SocialMedia https://lnkd.in/g_NYpU2y