Democratic Processes


  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,450 followers

    "Disinformation campaigns aimed at undermining electoral integrity are expected to play an ever larger role in elections due to the increased availability of generative artificial intelligence (AI) tools that can produce high-quality synthetic text, audio, images and videos and their potential for targeted personalization. As these campaigns become more sophisticated and manipulative, the foreseeable consequence is further erosion of trust in institutions and heightened disintegration of civic integrity, jeopardizing a host of human rights, including electoral rights and the right to freedom of thought. → These developments are occurring at a time when the companies that create the fabric of digital society should be investing heavily in, but instead are dismantling, the “integrity” or “trust and safety” teams that counter these threats. Policy makers must hold AI companies liable for the harms caused or facilitated by their products that could have been reasonably foreseen. They should act quickly to ban using AI to impersonate real people or organizations, and require the use of watermarking or other provenance tools to allow people to differentiate between AI-generated and authentic content." By David Evan Harris and Aaron Shull of the Centre for International Governance Innovation (CIGI).

  • View profile for Adam Fivenson

    Democracy, the information space, climate and technology

    5,948 followers

    Authoritarian regimes — Russia, China, Iran — have dramatically escalated hybrid warfare over the past decade. Every drone incursion, ransomware attack, and effort to manipulate information shares a unified purpose: eroding public trust in democratic institutions, dividing liberal societies, and weakening the international order. And it's working. In a new report "No Time to Lose: Liberal Democracies Can Win the Cognitive and Hybrid War Against Authoritarians" Sasha Havlicek, David Salvo, and Dr. Arndt Freytag von Loringhoven lay out why democratic governments' current response, largely reactive and fragmented across siloed ministries, is insufficient to the scale of the threat. Democracies have moved from ignoring the challenge to naming it. But naming it isn't enough. We must leverage democracies' advantages to retake the strategic initiative. The authors offer three core recommendations: 👉 Establish a counter-hybrid doctrine — one that signals a genuine willingness to respond asymmetrically, develop offensive capabilities, and map adversaries' vulnerabilities, not just defend against them. 👉 Reorganize government to address hybrid threats holistically — appointing lead agencies, creating cross-domain intelligence coordination, and building informal "coalitions of the willing" outside traditional NATO/EU channels when necessary. 👉 Put strategic communications front and center — trumpeting defensive successes, making the human costs of hybrid attacks legible to citizens, and empowering trusted non-governmental voices (mayors, faith leaders, influencers) to build societal resilience. In 2026, the cognitive domain is the decisive battleground. Democracies have the tools to compete — but only if they treat this as the strategic challenge it is. Link to the full report (a quick 10 minute read) in comments and check out the briefer video below that I generated in NotebookLM.
#DisinformationResearch #HybridWarfare #InformationIntegrity #FIMI #DemocraticResilience #StrategicCommunications

  • View profile for Raquel Vazquez Llorente

    AI Policy & Governance♦️Bridging Emerging Tech, Safety & Regulation♦️Before: Deepfakes, Provenance & Digital Evidence in Crises + Armed Conflicts♦️Human Rights Lawyer

    4,344 followers

    This year, through conversations with journalists, activists, creators, policy-designers and technologists about #deepfakes and #generativeAI, we've gained profound insights at WITNESS. In my latest piece for the Council on Foreign Relations, I share learnings based on our work with communities defending democracy at the frontlines, and I outline ways to safeguard #2024elections worldwide. 💡Bonus track 💡 ➡ Examples of how, with appropriate disclosure, generative AI can be used positively in the context of #elections. [Full text on link] The effects of synthetic media on #democracy are a mix of new, old, and borrowed challenges: 🆕 Inconvenient truths can be denied as deepfaked. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. 🧓 AI deepens existing vulnerabilities, bringing a serious threat to principles of inclusivity and fairness that lie at the heart of democratic values. Non-consensual sexual deepfakes can have an additional chilling effect, eroding the diversity and representativeness that are essential for a healthy democracy. ♻ Much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies. As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? A roadmap: 1️⃣ We must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. 
In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation. 2️⃣ We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital. 3️⃣ The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem. Thanks to Kat Duffy and Kyle Fendorf for publishing the piece. https://lnkd.in/ewp8Sper #UDHR75 #UDHR

  • View profile for Matt O’Neill

    Co-Founder/Partner at 5OH Consulting LLC. Board Member at Cognyte.

    6,232 followers

    This week, I had the privilege of testifying before Congress on how combating financial crime is critical to safeguarding our elections and democracy. Foreign actors are exploiting vulnerabilities in our financial systems—using shell companies, crowdfunding platforms, and cryptocurrencies to fund election interference. These tactics aren’t just undermining democracy; they’re part of a broader epidemic of financial crime that cost Americans $12.5 billion in fraud losses last year alone and launders an estimated $2 trillion globally each year. Here’s what’s needed to close the gaps:  ✔️ Expand 314(b) protections: Include crowdfunding sites, FinTech platforms, and crypto exchanges, paired with safe harbor to encourage information sharing.  ✔️ Incentivize collaboration: Offer tax breaks or grants for financial institutions and tech companies that actively participate in sharing intelligence.  ✔️ Close regulatory gaps: Mandate AML and KYC compliance across emerging payment systems.  ✔️ Leverage advanced technology: Use AI to detect fraud while adopting privacy-preserving tools like fully homomorphic encryption. Solving financial crime doesn’t just protect consumers—it disrupts election interference and strengthens our democracy. Together, we can safeguard both our financial systems and electoral integrity. https://lnkd.in/eNxeQk6U 5OH Consulting LLC Christine O'Neill
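The "use AI to detect fraud" point above can be made concrete with a toy example. This is a hedged sketch, not any institution's actual method: it flags transactions whose amount deviates sharply from an account's history using a simple z-score, and the threshold and field layout are illustrative assumptions. Production AML systems use far richer features (counterparty networks, velocity, jurisdiction risk) and, increasingly, privacy-preserving computation of the kind the post mentions.

```python
import statistics

def flag_outliers(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` population
    standard deviations from the mean of the account's history."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Seven routine payments and one anomalous 9,800 transfer:
history = [120, 95, 110, 130, 105, 9_800, 115, 100]
print(flag_outliers(history))  # → [5]
```

Flagging is only the first step; the post's argument is that the flagged signal must then be shareable across institutions (the 314(b) point) for it to disrupt coordinated interference financing.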

  • View profile for Jody Freeman

    Professor of Law at Harvard Law School, writing in my individual capacity

    11,327 followers

    ☀️ Practical, concrete things you can do to help protect elections in 2026 through action or donation. This is nonpartisan. ☀️ Please read, share, act, give, do. 👇 "Work the polls. In 2024, nearly half of the country’s election precincts said they struggled to recruit poll workers. Shortages lead to longer voting lines and overworked election administrators. We particularly encourage young and middle-aged people to sign up: In the last three general elections, fewer than a quarter of volunteers were 40 or younger. (In many states, you can become a poll worker at 16 or 17.) The positions, which are nonpartisan, are paid. "Watch the polls. In most states, political parties appoint poll watchers who observe elections to ensure fairness. In some places, nonpartisan groups can also select poll watchers. To find out how to volunteer, start by contacting the Democratic or Republican committee in your county. "Don’t spread dubious information. It’s surprisingly easy for misleading stories to travel through trusted friends or relatives. Heather Gerken, president of the Ford Foundation, notes that influential disinformation often arrives via a well-meaning peer rather than a random bot. People from both political sides are susceptible to this — whether it was Democratic conspiracy theories in 2004 about fraud in Ohio or the recent Republican conspiracy theories about 2020. Double-checking information before hitting “share” can keep election conversations grounded in reality. "You can also support organizations that are helping to protect election officials and safeguard the process. All the ones we recommend here are nonpartisan. "The Election Official Legal Defense Network pairs election officials with pro bono attorneys who can advise them on how to respond to threats and lawsuits, which have increased in recent years. The network was founded in 2021 by a former White House lawyer for Barack Obama and a former election lawyer for George W. Bush.
"The Campaign Legal Center, founded in 2002 by a Republican lawyer who served as chairman of the Federal Election Commission, works to ensure that election rules remain fair. The group is fighting the Trump administration’s demands for voter data and an executive order that would force states to change voter ID requirements and ballot deadlines. There will probably be more litigation ahead of the midterms, especially after a recent Supreme Court decision made it easier for any candidate to challenge election rules. "The Carter Center, founded in 1982 by Jimmy Carter, has brought its global election monitoring program to the United States. This year, the Carter Center plans to have nonpartisan observers watch elections in Georgia, Michigan, Montana, Nevada and New Mexico. The center will also host civic education events and offer resources for voters and election officials across the country." 👆Compiled by the NYT. https://lnkd.in/gqUqKp7q

  • Disinformation is a "wicked problem"—complex, multi-faceted, and challenging to counter without risking unintended consequences. Tackling it with a “do no harm” policy approach requires nuanced, adaptable strategies that respect freedom of expression and reinforce the foundations of democratic governance. In my mid-career Master’s in Public Policy at Princeton School of Public and International Affairs I've encountered this excellent Carnegie Endowment for International Peace policy guide. It offers actionable, balanced approaches based on evidence and case studies that can truly boost policy approaches to counter disinformation. 💡 Key strategies include: Empowering Local Journalism: When local news sources disappear, disinformation spreads like wildfire. Strengthening local journalism revives civic trust, keeps communities informed, and builds a first line of defense against disinformation. #DemocracyDiesInDarkness Building Media Literacy: Teaching critical media skills across communities and schools equips individuals to spot manipulation and build resilience against false information. Prioritizing Transparency with Fact-Checking: Going beyond labels, fact-checking that promotes transparency enables audiences to make informed choices, fostering trust without policing beliefs. Adjusting Algorithms & Limiting Microtargeting: Creating healthier online spaces by limiting microtargeted ads and rethinking algorithms reduces echo chambers while respecting autonomy. Counter-Messaging with Local Voices: Developing counter-messaging strategies that engage trusted community voices enables us to challenge false narratives effectively and authentically. These approaches are essential for defending open dialogue, strengthening governance, and supporting sustainable development. It's all hands on deck! 
https://lnkd.in/egKKmAqh 🌐 #Disinformation #DoNoHarm #LocalJournalism #FreedomOfExpression #PublicPolicy #CivicTrust cc Melissa Fleming Charlotte Scaddan Rosemary Kalapurakal Alice Harding Shackelford Roberto Valent Allegra Baiocchi (she/her/ella) Danilo Mora Carmen Lucia Morales Liliana Garavito George Gray Molina Marcos Neto Kersten Jauer

  • View profile for Danny Pehar

    Inspire Action, Drive Change, Secure Futures

    5,242 followers

    🌐🔒 Protecting Democracy in the Digital Age 🔒🌐 The recent alert from Canada's cyber espionage agency, the Communications Security Establishment (CSE), serves as a stark reminder of the evolving landscape of election interference. 🚀🗳️ In a world increasingly reliant on technology, bad actors are planning to leverage artificial intelligence tools to manipulate public opinion during the next federal election. 🤖📢 Here are a few key takeaways from the CSE's report and some tips on how to stay vigilant: 1️⃣ Deepfake Dangers: The CSE warns that foreign adversaries and hacktivists are likely to employ generative AI to create deceptive deepfake videos and images featuring politicians and government officials. These convincing fakes can sway public sentiment and erode trust. 💔 2️⃣ Volume Challenge: With the sheer volume of deepfake content expected to flood the digital space, identifying and mitigating these threats will be a Herculean task for authorities. This is a wake-up call for everyone to be cautious consumers of online information. 🧐 3️⃣ Critical Thinking: To defend against misinformation, we must cultivate critical thinking skills. Verify the sources of information, cross-check facts, and remain skeptical of sensational content. 🤨🔍 4️⃣ Media Literacy: Promoting media literacy is essential. Educate yourself and others about the technology behind deepfakes and their potential consequences. Knowledge is our best defense. 📚💪 5️⃣ Report Suspicious Content: If you come across suspicious content or suspect deepfake manipulation, report it to relevant authorities. Vigilance from citizens is a vital part of safeguarding our democracy. 🚨👥 In this digital age, we all have a role to play in preserving the integrity of our elections. Let's stay informed, vigilant, and united against the misuse of technology to undermine democracy. 🗳️🇨🇦 #ElectionIntegrity #CyberSecurity #Deepfakes #cybersecurityawareness #AI

  • View profile for Eyvind Lyberth Nielsen

    Care worker by profession | Geopolitical writer by vocation | U.S., NATO & Arctic systems

    2,922 followers

    The rise of right-wing populist movements across Europe, mirroring the MAGA movement in the United States, presents a critical challenge to liberal democracies. Leaders like Giorgia Meloni, Marine Le Pen, AfD, Vox, and the Sweden Democrats exploit nationalism, anti-immigration sentiments, and EU skepticism to undermine democratic institutions. The unchecked influence of populist figures and tech moguls, particularly Donald Trump and Elon Musk, amplifies extremist narratives and erodes public trust in democratic systems. This piece explores how their strategies destabilize societies and outlines proactive regulatory measures to protect democracy. Through stringent digital platform regulation, counter-disinformation campaigns, democratic tech innovation, economic leverage, global alliances, and empowering civil society, democracies can neutralize these threats and reinforce resilience. #DefendDemocracy #RegulateTech #FightPopulism #DigitalAccountability #GlobalUnity

  • View profile for G Craig Vachon

    Founder (and Student)

    5,997 followers

    The growth and exploitation of (so-called) “contextual credibility” enables the creation of "alternative realities" and undermines factual discourse. And it is creeping into our business world right now. (Let me tell you about an absurd pitch I just witnessed). To shift back towards credible interactions, we must focus on several key areas. Critical Thinking: We must invest in comprehensive media literacy programs to teach critical evaluation of information, identification of biases, and recognition of manipulation tactics (to all ages and demographics). It's vital to promote source verification by encouraging the use of fact-checking tools and cross-referencing information from reputable sources. Journalistic Standards: Supporting independent investigative journalism that holds power accountable is crucial. We need to advocate for greater transparency in media ownership to reveal potential biases and establish mechanisms to hold misbehaving media outlets accountable for misinformation. Evidence-Based Reasoning: Increasing public understanding of the scientific method and evidence-based reasoning is essential. Fostering open dialogue where diverse perspectives are engaged and evidence-based arguments are promoted is vital. We must also develop strategies to combat misinformation on social media, including fact-checking and user education. (I propose something like eBay’s credibility tool.) Institutions and Legal Frameworks: Protecting the independence of the judiciary is paramount. We should explore legal frameworks addressing harmful misinformation while safeguarding free speech, focusing on laws that target deliberate dissemination of false information. Strengthening freedom of information laws and promoting government transparency is additionally necessary. Critical Thinking in AI Development: Ensuring transparent AI development, preventing AI from spreading misinformation, and developing AI tools for fact-checking and source validation are critical. 
Training AI/LLMs on garbage misinformation will only produce equally corrupt results. Combating misinformation requires a multi-faceted approach and a societal shift towards valuing evidence-based reasoning to protect the integrity and progress of humanity.
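The "credibility tool" the author proposes, in the spirit of eBay's feedback system, could for example rank sources with the Wilson score lower bound: a standard way to score items by positive/total feedback that penalizes thin track records. The scoring choice, function name, and numbers below are my assumptions for illustration, not the author's design.

```python
import math

def credibility(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson 95% confidence interval for a source's
    'true' rate of verified-accurate reports; 0.0 with no track record."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    spread = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - spread) / denom

# A long, mostly accurate track record outranks a short perfect one:
print(credibility(90, 100))  # 90 of 100 reports verified
print(credibility(9, 10))    # 9 of 10 verified (same ratio, less evidence)
print(credibility(2, 2))     # perfect but tiny sample scores lowest
```

The design choice matters for exactly the reason the post gives: a naive ratio would let a brand-new outlet with two "accurate" posts outrank an established one, whereas the confidence-interval bound makes credibility something that must be accumulated.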
