Combating Disinformation

Explore top LinkedIn content from expert professionals.

  • View profile for Roberta Boscolo
    Roberta Boscolo is an Influencer

    Climate & Energy Leader at WMO | Earthshot Prize Advisor | Board Member | Climate Risks & Energy Transition Expert

    173,823 followers

    🌍 Ten Years After Paris: is the Climate Crisis a Disinformation Crisis?

    In 2015, the world made a historic promise: to keep global warming well below 2°C, and ideally below 1.5°C. We committed to major emission cuts by 2030, and net-zero by 2050. The Paris Agreement marked a new era of global climate cooperation.

    But ten years on, we’re still struggling to cooperate, while the World Meteorological Organization tells us that the Earth’s average temperature exceeded 1.5°C above pre-industrial levels over a 12-month period (Feb 2023–Jan 2024) for the first time. Why?

    🔍 A groundbreaking new study, led by 14 researchers for the International Panel on the Information Environment, reviewed 300 studies from 2015–2025. The findings are alarming: powerful interests – fossil fuel companies, populist parties, even some governments – are systematically spreading misleading narratives to delay climate action.

    🧠 Misinformation isn’t just about denying climate change. It’s now about strategic skepticism – minimizing the threat, casting doubt on science-based solutions, and greenwashing unsustainable practices.

    📺 This disinformation flows through social media, news outlets, corporate reports, and even policy briefings. It targets all of us – but especially policymakers, where it can shape laws and delay critical decisions.

    💡 So what can we do?
    1️⃣ Legislate for transparency and integrity in climate communication.
    2️⃣ Hold greenwashers accountable through legal action.
    3️⃣ Build global coalitions of civil society, science, and public institutions.
    4️⃣ Invest in climate and media literacy for both citizens and leaders.
    5️⃣ Amplify voices from underrepresented regions – like Africa – where more research is urgently needed.

    We must protect not only the planet’s climate, but the integrity of climate information.

    🔗 Read more on how disinformation is undermining climate progress – and what we can do about it: https://lnkd.in/eDN9hKAJ

    🕰️ The window is small. But with truth, science, and collective action, we can still turn the tide.

  • View profile for Dr. Barry Scannell
    Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,869 followers

    Article 52 of the AI Act is a pivotal piece of legislation designed to mitigate the risks associated with synthetic media. Art 52 directly addresses the challenges posed by generative AI systems that produce audio, image, video, or text content, demanding that such outputs be clearly marked as artificially generated or manipulated. This requirement aims to preserve the integrity of digital content and protect against the deceptive uses of AI, particularly deepfakes, which have demonstrated significant potential for harm in political, social, and personal contexts.

    The provision mandates that providers of AI systems ensure the detectability of AI-generated content through marking in a machine-readable format. This obligation extends to ensuring the effectiveness, interoperability, robustness, and reliability of these technical solutions, where technically feasible. For AI systems generating or manipulating deepfake content, there is a requirement to disclose the artificial nature of such content. This transparency obligation extends to content intended to inform the public on matters of public interest, requiring disclosure when text has been artificially generated or manipulated, with exceptions for content that has undergone human review and where editorial responsibility is established.

    Building on the principles outlined in Article 52, initiatives like Adobe’s Content Credentials (CR mark) represent significant advancements in the practical implementation of these provisions. The CR mark, a cornerstone of the Coalition for Content Provenance and Authenticity (C2PA), offers a visual indicator that signals the provenance of digital media. This initiative aims to combat disinformation by enabling users to easily identify and verify content that adheres to technical standards of authenticity. Adobe’s approach allows for a “digital nutrition label” that provides verified information about the content, including the publisher or creator’s details, creation date and location, the tools used (specifically indicating the use of generative AI), and any edits made.

    Meta’s response to the challenge of AI-generated misinformation further illuminates the complexities involved in regulating and labeling synthetic content. Despite announcing a plan to label AI-generated content created using popular generative AI tools and the development of tools to identify invisible markers, Meta’s efforts have been criticised for their limitations. The efficacy of Meta’s watermarking plan is questioned due to the ease with which watermarks can be removed and the reliance on bad actors using tools that comply with watermarking standards.

    The shortcomings of Meta’s approach underscore a broader challenge: the need for more indelible watermarking technologies that can embed authentication information directly into the content’s pixels or data structure, making it considerably more difficult to remove or alter without detection.
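    The machine-readable marking Article 52 demands and the C2PA “digital nutrition label” described above both come down to attaching verifiable provenance metadata to an asset. Below is a minimal Python sketch of that idea, under simplifying assumptions: real C2PA manifests are cryptographically signed JUMBF structures embedded in the file itself, and every field name here is illustrative rather than the actual C2PA schema.

    ```python
    # Minimal sketch of a "digital nutrition label": a provenance record
    # plus a tamper-evidence seal. Real Content Credentials use digital
    # signatures and embed the manifest in the asset; a bare hash is used
    # here only to gesture at the mechanism. All field names are invented.
    import hashlib
    import json
    from datetime import datetime, timezone

    def build_manifest(creator: str, tool: str, used_generative_ai: bool,
                       edits: list[str]) -> dict:
        """Assemble the claims a viewer could display next to the content."""
        return {
            "creator": creator,
            "created_utc": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "generative_ai_used": used_generative_ai,  # the Art. 52-style disclosure
            "edit_history": edits,
        }

    def seal(manifest: dict) -> str:
        """Hash a canonical serialization so later alterations are detectable."""
        payload = json.dumps(manifest, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    manifest = build_manifest(
        creator="Example Newsroom",          # hypothetical publisher
        tool="ExampleImageGen 2.0",          # hypothetical generative tool
        used_generative_ai=True,
        edits=["background blur", "caption overlay"],
    )
    print(json.dumps(manifest, indent=2))
    print("seal:", seal(manifest))
    ```

    Re-hashing the manifest and comparing it against the stored seal reveals any post-hoc edit to the record; signing, as C2PA does, additionally proves who made the claims.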

  • View profile for Claes de Vreese

    University Professor, University of Amsterdam | Director DDC SDU

    9,658 followers

    A few facts and thoughts on fact-checking:
    · Fact-checking is NOT in opposition to free speech and debates. It is a significant contribution to public discourse.
    · Fact-checking works. Research shows how fact-checking can affect public perceptions and contribute to important corrective mechanisms.
    · Fact-checking is often challenged by limited scope and reach, i.e. too few people see important fact-checks – in part because platforms and media prioritize and distribute them insufficiently.
    · Fact-checking is not a panacea. But it is an important instrument in a democracy where facts and correct information are essential.
    · Fact-checking, when done well, is a professional activity with standards, accountability, and procedures; see e.g. the European Fact-Checking Standards Network (EFCSN).
    · Fact-checking has, sadly, been too dependent on big tech funding. This vulnerability is exposed by Meta’s irresponsible shutdown of its fact-checking support programs.
    · Fact-checking cannot be replaced by Community Notes. That is a different instrument, with other qualities, but lacking central features that are the strengths of fact-checking.
    · Fact-checking is an important part of the information ecosystem. Removing it from the Meta universe is a step in the wrong direction and does not contribute to compliance with EU regulations such as the #DSA.

  • View profile for Bob Carver

    CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. | Top Cybersecurity Voice

    52,731 followers

    Cyber Smart from the Start: Defending Finland’s Future in the Classroom

    Finland has long been celebrated for its world-class education system and commitment to digital innovation. But as technology becomes increasingly entwined with everyday life, new challenges are emerging—especially for the next generation. The rise of misinformation, cyberbullying, and online fraud means that teaching traditional subjects is no longer enough. Today’s students must be equipped with the tools to think critically, act safely, and defend themselves in the digital world.

    Disinformation campaigns, particularly from hostile foreign actors like Russia, have become more frequent and more sophisticated. These campaigns are not limited to military or political targets—they affect everyday citizens, manipulating emotions, distorting facts, and undermining democratic values. Finnish students must be taught how to recognize propaganda, question suspicious sources, and resist the temptation to share unverified information.

    But media literacy alone won’t cut it. Our young people also need to understand personal cybersecurity—from using secure passwords and avoiding phishing scams, to managing their online identity and digital footprint.

    By integrating cybersecurity and disinformation awareness into the national curriculum, we can ensure that Finnish students grow up not just smart, but cyber smart—ready to protect themselves, and their country, from the digital threats of today and tomorrow.

    #cybersecurity #education #Finland #CyberHygiene #misinformation #disinformation #PrimarySchool #SecondarySchool #privacy #WhyCantWeDoThatHere #democracy

  • View profile for Alex Edmans
    Alex Edmans is an Influencer

    Professor of Finance, non-executive director, author, TED speaker

    70,803 followers

    Misinformation isn’t just about false facts—it’s also about misleading ones. Even if a fact is 100% true, it can still be unreliable:
    ⚠️ A single anecdote can be paraded as proof.
    ⚠️ An exception can be framed as the rule.
    ⚠️ A correlation can be mistaken for causation.

    In May Contain Lies, I explain how we can protect ourselves from misinformation. Here's how:

    🧠 Step 1: Recognize That You Already Have the Tools
    We don’t need a PhD in statistics to think critically. Whenever a study is posted on LinkedIn that people don't like, there's no shortage of comments on why correlation is not causation, or why the example may be cherry-picked. The real challenge? Ensuring we use the same discernment for a study we do like as for one we don't.

    🔍 Step 2: Beware of Confirmation Bias
    We latch onto whatever interpretation of the facts confirms our view of the world. Take breastfeeding and IQ:
    🍼 Studies show that breastfed babies often have higher IQs later in life.
    🔬 One interpretation? Breastfeeding causes higher IQ. This makes sense: breastmilk is natural; formula is an ultra-processed food (UPF).
    🤔 A more critical perspective? Family support might be the real factor—since breastfeeding is easier with strong family backing.

    ⚫⚪ Step 3: Avoid Black-and-White Thinking
    The world isn’t split into “always good” or “always bad.”
    🥑 Fat sounds bad—because it’s called "fat."
    💪 Protein sounds good—because it “builds muscle.”
    🍞 Carbs? Neutral. But diets like Atkins claimed they were the enemy.
    🔹 Reality? Science suggests that carbs are healthy when they make up 30–50% of daily calories.
    🔹 But tracking exact percentages is tough—so simple rules gain traction, even when they’re scientifically weak.
    With black-and-white thinking, to sell an idea, you don’t need to be right. You just need to be extreme.

    ♻️ Step 4: Flip the Narrative
    If a claim supports your beliefs, imagine the opposite claim instead. Example:
    🔴 If a study said breastfeeding lowers IQ, how would you try to debunk it? You'd appeal to alternative explanations: perhaps poorer families breastfeed (as they can't afford formula), and it's poverty, not breastfeeding, that causes the lower IQ.
    🟢 Now ask if the same alternative explanation applies even though the results are in the direction you want. Might family background explain why breastfed babies have higher IQs?
    In short, challenge the evidence—not just the conclusion (see the sketch below for this logic in action).

    🤔 Step 5: Embrace Healthy Skepticism
    Questioning flawed research isn’t just intellectual nitpicking—it’s freedom.
    ✅ Parents can make feeding choices without guilt.
    ✅ People can eat carbs without fear-based restrictions.
    ✅ We all gain the confidence to navigate the world with clarity.

    We won't get it right 100% of the time—and in the book I explain many times I got it wrong. But the goal is not to be perfect, only better.
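    Steps 2 and 4 both hinge on confounding: a third factor that moves with the “treatment” and independently drives the outcome. The toy Python simulation below (a sketch with invented numbers, not data from the breastfeeding studies) shows how a confounder such as family support can manufacture a raw IQ gap that largely disappears once you compare families with similar support:

    ```python
    # Toy confounding demo: "support" raises both the chance of breastfeeding
    # and IQ; breastfeeding itself has zero direct effect. The raw comparison
    # still shows breastfed children scoring higher. All numbers are invented.
    import random
    from statistics import mean

    random.seed(42)

    def simulate(n=50_000):
        rows = []
        for _ in range(n):
            support = random.random()               # latent family support, 0..1
            breastfed = random.random() < support   # more support -> more breastfeeding
            iq = 100 + 10 * support + random.gauss(0, 5)  # only support moves IQ
            rows.append((support, breastfed, iq))
        return rows

    rows = simulate()
    bf = [iq for _, b, iq in rows if b]
    non = [iq for _, b, iq in rows if not b]
    print(f"raw gap: {mean(bf) - mean(non):+.2f} IQ points")  # ~ +3.3, looks causal

    # Flip-the-narrative check: compare only within high-support families.
    hi = [(b, iq) for s, b, iq in rows if s > 0.8]
    bf_hi = [iq for b, iq in hi if b]
    non_hi = [iq for b, iq in hi if not b]
    print(f"gap within high-support families: "
          f"{mean(bf_hi) - mean(non_hi):+.2f}")  # far smaller: mostly confounding
    ```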

  • View profile for Marie-Doha Besancenot

    Senior advisor for Strategic Communications, Cabinet of 🇫🇷 Foreign Minister; #IHEDN, 78e PolDef

    40,983 followers

    🗞️ 🇬🇧 Lessons from the latest « Disinformation Diplomacy » report by the UK House of Commons:

    ▶️ Democracies must move from fragmented, defensive counter-disinformation to coordinated, strategic use of information power.

    🇬🇧 National recommendations:

    1. Create a National Counter-Disinformation Centre
    A centralised structure to coordinate response across government.
    • Inspired by models in Sweden, Ukraine and France
    • Aim: faster detection, attribution and response, i.e. a shift toward a “fusion centre” model for the information space.

    2. Significantly increase funding for information defence
    Focus areas: the FCDO Hybrid Threats Directorate and the BBC World Service as key strategic assets.
    ▶️ Greater investment is needed to prevent authoritarian narratives from gaining ground globally.
    ▶️ Recognition of information power as a hard-power multiplier.

    3. Scale up support to allies and vulnerable regions
    🔹 Priority regions: Black Sea, Western Balkans, Africa
    🔹 Focus on strengthening independent media and civil society resilience.

    4. Address legal gaps on foreign interference
    🔹 The current threshold for proving foreign attribution is deemed too high, limiting enforcement and enabling plausible deniability.
    🔹 An urgent legislative review is needed to enable faster and more effective action.

    5. Introduce algorithmic transparency for platforms
    🔹 Amend the Online Safety Act to require:
    • Greater transparency on how algorithms amplify content
    • Stronger safeguards against coordinated manipulation

    6. Invest in public resilience (media literacy and prebunking)
    Shift from reactive debunking to societal immunity, strengthening public understanding and scaling preventive approaches. Citizens are treated as the frontline of defence.

    7. Rethink strategic communications
    Current efforts are deemed to lack compelling narratives: credible messengers and content tailored to target audiences are needed.
    🔹 A shift from fact-based rebuttal to competitive narrative warfare.

    8. Clarify and communicate strategy on China 🇨🇳
    🔹 Define red lines, influence risks and an engagement doctrine; avoid ambiguity between economic engagement and security priorities.

  • View profile for Scott Kelly

    Systems Thinker | Data Executive | Team Builder | Predictive Insights Leader | Board Advisor | Risk Modeller

    23,193 followers

    A new analysis in Nature Climate Change dissects the anatomy of why people deny climate change is happening, or that we should do anything about it. The conclusion: it's not about a lack of information.

    The authors argue that denial is not driven by ignorance or lack of information, but by seven hardwired psychological mechanisms:
    1. Psychological distance
    2. Availability bias
    3. Cognitive dissonance
    4. Confirmation bias
    5. Loss aversion
    6. Existential anxiety
    7. Social identity

    It is not that people do not see the facts. It is that they cannot afford to accept them.

    𝗧𝗵𝗲 𝗸𝗲𝘆 𝗳𝗶𝗻𝗱𝗶𝗻𝗴𝘀:

    🔸 Denial is a shield. Rejecting climate science is a rational defense against anxiety and the fear of economic loss. When leaders frame climate action as a “job killer,” denial becomes a mechanism for protecting one’s livelihood and identity.

    🔸 The populist trap. Politicians like Donald Trump and Scott Morrison have successfully weaponised these psychological biases, reframing environmental regulation as an elite attack on national sovereignty and working-class dignity.

    🔸 Identity beats data. Because denial is rooted in group identity, facts from “outsiders” only reinforce resistance. The only effective countermeasures are trusted messengers (e.g., conservative leaders) and local framing, not more scientific charts.

    𝗠𝘆 𝗧𝗮𝗸𝗲

    We have spent decades trying to dismantle evolutionary psychology with logic. This paper shows it does not work. The gap between belief and action is not an information problem; it is an incentive problem embedded in institutions and economics.

    If the psychological barriers to belief are this high, we should stop spending capital trying to scale them. Converting denialists entrenched in identity politics is not a strategy worth continuing; it is a distraction.

    We do not need deniers to believe in climate science. We need them to buy the heat pump because it is cheaper, drive the EV because it is more efficient, and make sustainable choices because they are better. When the profitable choice is the low-carbon choice, ideology collapses.

    Bypass the psychology. Fix the economics.

    Source: https://lnkd.in/eufTzdij

    #ClimateRisk #BehavioralEconomics #EnergyTransition #ClimatePolicy #Psychology #NatureClimateChange
    ___________
    𝘍𝘰𝘭𝘭𝘰𝘸 𝘮𝘦 𝘰𝘯 𝘓𝘪𝘯𝘬𝘦𝘥𝘐𝘯: Scott Kelly

  • View profile for Aleksandra Kuzmanovic
    Aleksandra Kuzmanovic is an Influencer

    Leadership Social Media Manager @WHO | Social Media Strategy | Digital Diplomacy

    10,773 followers

    Five years ago today, WHO held one of the most important press conferences, when Dr Tedros declared #COVID19 a public health emergency of international concern — a moment that signaled to the world that we were facing a new global health crisis. It turned out to be unlike any other.

    But while scientists, health workers and governments rushed to respond to the new virus, another battle was unfolding in real time: the fight against health misinformation.

    The phenomenon of health misinformation wasn’t new, but this was the first pandemic of the digital age. Suddenly, false claims spread faster than the virus itself, reaching millions before experts could correct them. Fear and confusion filled the gaps where reliable information was missing. The stakes couldn’t have been higher.

    Dr Maria Van Kerkhove and I reflected recently on what WHO has been doing to prevent false health information from spreading on #SocialMedia:

    1. Engaging directly with the public — through #AskWHO live Q&A sessions and press conferences, we have answered real questions in real time.

    2. Working with trusted messengers — from frontline health workers and scientists to religious leaders and digital influencers — so that people could hear accurate information from voices they already relied on.

    3. Partnering with tech platforms — to ensure credible health information reached more people, while slowing the spread of harmful falsehoods.

    4. Expanding access to information in multiple languages — so no one was left behind in accessing clear, verified health guidance.

    5. Investing in research and digital innovation — to better understand how people consume digital content and adapt our strategies in real time.

    What we’ve learned about trust:

    - Trust isn’t built in a crisis — it must be nurtured before, during, and after emergencies.

    - People trust people — authentic, relatable messengers make the biggest impact.

    - Transparency matters — being open about what we know, what we don’t, and how we’re learning builds credibility.

    One thing is clear: the fight against misinformation is not over. Building and maintaining trust in public health is an ongoing effort — one that requires the commitment of governments, civil society, media, and industry every single day.

    Because trust isn’t a given, it’s earned.

  • View profile for Chinasa T. Okolo, Ph.D.

    Researcher, strategist, policy advisor on AI governance & safety for the Global Majority • TIME 100 AI • Forbes U30 AI

    17,236 followers

    My latest paper, “African Democracy in the Era of Generative Disinformation: Challenges and Countermeasures against AI-Generated Propaganda,” is now available on ResearchGate: https://lnkd.in/ed3Nkab4!

    Last week, I shared this paper at the “Building a Just AI Ecosystem in Africa” Conference hosted by Research ICT Africa and at the “AI, Elections, and the Future of Democracy and Leadership: Global Experiences and Directions” Conference hosted by the University of Johannesburg Department of Politics & International Relations.

    This work examines the risks associated with the spread of generative AI-driven disinformation within Africa, particularly in democratic processes. It explores several case studies of generative AI usage during African elections and coups over the past decade. I also highlight efforts from fact-checking organizations across the continent to counteract mis/disinformation spread through direct engagement with the general public, partnerships with social media companies, and media literacy training. Additionally, I explore efforts from large tech companies to identify, decelerate, and eradicate mis/disinformation and discuss the implications of incorporating AI within these processes.

    To conclude, this work outlines potential efforts to increase AI, digital, and media literacy within the general public, and regulatory measures African governments should consider to govern generative AI and other emerging technologies. Feel free to check it out!

    #GenerativeAI #Elections #AfricanDevelopment #ArtificialIntelligence #Research
