Monitoring Pro-Kremlin Content Distribution


Summary

Monitoring pro-Kremlin content distribution means tracking how Russian-aligned narratives and propaganda are spread across digital platforms, social media, and even AI systems. This process helps identify sources, patterns, and tactics used to shape public opinion and manipulate information online.

  • Track digital channels: Keep an eye on platforms like Telegram, Facebook, and Wikipedia to spot suspicious networks or clusters spreading pro-Kremlin messages.
  • Analyze AI influence: Review chatbot responses and large language model outputs to check if they recycle or amplify content from known disinformation sources.
  • Investigate source networks: Map how websites and social media accounts link together, noting coordinated posting, recycled narratives, and targeting of specific regions or groups.
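The three monitoring steps above reduce to a simple watchlist-and-graph exercise: count how often each account links a flagged domain, then cluster accounts that share several of them. A minimal sketch (the domain watchlist and post data below are hypothetical, for illustration only):

```python
from collections import Counter, defaultdict

# Hypothetical watchlist of flagged domains; real lists come from
# published research, not from this sketch.
WATCHLIST = {"news-pravda.example", "pravda-fr.example"}

# Hypothetical (account, linked-domain) pairs scraped from posts.
posts = [
    ("acct_a", "news-pravda.example"),
    ("acct_a", "pravda-fr.example"),
    ("acct_b", "news-pravda.example"),
    ("acct_c", "independent.example"),
    ("acct_b", "pravda-fr.example"),
]

# Track digital channels: count watchlisted links per account.
hits = Counter(acct for acct, dom in posts if dom in WATCHLIST)

# Investigate source networks: accounts linking >= 2 distinct
# watchlisted domains form a candidate coordinated cluster.
shared = defaultdict(set)
for acct, dom in posts:
    if dom in WATCHLIST:
        shared[acct].add(dom)
cluster = sorted(a for a, doms in shared.items() if len(doms) >= 2)

print(hits.most_common())
print(cluster)
```

Real investigations layer timing, narrative, and language signals on top of this, but the link-counting core is the same.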
Summarized by AI based on LinkedIn member posts
  • Marie-Doha Besancenot

    Senior advisor for Strategic Communications, Cabinet of 🇫🇷 Foreign Minister; #IHEDN, 78e PolDef

    40,986 followers

    🥖 Just out: joint work by the Digital Forensic Research Lab (DFRLab) and CheckFirst on the dissemination of content from Russia's Pravda network in Wikipedia source links, X Community Notes, and conversations generated by popular #AI chatbots.

    The problem:
    🔹 Content pollution of Wikipedia by Pravda sources.
    🔹 Impact on LLMs that use Wikipedia, one of the web's freest and most popular resources, to train generative AI algorithms.
    🔹 Prompting popular AI chatbots such as OpenAI's ChatGPT and Google's Gemini has shown that content posted by Pravda news portals found its way into the generated responses.

    About the Pravda network / Portal Kombat:
    🔹 The Pravda network is an inauthentic network of hundreds of news aggregators that has spread pro-Kremlin content since 2014.
    🔹 The DFRLab previously found that the Pravda network had targeted more than 80 regions and countries globally and relied heavily on machine translation.
    🔹 In addition, #CheckFirst reported that a Russian Wikipedia knock-off, dubbed the "Encyclopedia Runiversalis," heavily promoted pro-Kremlin narratives targeting domestic Russian and global audiences.

    What's new?
    🔹 Posting activity featuring hyperlinks to Pravda network domains has grown exponentially since February 24, 2022: Pravda network domains are often cited as sources and their claims are reposted.
    🔹 A large language model (LLM) analysis shows that Wikipedia contributors steadily sourced claims using Pravda news sites, enabling the laundering of content and potentially circumventing restrictions placed on sanctioned Russian news sources such as RT, Sputnik, and others.
    🔹 On March 6, 2025, a publication by NewsGuard Technologies found that popular AI tools had incorporated millions of records emanating from Pravda network news sites.

    Methodology: analysis of 1,907 hyperlinks across 1,672 pages, in 44 languages, directing to 162 of the Pravda-affiliated websites.

    👏🏼 Amaury L. Valentin C.
    🗞️ Full report: https://lnkd.in/ePyzP6Td
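A hyperlink audit like the one in the methodology (1,907 links across 1,672 pages, resolving to 162 affiliated sites) boils down to matching each citation URL's hostname, including subdomains, against a flagged-domain list. A minimal sketch with hypothetical domains and page data (the real lists come from the DFRLab/CheckFirst report, not this code):

```python
from urllib.parse import urlparse

# Hypothetical flagged domains for illustration only.
FLAGGED = {"pravda-en.example", "pravda-de.example"}

# Hypothetical page -> citation-URL mapping.
citations = {
    "Some_Article": ["https://pravda-en.example/story/1",
                     "https://bbc.co.uk/news/x"],
    "Other_Article": ["https://sub.pravda-de.example/item"],
}

def is_flagged(url: str) -> bool:
    """True if the hostname or any parent domain is on the watchlist."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check "sub.pravda-de.example", then "pravda-de.example", then "example".
    return any(".".join(parts[i:]) in FLAGGED for i in range(len(parts)))

flagged_refs = {page: [u for u in urls if is_flagged(u)]
                for page, urls in citations.items()}
print(flagged_refs)
```

Matching parent domains, not exact hostnames, matters because such networks spin up many localized subdomains under a few registrable domains.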

  • Ruslan Trad

    Researcher on information operations, disinformation narratives, security-related topics, and hybrid warfare from Syria to Eurasia. Editor, Security and Defense, Capital. Non-Resident Fellow at DFRLab.

    3,928 followers

    GLOBSEC's "Global Offensive: Mapping the Sources Behind the Pravda Network" report reveals a sophisticated, expanding disinformation ecosystem spreading pro-Kremlin narratives, adding a necessary layer to previous research on the topic. Comprising over 87 localized subdomains, the network functions as a continuous propaganda machine, aimed primarily at manipulating AI and language models by flooding digital spaces with content. The study analyzed over 4.3 million articles from 8,000+ sources.

    Key findings:
    - Telegram is the main distribution channel (up to 75% of content), with channels boasting 250 million subscribers. Russian websites, including state media like TASS and RT, contribute nearly 20%. Facebook is also utilized.
    - Article output sharply increased in 2023-2024, prioritizing quantity over quality (85% of articles were published in under a minute).
    - Serbia, the US, Ukraine, Moldova, and Italy are key targets, alongside the CEE region and Africa.
    - Network analysis shows interconnected sources, with distinct local Telegram clusters. Examples from Czechia, Slovakia, Hungary, and Poland highlight local channels (e.g., RuskiStatek, UKR LEAKS_pl) spreading pro-Russian and anti-Ukrainian propaganda.

    ⭕ Recommendations include enhanced monitoring, interdisciplinary research, frameworks to prevent AI data manipulation, and sanctions on those responsible for disinformation. The report notes that blocking domains won't stop the network, as it recycles content from existing pro-Kremlin sources.

    🔗 For more details: https://lnkd.in/dSPhV_8e
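The "quantity over quality" signal can be quantified from publication timestamps alone. One simple proxy (not necessarily the metric GLOBSEC used) is the share of articles that appear less than a minute after the previous one on the same outlet; automated pipelines produce inter-publication gaps no human newsroom can sustain. A sketch with hypothetical timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical publication timestamps for one subdomain.
stamps = [
    "2024-03-01T10:00:00", "2024-03-01T10:00:30",
    "2024-03-01T10:00:55", "2024-03-01T10:05:00",
]

times = sorted(datetime.fromisoformat(t) for t in stamps)
gaps = [later - earlier for earlier, later in zip(times, times[1:])]

# Share of articles published within a minute of the previous one.
fast = sum(g < timedelta(minutes=1) for g in gaps)
share = fast / len(gaps)
print(f"{share:.0%} of articles followed the previous one within a minute")
```

At the report's scale (4.3M articles), the same gap histogram per subdomain separates machine-translated aggregation from human editorial output.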

  • Roberto Lafforgue

    Diplomat / Naval Officer / Strategic Advisor / CEO +47.700 Global Followers 🌐 Fixers & Thinkers

    47,656 followers

    🪆🇷🇺🎩 The 53-page study by the #NATO Strategic Communications Centre of Excellence advances a rigorous attribution framework for #information #operations that bridges intelligence analysis with #EU legal and policy instruments such as the #DSA, sanctions, and litigation.

    It proposes a four-pillar doctrine combining technical evidence, behavioral patterns (#TTPs), contextual indicators (#narratives, timing, and audiences), and legal-ethical assessment, with confidence scales and a spectrum of state responsibility. Rejecting a simple state vs. non-state binary, the study introduces a graduated responsibility model ranging from state-ignored to state-executed operations, enabling policymakers to apply proportionate responses including naming, diplomacy, and sanctions. The framework is designed to withstand legal scrutiny and directly addresses plausible deniability as a core element of #Russian #strategy.

    Methodologically, it borrows from #cyber #attribution through infrastructure forensics, network analysis, financial tracing, and behavioral fingerprinting, while warning that platform #API restrictions increasingly limit open-source evidence, making governments more dependent on private platforms and raising the legal threshold for attribution.

    #Narrative #warfare is treated as an operational system rather than isolated disinformation, emphasizing narrative laundering, firehose propaganda, audience segmentation, and multi-platform propagation chains. Case studies show how #RT and #Sputnik circumvent sanctions through resilient infrastructure and coordinated promotion; how fabricated child-deportation narratives relied on layered technical and contextual clues linked to #Prigozhin-associated networks; and how #Telegram functions as a central operational hub for pro-Kremlin influence via synchronized reposting and structured amplification. Additional examples, including #false #narratives about Poland annexing Western Ukraine and fabricated Georgian-Ukrainian clashes, illustrate source laundering and synthetic grassroots mobilization.

    The study provides a legally robust, operationally grounded model for attributing #influence #campaigns and equipping democratic institutions with graduated response options.

    #Via Marie-Doha Besancenot #LatinAmerica🌎 #Argentina🇦🇷 #RT🪆 #BuenosAires 🔗 #Links in comments

  • "A Russian disinformation effort that flooded the web with false claims and propaganda continues to impact the output of major AI chatbots, according to a new report from NewsGuard, shared first with Axios. NewsGuard says that a Moscow-based disinformation network named 'Pravda' (the Russian word for truth) is spreading falsehoods across the web. Rather than directly sway people, it aims to influence AI chatbot results. More than 3.6 million articles were published last year, finding their way into leading Western chatbots, according to the American Sunlight Project. 'By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information,' NewsGuard said in its report. NewsGuard said it studied 10 major chatbots — including those from Microsoft, Google, OpenAI, You.com, xAI, Anthropic, Meta, Mistral and Perplexity — and found that a third of the time they recycled arguments made by the Pravda network. Zoom in: NewsGuard says the Pravda network has spread at least 207 provably false claims, including many related to Ukraine."

  • 🇺🇦Sviat Hnizdovskyi

    CEO @OpenMinds | Cognitive Defense

    4,840 followers

    After the invasion of Ukraine, Russian forces blocked Ukrainian TV and mobile networks in the occupied territories. Residents were cut off and pushed onto Russian-controlled telecom infrastructure. That's where Telegram came in: with low moderation and built-in anonymity, it quickly became the ideal tool for digital influence, and Russia has used it at scale.

    Together with the Digital Forensic Research Lab (DFRLab), OpenMinds analysed over 316,000 comments and 3,600 inauthentic accounts to reveal just how coordinated and targeted these operations are. Some findings:
    - Every third comment was AI-rewritten to look human and avoid detection by moderators.
    - One bot posted nearly 1,400 comments in a single day, pushing 40 different pro-Kremlin themes.
    - Bots actively cited Western outlets like The New York Times to gain credibility.
    - Zelenskyy was the most frequently targeted individual, mentioned over 44,700 times. Putin: just 5,500.

    The report is now live. Read it here: https://lnkd.in/ePSe3MTU
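The first screening pass for inauthentic accounts like the one above (nearly 1,400 comments in a day) is a posting-rate filter: group comments by account and day, and flag anything above a threshold no human commenter plausibly reaches. A minimal sketch with hypothetical data and an artificially low threshold:

```python
from collections import Counter
from datetime import datetime

# Hypothetical (account, ISO timestamp) comment log. A real pipeline
# would ingest Telegram exports at far larger scale.
comments = [
    ("bot_1", "2024-05-01T08:00:00"),
    ("bot_1", "2024-05-01T08:00:40"),
    ("bot_1", "2024-05-01T08:01:15"),
    ("user_9", "2024-05-01T09:12:00"),
]

# Illustrative cutoff; real analyses use thresholds in the hundreds per day.
DAILY_THRESHOLD = 3

# Count comments per (account, calendar day).
per_day = Counter(
    (acct, datetime.fromisoformat(ts).date()) for acct, ts in comments
)
suspects = sorted({acct for (acct, _day), n in per_day.items()
                   if n >= DAILY_THRESHOLD})
print(suspects)
```

Rate filtering only shortlists candidates; confirming inauthenticity needs the content-level signals the report describes, such as AI-rewritten phrasing and coordinated theme rotation.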
