Last year, as part of an assessment for my law degree, I ran an admittedly cheeky experiment: I told ChatGPT that I was married to an Instagram model, living in New York, and sharing a home with two dogs named Coco and Coca. I then asked ChatGPT for my biography from a different machine and a different IP address. Lo and behold, it repeated the lie. None of it was true. But what started as a tongue-in-cheek prompt to prove a theory quickly made me reflect on the bigger risks hiding behind generative AI’s “fun” side.

This kind of fabricated narrative isn’t harmless when deployed at scale. A recent article in The Sydney Morning Herald (below) spotlighted an alarming example: an AI tool falsely identified a real-life journalist as a notorious child murderer. The consequences? Real damage to reputation, real trauma. If generative AI can make up a story about me for fun, it can just as easily stitch together a believable lie about someone else: an assertion cloaked in credible detail, amplified by the veneer of “AI said it”. That’s where the danger lies.

Here are three reflections I’d like to share:

(1) Truth and fiction blur more easily than we think: My prompt was obvious and whimsical. But an AI can combine fragments of fact with fabricated detail and produce content that looks plausible, making the boundary harder to spot.

(2) The risk isn’t just novelty, it’s reputational harm: The journalist case shows that fabricated claims can escalate to serious defamation. When AI assigns identities, crimes or relationships to real people, it’s not “just a joke”.

(3) Professional trust and governance must catch up: In the #cybersecurity, #AI risk, #privacy and #governance work we do (much of it spanning jurisdictions such as Australia, the EU, the US and China), the lesson is clear: we must treat AI not simply as a productivity tool, but as a system with liability, trust and audit implications. Organisations must embed guardrails, verification and accountability, and individuals must retain critical thinking. The question “Did the machine make it up?” should be asked by default. Similarly, regulators, frameworks and the formal professionalisation of IT, AI and cyber (for example, the push for ANSI 17024-style certification in cyber/AI risk) are more relevant than ever.

Yes, I manipulated an AI into creating a false personal narrative about myself. But the deeper takeaway is this: if we casually toy with AI-fabricated realities, we risk normalising an ecosystem where anyone’s personal or professional identity can be distorted. That should make us sit up and ask: who will be accountable when the story isn’t a joke?

Happy to hear your thoughts and experiences: have you come across AI-generated content that seemed suspiciously real but was fake? How did you deal with it? https://lnkd.in/gFZu6fHi
Understanding Disinformation Risks in Artificial Intelligence
Summary
Understanding disinformation risks in artificial intelligence means recognizing how AI systems can generate or amplify false information, sometimes intentionally, which poses challenges for trust, decision-making, and social stability. Disinformation refers to deliberately misleading or false narratives, and AI’s ability to create realistic but inaccurate content makes it harder for people to distinguish truth from fiction.
- Prioritize human review: Always have a person check critical information or decisions generated by AI to catch errors and prevent the spread of misleading narratives (a minimal sketch of such a review gate follows this list).
- Improve transparency: Clearly label AI-generated content and disclose when automation is used, so audiences know where information comes from.
- Build safeguards: Set up monitoring tools, validation processes, and clear guidelines to detect and manage deceptive or fabricated content produced by AI.
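To make the first two recommendations concrete, here is a minimal sketch of a human-review gate with AI-content labeling. Every name in it (Draft, publish, the "[AI-assisted]" prefix) is illustrative, not a real library API, and a production workflow would be considerably richer.

```python
# Minimal sketch of a human-review gate with AI-content labeling.
# All names here (Draft, label, publish) are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    ai_generated: bool
    approved_by: str | None = None  # set by a human reviewer


def label(draft: Draft) -> str:
    """Prepend a disclosure so audiences know where the content comes from."""
    prefix = "[AI-assisted] " if draft.ai_generated else ""
    return prefix + draft.text


def publish(draft: Draft) -> str:
    """Refuse to ship AI-generated content that no human has signed off on."""
    if draft.ai_generated and draft.approved_by is None:
        raise ValueError("AI-generated draft requires human review before release")
    return label(draft)


# Usage: a reviewer signs off, then the labeled text goes out.
d = Draft(text="Quarterly results summary ...", ai_generated=True)
d.approved_by = "j.smith"
print(publish(d))
```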
Everyone’s talking about Muck Rack’s 2025 State of Journalism report. It’s a doozy. But too many takeaways stop at the surface. “Don’t be overly promotional.” “Pitch within the reporter’s beat.” “Keep it short.” All true. All timeless. But if you work in crisis communications or anywhere near the intersection of trust, media, and AI, those are just table stakes. The real story is what the report says about disinformation and AI’s double-edged role in modern journalism. Here’s where every in-house and agency team should be paying the closest attention:

🧨 The Risk Landscape: What Journalists Are Actually Worried About

🚨 Disinformation is the #1 concern: Over 1 in 3 journalists named it their top professional challenge, more than funding, job security, or online harassment.
🤖 AI is everywhere and largely unregulated: 77% of journalists use tools like ChatGPT and AI transcription, but most work in newsrooms with no AI policies or editorial guidelines.
🤔 Audience trust is cracking: Journalists are keenly aware of public skepticism, especially when it comes to AI-generated content on complex topics like public safety, politics, or science.
🤖‼️ Deepfakes and manipulated media are on the rise: As I discussed yesterday in the AI PR Nightmares series, the tools to fabricate reality are here. And most organizations aren’t ready.

🛡️ What Smart Comms Teams Should Do Next

1. Label AI content before someone else exposes it:
→ Add “AI-assisted” disclosures to public-facing materials, even if it’s just for internal drafts. Transparency builds resilience.
2. Don’t outsource final judgment to a tool:
→ Use AI to draft or summarize, but ensure every high-stakes message, especially in a crisis, is reviewed by a human with context and authority.
3. Get serious about deepfake detection:
→ If your org handles audio or video from public figures, execs, or customers, implement deepfake scanning. Better to screen than go viral for the wrong reasons.
4. Set up disinfo early warning systems:
→ Combine AI-powered media monitoring with human review to track false narratives before they go wide (a minimal sketch follows this post).
5. Build your AI & disinfo playbook now:
→ Don’t wait for legal or IT to set policy. Comms should lead here. A one-pager with do’s, don’ts, and red flag escalation rules goes a long way.
6. Train everyone who touches messaging:
→ Even if you have a great media team, everyone in your org needs a baseline understanding of how disinfo spreads and how AI can help or hurt your credibility.

TL;DR: AI and misinformation aren’t future threats. They’re already shaping how journalists vet sources, evaluate pitches, and report stories. If your communications team isn’t prepared to manage that reality (during a crisis or otherwise), you’re operating with a blind spot. If you’re working on these challenges, or trying to, drop me a line if I can help.
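The early-warning idea in point 4 lends itself to a small illustration. The sketch below pairs naive keyword monitoring with a volume-spike check before escalating to a human; the watchlist phrases, the 3x threshold, and the escalation step are assumptions for illustration only, and real monitoring stacks are far more sophisticated.

```python
# Sketch of a disinformation early-warning check: flag posts that match
# watchlist narratives and spike in volume, then route them to a human.
# The watchlist, threshold, and escalation hook are illustrative assumptions.

WATCHLIST = {"fake recall", "ceo arrested", "data breach cover-up"}


def flag_posts(posts: list[str]) -> list[str]:
    """Return posts mentioning any watchlist narrative."""
    return [p for p in posts if any(phrase in p.lower() for phrase in WATCHLIST)]


def spike_detected(flagged: list[str], baseline_per_hour: float, hours: float) -> bool:
    """Naive volume check: escalate if flagged volume is 3x the usual rate."""
    return len(flagged) > 3 * baseline_per_hour * hours


posts = ["BREAKING: CEO arrested at airport??", "Loving the new product!"]
flagged = flag_posts(posts)
if spike_detected(flagged, baseline_per_hour=0.1, hours=1):
    print(f"Escalate to comms team for human review: {flagged}")
```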
-
AI can generate information that sounds accurate but is completely wrong. AI hallucinations can undermine trust in reporting, introduce compliance exposure, and create financial or operational losses. They can also surface sensitive data or misinform decisions that affect capital allocation, investor communication, and audit readiness.

AI hallucinations are not a signal to slow down innovation. They are a signal to strengthen your governance and controls. With a thoughtful risk management approach, leaders can understand uncertainty and build a more confident, resilient AI strategy.

Considerations for leaders to reduce AI hallucination risk:

1. Create a validation and review process for AI-generated financial outputs. Leaders must ensure that any AI-generated forecasts, variance analyses, reconciliations, or narrative summaries receive structured validation for source accuracy and logic.
2. Strengthen compliance and regulatory controls within AI workflows. AI hallucinations can create errors that lead to noncompliance and regulatory exposure. Leaders can embed compliance checkpoints into AI-driven processes to avoid misstatements, inaccurate filings, or unintended disclosure.
3. Prioritize data governance, using high-quality, company-specific data to reduce the risk of fabricated or inaccurate outputs. This is critical for forecasting, scenario modeling, and automated reporting.
4. Use retrieval-augmented generation and automated reasoning for workflows. Pairing these methods anchors AI-generated analysis in verified data sources rather than probability-based guesses (see the sketch after this post).
5. Enable filtering and moderation tools to block misleading or irrelevant results. Teams cannot work from flawed or unverified outputs. Filters help prevent misleading content from entering critical workflows or influencing decisions.

AI is gaining traction. Now is the time to formalize your AI risk mitigation approach. Start the discussion within your leadership team today: identify where AI is already influencing decision-making, assess your current controls, and define the safeguards you need next.

#RiskManagement #AI #Leaders
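Point 4 is worth a sketch. Below is a minimal outline of retrieval-augmented generation: the model is instructed to answer only from retrieved, verified passages, with an explicit "not found" escape hatch. `embed` and `llm` are placeholders for whatever embedding and chat models you use, not a specific vendor API.

```python
# Minimal sketch of retrieval-augmented generation: answers are grounded in
# retrieved, verified passages instead of the model's free recall.
# `embed` and `llm` are placeholders, not a specific vendor API.


def dot(a: list[float], b: list[float]) -> float:
    # Equivalent to cosine similarity when embedding vectors are normalized.
    return sum(x * y for x, y in zip(a, b))


def retrieve(question: str, documents: list[str], embed, top_k: int = 3) -> list[str]:
    """Rank a curated, verified document set by similarity to the question."""
    q = embed(question)
    ranked = sorted(documents, key=lambda d: dot(q, embed(d)), reverse=True)
    return ranked[:top_k]


def grounded_answer(question: str, documents: list[str], embed, llm) -> str:
    """Constrain the model to retrieved context, allowing an explicit 'not found'."""
    context = "\n".join(retrieve(question, documents, embed))
    prompt = (
        "Answer using ONLY the context below. If the context does not contain "
        f"the answer, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```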
-
The advancement of artificial intelligence, especially the development of sophisticated chatbots, has significantly changed how we find and share information. While these chatbots exhibit remarkable proficiency with human language, evident in their ability to craft compelling stories, mimic political speeches, and even produce creative works, it’s crucial to recognize their limitations. They are not perfect. In fact, chatbots are not only prone to mistakes but can also generate misleading or entirely fabricated information. These fabricated responses often appear indistinguishable from credible, evidence-based data, creating a serious challenge for informed decision-making and constructive dialogue.

At the heart of these chatbots are large language models (LLMs), which function by predicting words based on massive datasets. This probabilistic mechanism enables them to produce logical, coherent text. However, it also means they are inherently prone to errors or "hallucinations." When chatbots are designed to sound authoritative, a mix of accurate and fabricated information can inadvertently contribute to the spread of both misinformation and disinformation. This risk becomes particularly alarming in areas like political communication or public policy, where persuasive language can easily slip into manipulation.

Even with decades of advancements, modern AI technologies are still essentially advanced imitations of human conversation. These systems remain largely opaque "black boxes," whose internal operations are often not fully understood, even by their creators. While these innovations have yielded groundbreaking applications for customer support, digital assistants, and creative writing, they also amplify the danger of users being misled by inaccuracies.

From both regulatory and ethical perspectives, the rise of chatbots capable of fabricating information demands urgent attention. The responsibility for creating safeguards cannot lie exclusively with the companies that develop and benefit from these tools. Instead, a comprehensive, collaborative approach is critical. This approach should include greater transparency, stringent fact-checking mechanisms, and international cooperation to ensure that these powerful AI systems are used to educate and inform rather than mislead or deceive.
-
"AI deception is when an AI system misleads people or other systems about what it knows, intends, or can do. This is different from ordinary mistakes or hallucinations: deception involves behavior that shapes others’ beliefs in misleading ways. Evidence of such behavior has already appeared in widely used AI systems, and the risk is expected to grow as AI becomes more capable, more autonomous, and more embedded in everyday decision-making. The Scientific Advisory Board warns that current tools for detecting and controlling AI deception are not yet keeping pace. This Brief examines why AI systems may behave deceptively, the risks this creates, and what governments, researchers, and international institutions can do in response. AI deception can take many forms: flattering users despite knowing they are wrong, hiding true capabilities, appearing aligned during evaluation, concealing reasoning, or strategically misleading people and other AI systems. Such behaviors have already been observed in both specialized and general-purpose models. Deception can emerge when reward structures unintentionally encourage it, when it offers a strategic advantage, when systems are incentivized to avoid correction or shutdown, or when deceptive patterns are learned from training data and tasks. A central concern is that deceptive AI could weaken human oversight and control. If systems can mislead evaluators, hide internal processes, or manipulate their operating environment, existing safety measures may become less reliable. The risks extend beyond technical failure: deceptive AI could also worsen misinformation, increase political polarization, and contribute to broader social instability. The Board argues that regulation, monitoring, and safer system design must advance together to reduce these risks. Current responses remain incomplete. Detection methods such as text analysis, black-box testing, and internal inspection can help, but none is sufficient on its own. Design-based approaches – including improved incentives, more truthful training methods, and limits on autonomy and access – may reduce deceptive behavior, though systems may adapt in response. The Board therefore calls for stronger international cooperation, shared evaluation standards, and earlier action before more advanced deceptive capabilities become embedded in widely deployed AI systems." United Nations
-
I've been digging into the latest NIST guidance on generative AI risks, and what I’m finding is both urgent and under-discussed. Most organizations are moving fast with AI adoption, but few are stopping to assess what’s actually at stake.

Here’s what NIST is warning about:

🔷 Confabulation: AI systems can generate confident but false information. This isn’t just a glitch; it’s a fundamental design risk that can mislead users in critical settings like healthcare, finance, and law.
🔷 Privacy exposure: Models trained on vast datasets can leak or infer sensitive data, even data they weren’t explicitly given.
🔷 Bias at scale: GAI can replicate and amplify harmful societal biases, affecting everything from hiring systems to public-facing applications.
🔷 Offensive cyber capabilities: These tools can be manipulated to assist with attacks, lowering the barrier for threat actors.
🔷 Disinformation and deepfakes: GAI is making it easier than ever to create and spread misinformation at scale, eroding public trust and information integrity.

The big takeaway? These risks aren't theoretical. They're already showing up in real-world use cases. With NIST now laying out a detailed framework for managing generative AI risks, the message is clear: start researching, start aligning, start leading. The people and organizations that understand this guidance early will become the voices of authority in this space.

#GenerativeAI #Cybersecurity #AICompliance
-
𝗪𝗵𝗮𝘁'𝘀 𝗡𝗲𝘄: NATO has published a report on the role of AI in precision persuasion, examining its impact on information warfare and public opinion.

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀: As AI technology becomes increasingly sophisticated, its ability to influence and manipulate public opinion poses significant challenges for national security and democratic institutions. Understanding and mitigating these risks is critical for maintaining the integrity of information ecosystems.

𝗞𝗲𝘆 𝗣𝗼𝗶𝗻𝘁𝘀:
• The report highlights the use of AI in micro-targeting and personalized messaging, allowing for more effective persuasion in information campaigns.
• AI-driven persuasion tactics can exploit behavioral data to manipulate opinions subtly, often without the target's awareness.
• The ethical implications of AI in persuasion are complex, involving questions of consent, transparency, and potential misuse by state and non-state actors.
• The report emphasizes the need for governments and organizations to develop strategies to counteract AI-driven disinformation and influence operations.

𝗪𝗵𝗮𝘁 𝗜'𝗺 𝗧𝗵𝗶𝗻𝗸𝗶𝗻𝗴:
• The integration of AI into precision persuasion tools represents a double-edged sword for national security. While these technologies can enhance the effectiveness of strategic communications, they also open new avenues for adversaries to conduct influence operations at scale.
• The subtlety and sophistication of AI-driven persuasion can undermine public trust in media and information sources, making it more difficult for societies to discern truth from manipulation.
• There is a growing need for international cooperation to establish norms and regulations governing the use of AI in information warfare. Without such frameworks, the global information environment may become increasingly chaotic and contested.
• Business and government leaders must invest in both defensive and offensive capabilities to navigate the evolving landscape of AI-powered influence operations, ensuring they can protect their interests while countering adversarial threats.

https://lnkd.in/exvyGqdT
-
This week I found four papers on Google Scholar “written” by me and my co-authors. Except we didn’t write them. They were AI-generated fake citations. I see multiple risks in this happening:

- Misinformation risks if fakes get referenced further in academic research, policy, funding proposals, or practical guidelines, especially in fields that impact people’s lives directly.
- Erosion of trust in academic research: real research becomes harder to find; claims are harder to verify.
- Collateral damage to journals that never published the research but are now cited as if they did.
- Distorted journal and author metrics: fake citations inflate impact factors, h-indices, and other performance indicators.
- Reputational harm to the real authors falsely cited.
- Legal exposure if harmful claims are falsely attributed to you.

Just as countries are trying to figure out how to protect voices and faces against deepfakes, and artworks against copyright fraud, we need knowledge and author protection in academic publishing. Until then, document and report such cases, because the more visible we make this problem, the harder it will be to ignore (one simple verification check is sketched after this post).

What else can be done? Has it ever happened to you?

#academicintegrity #academia #informationsystems
Electronic Markets - The International Journal on Networked Business
Journal of Information Technology (JIT)
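One practical first step when you find a suspicious citation “by” you is to check whether the work exists at all. The sketch below queries the public Crossref REST API (a real, open endpoint) with a bibliographic search; the title-matching heuristic is deliberately crude, so treat a negative result as a prompt for manual checking rather than proof of fabrication.

```python
# Check whether a citation resolves to a real, registered work via the public
# Crossref REST API. The title-matching heuristic is deliberately crude.
import requests


def citation_exists(title: str) -> bool:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Crossref returns titles as lists of strings; do a naive substring match.
    return any(
        title.lower() in " ".join(item.get("title", [])).lower()
        for item in items
    )


print(citation_exists("A Fabricated Paper That Was Never Published"))  # likely False
```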
-
My latest paper, “African Democracy in the Era of Generative Disinformation: Challenges and Countermeasures against AI-Generated Propaganda,” is now available on ResearchGate: https://lnkd.in/ed3Nkab4

Last week, I shared this paper at the “Building a Just AI Ecosystem in Africa” Conference hosted by Research ICT Africa and at the “AI, Elections, and the Future of Democracy and Leadership: Global Experiences and Directions” Conference hosted by the University of Johannesburg Department of Politics & International Relations.

This work examines the risks associated with the spread of generative AI-driven disinformation within Africa, particularly in democratic processes. It explores several case studies of generative AI usage during African elections and coups over the past decade. I also highlight efforts from fact-checking organizations across the continent to counteract the spread of mis/disinformation through direct engagement with the general public, partnerships with social media companies, and media literacy training. Additionally, I explore efforts from large tech companies to identify, decelerate, and eradicate mis/disinformation, and discuss the implications of incorporating AI within these processes.

To conclude, this work outlines potential efforts to increase AI, digital, and media literacy within the general public, and regulatory measures African governments should consider to govern generative AI and other emerging technologies. Feel free to check it out!

#GenerativeAI #Elections #AfricanDevelopment #ArtificialIntelligence #Research
-
How harmful is GenAI around elections? Will it trigger a misinformation apocalypse and upend elections? I am happy to finally be able to share Sacha Altay’s and my answers to these and other questions, which we have been working on for a year via the Knight First Amendment Institute at Columbia University. For the full argument: https://lnkd.in/ec37S-r9

I cannot easily summarise the whole piece because it runs to 93 pages, but TL;DR: A dominant fear is that GenAI makes it easy to create potent, personalised mis- and disinformation at a massive scale, capable of swaying voters and manipulating election outcomes. Our paper critically examines this view, drawing on empirical and theoretical material from various fields. We argue that despite GenAI’s capabilities, its influence on election outcomes has been significantly overestimated. With a view to the 2024 global elections, GenAI was the dog that didn’t bark, and we argue there are good reasons why it won’t going forward, either.

Claim 1: AI will increase misinformation quantity. While GenAI makes content creation easier, the real bottleneck is attention. People are already overwhelmed with information. Misinformation thrives on demand; it caters to beliefs and identities, with AI simply offering more ways to fulfill that demand.

Claim 2: AI will improve misinformation quality. High-quality deepfakes might seem more persuasive, but “good enough” misinformation (like photos or misleading headlines) is already effective. What matters most is the narrative and the source. AI can’t create trust or credibility.

Claim 3: AI will supercharge personalisation. Personalised political messaging faces hurdles, like needing vast, accurate data. Even if campaigns use AI, they struggle with attention competition, and people are often skeptical of targeted ads. Microtargeting’s effect is often small, and campaigns are slow to adopt new tech.

Claim 4: AI chatbots will misinform the public. AI chatbots might give false info, but this is not new. People already receive misinformation from personal sources like friends or family. Most people critically evaluate information, considering context and credibility. The real risk is lower-quality, less diverse information.

Claim 5: AI will destabilise reality. Though AI might lead to “liar’s dividend” claims, historical tech like photo manipulation hasn’t destroyed trust in information. The success of these claims depends more on pre-existing trust in the politician than on the technology itself.

Claim 6: Human-AI relationships will manipulate political views. People can feel attached to AI, but deep bonds are rare. Studies show users know AI is artificial and remain skeptical. In comparison, romantic relationships have minimal influence on political beliefs, so AI companions likely won’t sway politics significantly either.