Article from NY Times: More than two years after ChatGPT's introduction, organizations and individuals are using AI systems for an increasingly wide range of tasks. However, ensuring these systems provide accurate information remains an unsolved challenge. Surprisingly, the newest and most powerful "reasoning systems" from companies like OpenAI, Google, and Chinese startup DeepSeek are generating more errors rather than fewer. While their mathematical abilities have improved, their factual reliability has declined, with hallucination rates higher in certain tests.

The root of this problem lies in how modern AI systems function. They learn by analyzing enormous amounts of digital data and use mathematical probabilities to predict the best response, rather than following strict human-defined rules about truth. As Amr Awadallah, CEO of Vectara and former Google executive, explained: "Despite our best efforts, they will always hallucinate. That will never go away." This persistent limitation raises concerns about reliability as these systems become increasingly integrated into business operations and everyday tasks.

6 Practical Tips for Ensuring AI Accuracy
1) Always cross-check every key fact, name, number, quote, and date in AI-generated content against multiple reliable sources before accepting it as true.
2) Be skeptical of implausible claims, and consider switching tools if an AI consistently produces outlandish or suspicious information.
3) Use specialized fact-checking tools to verify claims efficiently without having to conduct extensive research yourself.
4) Consult subject matter experts for specialized topics where AI may lack nuanced understanding, especially in fields like medicine, law, or engineering.
5) Remember that AI tools cannot truly distinguish truth from fiction and rely on training data that may be outdated or contain inaccuracies.
6) Always perform a final human review of AI-generated content to catch spelling errors, confusing wording, and any remaining factual inaccuracies.
https://lnkd.in/gqrXWtQZ
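To make the article's "mathematical probabilities" point concrete, here is a toy sketch of why a system that samples the statistically likely continuation can emit a fluent falsehood. The vocabulary and probability numbers are invented for illustration; real models work over huge token vocabularies, but the principle is the same.

```python
import random

# Toy next-token model: probabilities learned from data, with no notion of truth.
# The prompt, tokens, and weights below are invented purely for illustration.
next_token_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # correct, but only the *most probable* continuation
        "Sydney": 0.40,     # a plausible-sounding error the model can still emit
        "Melbourne": 0.05,
    }
}

def generate(prompt: str) -> str:
    """Sample a continuation weighted by probability, not by factuality."""
    probs = next_token_probs[prompt]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 4 runs in 10 will assert "Sydney" with the same fluent confidence.
print(generate("The capital of Australia is"))
```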
Tips for Reducing AI Misinformation
Explore top LinkedIn content from expert professionals.
Summary
AI misinformation refers to false or misleading information generated by artificial intelligence systems, which can appear convincing but is often inaccurate, fabricated, or lacking reliable sources. As AI becomes more integrated into daily tasks and business operations, it’s crucial to be aware of its limitations and apply thoughtful strategies to guard against errors that could impact decision-making, credibility, and trust.
- Always cross-check: Verify AI-generated facts, figures, and claims against established, trustworthy sources before sharing or acting on them.
- Demand transparency: Require clear evidence, credible references, and acknowledgment of uncertainty from AI systems, especially when evaluating important recommendations.
- Build human accountability: Assign responsibility for reviewing and validating AI outputs within your team, ensuring no critical decisions rely solely on automated answers.
AI can generate information that sounds accurate but is completely wrong. AI hallucinations can undermine trust in reporting, introduce compliance exposure, and create financial or operational losses. They can also surface sensitive data or misinform decisions that affect capital allocation, investor communication, and audit readiness.

AI hallucinations are not a signal to slow down innovation. They are a signal to strengthen your governance and controls. With a thoughtful risk management approach, leaders can understand uncertainty and build a more confident, resilient AI strategy.

Considerations for leaders to reduce AI hallucination risk:

1. Create a validation and review process for AI-generated financial outputs. Leaders must ensure that any AI-generated forecasts, variance analyses, reconciliations, or narrative summaries go through structured validation for source accuracy and logic.
2. Strengthen compliance and regulatory controls within AI workflows. AI hallucinations can create errors that lead to noncompliance and regulatory exposure. Leaders can embed compliance checkpoints into AI-driven processes to avoid misstatements, inaccurate filings, or unintended disclosure.
3. Prioritize data governance, using high-quality, company-specific data to reduce the risk of fabricated or inaccurate outputs. This is critical for forecasting, scenario modeling, and automated reporting.
4. Use retrieval-augmented generation and automated reasoning for workflows. Pairing these methods anchors AI-generated analysis in verified data sources rather than probability-based guesses (a minimal sketch follows this post).
5. Enable filtering and moderation tools to block misleading or irrelevant results. Teams cannot work from flawed or unverified outputs. Filters help prevent misleading content from entering critical workflows or influencing decisions.

AI is gaining traction. Now is the time to formalize your AI risk mitigation approach. Start the discussion within your leadership team today. Identify where AI is already influencing decision-making, assess your current controls, and define the safeguards you need next. #RiskManagement #AI #Leaders
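As a minimal sketch of the retrieval-augmented generation idea from point 4: retrieve verified passages first, then instruct the model to answer only from them. The document store, naive keyword retrieval, and `call_llm` stub are illustrative assumptions, not any specific vendor's API.

```python
# Minimal RAG sketch: ground answers in retrieved, verified text.
# VERIFIED_DOCS, retrieve(), and call_llm() are hypothetical stand-ins.

VERIFIED_DOCS = [
    "Q3 revenue was $4.2M, up 8% quarter over quarter (source: audited ledger).",
    "Q3 operating expenses were $3.1M (source: ERP export, 2024-10-05).",
]

def call_llm(prompt: str) -> str:
    """Stub for your model provider's API; replace with a real client call."""
    return "Q3 revenue was $4.2M, per the audited ledger."  # canned demo output

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword overlap; production systems use embedding search instead."""
    words = question.lower().split()
    scored = sorted(docs, key=lambda d: -sum(w in d.lower() for w in words))
    return scored[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, VERIFIED_DOCS))
    prompt = (
        "Answer using ONLY the context below. If the context does not contain "
        "the answer, reply exactly: 'Not in verified sources.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("What was Q3 revenue?"))
```

The design point is the refusal instruction: a grounded workflow must have an explicit "not in verified sources" escape hatch, or the model will fall back to probability-based guessing.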
-
Gemini just exposed a ChatGPT "cover-up." Except it didn't.

r/OpenAI is flooded with "ChatGPT lied, Gemini revealed the truth" posts. Same pattern. Different topics. Zero sources. It's astroturfing meets AI hallucination.

Here's how to spot AI-generated misinformation:

1. Check for "journalistic hallucinations." LLMs generate:
- Fictional timelines
- Fake corporate events
- "Insider leaks" that never happened
Reads like journalism. Completely made up.

2. Demand sources, not screenshots. Screenshots hide context. Real claims have Reuters, Bloomberg, official statements. No link = no trust.

3. Cross-verify across models. Ask ChatGPT, Claude, and Gemini the same question (a sketch of this check follows the post). Three different answers? Hallucination. Same answer with sources? Probably real.

4. Spot "secret deal" red flags. LLMs love generating:
- Confidential agreements
- Behind-the-scenes negotiations
- Cover-ups
Sounds like conspiracy? It is.

5. Trace the origin. Find the first post. Check if multiple accounts say the exact same thing. Organic discoveries don't happen everywhere at once.

6. Ask specific questions. Don't: "What happened with OpenAI?" Do: "What does Reuters report about RAM shortages?" Specific questions expose hallucinations.

7. Recognize emotional triggers. "Gaslighting," "lying," "exposed" = manipulation. Facts don't need emotional framing.

8. Use the 24-hour rule. Breaking AI drama? Wait a day. Real stories get confirmed. Fake ones get debunked.

9. Build your trusted sources. Industry reporters who fact-check. Technical blogs that cite sources. Official statements only. Check your list first. Not Reddit.

10. Assume hallucination first. If AI said it without sources, it's probably wrong. Especially if it's dramatic, specific about dates, or revealing "hidden" info.

The bottom line: AI generates perfect misinformation. Your defense isn't better AI. It's better critical thinking.

What AI misinformation have you spotted? Found this helpful? Follow Liam Lawson
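A minimal sketch of the cross-model check from point 3. The model-calling functions here are canned stubs standing in for real ChatGPT, Claude, and Gemini clients, and the exact-match comparison is deliberately naive; real answers need fuzzier matching. The structure, not the prompts, is the point.

```python
from collections import Counter

def cross_verify(question: str, ask_fns) -> str:
    """Ask several models the same question and compare normalized answers.
    Exact string matching is a simplification; real use needs semantic comparison."""
    answers = [fn(question).strip().lower() for fn in ask_fns]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    if top_count == len(answers):
        return f"Models agree: '{top_answer}'. Still check the cited sources."
    return "Models disagree: treat the claim as a likely hallucination."

# Demo with canned stubs in place of real ChatGPT/Claude/Gemini API calls:
stubs = [
    lambda q: "No such announcement exists.",
    lambda q: "No such announcement exists.",
    lambda q: "OpenAI signed a secret RAM deal in 2023.",  # the odd one out
]
print(cross_verify("Did OpenAI sign a secret RAM deal?", stubs))
```

Note the hedge in the agreement branch: consensus across models raises confidence but does not prove truth, since models can share training-data errors.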
-
The 4-step framework to stop AI hallucinations before they become business liabilities.

In a recent Startup at School class at ORT Argentina, we used AI to analyze a business case for a potential startup. We asked Gemini to evaluate a growth strategy and support its recommendation with real-world examples. The response was well written, nicely structured, full of metrics and references to companies. Seemingly flawless.

Then a sharp 17-year-old student raised a critical question: "wait… does that company actually exist?" 🤔

We checked. It didn't. Another example failed the same test. In a learning environment, this is a harmless lesson that became a great teaching moment: AI can hallucinate, and it does so very convincingly. In business, the consequences can be far more serious.

Undetected AI hallucinations can lead to:
- investment decisions based on false assumptions
- strategies built on made-up examples
- recommendations that go unchallenged simply out of trust in AI's output

And that's the real risk: not the mistake itself, but the false sense of certainty. AI doesn't "know." It predicts, filling gaps with what appears plausible. When context is weak or questions are poorly framed, the system proceeds with confidence.

To mitigate these risks on teams using popular agents like ChatGPT, Copilot, or Gemini, I suggest a simple framework (a prompt sketch follows this post):

1. Demand sources, not just conclusions. Never settle for a recommendation without asking for the source or the concrete data behind it. Don't stop at the "what"; dig into the "why" and the "where from". In business, evidence is your only real safety net.

2. Separate exploration from decision-making. Use AI to spark ideas, but never delegate the final call. Validation and closure must remain human territory. The best leaders know: insight is automated, but accountability is not.

3. Force AI to declare uncertainty. Require explicit identification of assumptions, information gaps, and low-confidence areas. If AI can't justify a data point, it must say so. Incomplete certainty is a signal to dig deeper.

4. Assign human ownership and accountability. Define exactly who validates each recommendation before implementation. Without clear ownership, hallucinations multiply and scale. In high-stakes environments, ambiguity is the enemy of progress.

AI First demands human judgment and robustly designed interactions, with clear guardrails and accountability at every step.

In the classroom, this hallucination made us laugh. But in business, it's a liability you want to spot early.

How are you ensuring your teams detect and prevent the amplification of AI hallucinations? Feel free to share if you want to 😀 #AIHallucinations
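One way to operationalize steps 1 and 3 is to bake them into a reusable prompt wrapper, so sources and uncertainty are demanded every time rather than remembered ad hoc. A minimal sketch; the wording is illustrative, not a tested prompt.

```python
# Prompt-template sketch for steps 1 ("demand sources") and 3 ("declare
# uncertainty"). The template text is an illustrative assumption.

VERIFY_TEMPLATE = """\
{question}

Format your answer in three labeled sections:
1. RECOMMENDATION: your answer.
2. SOURCES: for every company, metric, or fact cited, name the source.
   If you cannot name one, write "UNVERIFIED" next to that claim.
3. UNCERTAINTY: list your assumptions, known information gaps, and any
   part of the answer you have low confidence in.
"""

def build_verification_prompt(question: str) -> str:
    """Wrap any business question so unverified claims are flagged, not hidden."""
    return VERIFY_TEMPLATE.format(question=question)

print(build_verification_prompt(
    "Evaluate a freemium growth strategy for an ed-tech startup, "
    "with real-world examples."
))
```

The "UNVERIFIED" label gives the human reviewer in step 4 a concrete checklist: anything so marked is exactly where the fabricated companies would have surfaced.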
-
Harsh truth: AI has opened up a Pandora's box of threats. The most concerning one? The ease with which AI can be used to create and spread misinformation.

Deepfakes (AI-generated content that portrays something false as reality) are becoming increasingly sophisticated and challenging to detect. Take the attached video, a fake video of Morgan Freeman, which looks all too real.

AI poses a huge risk to brands and individuals, as malicious actors could use deepfakes to:
- Create false narratives about a company or its products
- Impersonate executives or employees to damage credibility
- Manipulate public perception through fake social media posts

The implications for PR professionals are enormous. How can we maintain trust and credibility in a world where seeing is no longer believing? The answer lies in proactive preparation and swift response.

Here are some key strategies for navigating the AI misinformation minefield:
🔹 1. Educate your team: Ensure everyone understands the threat of deepfakes and how to spot potential fakes. Regular training is essential.
🔹 2. Monitor vigilantly: Keep a close eye on your brand's online presence. Use AI-powered tools to detect anomalies and potential threats.
🔹 3. Have a crisis plan: Develop a clear protocol for responding to AI-generated misinformation. Speed is critical to contain the spread.
🔹 4. Emphasize transparency: Build trust with your audience by being open and honest. Admit mistakes and correct misinformation promptly.
🔹 5. Invest in verification: Partner with experts who can help authenticate content and separate fact from fiction.

By staying informed, prepared, and proactive, PR professionals can navigate this new landscape and protect their brands' reputations. The key is to embrace AI as a tool while remaining vigilant against its potential misuse. With the right strategies in place, we can harness the power of AI to build stronger, more resilient brands in the face of the misinformation minefield.
-
Everyone's talking about Muck Rack's 2025 State of Journalism report. It's a doozy. But too many takeaways stop at the surface. "Don't be overly promotional." "Pitch within the reporter's beat." "Keep it short." All true. All timeless.

But if you work in crisis communications or anywhere near the intersection of trust, media, and AI, those are just table stakes. The real story is what the report says about disinformation and AI's double-edged role in modern journalism. Here's where every in-house and agency team should be paying the closest attention:

🧨 The Risk Landscape: What Journalists Are Actually Worried About
🚨 Disinformation is the #1 concern. Over 1 in 3 journalists named it their top professional challenge, more than funding, job security, or online harassment.
🤖 AI is everywhere and largely unregulated. 77% of journalists use tools like ChatGPT and AI transcription, but most work in newsrooms with no AI policies or editorial guidelines.
🤔 Audience trust is cracking. Journalists are keenly aware of public skepticism, especially when it comes to AI-generated content on complex topics like public safety, politics, or science.
‼️ Deepfakes and manipulated media are on the rise. As I discussed yesterday in the AI PR Nightmares series, the tools to fabricate reality are here. And most organizations aren't ready.

🛡️ What Smart Comms Teams Should Do Next
1. Label AI content before someone else exposes it: add "AI-assisted" disclosures to public-facing materials, even if it's just for internal drafts. Transparency builds resilience.
2. Don't outsource final judgment to a tool: use AI to draft or summarize, but ensure every high-stakes message, especially in a crisis, is reviewed by a human with context and authority.
3. Get serious about deepfake detection: if your org handles audio or video from public figures, execs, or customers, implement deepfake scanning. Better to screen than go viral for the wrong reasons.
4. Set up disinfo early warning systems: combine AI-powered media monitoring with human review to track false narratives before they go wide.
5. Build your AI & disinfo playbook now: don't wait for legal or IT to set policy. Comms should lead here. A one-pager with do's, don'ts, and red-flag escalation rules goes a long way.
6. Train everyone who touches messaging: even if you have a great media team, everyone in your org needs a baseline understanding of how disinfo spreads and how AI can help or hurt your credibility.

TL;DR: AI and misinformation aren't future threats. They're already shaping how journalists vet sources, evaluate pitches, and report stories. If your communications team isn't prepared to manage that reality (during a crisis or otherwise), you're operating with a blind spot.

If you're working on these challenges, or trying to, drop me a line if I can help.
-
We have to internalize the probabilistic nature of AI. There's always a confidence threshold somewhere under the hood for every generated answer, and it's important to know that AI doesn't always have reasonable answers. In fact, occasional "off-the-rails" moments are part of the process.

If you're an AI PM Builder (as per my 3 AI PM types framework from last week), my advice:

1. Design for Uncertainty:
✨ Human-in-the-loop systems: Incorporate human oversight and intervention where necessary, especially for critical decisions or sensitive tasks (a minimal sketch follows this post).
✨ Error handling: Implement robust error handling mechanisms and fallback strategies to gracefully manage AI failures (and keep users happy).
✨ User feedback: Provide users with clear feedback on the confidence level of AI outputs and allow them to report errors or unexpected results.

2. Embrace an experimental culture and iteration/learning:
✨ Continuous monitoring: Track the AI system's performance over time, identify areas for improvement, and retrain models as needed.
✨ A/B testing: Experiment with different AI models and approaches to optimize accuracy and reliability.
✨ Feedback loops: Encourage feedback from users and stakeholders to continuously refine the AI product and address its limitations.

3. Set Realistic Expectations:
✨ Educate users: Clearly communicate the potential for AI errors and the inherent uncertainty in accuracy and reliability, i.e. users may experience hallucinations.
✨ Transparency: Be upfront about the limitations of the system and, even better, the confidence levels associated with its outputs.
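To make the confidence-threshold and human-in-the-loop ideas concrete, here is a minimal sketch. The 0.85 threshold and the `model_predict` stub are illustrative assumptions, not a specific product's API; the routing pattern is what matters.

```python
# Human-in-the-loop sketch: auto-serve only high-confidence outputs,
# route the rest to a person. Threshold and stub are illustrative.

CONFIDENCE_THRESHOLD = 0.85

def model_predict(user_input: str) -> tuple[str, float]:
    """Stub standing in for a real model call returning (answer, confidence)."""
    return "Refund approved per policy 4.2", 0.62  # canned demo output

def handle_request(user_input: str) -> str:
    answer, confidence = model_predict(user_input)
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: serve the answer, but still surface the score
        # to the user (the "user feedback" bullet above).
        return f"{answer} (confidence: {confidence:.0%})"
    # Low confidence: fall back to a human instead of guessing
    # (the "error handling" and "human-in-the-loop" bullets above).
    return "This request was routed to a human reviewer for confirmation."

print(handle_request("Can I get a refund on order #1042?"))
```

Where to set the threshold is itself a product decision: the A/B testing and continuous-monitoring practices above are how you tune it against real error rates.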
-
Here's a counterintuitive AI tip that cuts error rates by 50%:

💡 Ask AI to explain its reasoning.

I call it the "reasoning tax" because it costs you something:
→ 30-40% more tokens
→ Slower responses
→ Extra review time

But for complex tasks, the research shows that error rates drop by half.

My go-to prompt addition: "Before you start, show me your reasoning." Then I review the logic before it does the work. If something's off in the thinking, I catch it early.

Another trick I use: "Show me the sources you used" after I get the results. This lets me verify quality and recency, which is especially important when I need an answer that pulls from the most current data.

These aren't just prompts. They're systems. And that's the shift happening right now: we're moving from crafting better prompts to designing prompt SYSTEMS with chains, conditional logic, and validation (a sketch of such a chain follows this post).

The small tax upfront saves massive correction costs later. #AIStrategy #PromptEngineering #QualityControl
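A minimal sketch of the chain-with-validation idea: a reasoning stage, a review gate, then execution. The `call_llm` stub and the prompts are illustrative assumptions; the conditional gate between the two model calls is the technique the post describes.

```python
# Prompt-chain sketch: reasoning step, review gate, then execution.
# call_llm() is a hypothetical stub for your model API.

def call_llm(prompt: str) -> str:
    """Stub for a real model call; replace with your provider's client."""
    return "Step 1: group churned accounts by plan. Step 2: ..."  # canned demo

def run_with_reasoning_gate(task: str, approve) -> str:
    # Stage 1: pay the "reasoning tax" and ask for the plan before the work.
    reasoning = call_llm(f"Before you start, show me your reasoning for: {task}")
    # Stage 2: conditional logic. A human (or automated validator) reviews it.
    if not approve(reasoning):
        return "Stopped: flawed reasoning caught before any work was done."
    # Stage 3: only now execute, with the approved reasoning as context.
    return call_llm(f"Using this approved reasoning:\n{reasoning}\n\nNow do: {task}")

# Usage: the approver here is a trivial keyword check; in practice it's you
# reading the plan, or a validation prompt sent to a second model.
result = run_with_reasoning_gate(
    "Summarize Q3 churn drivers",
    approve=lambda plan: "Step" in plan,
)
print(result)
```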
-
AI hallucinations can break trust.

Imagine asking a colleague for last quarter's sales numbers, and they answer with total confidence… but many numbers are wrong. When you ask them to check, they double down: "No no, it's definitely right," they say. You then point out each error, line by line. They're surprised; they were sure it was right.

That's what it's like when AI tools hallucinate. They sound convincing, but the facts don't hold up. And unless you know the topic well, it can be really hard to spot the mistakes. There are now countless stories of lawyers getting in a pickle because they cited hallucinated cases in court. Those fake cases very likely looked plausible, with real names and logical dates, but alas they were made-up nonsense.

It's easy to treat AI like a research tool or a smart search engine, not realising that hallucinations are a byproduct of how these systems generate outputs: they answer your questions based on patterns and probabilities. Not facts. It's what makes AI excellent at creative writing tasks, or even rewording emails. It's also what makes it less good at fact-based outputs.

This is where you need to tread carefully: if those hallucinations are seen by your clients or customers, you could rapidly erode trust in your brand, your team, and your business.

So what are some ways to reduce the risk? It's fairly simple, but requires some effort:
→ Properly check all outputs before they're used or shared.
→ Set clear boundaries for what tasks AI can and can't be used for.
→ Choose tools that make it easy to verify and cross-check answers.
→ Train staff so they are clear on the potential impact of hallucinations.

Have you had AI double down on a hallucination? I'd love to know!

⚛️ I'm Sarah Mitchell, PhD, AIGP and founder of Anadyne IQ. I work with organisations to create clear policies, practical frameworks and training, and simple ways to mitigate AI risks.
-
It's funny, but this harsh reality also highlights a serious truth: AI is powerful, but it's not infallible. Algorithms can misinterpret context, miss nuance, or make mistakes that a human would never make. Blind trust can be dangerous, whether you're eating a mushroom or making business decisions.

So how can we question AI outputs and make better decisions? Here are a few strategies I use:

- Check the source: Where did the AI get its data? Is it reliable, up-to-date, and relevant to your situation?
- Cross-verify: Don't take a single answer at face value. Look for supporting evidence or alternative perspectives.
- Consider context: AI can miss nuances that matter. Ask: "Does this recommendation make sense given my goals, constraints, and values?"
- Ask why, not just what: Probe AI suggestions: "Why is this solution recommended?" Understanding the reasoning helps spot gaps.
- Add human oversight: Involve experts, mentors, or peers to validate outputs before acting.

AI is a powerful partner, but decisions should still be human-led. Our judgment, skepticism, and experience are what turn insights into smart action.

💬 How do you validate AI recommendations in your work to avoid costly mistakes? #AI #CriticalThinking #Leadership #FutureOfWork #LearningAndDevelopment #TrustButVerify