How to Ensure Deepfake Accountability

Explore top LinkedIn content from expert professionals.

Summary

Ensuring deepfake accountability means creating systems, rules, and practices that track, disclose, and limit the misuse of AI-generated content so people know what’s real and what’s fake. Deepfakes are synthetic images, audio, or videos that convincingly imitate real people or events, and can pose risks to trust, safety, and democracy unless transparency and traceability controls are in place.

  • Prioritize transparency: Clearly label and disclose AI-generated or manipulated content, making it easy for viewers to identify what is synthetic.
  • Strengthen detection tools: Invest in scanning and monitoring systems that can identify deepfakes and alert teams to manipulated media across platforms.
  • Establish rapid response: Develop quick escalation channels and protocols to address and remove harmful deepfake content before it spreads widely.
Summarized by AI based on LinkedIn member posts
  • View profile for Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,869 followers

    I enjoyed being interviewed by Brian O'Donovan of RTÉ news for this story (link below) on that crazy deepfake that was released with just a couple of days left in the Irish Presidential election. The deepfake video of Catherine Connolly announcing her withdrawal from the presidential race illustrates how powerful and accessible generative AI has become. It convincingly replicated both Ms Connolly’s appearance and voice, wrapped within a fabricated RTÉ News broadcast.

    Under the EU’s Artificial Intelligence Act, this kind of material is no longer viewed as a novelty. It is a regulated phenomenon, formally recognised as a “deepfake”. Article 3(60) defines a deepfake as AI-generated or manipulated image, audio, or video content that resembles a real person, object, place, or event and would falsely appear authentic to the viewer. In essence, it is synthetic content that convincingly imitates reality.

    The AI Act deals with deepfakes under Article 50, which sets out transparency obligations for both developers and users of such systems. Providers of AI models that generate images, video, or audio must build in technical mechanisms, such as watermarks, cryptographic signatures, or metadata, that identify outputs as artificially generated. These identifiers must be robust, interoperable, and detectable in machine-readable form (a simplified illustration follows this post). But that doesn’t stop users from screenshotting and cropping, which removes these signatures.

    Deployers, meaning those who use these systems to publish or distribute content, have a separate duty to disclose when material has been created or manipulated by AI. The disclosure must be clear and easily recognisable. There are narrow exceptions, such as for artistic, satirical, or law enforcement purposes, but even then some form of notice is generally required.

    What strikes me as interesting is: could the person or country who generated the deepfake video claim it was satire? The Facebook page the video came from had a notice that it was AI generated (which went unnoticed by most). That’s troubling.

    Failure to label or mark synthetic content can, in certain circumstances, trigger the AI Act’s prohibition on manipulative practices under Article 5(1)(a). If the omission materially distorts a person’s behaviour or decision-making, particularly in a way that could cause significant harm, it may fall within the category of banned AI practices.

    The Connolly incident shows why these provisions matter. Without visible disclosure, a deepfake risks undermining electoral integrity, public confidence, and informed democratic participation. The EU’s framework does not outlaw deepfakes outright, but it insists on transparency and accountability in their creation and use. As models capable of hyper-realistic synthesis continue to evolve, the legal focus is shifting from censorship to traceability: ensuring that authenticity can be verified even when the human eye can no longer tell the difference.
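
To make Article 50's "machine-readable" requirement concrete, here is a minimal sketch, not the mechanism the Act prescribes: it binds an "AI-generated" flag to the output bytes with an Ed25519 signature (using Python's third-party cryptography package) and shows why such a mark fails after a screenshot and crop, the exact gap the post notes. Real deployments layer robust watermarks and standards such as C2PA Content Credentials on top; all names here are illustrative.

```python
# Illustrative provenance mark, NOT the AI Act's prescribed mechanism:
# sign the generated media bytes and ship an "AI-generated" manifest.
# Requires the third-party `cryptography` package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def mark_as_generated(media: bytes, key: Ed25519PrivateKey) -> str:
    """Provider side: bind an 'AI-generated' flag to these exact bytes."""
    return json.dumps({"ai_generated": True,
                       "signature": key.sign(media).hex()})

def check_mark(media: bytes, manifest: str, pub: Ed25519PublicKey) -> bool:
    """Verifier side: does the mark still match the bytes we received?"""
    data = json.loads(manifest)
    try:
        pub.verify(bytes.fromhex(data["signature"]), media)
        return bool(data["ai_generated"])
    except InvalidSignature:
        # A screenshot or crop re-encodes the pixels, so verification
        # fails, which is exactly the robustness gap the post points out.
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    video = b"...generated media bytes..."
    manifest = mark_as_generated(video, key)
    assert check_mark(video, manifest, key.public_key())            # intact
    assert not check_mark(video + b"crop", manifest, key.public_key())  # altered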

  • View profile for Jeremy Tunis

    “Urgent Care” for Public Affairs, PR, Crisis, Content. Deep experience with BH/SUD hospitals, MedTech, other scrutinized sectors. Jewish nonprofit leader. Alum: UHS, Amazon, Burson, Edelman. Former LinkedIn Top Voice.

    16,104 followers

    Everyone’s talking about Muck Rack’s 2025 State of Journalism report. It’s a doozy. But too many takeaways stop at the surface. “Don’t be overly promotional.” “Pitch within the reporter’s beat.” “Keep it short.” All true. All timeless. But if you work in crisis communications or anywhere near the intersection of trust, media, and AI, those are just table stakes. The real story is what the report says about disinformation and AI’s double-edged role in modern journalism. Here’s where every in-house and agency team should be paying the closest attention:

    🧨 The Risk Landscape: What Journalists Are Actually Worried About

    🚨 Disinformation is the #1 concern: Over 1 in 3 journalists named it their top professional challenge, more than funding, job security, or online harassment.

    🤖 AI is everywhere and largely unregulated: 77% of journalists use tools like ChatGPT and AI transcription, but most work in newsrooms with no AI policies or editorial guidelines.

    🤔 Audience trust is cracking: Journalists are keenly aware of public skepticism, especially when it comes to AI-generated content on complex topics like public safety, politics, or science.

    🤖 ‼️ Deepfakes and manipulated media are on the rise: As I discussed yesterday in the AI PR Nightmares series, the tools to fabricate reality are here. And most organizations aren’t ready.

    🛡️ What Smart Comms Teams Should Do Next

    1. Label AI content before someone else exposes it: → Add “AI-assisted” disclosures to public-facing materials, even if it’s just for internal drafts. Transparency builds resilience.
    2. Don’t outsource final judgment to a tool: → Use AI to draft or summarize, but ensure every high-stakes message, especially in a crisis, is reviewed by a human with context and authority.
    3. Get serious about deepfake detection: → If your org handles audio or video from public figures, execs, or customers, implement deepfake scanning. Better to screen than go viral for the wrong reasons.
    4. Set up disinfo early warning systems: → Combine AI-powered media monitoring with human review to track false narratives before they go wide (see the sketch after this post).
    5. Build your AI & disinfo playbook now: → Don’t wait for legal or IT to set policy. Comms should lead here. A one-pager with do’s, don’ts, and red-flag escalation rules goes a long way.
    6. Train everyone who touches messaging: → Even if you have a great media team, everyone in your org needs a baseline understanding of how disinfo spreads and how AI can help or hurt your credibility.

    TL;DR: AI and misinformation aren’t future threats. They’re already shaping how journalists vet sources, evaluate pitches, and report stories. If your communications team isn’t prepared to manage that reality (during a crisis or otherwise), you’re operating with a blind spot. If you’re working on these challenges, or trying to, drop me a line if I can help.
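
Item 4 above (disinfo early warning systems) can be sketched in a few lines: automated narrative matching that only escalates to a human reviewer, never acts on its own. The narrative list, threshold, and escalation hook below are placeholders for illustration, not a real monitoring product.

```python
# Illustrative early-warning triage: match inbound mentions against known
# false narratives and escalate likely hits to a human, never auto-act.
# Narratives, threshold, and the escalation hook are placeholders.
from difflib import SequenceMatcher

KNOWN_FALSE_NARRATIVES = [
    "ceo announces surprise resignation",
    "company recalls all products over safety scandal",
]
ESCALATION_THRESHOLD = 0.6  # tune against your own false-positive tolerance

def narrative_score(mention: str) -> float:
    """Rough text similarity between a mention and known false narratives."""
    return max(SequenceMatcher(None, mention.lower(), n).ratio()
               for n in KNOWN_FALSE_NARRATIVES)

def triage(mentions: list[str]) -> list[str]:
    """Return the mentions a human should review now, before they go wide."""
    return [m for m in mentions if narrative_score(m) >= ESCALATION_THRESHOLD]

if __name__ == "__main__":
    feed = ["CEO announces surprise resignation, sources say",
            "Quarterly results look solid"]
    for item in triage(feed):
        print("ESCALATE:", item)  # stand-in for paging the on-call comms lead
```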

  • View profile for Vikram Kharvi

    CEO - Bloomingdale PR | Fractional CMO - ANSSI Wellness | Founder - Vikypedia.com | Elevating Brands with a Strategic Blend of Marketing Communications

    32,583 followers

    Deepfakes aren’t a tech story. They’re a trust story.

    A few days ago, a doctor in Hyderabad lost money to a #deepfake video that showed a cabinet minister “endorsing” an investment scheme on #Instagram. If that sounds distant, it isn’t. This is the new fraud funnel: authority, urgency, proof… all manufactured at scale. As #communicators and leaders, we can’t outsource this to compliance or IT. #Trust is now an operational KPI.

    What do we as communicators need to do?

    • Treat digital hygiene like fire safety. Run quarterly drills that teach people how fakes travel and how to report them.
    • Publish an authenticity sheet. List official handles, verified domains, escalation numbers, and a simple “how to verify” flow for customers and employees (a machine-readable version is sketched after this post).
    • Watermark outbound content and adopt content credentials where possible. Make the real easier to prove than the fake is to spread.
    • Rewrite influencer and media contracts with an “authenticity clause” and takedown SLAs. If your face or footage is misused, minutes matter.
    • Stand up a rapid debunk protocol. Pre-approved copy, visuals, spokespeople, and a single public link that carries all corrections.
    • Close the platform loop. Nominate a trust lead who keeps warm lines with platform policy teams so your takedown requests don’t start cold.

    Silence helps the scammer. Clarity helps the vulnerable. What would you add to this deepfake playbook? If you’ve seen a convincing fake lately, share it below and let’s decode why it worked.

    #digitalsafety #misinformation #brandprotection #reputationmanagement #contentauthenticity #aiethics #factchecking #onlinescams #communications
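
The "authenticity sheet" bullet lends itself to a machine-readable form that customers and journalists can check automatically. Below is a minimal sketch; every handle, domain, and address is a hypothetical placeholder.

```python
# Hedged sketch of a machine-readable authenticity sheet: official channels
# plus SHA-256 fingerprints of outbound media, published at a well-known URL
# so anyone can verify before sharing. All names below are placeholders.
import hashlib
import json

def fingerprint(path: str) -> str:
    """SHA-256 of a published asset; pin this hash in the public sheet."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

AUTHENTICITY_SHEET = {
    "official_handles": {"x": "@example_brand", "instagram": "@example_brand"},
    "verified_domains": ["example.com", "press.example.com"],
    "escalation_contact": "trust@example.com",
    # Map each outbound video/image to its hash, e.g.:
    # "assets": {"ceo-statement.mp4": fingerprint("ceo-statement.mp4")},
    "how_to_verify": "Hash the file with SHA-256 and compare it to this sheet.",
}

if __name__ == "__main__":
    # Publish this JSON at a stable URL (e.g., example.com/authenticity.json).
    print(json.dumps(AUTHENTICITY_SHEET, indent=2))
```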

  • View profile for Christian Hyatt

    CEO & Co-Founder @ risk3sixty | Security, Compliance, and AI Built for CISOs

    48,628 followers

    This is one of the first reports I have seen on the risks and real-world examples of deepfakes. The Monetary Authority of Singapore (MAS) released a report last week saying that in the last 18 months, deepfake technology has evolved into a weapon. Financial institutions across Asia have reported multimillion-dollar losses from scams involving AI-generated video calls, fake documents, and impersonated executives. For example, the report says that one Hong Kong firm was tricked into transferring $25 million after a deepfake video conference featuring their CFO.

    𝗪𝗵𝗮𝘁’𝘀 𝗵𝗮𝗽𝗽𝗲𝗻𝗶𝗻𝗴? According to MAS:
    → Deepfakes are now being used to defeat biometric authentication, impersonate trusted individuals, and spread misinformation that manipulates markets.
    → These attacks are no longer theoretical. They’re global, sophisticated, and increasingly difficult to detect.
    → The financial sector is especially vulnerable due to its reliance on digital identity verification, remote onboarding, and high-value transactions.

    𝗪𝗵𝗮𝘁 𝗹𝗲𝗮𝗱𝗲𝗿𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝗱𝗼 𝘁𝗼𝗱𝗮𝘆: Based on the best advice I've seen, here are a few recommendations:
    → Audit your biometric systems: Ensure liveness detection is in place. Test against deepfake samples regularly.
    → Train your teams: Run deepfake simulation exercises. Teach staff to spot signs of manipulated media and verify requests through trusted channels (one such control is sketched after this post).
    → Strengthen high-risk processes: Add multi-factor authentication, separation of duties, and endpoint-level detection for privileged roles.
    → Monitor your brand: Use tools to detect impersonation attempts across social media, video platforms, and news outlets. (Check out Attack Surface Management and Threat Intelligence solutions.)
    → Update your incident response plans: Include deepfake scenarios. Establish rapid escalation channels and trusted communication pathways.
    → Collaborate: Share intelligence with peers, regulators, and ISACs. The threat is too complex for any one organization to tackle alone.

    𝗔 𝗥𝗘𝗔𝗟 𝗘𝗫𝗔𝗠𝗣𝗟𝗘: Okay, just to prove this is real. Here is a screenshot of a deepfake our team did almost 𝟮 𝘆𝗲𝗮𝗿𝘀 𝗮𝗴𝗼 using free software.
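
The "verify requests through trusted channels" and "separation of duties" recommendations reduce to a simple policy: past a threshold, no single channel, however convincing the video call, can approve a transfer. A hedged sketch follows, with an invented threshold and channel names.

```python
# Illustrative out-of-band verification gate for high-value transfers:
# above a threshold, a request needs confirmation on an independent,
# pre-registered channel. Threshold and channel names are invented.
from dataclasses import dataclass, field

CALLBACK_THRESHOLD_USD = 10_000  # above this, one channel never suffices

@dataclass
class TransferRequest:
    requester: str
    amount_usd: int
    channels_confirmed: set[str] = field(default_factory=set)

def approve(req: TransferRequest) -> bool:
    """Separation of duties applied to channels, not just people."""
    if req.amount_usd < CALLBACK_THRESHOLD_USD:
        return True
    # The callback must go to the number on file, never one given on the call.
    required = {"video_call", "registered_phone_callback"}
    return required <= req.channels_confirmed

if __name__ == "__main__":
    req = TransferRequest("cfo@example.com", 25_000_000, {"video_call"})
    assert not approve(req)  # a deepfaked video call alone is rejected
    req.channels_confirmed.add("registered_phone_callback")
    assert approve(req)
```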

  • View profile for Harris D. Schwartz

    The U.S. Senate just sent a very loud message to Big Tech: “𝐘𝐨𝐮𝐫 𝐀𝐈 𝐠𝐮𝐚𝐫𝐝𝐫𝐚𝐢𝐥𝐬 𝐚𝐫𝐞 𝐟𝐚𝐢𝐥𝐢𝐧𝐠 𝐚𝐧𝐝 𝐰𝐞 𝐰𝐚𝐧𝐭 𝐩𝐫𝐨𝐨𝐟 𝐲𝐨𝐮’𝐫𝐞 𝐟𝐢𝐱𝐢𝐧𝐠 𝐢𝐭.”

    𝐂𝐚𝐬𝐞 𝐨𝐯𝐞𝐫𝐯𝐢𝐞𝐰: Eight U.S. senators formally questioned 𝐗, 𝐌𝐞𝐭𝐚, 𝐀𝐥𝐩𝐡𝐚𝐛𝐞𝐭, 𝐒𝐧𝐚𝐩, 𝐑𝐞𝐝𝐝𝐢𝐭, 𝐚𝐧𝐝 𝐓𝐢𝐤𝐓𝐨𝐤. The focus? Non-consensual, sexualized deepfakes, across 𝐜𝐨𝐧𝐭𝐞𝐧𝐭 and 𝐀𝐈 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐬.

    𝐖𝐡𝐚𝐭 𝐫𝐞𝐠𝐮𝐥𝐚𝐭𝐨𝐫𝐬 𝐚𝐫𝐞 𝐚𝐬𝐤𝐢𝐧𝐠 𝐟𝐨𝐫: Not statements. Not policies. Not blog posts. They want 𝐞𝐯𝐢𝐝𝐞𝐧𝐜𝐞.
    ➤ Internal documentation
    ➤ How images are created
    ➤ How they’re detected
    ➤ How moderation works
    ➤ How monetization is blocked
    This goes far beyond PR.

    𝐓𝐡𝐞 𝐭𝐫𝐢𝐠𝐠𝐞𝐫: 𝐆𝐫𝐨𝐤 came under scrutiny after reports it generated sexualized images, including of minors. That triggered attention from lawmakers and an investigation by California’s AG.

    𝐖𝐡𝐲 𝐭𝐡𝐢𝐬 𝐦𝐚𝐭𝐭𝐞𝐫𝐬: This is no longer a content moderation issue. It’s now:
    ✓ Product design
    ✓ AI governance
    ✓ Compliance at the model layer
    If your system can generate harm, you own the risk.

    𝐓𝐡𝐞 𝐫𝐞𝐠𝐮𝐥𝐚𝐭𝐨𝐫𝐲 𝐬𝐡𝐢𝐟𝐭: The 𝐓𝐚𝐤𝐞 𝐈𝐭 𝐃𝐨𝐰𝐧 𝐀𝐜𝐭 criminalizes non-consensual sexualized imagery. States like New York are pushing:
    ➤ Mandatory AI labels
    ➤ Election-period deepfake bans
    The direction is clear.

    𝐓𝐡𝐞 𝐫𝐞𝐚𝐥 𝐭𝐚𝐤𝐞𝐚𝐰𝐚𝐲: This feels like the early days of data privacy. Except now, it’s about people’s bodies, identities, and democracy. The bar has moved from “𝐖𝐞 𝐡𝐚𝐯𝐞 𝐫𝐮𝐥𝐞𝐬” to “𝐖𝐞 𝐜𝐚𝐧 𝐩𝐫𝐨𝐯𝐞 𝐚𝐛𝐮𝐬𝐞 𝐝𝐨𝐞𝐬𝐧’𝐭 𝐬𝐜𝐚𝐥𝐞.”

    𝐂𝐲𝐛𝐞𝐫 𝐑𝐢𝐬𝐤 𝐩𝐞𝐫𝐬𝐩𝐞𝐜𝐭𝐢𝐯𝐞: Deepfakes are now a systemic threat, not an edge case. 𝐖𝐡𝐚𝐭 𝐦𝐚𝐧𝐝𝐚𝐭𝐨𝐫𝐲 𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐚𝐧𝐝 𝐠𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 𝐜𝐨𝐧𝐭𝐫𝐨𝐥𝐬 𝐬𝐡𝐨𝐮𝐥𝐝 𝐞𝐯𝐞𝐫𝐲 𝐀𝐈 𝐩𝐥𝐚𝐭𝐟𝐨𝐫𝐦 𝐝𝐞𝐩𝐥𝐨𝐲 𝐭𝐨 𝐝𝐞𝐭𝐞𝐜𝐭 𝐚𝐧𝐝 𝐜𝐨𝐧𝐭𝐚𝐢𝐧 𝐝𝐞𝐞𝐩𝐟𝐚𝐤𝐞 𝐚𝐛𝐮𝐬𝐞 𝐚𝐭 𝐬𝐜𝐚𝐥𝐞? Curious where you’d draw the hard line 👇

    --------------

    Hi, I'm Harris D. Schwartz, 𝐅𝐫𝐚𝐜𝐭𝐢𝐨𝐧𝐚𝐥 𝐂𝐈𝐒𝐎 𝐚𝐧𝐝 𝐂𝐲𝐛𝐞𝐫𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐋𝐞𝐚𝐝𝐞𝐫. I help CEOs and executive teams strengthen their security posture and build resilient, compliant organizations. With 𝟑𝟎+ 𝐲𝐞𝐚𝐫𝐬 𝐚𝐜𝐫𝐨𝐬𝐬 𝐍𝐈𝐒𝐓, 𝐈𝐒𝐎, 𝐏𝐂𝐈, 𝐚𝐧𝐝 𝐆𝐃𝐏𝐑, I know how the right security decisions reduce risk and protect growth. If you are planning how your security program needs to evolve in 2026, this is the right time to have that conversation.

    #AI #AIGovernance #Deepfakes #ProductLeadership #TechPolicy #cybersecurity #security #aifrauds #riskmanagement
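
As one partial answer to the post's central question, a common shape for platform-side containment is a layered gate: known-abuse hash matching, a detector score, and a human review queue for the gray zone. The sketch below is a toy; the hash set, thresholds, and classifier are placeholders, and real platforms use perceptual hashing (PhotoDNA/PDQ-style) and trained detectors rather than this logic.

```python
# Toy layered gate for generated-media abuse: exact-hash blocklist,
# detector score, and a human review queue for the gray zone. The hash
# set, thresholds, and classifier below are placeholders.
import hashlib

KNOWN_ABUSE_HASHES: set[str] = set()  # fed by industry hash-sharing programs
REVIEW_QUEUE: list[bytes] = []        # humans adjudicate the gray zone

def classifier_score(media: bytes) -> float:
    """Placeholder for a deepfake/NCII detector; returns abuse likelihood."""
    return 0.0

def gate(media: bytes) -> str:
    """Decide allow/hold/block before the content ever ships."""
    if hashlib.sha256(media).hexdigest() in KNOWN_ABUSE_HASHES:
        return "block"                 # known abusive content never ships
    score = classifier_score(media)
    if score >= 0.9:
        return "block"
    if score >= 0.5:
        REVIEW_QUEUE.append(media)     # contain first, adjudicate quickly
        return "hold_for_review"
    return "allow"
```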

  • View profile for SAURABH SINGH

    CEO @ Appinventiv | Entrepreneur | Building AI-Led Future Intelligence | Forbes Iconic Leader

    205,741 followers

    India puts a 3-hour ultimatum on AI-generated content!

    Under the amended IT Rules, 2021 and the newly notified Information Technology Amendment Rules, 2026, here is what changes starting February 20, 2026:
    → If content looks real but is created or altered using AI, it must carry a prominent label.
    → If that content is illegal or harmful, platforms must remove it within 2 to 3 hours.
    → For non-consensual deepfakes, the window shrinks to just 2 hours.
    → The previous takedown window was 24 to 36 hours. That gap has been deliberately collapsed.

    Deepfake incidents in India rose 350% year over year. The average Indian consumes at least 5 pieces of AI-generated or deepfake content on social media without realising it is AI.

    We are living in an era where seeing is no longer believing. AI can clone voices, recreate faces, and fabricate entire scenarios with alarming precision. A single convincing deepfake can shift public opinion, move markets, destroy a brand, or devastate someone's personal life. The damage always travels faster than the correction. This is why this move matters. Labelling builds transparency. Faster takedowns limit real-world harm.

    The real test is not the rule. It is the execution. Meeting a 3-hour window at scale will require platforms to invest heavily in AI detection systems, automated flagging, rapid human review, and real-time governance infrastructure (a deadline-queue sketch follows this post). The technology to generate deepfakes is cheap and fast. The technology to detect and remove them is neither. That asymmetry is the real battlefield.

    Still, I see this as a necessary and bold step. India is not saying stop innovating. It is saying innovate responsibly. This is bigger than a compliance update. It is a signal that regulation is finally trying to keep pace with the speed of AI. Innovation without accountability is just disruption without direction.

    #ai #socialmedia #aicontent
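
Operationally, a 2-to-3-hour window implies deadline-driven triage: every report gets a hard due time at intake, and the queue is worked strictly by time remaining. A minimal sketch follows; the categories mirror the post, while the queue design itself is an assumption about how a platform might implement the rule.

```python
# Sketch of SLA-driven takedown triage: each report receives a deadline at
# intake and the most urgent case always surfaces first. Category names
# and this queue design are illustrative assumptions.
import heapq
from datetime import datetime, timedelta, timezone

SLA = {
    "non_consensual_deepfake": timedelta(hours=2),
    "other_harmful_synthetic": timedelta(hours=3),
}

class TakedownQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[datetime, str, str]] = []

    def report(self, content_id: str, category: str) -> datetime:
        """Stamp a hard deadline at intake and enqueue the case."""
        deadline = datetime.now(timezone.utc) + SLA[category]
        heapq.heappush(self._heap, (deadline, content_id, category))
        return deadline

    def next_case(self):
        """Pop the case with the least time remaining; None if empty."""
        return heapq.heappop(self._heap) if self._heap else None

if __name__ == "__main__":
    q = TakedownQueue()
    q.report("vid_123", "non_consensual_deepfake")   # due in 2 hours
    q.report("img_456", "other_harmful_synthetic")   # due in 3 hours
    print(q.next_case())  # the 2-hour case surfaces first
```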
