Challenges and Solutions for Deepfake Technology
Explore top LinkedIn content from expert professionals.
Summary
Deepfake technology uses artificial intelligence to create highly realistic fake videos, audio, or images that mimic real people, making it easier for scammers and cybercriminals to deceive others. As deepfakes become harder to spot, businesses, banks, and public institutions face major challenges in protecting themselves against new forms of fraud, impersonation, and misinformation.
- Update verification protocols: Add multi-step approval processes and secondary checks to confirm sensitive requests, especially those made by voice or video.
- Educate your team: Train employees and stakeholders to recognize signs of deepfake fraud and respond appropriately to suspicious interactions.
- Monitor and collaborate: Use monitoring tools to detect unusual activity, and share information about deepfake threats with other organizations and authorities.
-
Inside the Laundromat #23: Generative AI & Deepfake Fraud in Banking

Deloitte highlighted a 700% increase in deepfake incidents in fintech during 2023, with audio deepfakes posing especially serious risks to banks and their clients. Generative AI is making it cheaper and easier to clone voices and videos. In North America alone, deepfake-enabled fraud surged 1,740% between 2022 and 2023, and Q1 2025 fraud losses topped $200 million.

Real-World Hits: Engineering firm Arup lost $25 million when attackers used a deepfake version of its CFO during a video call to authorize transfers. Similar CEO-impersonation scams hit multiple FTSE-listed companies, with criminals sending fake WhatsApp messages followed by voice-cloned instructions to move funds.

Why the system is still behind: Traditional risk systems, built on business rules, aren't designed for synthetic AI fraud. Deloitte warns that risk frameworks in many banks aren't equipped for generative AI threats.

The Prescription
🔹 Banks must invest in threat-based programs to detect anomalies and deepfake behavior.
🔹 Employee training is key: staff should be taught to spot red flags in audiovisual interactions.
🔹 Firms need to hire or reskill to build deepfake detection capabilities.

Why This Matters for Financial Institutions: GenAI doesn't just automate content; it enables entirely new methods of impersonation. Deepfakes amplify traditional social engineering by layering it with hyper-realistic audiovisual deception, which drastically raises the bar for fraud prevention and detection.

Recommended Moves:
🔹 Simulate deepfake scams in phishing drills; make them realistic and test audio/video angles.
🔹 Red-team AI-voice attacks: produce mock versions of your execs' voices to train both tech and teams.
🔹 Deploy real-time detection tools that analyze video/audio integrity using watermarking or anomaly detection (a toy sketch follows below).
🔹 Policy overhaul: draft protocols for verifying suspicious requests via secondary channels (e.g. confirmed calls or in-person sign-off).
🔹 Cross-industry collaboration: share deepfake attack intelligence with other firms and regulators.

What's Next?
🔹 AI fraud losses may hit $11.5 billion in the U.S. within four years, driven by GenAI phishing and impersonation attacks.
🔹 Regulatory shifts (e.g. the EU AI Act) are on the horizon, pushing for transparency, watermarking, and auditability in synthetic media.

Bottom line: Deepfake fraud is no longer futuristic fiction; it's happening right now, and banks are still scrambling to catch up. Protecting clients and assets means thinking like the fraudster, then enacting plans to get ahead and stay ahead.

#InsideTheLaundromat #FinancialCrime #DeepfakeFraud #AIFraud #VoiceCloning #SyntheticIdentity #BankFraud #GenerativeAI #ImpersonationFraud #FraudDetection
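To make the anomaly-detection bullet above concrete, here is a minimal, illustrative Python sketch. It is emphatically not a deepfake detector: real systems use trained models over rich audio features. It only shows the anomaly-detection pattern, flagging a recording whose summary features sit far from a baseline of known-genuine calls; every feature, threshold, and signal here is invented for the example.

```python
# Illustrative sketch: flag audio whose summary features deviate from a
# baseline of known-genuine recordings. Real deepfake detectors use trained
# models; this only demonstrates the statistical anomaly-detection pattern.
import numpy as np

def feature_vector(samples: np.ndarray) -> np.ndarray:
    """Toy per-recording features: RMS energy, zero-crossing rate, spectral centroid."""
    rms = np.sqrt(np.mean(samples ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples))
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, zcr, centroid])

def fit_baseline(genuine: list) -> tuple:
    """Mean/std of features over recordings already verified as genuine."""
    feats = np.stack([feature_vector(g) for g in genuine])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-12

def anomaly_score(samples: np.ndarray, mean, std) -> float:
    """Largest absolute z-score across features; higher = more unusual."""
    return float(np.max(np.abs((feature_vector(samples) - mean) / std)))

rng = np.random.default_rng(0)
baseline_calls = [rng.normal(0, 0.1, 16000) for _ in range(20)]  # stand-in audio
mean, std = fit_baseline(baseline_calls)
suspect = rng.normal(0, 0.4, 16000)       # noticeably different signal
print(anomaly_score(suspect, mean, std))  # large score -> escalate for human review
```
-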
Imagine this: You receive an email from your company's Chief Financial Officer. It's marked confidential. It mentions a sensitive transaction that needs to be handled discreetly. You're suspicious. It sounds unusual, possibly phishing. But then you're invited to a video call to discuss it. You join. On screen, you see the CFO. You see other members of management. People you recognise. Voices you know. Everyone looks and sounds exactly as they should. Your doubts begin to fade. You authorise a transfer. $25 million.

Days later, you check in with head office to confirm everything went through. What? Who? When? The company's management never sent a message. The meeting never happened. The people on the call weren't real. And the money is gone.

This isn't a hypothetical risk. It happened. A finance employee at a multinational firm in Hong Kong was tricked into wiring $25 million after attending a video call where every participant, including the CFO, was a deepfake.

What is a deepfake? It's AI-generated video and audio designed to mimic real people, increasingly in real time, and it enables a highly sophisticated type of fraud.

How it works:
↳ Scammers collect publicly available footage and train AI to replicate a person's speech, tone, and behaviour.
↳ They create meetings imitating the faces and voices of people the victim trusts.
↳ They trick the victim into performing transactions.

As deepfake technology improves year over year, it's becoming harder to tell what is real from what is fake, no matter how educated you are. This is not about weak passwords or bad policies. It's about trust being manipulated with precision.

What can financial institutions do to protect themselves from deepfake fraud?
↳ Train teams to recognise social engineering, even when it looks and sounds familiar.
↳ Don't rely on voice or video alone for verification.
↳ Use multi-step approvals for sensitive transactions (a minimal sketch follows below).
↳ Add deepfake risks to your fraud response and incident procedures.
↳ Monitor communication patterns that deviate from normal practice.
↳ Ensure escalation paths are accessible and respected, even when urgency is claimed.

This scam didn't succeed because the employee wasn't careful. It succeeded because the tools of deception are evolving faster than most internal controls. Be alert, educate your team, and take care!
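Here is a minimal sketch of what a multi-step approval gate for high-value transfers could look like. The threshold, roles, and out-of-band confirmation flag are illustrative assumptions, not any real bank's policy engine; the point is that video or voice alone can never satisfy the check.

```python
# Minimal sketch of a multi-step approval gate for high-value transfers.
# Thresholds and flow are illustrative assumptions only.
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 100_000  # assumed policy threshold

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)
    confirmed_out_of_band: bool = False  # e.g. callback to a number on file

def approve(req: TransferRequest, approver: str) -> None:
    if approver == req.requested_by:
        raise ValueError("Requester cannot approve their own transfer")
    req.approvals.add(approver)

def may_execute(req: TransferRequest) -> bool:
    """High-value transfers need two independent approvers AND an
    out-of-band confirmation; a convincing video call satisfies neither."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return len(req.approvals) >= 1
    return len(req.approvals) >= 2 and req.confirmed_out_of_band

req = TransferRequest(amount=25_000_000, beneficiary="ACME-HK", requested_by="alice")
approve(req, "bob")
approve(req, "carol")
print(may_execute(req))  # False until confirmed_out_of_band is set via callback
```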
-
AI PR Nightmares Part 2: When AI Clones Voices, Faces, and Authority.

What Happened: Last week, a sophisticated AI-driven impersonation targeted White House Chief of Staff Susie Wiles. An unknown actor, using advanced AI-generated voice cloning, began contacting high-profile Republicans and business leaders, posing as Wiles. The impersonator requested sensitive information, including lists of potential presidential pardon candidates, and even cash transfers. The messages were convincing enough that some recipients engaged before realizing the deception. Wiles' personal cellphone contacts were reportedly compromised, giving the impersonator access to a network of influential individuals.

This incident underscores a fast-growing threat: AI-generated deepfakes are becoming increasingly realistic and accessible, enabling malicious actors to impersonate individuals with frightening accuracy. From cloned voices to authentic-looking fabricated videos, the potential for misuse spans politics, finance, and far beyond. And it needs your attention now.

🔍 The Implications for PR and Issues Management: As AI-generated impersonations become more prevalent, organizations must proactively address the associated risks as part of their ongoing crisis planning. Here are key considerations:

1. Implement New Verification Protocols: Establish multi-factor authentication for communications, especially those involving sensitive requests. Encourage stakeholders to verify unusual requests through secondary channels (one possible check is sketched below).
2. Educate Constituents: Conduct training sessions to raise awareness about deepfake technologies and the signs of AI-generated impersonations. An informed network is a critical defense.
3. Develop a Deepfakes Crisis Plan: Prepare for potential deepfake incidents with a clear action plan, including communication strategies to address stakeholders and the public promptly.
4. Monitor Digital Channels: Use your monitoring tools to detect unauthorized use of your organization's or executives' likenesses online. Early detection and action can mitigate damage.
5. Collaborate with Authorities: In the event of an impersonation, work closely with law enforcement and cybersecurity experts to investigate and respond effectively.

The rise of AI-driven impersonations is not a distant threat; it's a current reality, and it will only get worse as the tech becomes more sophisticated. If you want to think and talk more about how to prepare for this and other AI-related PR and issues management topics, follow along with my series, or DM me if I can help your organization prepare or respond.
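One way to implement "verify through secondary channels" is a challenge-response against a secret shared in advance (for example, at an in-person meeting). The sketch below is illustrative, with an assumed pre-shared secret and invented flow; the property it shows is that a cloned voice cannot answer without the secret.

```python
# Sketch: challenge-response over a pre-shared secret as a secondary check.
# A voice clone can mimic a person but cannot compute the response.
import hmac, hashlib, secrets

SHARED_SECRET = b"established-in-person-not-over-the-phone"  # assumption

def make_challenge() -> str:
    return secrets.token_hex(8)  # random nonce read aloud to the caller

def expected_response(challenge: str) -> str:
    return hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str) -> bool:
    return hmac.compare_digest(expected_response(challenge), response)

challenge = make_challenge()
# The real contact computes the response on their own device and reads it back.
print(verify(challenge, expected_response(challenge)))  # True for the real person
print(verify(challenge, "deadbeef"))                    # False for an impersonator
```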
-
The rapid rise of AI-generated media, particularly deepfakes and convincingly altered content, brings us to a crossroads in how we interact with information. Suddenly, seeing isn't necessarily believing. This shift raises critical questions: How do we verify what's real, and how do we address creators' intentions behind such content? Do we simply categorize it as creative output?

Addressing this challenge likely requires multiple, coordinated approaches rather than a single solution. One fundamental strategy involves enhancing public media literacy. Teaching ourselves and our communities to recognize misinformation and critically evaluate sources can help reduce the spread of misleading information. Initiatives like educational campaigns, school programs, and public-service messaging could strengthen our collective defenses against misinformation.

Simultaneously, technology companies producing or distributing AI-generated content could implement practical measures to build transparency and trust. For instance:
- Clearly watermarking content generated by AI tools (a toy example follows below).
- Requiring upfront disclosures about synthetic or substantially altered media.
- Employing specialized authenticity verification technologies.

Moreover, adopting clear ethical standards within industries utilizing AI-driven media, similar to those upheld in professional journalism, could encourage greater accountability.

Finally, regulatory frameworks will be important, but they must be carefully designed. Excessive restrictions could inadvertently stifle innovation and legitimate expression. Conversely, too little oversight leaves society vulnerable to harmful deepfakes, especially in contexts like elections. Targeted and balanced regulations can minimize harm without impeding creative and productive uses of AI.

Where should efforts be prioritized most urgently: strengthening public awareness, establishing clear industry standards, or developing nuanced regulatory policies?

#innovation #technology #future #management #startups
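For a feel of how watermarking works mechanically, here is a toy least-significant-bit example. Production watermarks (spread-spectrum, frequency-domain, or model-level schemes) are far more robust to cropping and compression; this only illustrates the embed-and-read-back round trip, with an invented provenance tag.

```python
# Toy least-significant-bit watermark: embed a short provenance tag in an
# image's pixel LSBs and read it back. Illustrative only; real watermarks
# must survive re-encoding, which this one does not.
import numpy as np

TAG = "AI-GENERATED:model-x"  # illustrative provenance string

def embed(pixels: np.ndarray, tag: str) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_chars: int) -> str:
    bits = pixels.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed(image, TAG)
print(extract(marked, len(TAG)))  # -> "AI-GENERATED:model-x"
```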
-
🧠 𝗗𝗲𝗲𝗽𝗳𝗮𝗸𝗲 𝗛𝗶𝗿𝗶𝗻𝗴: 𝗧𝗵𝗲 𝗡𝗲𝘄 𝗖𝘆𝗯𝗲𝗿 𝗘𝗻𝘁𝗿𝘆 𝗣𝗼𝗶𝗻𝘁

Cyberattacks are no longer just ransomware and malware. A new threat targets companies from the inside by infiltrating job interviews using AI-generated identities.

𝗔𝘁𝘁𝗮𝗰𝗸𝗲𝗿𝘀 𝗮𝗿𝗲 𝗻𝗼𝘄 𝘂𝘀𝗶𝗻𝗴:
• AI voice cloning
• Deepfake video filters
• Stolen resumes from real engineers
• Fabricated stories that are hard to verify

𝗧𝗵𝗲 𝗴𝗼𝗮𝗹? Access internal systems, steal source code, credentials, or sensitive data, or conduct silent long-term espionage.

🚩 𝗥𝗲𝗱 𝗙𝗹𝗮𝗴𝘀 𝗗𝘂𝗿𝗶𝗻𝗴 𝗜𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄𝘀:
• Lip movement not matching the voice
• Unnatural or overly static camera feed
• Scripted answers with no real depth
• Inability to explain the basics of their own experience
• Continuous "technical issues" or camera refusal
• Suspicious LinkedIn history or an inconsistent timeline

🛡️ 𝗛𝗼𝘄 𝘁𝗼 𝗣𝗿𝗼𝘁𝗲𝗰𝘁 𝗬𝗼𝘂𝗿 𝗛𝗶𝗿𝗶𝗻𝗴 𝗣𝗿𝗼𝗰𝗲𝘀𝘀:
• Use multi-stage interviews (technical + live challenges)
• Verify identity through validated platforms and email domains
• Avoid predictable questions; use real-time problem solving
• Analyze CV metadata and external footprint (a small metadata sketch follows below)
• Use AI anomaly-detection tools for audio/video manipulation
• Apply Zero Trust for onboarding and initial access
• Educate HR + tech teams about AI-powered fraud

#CyberSecurity #Deepfake #Hiring #AIThreats #ZeroTrust #ThreatAwareness #SocialEngineering #InfoSec #CyberAwareness #HRTech #Cybercrime #DeXpose #DarkWeb #ThreatIntel
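As one small, concrete aid for the "analyze CV metadata" item, the sketch below surfaces a submitted PDF's raw metadata fields using the pypdf library. None of these fields proves fraud; the file name is assumed, and what a reviewer makes of the fields is a judgment call (e.g. a "10-year veteran" whose CV was created yesterday by an unfamiliar generator deserves a follow-up question).

```python
# Sketch: pull raw PDF metadata from a submitted CV for human review.
from pypdf import PdfReader  # pip install pypdf

def cv_metadata_report(path: str) -> dict:
    """Raw metadata fields worth a human look; none of them proves fraud."""
    meta = PdfReader(path).metadata or {}
    return {
        "author": meta.get("/Author"),
        "producer": meta.get("/Producer"),     # software that wrote the file
        "created": meta.get("/CreationDate"),  # PDF date string, D:YYYYMMDD...
        "modified": meta.get("/ModDate"),
    }

print(cv_metadata_report("candidate_cv.pdf"))  # assumed local file
```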
-
Harsh truth: AI has opened up a Pandora's box of threats. The most concerning one? The ease with which AI can be used to create and spread misinformation.

Deepfakes (AI-generated content that portrays something false as reality) are becoming increasingly sophisticated and challenging to detect. Take the attached video: a fake video of Morgan Freeman that looks all too real.

AI poses a huge risk to brands and individuals, as malicious actors could use deepfakes to:
• Create false narratives about a company or its products
• Impersonate executives or employees to damage credibility
• Manipulate public perception through fake social media posts

The implications for PR professionals are enormous. How can we maintain trust and credibility in a world where seeing is no longer believing? The answer lies in proactive preparation and swift response. Here are some key strategies for navigating the AI misinformation minefield:

🔹 1. Educate your team: Ensure everyone understands the threat of deepfakes and how to spot potential fakes. Regular training is essential.
🔹 2. Monitor vigilantly: Keep a close eye on your brand's online presence. Use AI-powered tools to detect anomalies and potential threats.
🔹 3. Have a crisis plan: Develop a clear protocol for responding to AI-generated misinformation. Speed is critical to contain the spread.
🔹 4. Emphasize transparency: Build trust with your audience by being open and honest. Admit mistakes and correct misinformation promptly.
🔹 5. Invest in verification: Partner with experts who can help authenticate content and separate fact from fiction.

By staying informed, prepared, and proactive, PR professionals can navigate this new landscape and protect their brands' reputations. The key is to embrace AI as a tool while remaining vigilant against its potential misuse. With the right strategies in place, we can harness the power of AI to build stronger, more resilient brands in the face of the misinformation minefield.
-
Irony at its finest: The same man who gave everyone the tools to create perfect deepfake videos recently came out saying, "I am very nervous that we have an impending, significant fraud crisis." Interesting timing for a guy who co-founded the iris-scanning Worldcoin project while simultaneously building the AI tools that make voice and video impersonation trivial.

What Sam Altman might not realize is that regulated industries have been dealing with this "impending" crisis for years already, and we've built the infrastructure to defend against it. Here are 6 things I'd share with Sam Altman about the AI fraud crisis he's highlighting:

1. The Crisis Is Already Here. The AI fraud crisis isn't "coming very soon"; it's been happening since 2022. While OpenAI was perfecting generative AI, we've been tracking real-world fraud attacks using these exact tools. The timeline matters because defense strategies need to evolve with the threat.

2. Tools Have Dual Use by Design. Sora creates incredibly realistic videos from text prompts, which has amazing creative applications. But the same technology that lets filmmakers create content also enables bad actors to create convincing deepfakes. This dual-use reality is something the industry needs to grapple with more openly.

3. Authentication Needs to Evolve Beyond Single Points. Voice authentication alone is vulnerable, but so is any single-factor approach. The real opportunity is in layered verification systems that combine multiple biometric factors with behavioral analytics and real-time risk assessment (a toy risk-score sketch follows below). AI can defeat individual components, but sophisticated multi-layer approaches are much more resilient.

4. Centralized Biometric Data Creates New Risks. Worldcoin's iris-scanning approach aims to solve digital identity, but centralizing biometric data creates its own security challenges. The question isn't whether biometrics are good or bad, but how to architect identity systems that don't create single points of failure.

5. Defense Requires Proactive Architecture. When you say "AI has defeated most authentication methods," the response should be building systems that assume some level of compromise from day one. Zero Trust identity architecture with continuous verification isn't just buzzwords; it's a practical approach to this exact problem.

6. The Real Challenge Is Systemic. Individual fraud detection is important, but the bigger challenge is building a trust infrastructure that works at scale. This means thinking about identity verification as foundational internet infrastructure, not just a feature companies bolt on.

Bottom Line: The fraud crisis you're warning about is real and urgent. Let's focus on building the solutions together.
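Point 3's layered verification can be pictured as a weighted risk score over several signals, where no single factor decides the outcome. The signal names, weights, and thresholds below are illustrative assumptions, not a production policy.

```python
# Sketch of layered verification: a weighted score over several signals
# picks allow / step-up / deny. All weights and thresholds are invented.
SIGNALS = {                      # 0.0 = no concern, 1.0 = maximum concern
    "voice_match_risk": 0.2,     # a voiceprint alone is spoofable
    "face_liveness_risk": 0.3,
    "behavioral_risk": 0.3,      # typing cadence, navigation patterns
    "context_risk": 0.2,         # new device, odd hour, unusual amount
}

def risk_score(observations: dict) -> float:
    # A missing signal is treated as worst case rather than ignored.
    return sum(weight * observations.get(name, 1.0)
               for name, weight in SIGNALS.items())

def decide(observations: dict) -> str:
    score = risk_score(observations)
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step-up"   # require an extra, independent factor
    return "deny"

# A perfect voice match cannot rescue bad behavioral and context signals:
print(decide({"voice_match_risk": 0.0, "face_liveness_risk": 0.2,
              "behavioral_risk": 0.9, "context_risk": 0.9}))  # -> step-up
```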
-
Deepfakes have been a hot topic and on the radar of policymakers around the world, along with deepfake detection algorithms and watermarking. According to the latest Stanford Institute for Human-Centered Artificial Intelligence (HAI) AI Index Report, 181 AI-related bills were introduced in the US in 2023. At least half of these bills target deepfakes, with deepfake porn and the use of deepfakes in elections being the top concerns.

Meanwhile, big tech companies are developing their own watermarking methods and detection techniques. Current watermarking methods may accidentally hide the clues that detection algorithms look for in fake images, reducing their accuracy. To address this issue, this paper introduces "AdvMark", a new technique that modifies traditional watermarking so it works proactively with deepfake detection. Previous studies treated proactive watermark injection and passive deepfake detection as completely independent. AdvMark functions as a plug-and-play enhancement for existing watermark systems and accomplishes both provenance tracking and improved detectability.

For all the obvious benefits it promises, watermarking is not a panacea when it comes to fighting deepfakes. It's not 100% accurate, and both tailored AI regulation and adherence to industry standards play critical roles. Even then it's not enough, because what it all comes down to, in the mere second it takes to see an image, is critical thinking and the urge to dig deeper in search of truth and credible sources. Unfortunately, not everyone has the urge or the time.
-
As generative AI technology evolves, so do the tools designed to detect its usage. Yet, in the race to develop high-performing detection systems, a critical element risks being overlooked.

An idea I'll be championing at the International AI Safety Institutes convening tomorrow is the importance of setting benchmarks for detecting AI content (primarily in malicious and deceptive contexts) that reflect practical equity concerns as well as more typical metrics of technical effectiveness.

Effectiveness is often measured through technical benchmarks: accuracy, speed, scalability, and versatility. While these metrics are important, they fail to capture the full complexity of real-world use. At WITNESS, including in our global training work and our Deepfakes Rapid Response Force, we've consistently observed a noticeable gap between the technical capabilities of AI detection tools and their practical value in high-stakes situations globally. This "detection equity gap" is most pronounced in the Global Majority world.

In a new post highlighting upcoming WITNESS work, shirin anlen lays out 6 principles to determine if a tool is genuinely effective, noting that it must align not only with rigorous technical standards but also with the practical realities faced by those using it:

🌍 Real-World Challenges: Tools must be designed to adapt to imperfect conditions, or frontline users must be equipped with complementary resources to navigate unpredictable cases.
❓ Transparency and Explainability: Detection tools often provide binary results accompanied by confidence scores, but these alone are insufficient. To make results actionable and reliable, tools must also offer additional information, such as guidance on interpreting results, the types of manipulations the tool was trained to detect, information on the dataset used for training, and the tool's limitations, including how content quality may influence outcomes (a sketch of such a result follows below).
🫴 Accessibility: Technical excellence is meaningless if tools are inaccessible to diverse communities because of inadequacies in training data, requirements for expertise, or confusing interfaces.
⚖️ Fairness: The fairness of detection tools hinges on the fairness of their training data. Ensuring diverse and representative training data is essential for achieving fair and accurate detection outcomes.
💪 Durability: As deepfake technology evolves rapidly, detection tools must keep pace. Tools need to be designed with adaptability in mind, capable of responding to the fast-changing landscape of generative techniques.
👀 Contextualization with other skill sets: AI detection tools are frequently applied to complex, unpredictable content, where relying on them as standalone solutions often falls short. Instead, when possible, these tools should be treated as part of a broader verification process, not as a complete solution.

https://lnkd.in/etWtKivZ

#AI #detection #AISafety #deepfakes #synthetic
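To make the transparency principle concrete, here is a sketch of what a detection result carrying that context could look like as a data structure. All field names are illustrative inventions, not any real tool's API.

```python
# Sketch: a detection result that carries interpretive context instead of
# a bare verdict. Field names are illustrative, not a real tool's schema.
from dataclasses import dataclass, field

@dataclass
class DetectionReport:
    verdict: str                 # "likely-manipulated" / "likely-authentic" / "inconclusive"
    confidence: float            # 0..1, meaningful only within this tool
    manipulations_covered: list  # what the model was trained to detect
    training_data_note: str      # provenance / representativeness of training data
    limitations: list = field(default_factory=list)
    interpretation_guidance: str = ""

report = DetectionReport(
    verdict="inconclusive",
    confidence=0.55,
    manipulations_covered=["face swap", "lip-sync reenactment"],
    training_data_note="mostly high-resolution, well-lit source footage",
    limitations=["heavy compression degrades accuracy",
                 "audio-only clones not covered"],
    interpretation_guidance="treat as one signal inside a broader verification process",
)
print(report.verdict, report.limitations)
```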
-
This is one of the first reports I have seen on the risk and real-world examples of deepfakes. The Monetary Authority of Singapore (MAS) released a report last week that says that in the last 18 months, deepfake technology has evolved into a weapon. It says that financial institutions across Asia have reported multimillion-dollar losses from scams involving AI-generated video calls, fake documents, and impersonated executives. For example, the report says that one Hong Kong firm was tricked into transferring $25 million after a deepfake video conference featuring their CFO.

𝗪𝗵𝗮𝘁'𝘀 𝗵𝗮𝗽𝗽𝗲𝗻𝗶𝗻𝗴? According to MAS:
→ Deepfakes are now being used to defeat biometric authentication, impersonate trusted individuals, and spread misinformation that manipulates markets.
→ These attacks are no longer theoretical. They're global, sophisticated, and increasingly difficult to detect.
→ The financial sector is especially vulnerable due to its reliance on digital identity verification, remote onboarding, and high-value transactions.

𝗪𝗵𝗮𝘁 𝗹𝗲𝗮𝗱𝗲𝗿𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝗱𝗼 𝘁𝗼𝗱𝗮𝘆. Based on the best advice I've seen, here are a few recommendations:
→ Audit your biometric systems: Ensure liveness detection is in place. Test against deepfake samples regularly (a minimal testing-harness sketch follows below).
→ Train your teams: Run deepfake simulation exercises. Teach staff to spot signs of manipulated media and verify requests through trusted channels.
→ Strengthen high-risk processes: Add multi-factor authentication, separation of duties, and endpoint-level detection for privileged roles.
→ Monitor your brand: Use tools to detect impersonation attempts across social media, video platforms, and news outlets. (Check out Attack Surface Management and Threat Intelligence solutions.)
→ Update your incident response plans: Include deepfake scenarios. Establish rapid escalation channels and trusted communication pathways.
→ Collaborate: Share intelligence with peers, regulators, and ISACs. The threat is too complex for any one organization to tackle alone.

𝗔 𝗥𝗘𝗔𝗟 𝗘𝗫𝗔𝗠𝗣𝗟𝗘. Okay, just to prove this is real: here is a screenshot of a deepfake our team did almost 𝟮 𝘆𝗲𝗮𝗿𝘀 𝗮𝗴𝗼 using free software.
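The "test against deepfake samples regularly" advice can be operationalized as a small harness that tracks how often known fakes slip through your biometric check. In the sketch below, verify is a stand-in with an assumed internal liveness score, not a real biometric API; the metric is what matters.

```python
# Sketch: run a labeled set of genuine and deepfake samples through your
# verifier and track the deepfake acceptance rate over time.
def verify(sample: dict) -> bool:          # stand-in for a real biometric check
    return sample["liveness_score"] > 0.7  # assumed internal scoring

def deepfake_acceptance_rate(samples: list) -> float:
    """Fraction of known deepfakes the system wrongly accepts."""
    fakes = [s for s in samples if s["label"] == "deepfake"]
    accepted = sum(1 for s in fakes if verify(s))
    return accepted / len(fakes) if fakes else 0.0

test_set = [
    {"label": "genuine",  "liveness_score": 0.92},
    {"label": "deepfake", "liveness_score": 0.81},  # slips through
    {"label": "deepfake", "liveness_score": 0.40},
    {"label": "deepfake", "liveness_score": 0.66},
]
rate = deepfake_acceptance_rate(test_set)
print(f"deepfake acceptance rate: {rate:.0%}")  # 33% here -> investigate
```

Re-running the same labeled set after every model or vendor update turns "test regularly" into a trackable number rather than a one-off exercise.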