How Companies Are Addressing Deepfake Risks

Explore top LinkedIn content from expert professionals.

Summary

Deepfakes are hyper-realistic audio, video, or image forgeries created by artificial intelligence that can convincingly mimic real people. Companies are urgently responding to deepfake risks because scammers are using these convincing fakes to steal money, compromise sensitive information, and erode trust in leadership communications.

  • Implement verification steps: Require secondary confirmation for financial transactions and sensitive requests, such as calling back the requester using a trusted number or using known safe words.
  • Train and empower staff: Regularly teach employees how to spot deepfake red flags and encourage them to challenge unusual requests, even if they appear to come from top executives.
  • Adopt advanced detection tools: Invest in AI-powered software that can analyze video and audio for signs of manipulation and set up clear reporting channels for suspicious incidents.
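The "known safe words" idea above can be illustrated with a tiny sketch. This is a hypothetical example (the phrase, salt, and function names are invented for illustration); a real deployment would store only a salted hash of the phrase and compare it in constant time, as shown:

```python
import hashlib
import hmac

# Hypothetical safe-word check. Store only a salted hash of the phrase,
# never the phrase itself, and compare in constant time.
SALT = b"example-salt"  # illustrative; use a random per-deployment salt
STORED_HASH = hashlib.sha256(SALT + b"blue heron at dawn").hexdigest()

def safe_word_matches(candidate: str) -> bool:
    candidate_hash = hashlib.sha256(SALT + candidate.encode()).hexdigest()
    # hmac.compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(candidate_hash, STORED_HASH)

print(safe_word_matches("blue heron at dawn"))    # True
print(safe_word_matches("blue herring at dawn"))  # False
```

The point of the design is that even if the verification system is compromised, the attacker recovers only a hash, not the safe word itself.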
Summarized by AI based on LinkedIn member posts
  • View profile for Matthew Hedger

    Financial Crime and AML Consultant | Former CIA Officer | Keynote Speaker and Expert in Anti-Money Laundering, Insider Risk and Organized Crime.

    5,251 followers

    Inside the Laundromat #23: Generative AI & Deepfake Fraud in Banking

    Deloitte highlighted a 700% increase in deepfake incidents in fintech during 2023, with audio deepfakes posing especially serious risks to banks and clients. Generative AI is making it cheaper and easier to clone voices or videos. In North America alone, deepfake-enabled fraud surged 1,740% between 2022 and 2023, and Q1 2025 fraud losses topped $200 million.

    Real-World Hits: Engineering firm Arup lost $25 million when attackers used a deepfake version of its CFO during a video call to authorize transfers. Similar CEO-impersonation scams hit multiple FTSE-listed companies, with criminals initiating fake WhatsApp messages followed by voice-cloned instructions to move funds.

    Why the system is still behind: Traditional risk systems, built on business rules, aren't designed for synthetic AI fraud. Deloitte warns that risk frameworks in many banks aren't equipped for generative AI threats.

    The Prescription
    🔹 Banks must invest in threat-based programs to detect anomalies and deepfake behavior.
    🔹 Employee training is key: staff should be taught to spot red flags in audiovisual interactions.
    🔹 Firms need to hire or reskill to build deepfake detection capabilities.

    Why This Matters for Financial Institutions: GenAI doesn't just automate content; it enables entirely new methods of impersonation. Deepfakes amplify traditional social engineering by layering it with hyper-realistic audiovisual deception, which drastically raises the bar for fraud prevention and detection.

    Recommended Moves:
    🔹 Simulate deepfake scams in phishing drills; make them realistic and test audio/video angles.
    🔹 Red-team AI-voice attacks: produce mocks of your execs' voices to train both tech and teams.
    🔹 Deploy real-time detection tools that analyze video/audio integrity using watermarking or anomaly detection.
    🔹 Policy overhaul: draft protocols for verifying suspicious requests via secondary channels (e.g., confirmed calls or in-person signoff).
    🔹 Cross-industry collaboration: share deepfake attack intelligence with other firms and regulators.

    What's Next?
    🔹 AI fraud losses may hit $11.5 billion in the U.S. within four years, driven by GenAI phishing and impersonation attacks.
    🔹 Regulatory shifts (e.g., the EU AI Act) are on the horizon, pushing for transparency, watermarking, and auditability in synthetic media.

    Bottom line: Deepfake fraud is no longer futuristic fiction. It's happening right now, and banks are still scrambling to catch up. Protecting clients and assets means thinking like the fraudster, then enacting plans to get ahead and stay ahead.

    #InsideTheLaundromat #FinancialCrime #DeepfakeFraud #AIFraud #VoiceCloning #SyntheticIdentity #BankFraud #GenerativeAI #ImpersonationFraud #FraudDetection

  • View profile for Jason Rebholz

    Securing the agentic workforce | Co-founder & CEO at Evoke Security | Former CISO & IR leader

    32,163 followers

    There's more to the $25 million deepfake story than what you see in the headlines. I pulled the original story to get the full scoop. Here are the steps the scammers took:

    1. The scammers sent a phishing email to up to three finance employees in mid-January, saying a "secret transaction" had to be done.
    2. One of the finance employees fell for the phishing email. This led to the scammers inviting the finance employee to a video conference. The video conference included what appeared to be the company CFO, other staff, and some unknown outsiders. This was the deepfake technology at work, mimicking employees' faces and voices.
    3. On the group video conference, the scammers asked the finance employee to do a self-introduction but never interacted with them. This limited the likelihood of getting caught. Instead, the scammers just gave orders from a script and moved on to the next phase of the attack.
    4. The scammers followed up with the victim via instant messaging, emails, and one-on-one video calls using deepfakes.
    5. The finance employee then made 15 transfers totaling $25.6 million USD.

    As you can see, deepfakes were a key tool for the attackers, but persistence was critical here too. The scammers did not let up and did all they could to pressure the individual into transferring the funds.

    So, what can businesses do to mitigate this type of attack in the age of deepfakes?

    - Always report suspicious phishing emails to your security team. In this context, the other phished employees could have been an early warning that something weird was happening.
    - Trust your gut. The finance employee reported a "moment of doubt" but ultimately went forward with the transfer after the video call and persistence. If something doesn't feel right, slow down and verify.
    - Lean into out-of-band authentication for verification. Use a known good method of contact with the individual to verify the legitimacy of a transaction.
    - Explore technology-driven identity verification platforms for high-dollar wire transfers. This can help reduce the chance of human error.

    And one of the best pieces of advice I saw was from Nate Lee yesterday, who called out building a culture where your employees are empowered to verify transaction requests. Nate said the following:

    "The CEO/CFO and everyone with power to transfer money needs to be aligned on and communicate the above. You want to ensure the person doing the transfer doesn't feel that by asking for additional validation they're pushing back against or acting in a way that signals they don't trust the leader."

    Stay safe (and real) out there.

    ------------------------------
    📝 Interested in leveling up your security knowledge? Sign up for my weekly newsletter using the blog link at the top of this post.

  • View profile for Jodi Daniels

    Practical Privacy Advisor / Fractional Privacy Officer / AI Governance / WSJ Best Selling Author / Keynote Speaker

    20,614 followers

    Fraud no longer hides in the shadows. It might show up disguised as someone you know. Like when the CEO calls and her voice on the phone sounds exactly right. Her urgency feels real, and the wire transfer request to a new bank account seems legitimate, so accounting releases the funds. And just like that, the company loses $20k to a fraudster who weaponized AI.

    This isn't science fiction. It's happening right now to individuals and organizations alike. Fraudsters are creating disturbingly real AI deepfakes that can fool even the most cautious people. And companies need strategies to combat them, because the audio and visual cues we've relied on for decades are no longer reliable indicators of authenticity.

    Organizations can fight back with these defense strategies:
    ✔ Stay cautious and be wary of anyone requesting money or personal information, even if they look or sound like someone you trust.
    ✔ Don't send money or share sensitive data in response to a single phone or video call. Phone numbers can be spoofed, so always verify a person's identity by contacting them separately at a number you trust.
    ✔ Use small action requests, like asking a person to turn their head, blink repeatedly, or hum a song while on a video or phone call. If they decline, freeze up, or go silent, it could be a fraudster.
    ✔ Establish a safe word that only your inner circle knows to confirm the identity of someone claiming to be a colleague, family member, or friend.
    ✔ Use strong passwords. Enable multifactor authentication (MFA) on all company devices and accounts whenever possible.

    And don't forget to report AI deepfakes to law enforcement and to any relevant social media channels, websites, and other platforms where the encounter took place.

    All of these tips work for individuals too, because hackers like causing havoc with anyone they can. The question isn't whether AI deepfakes will target your organization. It's whether your organization will be ready when they do.

    Food for thought as we kick off Cybersecurity Awareness Month.

    ♻ Share our infographic to help companies combat AI deepfakes.

  • View profile for Vikram Kharvi

    CEO - Bloomingdale PR | Fractional CMO - ANSSI Wellness | Founder - Vikypedia.com | Elevating Brands with a Strategic Blend of Marketing Communications

    32,583 followers

    Deepfakes aren't a tech story. They're a trust story.

    A few days ago, a doctor in Hyderabad lost money to a #deepfake video that showed a cabinet minister "endorsing" an investment scheme on #Instagram. If that sounds distant, it isn't. This is the new fraud funnel: authority, urgency, proof... all manufactured at scale.

    As #communicators and leaders, we can't outsource this to compliance or IT. #Trust is now an operational KPI.

    What do we as communicators need to do?
    • Treat digital hygiene like fire safety. Run quarterly drills that teach people how fakes travel and how to report them.
    • Publish an authenticity sheet. List official handles, verified domains, escalation numbers, and a simple "how to verify" flow for customers and employees.
    • Watermark outbound content and adopt content credentials where possible. Make the real easier to prove than the fake is to spread.
    • Rewrite influencer and media contracts with an "authenticity clause" and takedown SLAs. If your face or footage is misused, minutes matter.
    • Stand up a rapid debunk protocol. Pre-approved copy, visuals, spokespeople, and a single public link that carries all corrections.
    • Close the platform loop. Nominate a trust lead who keeps warm lines with platform policy teams so your takedown requests don't start cold.

    Silence helps the scammer. Clarity helps the vulnerable.

    What would you add to this deepfake playbook? If you've seen a convincing fake lately, share it below and let's decode why it worked.

    #digitalsafety #misinformation #brandprotection #reputationmanagement #contentauthenticity #aiethics #factchecking #onlinescams #communications
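The "make the real easier to prove" idea can be sketched with a simple signed-content check. This is an illustrative shared-secret HMAC scheme only; real content credentials use standards like C2PA with public-key signatures, and the key and byte strings here are invented for the example:

```python
import hashlib
import hmac

# Illustrative only: real provenance systems (e.g., C2PA) use public-key
# signatures so anyone can verify without holding a secret.
SIGNING_KEY = b"example-org-signing-key"  # hypothetical secret

def sign_content(content: bytes) -> str:
    """Attach a verifiable tag to outbound content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that content carrying this tag has not been altered."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video_bytes = b"official CEO statement, 2025-10-01"
tag = sign_content(video_bytes)
print(verify_content(video_bytes, tag))            # True
print(verify_content(b"tampered statement", tag))  # False
```

The asymmetry is the point: a fake can imitate a face, but it cannot produce a valid tag for content the organization never published.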

  • View profile for Greg Jones

    Helping founders remove the bottlenecks capping revenue and time — without hiring more people or working longer hours | The Elite Business Strategist | Founders Freedom™

    6,064 followers

    $25.6 million lost in 30 minutes. The CFO was fake. The Zoom call was real.

    That's not a movie script. It's 2025 reality. At Arup, a finance professional wired $25.6M after a video call with what he thought was his CFO and colleagues. They were all deepfakes.

    And Arup isn't alone. Ferrari recently faced a real-time voice clone of its CEO, Benedetto Vigna, used in an attempted acquisition scam. The impersonation was so convincing it almost worked, until an executive challenged the fake CEO with a question only the real one could answer.

    I've spent over 25 years in computer forensics and cybersecurity, and I can tell you this: AI-powered deepfake scams are now among the most dangerous, trust-shattering threats enterprises face.

    The Escalating Reality of Executive Deepfakes:
    • WSJ (Aug 2025): Fraudsters are spoofing CEOs' voices and faces in real time.
    • In Q1 2025, businesses lost $200M+ to executive deepfakes. By mid-year, losses hit $410M.
    • U.S. projections: $40B in AI fraud losses by 2027.
    • 51% of cybersecurity professionals report their companies have already been targeted.

    Has your company's board ever discussed this threat? (Most haven't.)

    *Why Deepfakes Are Different*
    Traditional phishing relies on red flags: misspellings, bad links, odd domains. Deepfakes weaponize trust itself:
    • A "CEO" answering you live on Zoom.
    • A "CFO" giving urgent instructions.
    • Realistic tone, cadence, and facial expressions.

    DeepStrike reports a 900% increase in attack volume year over year. ID fraud using deepfakes surged 3,000% in 2023.

    The Cost of Inaction:
    • Average loss per incident: $500K
    • Major enterprise events: $25M+
    • Cumulative losses since 2019: nearly $900M (+400% in just 18 months)

    But the biggest loss isn't money; it's trust in leadership communication. If employees can't trust a CEO's face or voice, every critical decision slows down, or worse, gets manipulated.

    What Boards Must Do Now:
    1. Verification First – Multi-channel confirmation for sensitive actions, no matter how urgent.
    2. Deploy Detection – AI tools that flag anomalies in audio and video.
    3. Board & Finance Training – Equip teams to challenge requests that feel even slightly off.
    4. Zero-Trust Communication – Treat executive voice and video as potentially compromised.

    *Closing Perspective*
    At Mandiant Labs, I learned one lesson: attackers don't wait for regulation. They exploit gaps long before governments catch up. That's what's happening now. The EU AI Act and U.S. AI bills are slow. Deepfake attackers are moving at AI speed.

    The question is no longer "Could this happen to us?" It's "When, and will we be ready?"

    Greg Jones
    Founder & Principal, PRIMSEC
    Advisor to enterprise leaders on organizational and cybersecurity strategy, insider threats, and AI-driven security architecture

    Your Turn: Is your board prepared for deepfake CEO fraud? Comment with your company's first line of defense and share this post so your CFO and leadership team see it before it's too late.
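The "verification first" principle, confirmation over multiple channels with none of them being the channel the request arrived on, can be expressed as a small policy check. A minimal sketch under assumed channel names and a hypothetical two-confirmation policy:

```python
# Hypothetical "verification first" gate: a sensitive action proceeds only
# when confirmed over independent channels. Channel names are illustrative.
REQUIRED_CONFIRMATIONS = 2

def may_proceed(request_channel: str, confirmed_channels: set[str]) -> bool:
    """The channel the request arrived on never counts as verification,
    since a deepfake controls that channel by definition."""
    independent = confirmed_channels - {request_channel}
    return len(independent) >= REQUIRED_CONFIRMATIONS

# A live video call alone, however convincing, is not verification:
print(may_proceed("zoom", {"zoom"}))                   # False
print(may_proceed("zoom", {"zoom", "callback"}))       # False
print(may_proceed("zoom", {"callback", "in_person"}))  # True
```

This encodes the zero-trust stance from the list above: executive voice and video are treated as potentially compromised inputs, not proof.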

  • View profile for Anna Stylianou

    AML & Anti-Financial Crime Advisor | Governance & risk oversight | Complex case assessments | Practical AML training

    51,188 followers

    Imagine this:

    You receive an email from your company's Chief Financial Officer. It's marked confidential. It mentions a sensitive transaction that needs to be handled discreetly.

    You're suspicious. It sounds unusual, possibly phishing. But then you're invited to a video call to discuss it.

    You join. On screen, you see the CFO. You see other members of management. People you recognise. Voices you know. Everyone looks and sounds exactly as they should. Your doubts begin to fade.

    You authorise a transfer. $25 million.

    Days later, you check in with the head office to confirm everything went through. What? Who? When? The company's management never sent a message. The meeting never happened. The people on the call weren't real. And the money is gone.

    This isn't a hypothetical risk. It happened. A finance employee at a multinational firm in Hong Kong was tricked into wiring $25 million after attending a video call where every participant, including the CFO, was a deepfake.

    What is a deepfake? It's a highly sophisticated type of fraud where AI-generated video and audio are designed to mimic real people, in real time.

    How it works:
    ↳ Scammers collect publicly available footage and train AI to replicate people's speech, tone, and behaviour.
    ↳ They create meetings imitating the face and voice of people the victim trusts.
    ↳ They trick their victim into performing transactions.

    As deepfake technology evolves year over year, it's becoming harder to differentiate what is real from what is fake, no matter how educated you are. This is not about weak passwords or bad policies. It's about trust being manipulated with precision.

    What can financial institutions do to protect themselves from deepfake fraud?
    ↳ Train teams to recognise social engineering, even when it looks and sounds familiar.
    ↳ Don't rely on voice or video alone for verification.
    ↳ Use multi-step approvals for sensitive transactions.
    ↳ Add deepfake risks to your fraud response and incident procedures.
    ↳ Monitor communication patterns that deviate from normal practice.
    ↳ Ensure escalation paths are accessible and respected, even when urgency is claimed.

    This scam didn't succeed because the employee wasn't careful. It succeeded because the tools of deception are evolving faster than most internal controls.

    Be alert, educate your team, and take care!

  • View profile for Christian Hyatt

    CEO & Co-Founder @ risk3sixty | Security, Compliance, and AI Built for CISOs

    48,629 followers

    This is one of the first reports I have seen on the risk and real-world examples of deepfakes.

    The Monetary Authority of Singapore (MAS) released a report last week saying that in the last 18 months, deepfake technology has evolved into a weapon. It says financial institutions across Asia have reported multimillion-dollar losses from scams involving AI-generated video calls, fake documents, and impersonated executives. For example, the report says that one Hong Kong firm was tricked into transferring $25 million after a deepfake video conference featuring their CFO.

    𝗪𝗵𝗮𝘁'𝘀 𝗵𝗮𝗽𝗽𝗲𝗻𝗶𝗻𝗴?
    According to MAS:
    → Deepfakes are now being used to defeat biometric authentication, impersonate trusted individuals, and spread misinformation that manipulates markets.
    → These attacks are no longer theoretical. They're global, sophisticated, and increasingly difficult to detect.
    → The financial sector is especially vulnerable due to its reliance on digital identity verification, remote onboarding, and high-value transactions.

    𝗪𝗵𝗮𝘁 𝗹𝗲𝗮𝗱𝗲𝗿𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝗱𝗼 𝘁𝗼𝗱𝗮𝘆
    Based on the best advice I've seen, here are a few recommendations:
    → Audit your biometric systems: Ensure liveness detection is in place. Test against deepfake samples regularly.
    → Train your teams: Run deepfake simulation exercises. Teach staff to spot signs of manipulated media and verify requests through trusted channels.
    → Strengthen high-risk processes: Add multi-factor authentication, separation of duties, and endpoint-level detection for privileged roles.
    → Monitor your brand: Use tools to detect impersonation attempts across social media, video platforms, and news outlets. (Check out Attack Surface Management and Threat Intelligence solutions.)
    → Update your incident response plans: Include deepfake scenarios. Establish rapid escalation channels and trusted communication pathways.
    → Collaborate: Share intelligence with peers, regulators, and ISACs. The threat is too complex for any one organization to tackle alone.

    ---
    𝗔 𝗥𝗘𝗔𝗟 𝗘𝗫𝗔𝗠𝗣𝗟𝗘
    Okay, just to prove this is real: here is a screenshot of a deepfake our team made almost 𝟮 𝘆𝗲𝗮𝗿𝘀 𝗮𝗴𝗼 using free software.

  • View profile for Adnan Amjad

    US Cyber Leader at Deloitte

    4,349 followers

    Deepfake-related fraud is increasingly pervasive. Singular points of security are no longer reliable enough, especially in high-stakes environments like financial services organizations, as a recent Wall Street Journal article featuring Deloitte's Anish Srivastava explains (https://deloi.tt/4nlto2c).

    To address these complex and evolving threats, banks and financial institutions should implement multi-layered "defense-in-depth" security strategies that can proactively detect, mitigate, and respond to deepfake threats and restore trust. Organizations can implement multiple layers of security to protect against deepfakes, including secure user onboarding, contextual analysis, media liveness confirmation, strong authentication and session-binding measures, and deepfake detection AI.

    Maintaining deepfake protection requires ongoing employee training, regular security audits, continuous monitoring of emerging threats, and prompt response to incidents.

  • View profile for Jeffrey W. Brown

    Chief Security Advisor for Financial Services at Microsoft, Author & NACD certified boardroom director Helping CISOs Turn AI & Cybersecurity Risk into Strategic Advantage

    12,331 followers

    Your voice. Your face. Your brand. All weaponized.

    The Wall Street Journal reports a sharp rise in CEO impersonator scams fueled by AI deepfakes. Fraudsters train AI on public speeches and podcasts; even a few minutes of clean video is enough. The fakes can respond in real time, mimicking tone and cadence. If that isn't concerning enough, executive deepfake scams caused $200M in losses in Q1 2025 in the US alone.

    Executives with a public presence are prime targets. As Guardio's Nati Tal put it: "A few minutes of clean audio or video is extremely valuable to anyone looking to create these scams."

    There are no silver bullets, but leaders should:
    → Verify the human, not the request: callbacks, signed Teams/Zoom, rotating code phrases.
    → Lock down money movement: dual control for wires and vendor changes; never approve via SMS, WhatsApp, or email alone.
    → Break the urgency spell: a mandatory 10–15 minute "cool-off" period before approvals.

    Deepfakes aren't just a tech problem. They're a social engineering problem. Remember that:
    ✔️ Detection tools fail.
    ✔️ Strong authentication beats trusting faces or voices.
    ✔️ Discipline in business processes is the real defense.

    If your people haven't been targeted yet, they will be. The only question: when the fake you comes calling... will they pause, verify, and protect?

    Read the full story (paywalled): https://lnkd.in/endnJ9sg
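The mandatory cool-off period this post recommends is simple to enforce in software. A minimal sketch (the 15-minute window and function names are illustrative assumptions, not from the article):

```python
from datetime import datetime, timedelta

# Hypothetical "cool-off" gate: an approval is rejected if it arrives too
# soon after the request, defeating the manufactured urgency scammers rely on.
COOL_OFF = timedelta(minutes=15)

def approval_allowed(requested_at: datetime, approved_at: datetime) -> bool:
    """Accept only approvals made after the cool-off window has elapsed."""
    return approved_at - requested_at >= COOL_OFF

t0 = datetime(2025, 1, 15, 9, 0)
print(approval_allowed(t0, t0 + timedelta(minutes=5)))   # False (too fast)
print(approval_allowed(t0, t0 + timedelta(minutes=20)))  # True
```

The delay costs little on legitimate transfers but gives a pressured employee time to step out of the call and verify through another channel.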

  • View profile for Fritz Hesse

    Chief Technology Officer at Riskonnect

    3,122 followers

    👀 Two years ago, AI gave us "Will Smith eating spaghetti." Today, it can fool a CFO.

    Gen AI video is evolving at lightning speed, and so are the risks. In just months, Google's Veo 3 can already create hyper-realistic videos with professional sound. What once took studios can now be done by describing your vision to AI. But the same tools fueling creativity also amplify danger:

    • Remember the infamous "Will Smith eating spaghetti" deepfake? Back then it was laughable. Today, AI videos are nearly flawless.
    • Parents are being warned about ransom scams using fake videos of their kids.
    • Arup lost $25M when an employee was tricked by a deepfake CFO on a video call.

    The numbers are scary:
    • Deloitte projects U.S. fraud losses from Gen AI rising from $12.3B (2023) to $40B (2027).
    • Gartner predicts 30% of enterprises will abandon face biometrics by 2026 due to deepfake attacks.

    So what can leaders do?
    • Pair every new tech adoption with matching security protocols.
    • Use detection tools inside everyday video and voice workflows.
    • Refresh AI policies and run regular employee training.
    • Stand up task forces to anticipate and mitigate risks.

    The creative potential of Gen AI video is huge, but so is the exposure.

    👉 How are you preparing your teams to handle this new deepfake era?

    #AI #deepfakes #riskmanagement
