There’s a good chance that the shocking rate at which AI is advancing is outpacing your cybersecurity training, policies, and perhaps even your technologies. Have you addressed the use of AI and deepfakes in your cybersecurity policies?

In a recent and alarming development that seems to have leapt straight from the pages of a science fiction novel, a Hong Kong-based finance worker at a multinational firm was defrauded of $25 million, falling victim to an elaborate scam that employed deepfake technology to impersonate the company's CFO. This incident, which unfolded during a video conference call, marks a disturbing milestone in the intersection of cybercrime and AI, underscoring the urgent need for companies to bolster their cybersecurity frameworks against deepfake technology.

The mechanics of the scam were deceptively simple yet devastatingly effective. The finance employee was lured into a video call with several participants believed to be colleagues and the CFO, only to discover later that every participant was a digital fabrication. The deepfake avatars, mirroring the appearance and voices of real company personnel, instructed the employee to initiate a "secret transaction", leading to the unauthorised transfer of $25.6 million.

This incident is not an isolated event but a harbinger of the threats posed by AI-driven disinformation and fraud. The use of deepfake technology to bypass facial recognition software, impersonate individuals for fraudulent purposes, and undermine the integrity of personal and corporate identities presents a clear and present danger. The Hong Kong case, in which fraudsters manipulated digital identities to orchestrate financial theft, exemplifies the sophistication of contemporary cybercrime. The implications of this event extend far beyond the immediate financial loss.
It serves as a stark reminder of the vulnerabilities inherent in digital communication platforms and the necessity for robust verification processes. The reliance on video conferencing and digital communication, accelerated by the global pandemic, has exposed systemic weaknesses ripe for exploitation.

In response to this escalating threat, companies must adopt comprehensive cybersecurity strategies that address the unique challenges posed by deepfake technology. This includes implementing advanced authentication protocols, training employees on the risks of deepfakes, and deploying AI-driven security measures capable of detecting and neutralising synthetic media.

As AI outputs become increasingly indistinguishable from reality, the line between authentic and artificial communication will blur, challenging individuals and organisations to navigate a new frontier of digital authenticity. It compels a re-evaluation of the assumptions underpinning digital trust and identity verification, urging a proactive approach to cyber defence.
Deepfake Technology Issues
-
This is a significant move in consumer deepfake protection: Chinese smartphone brand Honor has introduced native deepfake detection for video calls. Announced last year but globally available from April, Honor claims it can identify suspected synthetic content in live video calls within six seconds. Using continuous frame-by-frame monitoring, Honor's detection analyses discrepancies in "eye contact, lighting, image clarity, and video playback". If suspected synthetic content is detected, users automatically receive a pop-up warning, much like anti-virus software or the alert a web browser shows when you visit a site without a valid SSL certificate.

The anti-virus framing is understandably appealing: a seamless (but not infallible) protective layer between users and content on social media, video calls, or even suspected AI-generated emails. It's encouraging to see big consumer tech companies taking the risk of deepfakes seriously and looking to protect users with this integrated approach, but caveats still apply:
🔎 It's unclear how the increasing use of filters or other benign synthetic effects may affect alerts and detection.
🔎 No reliability benchmark has been shared, nor any red teaming or robustness testing. As usual, unreliable and unevolving detection often does more harm than good...
🔎 Research is still needed to understand whether these notifications are meaningful interventions in a live conversational context. Too many false positives and the 'crying wolf' effect may feed notification fatigue.

Still, I'm confident Honor won't be the last smartphone company to introduce native detection capabilities. Deepfake fraud numbers have skyrocketed (one study found a 2,137% increase over the last three years), and AI-generated content continues to grow more pervasive and sophisticated.
I wouldn't be surprised if these features become key product differentiators moving forward, particularly for corporate customers where security is the ultimate priority.
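Honor hasn't published implementation details, but the continuous frame-by-frame monitoring described above can be sketched as a rolling-window check over per-frame anomaly scores. Everything below is an illustrative assumption, not Honor's actual design; the scores would come from some detection model analysing cues like eye contact, lighting, and image clarity:

```python
from collections import deque

def monitor_call(frame_scores, fps=30, window_seconds=6, threshold=0.8):
    """Flag a live call once the rolling mean of per-frame anomaly scores
    (hypothetically in [0, 1], from a detection model) exceeds a threshold
    over a roughly six-second window of frames."""
    window = deque(maxlen=fps * window_seconds)
    for score in frame_scores:
        window.append(score)
        # Only alert once a full window of frames has been observed,
        # so a single glitchy frame cannot trigger the warning.
        if len(window) == window.maxlen and sum(window) / len(window) > threshold:
            return True  # would trigger the user-facing pop-up warning
    return False
```

A windowed average like this trades sensitivity for fewer false positives, which matters given the notification-fatigue concern above.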
-
AI PR Nightmares Part 2: When AI Clones Voices, Faces, and Authority.

What Happened: Last week, a sophisticated AI-driven impersonation targeted White House Chief of Staff Susie Wiles. An unknown actor, using advanced AI-generated voice cloning, began contacting high-profile Republicans and business leaders, posing as Wiles. The impersonator requested sensitive information, including lists of potential presidential pardon candidates and even cash transfers. The messages were convincing enough that some recipients engaged before realizing the deception. Wiles' personal cellphone contacts were reportedly compromised, giving the impersonator access to a network of influential individuals.

This incident underscores a growing threat: AI-generated deepfakes are becoming increasingly realistic and accessible, enabling malicious actors to impersonate individuals with frightening accuracy. From cloned voices to authentic-looking fabricated videos, the potential for misuse spans politics, finance, and far beyond. And it needs your attention now.

🔍 The Implications for PR and Issues Management: As AI-generated impersonations become more prevalent, organizations must proactively address the associated risks as part of their ongoing crisis planning. Here are key considerations:
1. Implement New Verification Protocols: Establish multi-factor authentication for communications, especially those involving sensitive requests. Encourage stakeholders to verify unusual requests through secondary channels.
2. Educate Constituents: Conduct training sessions to raise awareness about deepfake technologies and the signs of AI-generated impersonations. An informed network is a critical defense.
3. Develop a Deepfakes Crisis Plan: Prepare for potential deepfake incidents with a clear action plan, including communication strategies to address stakeholders and the public promptly.
4. Monitor Digital Channels: Use your monitoring tools to detect unauthorized use of your organization's or executives' likenesses online. Early detection and action can mitigate damage.
5. Collaborate with Authorities: In the event of an impersonation, work closely with law enforcement and cybersecurity experts to investigate and respond effectively.

The rise of AI-driven impersonations is not a distant threat; it's a current reality, and it will only get worse as the tech becomes more sophisticated. If you want to think and talk more about how to prepare for this and other AI-related PR and issues-management topics, follow along with my series, or DM me if I can help your organization prepare or respond.
-
The New Corporate Threat: Deepfakes That Even Experts Can't Detect

Welcome to the new reality where AI doesn't just generate content; it manufactures convincing lies. You've probably seen it:
- A CEO announces a fake acquisition.
- A politician "says" something they never did.
- A voice note "from your boss" requests a fund transfer.
It all looks real. But it's not. It's a deepfake: AI-generated audio, video, or images designed to deceive.

Why it matters: Deepfakes are no longer just internet tricks or entertainment. They're now:
- Financial fraud enablers (voice clones used to scam employees)
- Corporate risk vectors (fake news impacting stock prices)
- Political weapons (manipulated clips used to sway public opinion)
- Personal threats (identity misuse, blackmail, defamation)

How to spot a deepfake. Look for:
- Unnatural blinking or awkward lip sync
- Plastic skin or weird lighting
- Robotic tone or emotionless speech
- Out-of-character statements
- No credible source backing the video
If it feels off, it probably is.

What you can do:
- Pause before sharing
- Use tools like Deepware, Microsoft Video Authenticator, or Adobe Verify
- Train your teams, especially PR, legal, and finance
- Push for content provenance in your organization

In the GenAI era, trust is currency. Don't spend it on content you didn't verify. #artificialintelligence
-
What happens when deepfake technology becomes a service anyone can buy? I've been tracking the deepfakes-as-a-service market, and the numbers are alarming. Deepfake fraud attempts jumped 1,300% in 2024, from one attack per month to seven per day.

Here's what keeps me up at night: the February 2024 Arup case. A finance employee joined a video call with the CFO and several colleagues. Everyone looked real. Everyone sounded real. The employee authorized $25.6 million in wire transfers. Every single person on that call was AI-generated.

This wasn't some nation-state operation. Underground marketplaces now offer deepfake creation as a point-and-click service. No technical skills required. Just cryptocurrency and malicious intent.

The psychology is what makes it work. We're wired to trust what we see and hear, especially when it matches our expectations. A realistic video of your CFO making a familiar request triggers immediate credibility. By the time you think to question it, the money's gone.

Traditional defenses aren't enough anymore:
→ Voice verification systems can be defeated
→ Video calls don't guarantee authenticity
→ Even following verification procedures can fail

Organizations need multi-channel verification protocols. If someone requests a wire transfer on video, verify through a completely separate channel. Code words. Challenge-response systems. Procedural friction on high-risk transactions.

But here's the problem: 99% of security leaders say they're confident in their deepfake defenses, yet only 8.4% actually scored above 80% in detection tests. We think we're protected when we're actually vulnerable.

Have you updated your verification procedures for the deepfake era? #Cybersecurity #AISecurity #DeepfakeFraud #DigitalRisk #FraudPrevention
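The challenge-response idea mentioned above can be sketched in a few lines. This is a minimal illustration, assuming a secret pre-shared over a trusted channel during onboarding; a real control would add expiry, audit logging, and procedural sign-off on top:

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> str:
    # One-time nonce, delivered over a SEPARATE channel (e.g. a
    # pre-registered phone number), never over the video call where
    # the wire-transfer request was made.
    return secrets.token_hex(8)

def expected_response(shared_secret: bytes, challenge: str) -> str:
    # Both parties derive the answer from a pre-shared secret, so an
    # attacker who only controls the video/audio feed cannot respond.
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify_requester(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected_response(shared_secret, challenge), response)
```

The point of the design is that authenticity no longer depends on how the requester looks or sounds, only on possession of the secret, which is exactly the property a deepfake cannot forge.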
-
🧾 Employees using AI to create fraudulent expense receipts
🤖 Fake or otherwise malicious "candidates" using deepfakes to hide their true identity in remote interviews until they get far enough in the process to hack your data
🎣 AI-powered phishing scams that are more sophisticated than ever

Over the past few months, I've had to come to terms with the fact that this is our new reality. AI is here, and it is more powerful than ever. HR professionals who continue to bury their heads in the sand, or who stand by "enabling" others without actually educating themselves, are going to unleash serious risks and oversights across their companies. HR professionals looking to stay on top of the increased risk introduced by AI need to lean into curiosity, education, and intentionality.

For the record: I'm not anti-AI. AI has helped and will continue to help increase output, optimize efficiencies, and free up employees' time for creative and energizing work instead of mind-numbing, repetitive, energy-draining work that leads to burnout. But it's not without its risks. AI-powered fraud is real, and as HR professionals, it's our job to educate ourselves, and our employees, on the risks involved and how to mitigate them.

Not sure where to start? Consider the following:
📚 Educate yourself on the basics of what AI can do, and partner with your broader HR, Legal, and #Compliance teams to create a plan to share knowledge and stay aware of new risks and AI-related cases of fraud, cyber hacking, etc. (It could be as simple as starting a Slack channel, signing up for a newsletter, or subscribing to an AI-focused podcast.)
📑 Re-evaluate, update, and create new policies as necessary to make sure you're addressing these new risks, including policies on proper and improper AI usage at work (I'll link our AI policy template below).
🧑💻 Re-evaluate, update, and roll out new trainings as necessary.
Your hiring managers need to be aware of the increase in AI-powered candidate fraud we're seeing across recruitment, how to spot it, and whom to inform. Your employees need to know about the increased sophistication of #phishing scams and how to identify and report them.

For anyone looking for resources to get started, here are a few I recommend:
AI policy template: https://lnkd.in/e-F_A9hW
AI training sample: https://lnkd.in/e8txAWjC
AI phishing simulators: https://lnkd.in/eiux4QkN

What big new scary #AI risks have you been seeing?
-
Microsoft's case against illicit AI developers confirms what we at Reality Defender have tracked for years: deepfake impersonation has evolved from a theoretical concern into a sophisticated criminal enterprise targeting vulnerable individuals daily, and far more frequently than a year ago. While those of us with good BS detectors (and, yes, inference-based deepfake detection) can spot celebrity deepfakes from a mile away, these deceptive creations continue to be remarkably effective at defrauding everyday people.

The financial impact is substantial, to say the least, and the aftermath of these scams extends beyond financial loss. Most importantly, when someone transfers retirement savings to a deepfaked "Elon Musk" investment scheme or sends money to an AI-generated "Brad Pitt," the profound shame often prevents victims from reporting these incidents, creating a dangerous gap in our understanding of the true scale of this crisis.

What makes this trend particularly concerning is the organizational sophistication behind these operations. We're seeing structured criminal networks with specialized roles: technical developers creating the AI tools, others perfecting impersonation techniques, and frontline operators executing the financial fraud with increasing effectiveness.

At Reality Defender, we partner with financial institutions to implement proactive protection against a related threat: deepfake impersonations of legitimate account holders attempting to breach security systems and conduct unauthorized transactions. These attacks threaten both individual finances and institutional reputations, and, like celebrity deepfake impersonations, they are far more common than reported. As generative AI technology becomes even more accessible, we remain committed to sharing our insights while respecting victim privacy. Chances are high that your organization faces AI impersonation risks you haven't yet considered.
Reality Defender's proactive detection measures can help you identify these vulnerabilities and implement robust safeguards before your customers or employees become victims.
-
Deepfake Dominance in Cybercrime. We've crossed a tipping point: 40% of phishing campaigns are now AI-powered, and threat actors are extracting as much as $81,000 from a single victim using deepfake-enhanced tactics. Emails, calls, and even video conferences can now be convincingly AI-generated. This means traditional "spot the red flag" awareness training is no longer enough. Trusting your eyes or ears alone is not safe in a world where fraudsters can impersonate anyone.

Zero Trust must extend to human identity verification. Confirm unexpected requests for money, credentials, or sensitive data through an out-of-band channel. Layer your controls: build MFA, identity-verification callbacks, and vendor authentication into daily workflows. Reinforce to employees that hesitation and validation are strengths, not weaknesses.

At AdvisorDefense, we're preparing RIAs for a reality where cybercrime isn't just about malware; it's about manipulation. If 40% of phishing is already AI-driven, the question is: how will your firm adapt before the other 60% gets there too? #AdvisorDefense #RIA #Cybersecurity #ZeroTrust
-
Fraud no longer hides in the shadows. It might show up disguised as someone you know. Like when the CEO calls and her voice on the phone sounds exactly right. Her urgency feels real, and the wire transfer request to a new bank account seems legitimate, so accounting releases the funds. And just like that, the company loses $20k to a fraudster who weaponized AI.

This isn't science fiction. It's happening right now to individuals and organizations alike. Fraudsters are creating disturbingly real AI deepfakes that can fool even the most cautious people, and companies need strategies to combat them. The audio and visual cues we've relied on for decades are no longer reliable indicators of authenticity.

Organizations can fight back with these defense strategies:
✔ Stay cautious and be wary of anyone requesting money or personal information, even if they look or sound like someone you trust.
✔ Don't send money or share sensitive data in response to a single phone or video call. Phone numbers can be spoofed, so always verify a person's identity by contacting them separately at a number you trust.
✔ Use small action requests, like asking a person to turn their head, blink repeatedly, or hum a song while on a video or phone call. If they decline, freeze up, or go silent, it could be a fraudster.
✔ Establish a safe word that only your inner circle knows to confirm the identity of someone claiming to be a colleague, family member, or friend.
✔ Use strong passwords, and enable multifactor authentication (MFA) on all company devices and accounts whenever possible.

And don't forget to report AI deepfakes to law enforcement and to any relevant social media channels, websites, and other platforms where the encounter took place. All of these tips work for individuals too, because hackers like causing havoc with anyone they can.

The question isn't whether AI deepfakes will target your organization.
It's whether your organization will be ready when they do. Food for thought as we kick off Cybersecurity Awareness Month. ♻ Share our infographic to help companies combat AI deepfakes.
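A safe word like the one suggested above only works if the check is forgiving about how people say it but strict about what they say. A tiny, hypothetical sketch of that check (the normalisation rules here are assumptions, not a standard):

```python
import hmac

def normalize(phrase: str) -> str:
    # Tolerate case and spacing differences in a spoken phrase.
    return " ".join(phrase.lower().split())

def safe_word_matches(spoken: str, agreed: str) -> bool:
    # hmac.compare_digest keeps the comparison constant-time, so a
    # probing caller learns nothing from how fast the check rejects.
    return hmac.compare_digest(normalize(spoken), normalize(agreed))
```

Like any shared secret, a safe word should be agreed in person or over an already-trusted channel, and rotated if there is any chance it leaked.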
-
Deepfakes aren't a tech story. They're a trust story.

A few days ago, a doctor in Hyderabad lost money to a #deepfake video that showed a cabinet minister "endorsing" an investment scheme on #Instagram. If that sounds distant, it isn't. This is the new fraud funnel: authority, urgency, proof... all manufactured at scale. As #communicators and leaders, we can't outsource this to compliance or IT. #Trust is now an operational KPI.

What do we as communicators need to do?
• Treat digital hygiene like fire safety. Run quarterly drills that teach people how fakes travel and how to report them.
• Publish an authenticity sheet. List official handles, verified domains, escalation numbers, and a simple "how to verify" flow for customers and employees.
• Watermark outbound content and adopt content credentials where possible. Make the real easier to prove than the fake is to spread.
• Rewrite influencer and media contracts with an "authenticity clause" and takedown SLAs. If your face or footage is misused, minutes matter.
• Stand up a rapid debunk protocol: pre-approved copy, visuals, spokespeople, and a single public link that carries all corrections.
• Close the platform loop. Nominate a trust lead who keeps warm lines with platform policy teams so your takedown requests don't start cold.

Silence helps the scammer. Clarity helps the vulnerable. What would you add to this deepfake playbook? If you've seen a convincing fake lately, share it below and let's decode why it worked. #digitalsafety #misinformation #brandprotection #reputationmanagement #contentauthenticity #aiethics #factchecking #onlinescams #communications