Leveraging this new OpenAI real-time translator to phish via phone calls in the target’s preferred language in 3…2… So far, AI has been used for believable translations in phishing emails — e.g. my Icelandic customers are seeing a massive increase in phishing in their language in 2024. Previously, only 350,000 or so people spoke Icelandic fluently; now AI can do it for the attacker. We’re going to see this real-time translation tool increasingly used to speak in the target’s preferred language during phone-call-based attacks. These tools are easily integrated into the technology we use to spoof caller ID, place calls, and voice clone. Now, in any language. Educate your team, family & friends. Make sure folks know: - AI can voice clone - AI can translate in real time to speak any language - Caller ID is easily spoofed, with or without AI tools - AI tools will keep increasing in believability AI voice clone/spoof example here: https://lnkd.in/gPMVDBYC Will this AI be used for good? Sure! Real-time translations are quite useful for people, businesses & travel. We still need to educate folks on how AI is currently used to phish people & how real-time AI translations will increase scams across (previous) language barriers. *What can we do to protect folks from attackers using AI to trick them?* - Educate first: make sure folks around you know it’s possible for attackers to use AI to voice clone and deepfake video and audio (in real time during calls) - Be politely paranoid: encourage your team and community to use 2 methods of communication to verify someone is who they say they are before sensitive actions like sending money, data, access, etc. For example, if you get a phone call from your nephew saying he needs bail money now, contact him a different way before sending money to confirm it’s an authentic request - Passphrase: consider using a passphrase with your loved ones to verify identity in emergencies (e.g. your sister calls you crying saying she needs $1,500 urgently: ask her to say the passphrase you agreed upon together, or contact her via another communication method, before sending money)
Phishing Attack Awareness Training
-
EMERGING THREAT VECTOR: PROMPT INJECTION IN PHISHING CAMPAIGNS AGAINST AI DEFENSES ℹ️ In a newly uncovered phishing campaign, attackers have evolved beyond merely targeting human recipients: the email also includes hidden AI-oriented prompt manipulation to evade automated defenses. On the surface, the email mimics a standard “Login Expiry Notice,” warning the recipient that their password will expire and urging them to update their credentials. This reflects classic social engineering tactics, relying on urgency and impersonation of Gmail-like branding. ℹ️ What sets this campaign apart, however, is a cryptic block of text embedded in the plain-text MIME part, written in the style of a user prompt for AI models like ChatGPT or Grok. It instructs the reader (or AI) to engage in deep reasoning, generate multiple perspectives, and refine responses before output. This is not meant for human users; it is a clever form of prompt injection, designed to confuse AI-based triage or classification systems into overthinking the content instead of flagging it as phishing. ℹ️ Prompt injection is a form of adversarial attack in which malicious actors manipulate the instructions given to an AI model. Instead of delivering a normal query, the attacker embeds hidden or deceptive instructions inside prompts, documents, emails, or web content. The goal is to override the AI’s intended behavior and force it to execute the attacker’s goal. ℹ️ Prompt injection can be direct (where the attacker crafts the prompt themselves) or indirect (where the malicious content is hidden in data the AI consumes, such as an email body, website text, or PDF). Indirect injections are particularly dangerous because they target automated workflows where humans may not notice the hidden instructions. Reference: 🔗 https://lnkd.in/dDgBHJ5W #threathunting #threatdetection #threatanalysis #threatintelligence #cyberthreatintelligence #cyberintelligence #cybersecurity #cyberprotection #cyberdefense
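The triage-evasion trick described above can be made concrete with a toy filter. This is a minimal sketch, not the actual detector used by any vendor: the marker-phrase list (`INJECTION_MARKERS`) and the `flag_prompt_injection` helper are hypothetical, and it simply scans the plain-text MIME parts for LLM-directed language; a production system would use a trained classifier rather than a fixed phrase list.

```python
import email
from email import policy

# Hypothetical marker phrases typical of LLM-directed instructions;
# a real system would use a trained classifier, not a fixed list.
INJECTION_MARKERS = [
    "deep reasoning",
    "multiple perspectives",
    "refine responses",
    "before output",
    "ignore previous instructions",
]

def flag_prompt_injection(raw_message: bytes) -> list[str]:
    """Return marker phrases found in the plain-text MIME parts."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    hits = []
    for part in msg.walk():
        if part.get_content_type() != "text/plain":
            continue
        text = part.get_content().lower()
        hits.extend(m for m in INJECTION_MARKERS if m in text)
    return hits
```

The point of the sketch: the injected block lives in a MIME part the human never reads, so any defense has to inspect every text part of the message, not just what the mail client renders.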
-
“Stop clicking on links” 🚫 If this is your advice to people during cyber awareness month, please stop! 🚫 If you’re also maintaining a ‘repeat offenders’ list, please kindly stop that too. It’s counterproductive and doesn’t work. You operate in a digital world. The majority of tech and tools that you provide to employees require them to click on links and apps: documents, collaboration tools, payslips, etc. Telling people to click sometimes and not others is confusing, and you’re shifting the onus onto the individual to know what is legitimate and what isn’t. You’re even expecting them to open a new tab, or type a URL directly into a browser, instead of just clicking what is in front of them. Let’s face it: most people are not going to add extra steps to their routine unless it’s obvious that the message is a bit strange, or it impacts them directly. Threat actors learn to counteract how we train people; they obfuscate what they’re doing, so most people have no idea it’s a fake domain, or that a malicious macro is running in a spreadsheet. Yet you want to blame them for not knowing about the constant changes in tactics and techniques? A few things you can do instead… ✅ The security tools you deploy should act as a safety net that verifies the legitimacy of each link and attachment, scanning and launching them in a sandbox environment to check whether they are malicious and provide added assurance to the person. ✅ Instead of giving people a long list of things to do (checking headers, hovering over links and other things that they’re not going to do), help them understand the intent behind the message. Even if something looks and sounds genuine, what are you being asked to do as a result of this action? ✅ Empower people to say NO to unrealistic demands, timelines and requests that are outside the norm of their role, because these are also the kinds of things that a threat actor will do!
✅ When people are suspicious and report it as potential phishing, please actually reply to them! Ask them why, let them know whether they were right to be suspicious, and what you did as a result. ❤️ Instead of focusing on what you consider bad behaviour, how about championing all those who are demonstrating positive behaviour?
-
Proofpoint, one of the world’s largest email security firms, has identified a new class of threats called AI-agent phishing. Instead of tricking people, attackers are now embedding malicious instructions directly inside emails, hidden from human view but readable by AI systems like Microsoft Copilot, Google Gemini, or any enterprise agent that processes email automatically. When we use agentic systems to act on our email (summarizing, scheduling, or drafting), they may unknowingly execute those hidden prompts: sending confidential data, approving a fraudulent request, or even creating a backdoor for further attacks. Proofpoint’s systems scan billions of messages each day, and they are already filtering these prompt-injection exploits before they reach inboxes. Security researchers at Red Canary and TechRadar report similar patterns across AI-powered tools, from Copilot Studio to custom-built business agents. In short, the same technology that helps employees save time is creating new attack vectors that are almost impossible to quantify. These systems read, write, and act with minimal oversight. Traditional security frameworks, focused on user behavior and credentials, weren’t designed for agents that think and act autonomously. This is not a reason to panic, but it is a reason to plan. Governance, agent permissions, and human-in-the-loop safeguards have to be adapted to the new threat. The future of productivity is agentic, but so is the future of cybersecurity.
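One of the human-in-the-loop safeguards the post calls for can be sketched as a simple action gate. Everything here is hypothetical (the action names, the allow-list policy, and the `gate_agent_action` helper are illustrative, not any vendor's API); the point is only that an agent acting on untrusted email content should never perform sensitive actions autonomously, and unknown actions should be denied by default.

```python
# Hypothetical policy: which agent actions may run autonomously and
# which need human sign-off when triggered by untrusted email content.
AUTONOMOUS_ACTIONS = {"summarize", "categorize", "draft_reply"}
SENSITIVE_ACTIONS = {"send_email", "approve_payment", "share_file"}

def gate_agent_action(action: str) -> str:
    """Return 'allow', 'queue_for_human_review', or 'refuse'."""
    if action in AUTONOMOUS_ACTIONS:
        return "allow"
    if action in SENSITIVE_ACTIONS:
        # A hidden prompt in an email must never trigger these directly.
        return "queue_for_human_review"
    # Unknown actions are denied by default (fail closed).
    return "refuse"
```

The design choice worth noting is the default: frameworks that fail open would let a novel injected instruction slip through, so anything not explicitly permitted is refused.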
-
AI-Powered Phishing Attack Targets Microsoft 365 Accounts, Experts Warn - Ubergizmo Cybersecurity researchers uncovered a sophisticated phishing campaign that exploited a legitimate artificial intelligence platform to steal corporate Microsoft 365 credentials. The attack, detailed by Cato Networks and reported by Cyber Security News, demonstrated how cybercriminals increasingly leverage the trust placed in AI tools to bypass traditional defenses. At least one U.S.-based investment company was affected before the campaign was shut down, highlighting the growing risks of AI-enabled attacks. The operation began with carefully crafted phishing emails impersonating executives from a global pharmaceutical distributor. To enhance credibility, attackers used real logos and verified LinkedIn profiles, making the communications appear authentic. These emails contained password-protected PDF attachments, a tactic that allowed them to evade automated security scanners. The password, conveniently included in the message body, gave the appearance of a routine corporate practice. Once opened, the documents redirected recipients to Simplified AI, a legitimate marketing platform widely recognized and trusted in corporate environments. The attackers cleverly manipulated the platform to display the pharmaceutical company’s branding alongside Microsoft 365 design elements. This combination reinforced the illusion of legitimacy and lowered suspicion among users. The final stage involved redirecting victims to a fraudulent Microsoft 365 login portal that closely replicated the official page. Any credentials entered there were harvested by attackers, granting them unauthorized access to sensitive corporate accounts. According to Cato Networks, the use of a legitimate AI service provided attackers with cover, allowing them to hide malicious activity within normal enterprise traffic. Security experts stress that this incident reflects a broader trend. 
Cybercriminals no longer need to rely on suspicious domains or poorly maintained servers; instead, they exploit the reputation of trusted platforms, making detection significantly more difficult. The campaign illustrates how “shadow AI” adoption—when employees use unsanctioned tools without oversight—creates additional vulnerabilities for organizations. To mitigate risks, experts recommend adopting a layered defense strategy. Key measures include enabling multifactor authentication for all critical services, training employees to treat password-protected attachments with caution, and monitoring the use of AI platforms, including unauthorized applications. Continuous inspection of AI-related traffic and deployment of advanced threat detection solutions capable of identifying unusual behavior patterns are also strongly advised. #cybersecurity #AI #powered #phishing #Microsoft365 #AIPlatforms #UnauthorizedApplications
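The lure pattern described above (a password-protected PDF whose password sits conveniently in the message body) lends itself to a simple heuristic check. This is an illustrative sketch, not Cato Networks' detection logic; `looks_like_protected_pdf_lure` and its regex are assumptions, and a real scanner would also open the attachment to confirm it is actually encrypted.

```python
import re

# Hypothetical heuristic: flags messages that both attach a PDF and
# spell out a password in the body, matching the lure described above.
PASSWORD_HINT = re.compile(r"\bpassword\s*(?:is|:)\s*\S+", re.IGNORECASE)

def looks_like_protected_pdf_lure(body: str, attachment_names: list[str]) -> bool:
    """True when a PDF is attached and the body states its password."""
    has_pdf = any(name.lower().endswith(".pdf") for name in attachment_names)
    return has_pdf and bool(PASSWORD_HINT.search(body))
```

The rationale: encrypting the attachment defeats automated content scanners, so the password must travel in the clear alongside it, and that combination is itself a useful signal even when the payload cannot be inspected.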
-
Yesterday my daughter made an observation that’s relevant to all mid-market CISOs. While on a voice call with her, my father-in-law struggled to switch the WhatsApp call to video to show their dog’s antics. He asked my mother-in-law to help. While on the call, my mother-in-law needed to transfer money via UPI to someone. So they had to cut the call - because my father-in-law needed to step in! My daughter came to me with this question: Two people. Same house. Same everyday things. Yet their skill levels are so different. Now, imagine this inside a company with hundreds or thousands of employees. - Some struggle to identify phishing emails - Some don’t understand the risk of weak passwords - Some click on malicious links without a second thought - Some approve payment requests based on text messages - Some download & install unauthorized software - Some share sensitive information over email without realizing it - Some upload company secrets into ChatGPT for projects Yet, many CISOs run just 𝙤𝙣𝙚 𝙤𝙧 𝙩𝙬𝙤 cyber awareness simulations per year & think it’s enough. It’s not. Cyber awareness needs to be continuous, personalized & measurable. A strong cyber awareness program should: 𝟭) 𝗧𝗲𝘀𝘁 𝗲𝗺𝗽𝗹𝗼𝘆𝗲𝗲𝘀 𝘄𝗶𝘁𝗵 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝗮𝘁𝘁𝗮𝗰𝗸 𝘀𝗰𝗲𝗻𝗮𝗿𝗶𝗼𝘀 Phishing, smishing, vishing, and deepfake attacks that mimic what attackers actually do. 𝟮) 𝗔𝗱𝗮𝗽𝘁 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗯𝗮𝘀𝗲𝗱 𝗼𝗻 𝗶𝗻𝗱𝗶𝘃𝗶𝗱𝘂𝗮𝗹 𝘀𝗸𝗶𝗹𝗹 𝗹𝗲𝘃𝗲𝗹𝘀 A finance executive needs different training than a new intern. 𝟯) 𝗢𝗳𝗳𝗲𝗿 𝗲𝗻𝗴𝗮𝗴𝗶𝗻𝗴, 𝗶𝗻𝘁𝗲𝗿𝗮𝗰𝘁𝗶𝘃𝗲 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 Gamification, role-based training, and bite-sized learning improve retention. 𝟰) 𝗧𝗿𝗮𝗰𝗸 𝗶𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁𝘀 & 𝗿𝗶𝘀𝗸𝘆 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿 Identify employees who need extra training instead of treating everyone the same. 𝟱) 𝗥𝘂𝗻 𝗰𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝘀𝗶𝗺𝘂𝗹𝗮𝘁𝗶𝗼𝗻𝘀, 𝗻𝗼𝘁 𝗼𝗻𝗲-𝘁𝗶𝗺𝗲 𝗲𝘃𝗲𝗻𝘁𝘀 Cyber threats evolve daily; training should too. 𝟲) 𝗚𝗶𝘃𝗲 𝘁𝗵𝗲 𝗰𝘆𝗯𝗲𝗿 𝗮𝘄𝗮𝗿𝗲𝗻𝗲𝘀𝘀 𝗽𝗼𝘀𝘁𝘂𝗿𝗲 𝗮𝘁 𝘁𝗵𝗲 𝗰𝗹𝗶𝗰𝗸 𝗼𝗳 𝗮 𝗯𝘂𝘁𝘁𝗼𝗻 Department-wise reports on people & their potential learning gaps. Awareness is not running a simulation & calling it a day. 
It's the actions & the next steps that follow: - improving continuously - knowing the awareness posture of everyone - building a culture where employees become security assets If you’re a CISO evaluating solutions that train employees further based on their actual responses, DM me. My team works with a platform designed to make cyber awareness practical, engaging & effective. -- Hi, I’m Rajeev Mamidanna. I help mid-market CISOs strengthen their Cyber Immunity.
-
Is Once- or Twice-a-Year Cyber Training Enough? If your answer is "no" or "not sure", you are not alone. In Singapore, human error remains the number one cause of cyber breaches. According to the 2024 Voice of the CISO report by Proofpoint, 67% of Chief Information Security Officers in Singapore identify human error as their greatest cybersecurity risk. And while most companies are making progress (92% of CISOs say their employees understand their role in cybersecurity), that awareness has not yet translated into lasting behavioural change. Why is this the case? A Lesson from the Past The 2018 SingHealth breach compromised 1.5 million patient records, including those of Prime Minister Lee Hsien Loong. Investigations revealed that it was not only outdated systems and delayed responses that enabled the breach; staff hesitation and gaps in training also played a critical role. The Committee of Inquiry made it clear: it was not just the technology that failed but also the human element. Why It Still Matters A recent phishing simulation, conducted as part of Proofpoint's Exercise SG Ready, involved over 4,500 employees across 14 countries. The results revealed that 17% of participants in Singapore clicked on phishing links within a two-week period, almost double the global average, highlighting the need for continuous, rather than one-time, cyber awareness training. What Could Work Instead Real change happens when learning is continuous and relevant. That means: - Short, focused modules delivered regularly, not all at once - Real-time phishing simulations that teach by doing - Monthly nudges and refreshers to keep awareness active - Training content that is personally relevant to employees This is how you build what we call a "human firewall": a workforce that is alert, informed, and ready to respond. Ready to Shift the Mindset? 
If the idea of turning routine training into something more engaging and lasting resonates with you, there are some interesting approaches worth exploring. I would love to share some ideas with you that could work in your local business context. #alvinsratwork ✦ #ExecutiveDirector ✦ #cybersecurity ✦ #cyberhygiene ✦ #Cyberawareness ✦ #BusinessTechnologist ✦ #Cyberculture
-
Training employees on cybersecurity isn’t just a box to tick—it’s a mindset shift that turns the workforce into the first line of defense, especially as human error remains the most common entry point for digital threats. Cybersecurity training for employees is essential in today’s threat landscape, where phishing, ransomware, and social engineering continue to evolve. Educating staff with real-world examples increases vigilance and improves response time to suspicious activity. Training should be dynamic, incorporating feedback and updated regularly to reflect new risks. Clear communication of security protocols, combined with practical simulations, empowers employees to act confidently. This not only protects company data but also builds a culture of shared responsibility, reducing the likelihood of breaches caused by negligence or lack of awareness. #CyberSecurity #DigitalTransformation #EmployeeTraining #ITSecurity
-
🚨 The Rise of AI-Powered Phishing: Why Your Inbox is the New Battleground Phishing has always been a threat, but artificial intelligence has turned it into something far more dangerous. No more broken grammar or suspicious links: now the emails look perfect, the voices sound real, and even the video calls can be convincingly fake. 💡 In one recent case, a global engineering firm lost nearly £20 million after employees joined what looked like a routine video call with executives. The faces and voices were indistinguishable from reality, but the entire meeting was an AI-generated scam. This is the new frontier of cybercrime. But there are ways to fight back. 🔐 Organizations must: ✅ Enforce MFA and multiple approvals for unusual requests ✅ Simulate phishing, deepfake voice, and video attacks in training ✅ Use AI-driven anomaly detection and adopt zero trust 👤 Everyday users should: ✔️ Question urgency in messages and calls ✔️ Verify sensitive requests with an independent method ✔️ Limit what they share online ✔️ Keep devices updated ✔️ Trust their instincts when something feels “off” 🧠 Your inbox is now a battlefield. Defending it requires a mix of sharp human judgment and smarter AI defenses. 💪 Platforms like https://gurucul.com use advanced AI and machine learning to detect anomalies, prevent identity-based attacks, and uncover sophisticated phishing and deepfake threats before they cause damage. Stay alert. Stay informed. Stay secure. #CyberSecurity #AIThreats #Phishing #Deepfake #ZeroTrust #Gurucul #AIDrivenSecurity
-
Would you fall for a fake email from Amazon.xyz? Because 690,502 people just like you did. A rigorous new empirical study shows how modern phishing attacks work. And it's not what you think. Here's the wild part: two-thirds of these attacks use brand-new web addresses that look ~almost~ real. 📊 The Data: - 39 months - 690,502 phishing sites Here's the Attacker Playbook: 1. Buy Cheap, Throw Away Fast • Use .top and .xyz domains • Cost pennies to buy • Easy to dump when caught 2. Copy Famous Names • Amazon becomes Amaz0n.xyz • PayPal becomes PayPal-secure.top • Microsoft becomes Micros0ft.xyz 3. Play Digital Hide & Seek • Switch servers every few days • Change settings constantly • Stay ahead of blockers 🔍 The Numbers Tell the Story: • 66.1% use fresh domains • 64.3% keep changing servers • It takes 11.5 days to shut them down Keep Yourself Safe: 1. Check EVERY Link • Hover before clicking • Look for odd spellings • Question unusual extensions 2. Watch Out For: • .top domains • .xyz domains • Any odd-looking web address 3. Trust Your Instincts • Looks fishy? It probably is • Verify the sender • Check independently 💡 Key Takeaway: Modern phishers aren't using obvious fake emails anymore. They're playing a sophisticated game of digital deception. Stay sharp. Stay safe. ♻️ Share this to help others spot these tricks. 👉 Follow me for more security insights that keep you protected. #Cybersecurity #PhishingAwareness #DigitalSafety #TechSecurity
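The lookalike tricks in the playbook above (Amaz0n.xyz, PayPal-secure.top) can be illustrated with a toy normalizer. This is a hedged sketch under stated assumptions: the brand list and character-swap table are hypothetical, and real detection relies on domain reputation, registration age, and edit-distance matching rather than a hand-written table.

```python
# Hypothetical brand list and swap table based on the examples above
# (Amaz0n.xyz, PayPal-secure.top, Micros0ft.xyz); a toy heuristic only.
BRANDS = {"amazon", "paypal", "microsoft"}
CHEAP_TLDS = {"top", "xyz"}
SWAPS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def looks_like_brand_impersonation(domain: str) -> bool:
    """Flag domains that resemble a brand after undoing common character
    swaps, unless they are the brand's own name on a mainstream TLD."""
    name, _, tld = domain.lower().rpartition(".")
    normalized = name.translate(SWAPS)
    resembles = any(brand in normalized for brand in BRANDS)
    return resembles and (tld in CHEAP_TLDS or name not in BRANDS)
```

Even this crude check catches the study's examples, which is the takeaway: the substitutions that fool a human skimming an address bar are trivially reversible by software, so hovering plus a little tooling goes a long way.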