Your AI recruiting agent or use case might be brilliant. It might also be illegal.

If your AI screens, ranks, or evaluates candidates, you're operating in an actively and increasingly regulated environment. And not just in the US.

NYC requires annual bias audits. Illinois requires notice. California requires 4-year data retention. Colorado requires impact assessments, with penalties of $20,000 per violation. The EU classifies all recruiting AI as high-risk. South Korea's AI Basic Act explicitly lists hiring as high-impact. Brazil and Chile have GDPR-style rights against automated employment decisions. Singapore's Workplace Fairness Act covers AI-driven hiring decisions.

This isn't a US-and-EU issue. It's global.

Something else to watch: your compliance is only as strong as the gap between your published AI notice and what your people actually do. A recruiter pastes a resume into ChatGPT on a busy Tuesday. Or uses the company-approved solution in a way that wasn't approved. That tool or use case hasn't been audited. There's no notice. No audit trail. The employer is still liable.

I wrote a full breakdown of the regulatory landscape - US, EU, and the global wave most people don't see coming - and what TA teams need to do about it. Check it out 👇
Navigating AI Risks
Explore top LinkedIn content from expert professionals.
-
🚨 The U.S. Copyright Office has just dropped a MUST-READ report on the copyrightability of works created using generative AI. [Bookmark & download below]. A quick summary of the main points & my comments:

"1. Questions of copyrightability and AI can be resolved pursuant to existing law, without the need for legislative change.
2. The use of AI tools to assist rather than stand in for human creativity does not affect the availability of copyright protection for the output.
3. Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material.
4. Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements.
5. Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.
6. Based on the functioning of current generally available technology, prompts do not alone provide sufficient control.
7. Human authors are entitled to copyright in their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material in the outputs, or creative modifications of the outputs.
8. The case has not been made for additional copyright or sui generis protection for AI-generated content."

➡️ My comments:
- This is the trickiest part: proving sufficient human control over the expressive elements (item 4 above). Why?
- If a person prompts an AI system like Midjourney multiple times to obtain the "perfect image," would it be considered 'sufficient human control' to receive copyright protection?
- There is a lawsuit covering a similar argument, where Jason Allen states that he prompted Midjourney 624 times to create the work "Théâtre D'Opéra Spatial" but was denied copyright protection. He is now suing the U.S. Copyright Office.

➡️ Here's what else the report says about the relationship between prompts and sufficient human control:

"The Office concludes that, given current generally available technology, prompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectible ideas. While highly detailed prompts could contain the user's desired expressive elements, at present they do not control how the AI system processes them in generating the output." (page 18)

👉 Read the full report below.
👉 NEVER MISS my AI governance updates, including must-read reports like this one: join 50,400+ readers who subscribe to my weekly newsletter (link below).

#AI #AIGovernance #AICopyright #Copyrightability #USCopyrightOffice
-
AI agents are not yet safe for unsupervised use in enterprise environments.

The German Federal Office for Information Security (BSI) and France's ANSSI have just released updated guidance on the secure integration of Large Language Models (LLMs). Their key message? Fully autonomous AI systems without human oversight are a security risk and should be avoided.

As LLMs evolve into agentic systems capable of autonomous decision-making, the risks grow exponentially. From prompt injection attacks to unauthorized data access, the threats are real and increasingly sophisticated.

The updated framework introduces Zero Trust principles tailored for LLMs:
1) No implicit trust: every interaction must be verified.
2) Strict authentication & least privilege access: even internal components must earn their permissions.
3) Continuous monitoring: not just outputs, but inputs must be validated and sanitized.
4) Sandboxing & session isolation: to prevent cross-session data leaks and persistent attacks.
5) Human-in-the-loop: critical decisions must remain under human control.

Whether you're deploying chatbots, AI agents, or multimodal LLMs, this guidance is a must-read. It's not just about compliance but about building trustworthy AI that respects privacy, integrity, and security.

Bottom line: AI agents are not yet safe for unsupervised use in enterprise environments. If you're working with LLMs, it's time to rethink your architecture.
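To make principles 2) and 5) concrete, here is a minimal sketch, not taken from the BSI/ANSSI guidance, of how an agent wrapper might enforce a tool allowlist and require human approval for high-impact actions. The tool names and the require_human_approval helper are hypothetical placeholders.

# Minimal sketch: least-privilege tool access plus a human-in-the-loop gate for an LLM agent.
# All names here (ALLOWED_TOOLS, require_human_approval, tool_registry) are illustrative, not a real API.

ALLOWED_TOOLS = {"search_docs", "summarize"}          # least privilege: explicit allowlist
HIGH_IMPACT_TOOLS = {"send_email", "delete_record"}   # never run these without a human

def require_human_approval(tool: str, args: dict) -> bool:
    """Ask an operator to confirm a high-impact action before it runs."""
    answer = input(f"Agent wants to run {tool} with {args}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_tool_call(tool: str, args: dict, tool_registry: dict):
    """Execute a tool requested by the agent only if policy allows it."""
    if tool not in ALLOWED_TOOLS | HIGH_IMPACT_TOOLS:
        raise PermissionError(f"Tool '{tool}' is not on the allowlist")  # no implicit trust
    if tool in HIGH_IMPACT_TOOLS and not require_human_approval(tool, args):
        return {"status": "rejected_by_human"}
    return tool_registry[tool](**args)

The design choice mirrors the guidance: the agent never gets blanket permissions; every call is checked, and anything with real-world impact stops at a person.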
-
AI is not failing because of bad ideas; it's "failing" at enterprise scale because of two big gaps:
👉 Workforce Preparation
👉 Data Security for AI

While I speak globally on both topics in depth, today I want to educate us on what it takes to secure data for AI—because 70–82% of AI projects pause or get cancelled at the POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer.

So let's make it simple: there are 7 phases to securing data for AI—and each phase carries direct business risk if ignored.

🔹 Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data.
Why It Matters: You can't build scalable AI with data you don't own or can't trace.

🔹 Phase 2: Data Infrastructure Security - Ensuring the data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled.
Why It Matters: Unsecured data environments are easy targets for bad actors, leaving you exposed to data breaches, IP theft, and model poisoning.

🔹 Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between cloud, APIs, and vendors.
Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck—or on a bicycle—your choice.

🔹 Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.).
Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn't just tech debt. It's reputational and regulatory risk. (A minimal sketch of this kind of API guardrail follows below this post.)

🔹 Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying.
Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It's a business asset. You lock your office at night—do the same with your models.

🔹 Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm—who's notified, who investigates, how damage is mitigated.
Why It Matters: AI-related incidents are happening. Legal needs response plans. Cyber needs escalation tiers.

🔹 Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols.
Why It Matters: Shipping models like software means risk comes faster—and so must detection. Governance must be baked into every deployment sprint.

Want your AI strategy to succeed past MVP? Focus on, and lock down, the data.

#AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership
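As a rough illustration of Phase 4 only, and not a complete DLP solution, here is a minimal Python sketch of a pre-flight redaction step before text leaves your boundary for a third-party GenAI API. The regex patterns and the send_to_llm_api function are placeholder assumptions.

import re

# Minimal sketch: redact likely-sensitive strings before they reach an external LLM API.
# Patterns and function names are illustrative placeholders, not production rules.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> tuple[str, dict]:
    """Replace likely-sensitive strings and report how many were removed."""
    counts = {}
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED_{label.upper()}]", text)
        counts[label] = n
    return text, counts

def send_to_llm_api(prompt: str) -> str:
    """Redact, log for the audit trail, then hand off to your approved provider."""
    safe_prompt, counts = redact(prompt)
    if any(counts.values()):
        print(f"Redacted before sending: {counts}")
    # ...call your approved LLM provider with safe_prompt here...
    return safe_prompt

Usage is simply send_to_llm_api(user_text); the point is that redaction and logging happen in one choke point that every outbound call passes through.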
-
You are what you eat. But AI is eating itself. And the side effects are getting harder to ignore.

The more AI models train on AI-generated content, the dumber — and more dangerous — they become. It's called model collapse: a slow-motion meltdown where LLMs consume so much synthetic sludge that their outputs degrade, their safety fails, and their grip on reality slips.

Even the solution — plugging them into the internet via RAG (retrieval-augmented generation) — is backfiring. Because guess what the internet is increasingly full of? More AI slop. A recent study showed that RAG-enabled models like GPT-4o and Claude 3.5 actually produced more unsafe and unethical responses than those not connected to the internet.

We've hit a paradox: we need more human-made data to fix AI… but AI is actively destroying the incentive for humans to keep making it.

So what happens next? As tech columnist Steven J. Vaughan-Nichols puts it: "We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it."

AI said it could replace us. Now it's begging us humans to keep creating.
-
This week I found four papers on Google Scholar "written" by me and my co-authors. Except we didn't write them. They were AI-generated fake citations.

I see multiple risks associated with that happening:
- Misinformation risks if fakes get referenced further, in academic research, policy, funding proposals, or practical guidelines. Especially in fields that impact people's lives directly.
- Erosion of trust in academic research: real research becomes harder to find; claims are harder to verify.
- Collateral damage to journals that never published the research but are now cited as if they did.
- Distorted journal and author metrics: fake citations inflate impact factors, h-indexes, and other performance indicators.
- Reputational harm to the real authors falsely cited.
- Legal exposure if harmful claims are falsely attributed to you.

The same way countries are trying to figure out how to protect voices and faces to fight deepfakes, and artworks to fight copyright fraud, we need knowledge and author protection in academic publishing. Until then, document and report such cases - because the more visible we make this problem, the harder it will be to ignore.

What else can be done? Has it ever happened to you?

#academicintegrity #academia #informationsystems Electronic Markets - The International Journal on Networked Business Journal of Information Technology (JIT)
-
"With recent advancements in artificial intelligence—particularly, powerful generative models—private and public sector actors have heralded the benefits of incorporating AI more prominently into our daily lives. Frequently cited benefits include increased productivity, efficiency, and personalization. However, the harm caused by AI remains to be more fully understood. As a result of wider AI deployment and use, the number of AI harm incidents has surged in recent years, suggesting that current approaches to harm prevention may be falling short. This report argues that this is due to a limited understanding of how AI risks materialize in practice. Leveraging AI incident reports from the AI Incident Database, it analyzes how AI deployment results in harm and identifies six key mechanisms that describe this process Intentional Harm ● Harm by design ● AI misuse ● Attacks on AI systems Unintentional Harm ● AI failures ● Failures of human oversight ● Integration harm A review of AI incidents associated with these mechanisms leads to several key takeaways that should inform AI governance approaches in the future. A one-size-fits-all approach to harm prevention will fall short. This report illustrates the diverse pathways to AI harm and the wide range of actors involved. Effective mitigation requires an equally diverse response strategy that includes sociotechnical approaches. Adopting model-based approaches alone could especially neglect integration harms and failures of human oversight. To date, risk of harm correlates only weakly with model capabilities. This report illustrates many instances of harm that implicate single-purpose AI systems. Yet many policy approaches use broad model capabilities, often proxied by computing power, as a predictor for the propensity to do harm. This fails to mitigate the significant risk associated with the irresponsible design, development, and deployment of less powerful AI systems. Tracking AI incidents offers invaluable insights into real AI risks and helps build response capacity. Technical innovation, experimentation with new use cases, and novel attack strategies will result in new AI harm incidents in the future. Keeping pace with these developments requires rapid adaptation and agile responses. Comprehensive AI incident reporting allows for learning and adaptation at an accelerated pace, enabling improved mitigation strategies and identification of novel AI risks as they emerge. Incident reporting must be recognized as a critical policy tool to address AI risks." By Mia Hoffmann at Center for Security and Emerging Technology (CSET)
-
🚨 We Pwned Google Gemini and multiple other Fortune 500 companies by using prompt injection in their GitHub Actions

Rein Daelman and the rest of the Aikido Security research team uncovered a new class of GitHub Actions vulnerabilities triggered by using AI agents (Gemini, Claude Code Actions, OpenAI Codex, GitHub AI Inference) within GitHub Action workflows.

Because all the good vulns have cute names now, we are calling this PromptPwnd.

As you may guess from the name, it is essentially prompt injection through the GitHub Actions workflow, which is pretty wild. The problem is actually quite simple: untrusted data, like a commit message, is being used within prompts for GitHub Actions. The result is that we can use this to get AI tools to perform actions like posting secrets publicly.

Untrusted user input → inserted into AI prompts → AI agents execute privileged GitHub tools → secrets leaked or workflows manipulated.

A single issue, PR description, or commit message can silently contain instructions the AI will follow.

Example of a vulnerable pattern inside a GitHub Action:

prompt: |
  Review the issue: "${{ github.event.issue.body }}"

That innocent line can leak your GITHUB_TOKEN, cloud access tokens, or API keys, because the AI treats attacker-controlled text as instructions, then uses its built-in tools (like gh issue edit) to execute them.

Following our disclosure in August, Google patched the Gemini CLI workflow, which is no longer vulnerable, and we have sent out multiple disclosures to other orgs.

How to check if you're affected
✔️ Scan your GitHub Action files with Opengrep (we created open-source rules to detect this)
✔️ Or scan with Aikido Security, our free version flags vulnerable patterns automatically

How to fix it
– Restrict which tools your AI agents can call
– Don't inject untrusted user text into prompts
– Sanitize/validate user input if unavoidable
– Treat AI output as untrusted code

AI in CI/CD is powerful… but also a brand-new attack surface. If you're using AI inside GitHub Actions, now is the time to audit your workflows. Link in comments friends.
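The post points to Opengrep rules and Aikido's scanner for detection. Purely as a rough illustration of the idea, and not a reproduction of those rules, here is a minimal Python sketch that flags workflow files where attacker-controllable event fields are interpolated inside a prompt block. The regexes and the very naive YAML handling are assumptions for illustration only.

import re
from pathlib import Path

# Rough illustration (not Aikido's Opengrep rules): flag workflow lines where
# untrusted GitHub event data is interpolated into an AI prompt input.
UNTRUSTED = re.compile(
    r"\$\{\{\s*github\.event\.(issue|pull_request|comment|commits)[^}]*\}\}"
)
PROMPT_KEY = re.compile(r"^\s*prompt\s*:", re.IGNORECASE)

def scan_workflow(path: Path) -> list[str]:
    """Return findings for untrusted event data appearing inside a prompt block."""
    findings = []
    in_prompt = False
    for i, line in enumerate(path.read_text(encoding="utf-8", errors="ignore").splitlines(), start=1):
        if PROMPT_KEY.match(line):
            in_prompt = True               # entering a prompt block (naive YAML handling)
        elif line and not line[0].isspace():
            in_prompt = False              # a new top-level key ends the block
        if in_prompt and UNTRUSTED.search(line):
            findings.append(f"{path}:{i}: untrusted event data inside a prompt")
    return findings

if __name__ == "__main__":
    for wf in Path(".github/workflows").glob("*.y*ml"):
        for finding in scan_workflow(wf):
            print(finding)

A real scanner needs proper YAML parsing and broader context sources; this only shows why the vulnerable pattern is mechanically detectable.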
-
When AI Meets Security: The Blind Spot We Can't Afford

Working in this field has revealed a troubling reality: our security practices aren't evolving as fast as our AI capabilities.

Many organizations still treat AI security as an extension of traditional cybersecurity—it's not. AI security must protect dynamic, evolving systems that continuously learn and make decisions. This fundamental difference changes everything about our approach.

What's particularly concerning is how vulnerable the model development pipeline remains. A single compromised credential can lead to subtle manipulations in training data that produce models which appear functional but contain hidden weaknesses or backdoors.

The most effective security strategies I've seen share these characteristics:
• They treat model architecture and training pipelines as critical infrastructure deserving specialized protection
• They implement adversarial testing regimes that actively try to manipulate model outputs
• They maintain comprehensive monitoring of both inputs and inference patterns to detect anomalies

The uncomfortable reality is that securing AI systems requires expertise that bridges two traditionally separate domains. Few professionals truly understand both the intricacies of modern machine learning architectures and advanced cybersecurity principles.

This security gap represents perhaps the greatest unaddressed risk in enterprise AI deployment today.

Has anyone found effective ways to bridge this knowledge gap in their organizations? What training or collaborative approaches have worked?
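As a minimal sketch of the last bullet above, and not anything from the post itself, here is one way to log prompt/response pairs and raise simple anomaly flags. The patterns and thresholds are placeholder assumptions, not a production detector.

import hashlib
import logging
import re

# Minimal sketch: monitor both model inputs and outputs for anomalies.
# INJECTION_HINTS, SECRET_LIKE, and the length threshold are illustrative placeholders.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

INJECTION_HINTS = re.compile(r"ignore (all|previous) instructions|reveal your system prompt", re.I)
SECRET_LIKE = re.compile(r"\b(?:AKIA[0-9A-Z]{16}|sk-[A-Za-z0-9]{20,})\b")

def record_inference(prompt: str, response: str) -> list[str]:
    """Log a hash of each exchange and return any anomaly flags raised."""
    flags = []
    if INJECTION_HINTS.search(prompt):
        flags.append("possible prompt injection in input")
    if SECRET_LIKE.search(response):
        flags.append("secret-like string in output")
    if len(response) > 20_000:
        flags.append("unusually long output")
    digest = hashlib.sha256((prompt + response).encode()).hexdigest()[:12]
    log.info("exchange=%s flags=%s", digest, flags or "none")
    return flags

The value is less in these specific checks than in having every inference pass through one recorded, inspectable path.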
-
Today, NIST released the initial preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile), a community profile built on NIST CSF 2.0 to help organizations manage cybersecurity risk in an AI-driven world.

A key section of this draft is Section 2.1, which introduces three Focus Areas that explain how AI and cybersecurity intersect in practice:

1. Securing AI System Components (Secure)
AI systems introduce new assets that must be secured: models, training data, prompts, agents, pipelines, and deployment environments. This focus area emphasizes treating AI components as first-class cybersecurity assets, integrating them into governance, risk assessments, protection controls, and monitoring processes. It reinforces that AI risk should not be siloed from enterprise cybersecurity risk management.

2. Conducting AI-Enabled Cyber Defense (Defend)
AI is not just something to protect, it is also a powerful defensive capability. This area focuses on using AI to enhance detection, analytics, automation, and response across security operations. At the same time, it recognizes the risks of over-reliance on automation, model integrity concerns, and the need for human oversight when AI supports security decision-making.

3. Thwarting AI-Enabled Cyber Attacks (Thwart)
Adversaries are increasingly using AI to scale phishing, evade detection, and automate attacks. This focus area addresses how organizations must anticipate and counter AI-enabled threats by building resilience, improving detection of AI-driven attack patterns, and preparing for a rapidly evolving threat landscape where AI is weaponized.

Why This Matters
Together, Secure, Defend, and Thwart provide a practical structure for aligning AI initiatives with existing cybersecurity programs. By mapping AI-specific considerations to CSF 2.0 outcomes (Govern, Identify, Protect, Detect, Respond, Recover), the Cyber AI Profile helps organizations integrate AI security into familiar risk management practices.

This is a preliminary draft, and NIST is seeking public feedback through January 30, 2026. If your organization is building, deploying, or defending with AI, now is the time to review and contribute.

🔗 https://lnkd.in/e-ETZXH8