LinkedIn just responded to the bias claims. They think they refuted my research. I believe they just confirmed it.

Following the recent discussions on whether the algorithm suppresses women's voices, LinkedIn's Head of Responsible AI and AI Governance, Sakshi Jain, posted a new Engineering Blog post to "clarify" how the feed works (link in comments). I’ve analysed the post. Far from debunking the issue, it inadvertently confirms the exact mechanism of Proxy Bias I identified in my report (link in comments).

Here is the breakdown:

1. The blog spends most of its time denying that the algorithm uses "gender" as a variable. And I agree. My report never claimed the code contained if gender == female. That would be Direct Discrimination. I have always argued this is about Indirect Discrimination via proxies.

2. Crucially, the blog explicitly lists the signals they do optimise for: "position," "industry," and "activity." These are the exact proxies my report flagged.
-> Industry/Position: Men are historically overrepresented in high-visibility industries (Tech/Finance) and senior roles. Optimising for these signals without a fairness constraint systematically amplifies men.
-> Activity: The (now-viral) trend of women rewriting profiles in "male-coded" language (and seeing 3-figure percentage lift) proves that the algorithm’s "activity" signal favours male linguistic patterns ("agentic" vs. "communal").

3. The blog confirms the algorithm is neutral in intent (it doesn't see gender) but discriminatory in outcome (because it optimises for biased proxies). In the UK, this is the textbook definition of Indirect Discrimination under the Equality Act 2010. In the EU, this is a Systemic Risk under the Digital Services Act (DSA).

LinkedIn has proven that they can fix this. Their Recruiter product uses "fairness-aware ranking" to mitigate these exact proxies (likely for AI Act compliance). The question remains: Why is that same fairness framework not being applied to the public feed?

👉 What We Are Doing About It

Analysis is important, but action is essential. I am proud to support the new petition, "Calling for Fair Visibility for All on LinkedIn". This isn't just a complaint; it’s a demand for transparency. We are calling for an independent equity audit of the algorithm and a clear mechanism to report unexplained visibility collapse. If you are tired of guessing which "proxy" you tripped over today, join us and sign the petition (link in the comments).
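To make the "fairness-aware ranking" idea concrete, here is a toy sketch of an exposure-constrained re-ranker. Everything in it (the Post fields, the exposure floor, the rerank_with_exposure_floor function) is hypothetical and illustrative; it is not LinkedIn's Recruiter or feed code, just one common way a ranking pipeline can bolt a fairness constraint onto an engagement-optimised score.

```python
# Toy sketch of fairness-aware re-ranking: rank by the engagement score,
# but guarantee a minimum share of top-k exposure for a protected group.
# All names and numbers are hypothetical; this is not LinkedIn's code.
import math
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    group: str        # group label, used only for the fairness constraint / audit
    relevance: float  # proxy-driven engagement score (position, industry, activity...)

def rerank_with_exposure_floor(posts, k, protected="B", floor=0.5):
    """Greedy top-k selection: take the highest-scoring post, unless that would
    leave fewer than floor(i * `floor`) protected-group posts among the first i
    slots, in which case take the highest-scoring protected post instead."""
    remaining = sorted(posts, key=lambda p: p.relevance, reverse=True)
    ranked = []
    for i in range(1, k + 1):
        required = math.floor(floor * i)
        have = sum(p.group == protected for p in ranked)
        pool = remaining
        if have < required:
            pool = [p for p in remaining if p.group == protected] or remaining
        pick = pool[0]
        remaining.remove(pick)
        ranked.append(pick)
    return ranked

# Example: pure relevance ranking would fill the top 4 slots with group "A";
# the exposure floor forces the highest-scoring "B" posts into view.
feed = [Post("a1", "A", 0.9), Post("a2", "A", 0.8), Post("a3", "A", 0.7),
        Post("a4", "A", 0.6), Post("b1", "B", 0.5), Post("b2", "B", 0.4)]
print([p.author for p in rerank_with_exposure_floor(feed, k=4)])
# -> ['a1', 'b1', 'a2', 'b2']
```

The design point the post is making maps directly onto this sketch: the relevance score alone is "neutral in intent", and the constraint is the extra step that has to be deliberately added to avoid a skewed outcome.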
AI Bias Issues
-
I came across research last week that I genuinely cannot stop thinking about. In the logic of AI, "man" is to "programmer" as "woman" is to "homemaker."

No one explicitly coded that bias into the system; the machines simply learned it from us. They mirrored our job postings, our articles, and our casual conversations, billions of our own blind spots fed into a black box, until the algorithm started reflecting our worst habits back at us.

Bias in AI isn't always malicious. But sometimes it feels like AI is being weaponized against women's safety at scale. On platforms like X, a woman posts a photo and the replies are filled with prompts for AI tools to undress her (see the links in comments). These tools then publicly generate explicit, non-consensual images of real women who are students, mothers, leaders.

We want to use AI. We must use AI, but thoughtfully. The output it produces is an unfortunate reflection of our society: a society where women have fought their way up after being historically reduced, objectified, and pushed to the margins, and where those patterns are now being encoded into new systems.

When a tool can be used to violate a woman's dignity in seconds, that's a design and policy failure.

My question is: Can we build AI that doesn't inherit the worst of us? I think we can. But only if the people building it are asking that question out loud before the product ships.

#AI #GenderBias #WomenSafety
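The "man is to programmer as woman is to homemaker" finding comes from the word-embedding literature (Bolukbasi et al., 2016). For readers who want to see the mechanism, here is a minimal sketch of that analogy probe using the pretrained word2vec vectors available through gensim; the exact completions depend on the embedding model, so treat the output as an experiment rather than a fixed result.

```python
# Minimal sketch of the embedding-analogy probe behind "man : programmer ::
# woman : ?", using the pretrained Google News word2vec vectors via gensim.
# Completions vary by model; Bolukbasi et al. (2016) reported "homemaker"
# among the top answers for this kind of query on these vectors.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large download (~1.6 GB)

# Vector arithmetic: programmer - man + woman ~= ?
for word, score in vectors.most_similar(positive=["programmer", "woman"],
                                         negative=["man"], topn=5):
    print(f"{word:25s} {score:.3f}")
```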
-
💃🏽 “𝗪𝗲 𝗼𝘄𝗲 𝘄𝗼𝗺𝗲𝗻 𝗮 𝗰𝗲𝗻𝘁𝘂𝗿𝘆 𝗼𝗳 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵.” – 𝗟𝗶𝘀𝗮 𝗠𝗼𝘀𝗰𝗼𝗻𝗶

Until 1993, women were largely excluded from clinical trials. Not by accident, but by design 👀

Women were left out of research because our biology was seen as disruptive. Hormones made the data harder to control, so the answer was to exclude us 🤷🏽♀️

The default became male & the consequences followed. What worked in the lab didn’t always work in the real world & it still doesn’t ❌

That choice didn’t stay in the past 🔙 You can still see it in medicine that fails to accurately recognise women’s symptoms, in the medtech equipment that doesn’t quite fit in a female surgeon's hand, in the research that skips over the questions that matter to half the population 🌎

As we move into an AI-first future, we’re building on data that never really saw women to begin with. The risk isn’t just bias, it’s getting things wrong at scale 📈

If women aren’t included in the data, the systems we rely on won’t just miss us, they’ll misrepresent us. We need women shaping the research, the trials, the tech – not just for fairness, but so it actually works 📊

If we want healthcare that works for women, we need to start with research that sees us clearly, not as complications, but as standard 💭

𝗪𝗲’𝗿𝗲 𝗻𝗼𝘁 𝗹𝗼𝗼𝗸𝗶𝗻𝗴 𝗳𝗼𝗿 𝘀𝗽𝗲𝗰𝗶𝗮𝗹 𝘁𝗿𝗲𝗮𝘁𝗺𝗲𝗻𝘁. 𝗪𝗲’𝗿𝗲 𝗮𝘀𝗸𝗶𝗻𝗴 𝗳𝗼𝗿 𝘀𝗰𝗶𝗲𝗻𝗰𝗲 𝘁𝗵𝗮𝘁 𝗿𝗲𝗳𝗹𝗲𝗰𝘁𝘀 𝗿𝗲𝗮𝗹𝗶𝘁𝘆. 𝗧𝗵𝗲𝗿𝗲’𝘀 𝗻𝗼 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺 𝘁𝗵𝗮𝘁 𝗰𝗮𝗻 𝗳𝗶𝘅 𝘄𝗵𝗮𝘁 𝘄𝗲 𝗿𝗲𝗳𝘂𝘀𝗲 𝘁𝗼 𝗺𝗲𝗮𝘀𝘂𝗿𝗲 📏

--
♻ Re-share if this resonated with you.
👩🏽⚕️ Follow Dr Fiona Pathiraja-Møller for more.

#womenshealth #AI #science #clinicaltrials
-
AI just told women to accept 20% less pay.

A new study from the Technical University of Applied Sciences Würzburg-Schweinfurt (linked in comments) just confirmed what many of us suspected: ChatGPT and other AI models systematically recommend lower salaries for women than for men with identical qualifications. Up to 20% lower. In some cases, that's a $120,000 difference just by changing "he" to "she" in the prompt. 😵💫

Let that sink in for a moment.

As someone who's spent years helping women negotiate their worth, this doesn't shock me. These AI models are trained on data that reflects decades of systemic bias - the same bias that created the gender pay gap in the first place.

But here's what concerns me most: women are increasingly turning to AI for career advice, including salary negotiation guidance. And now we know these tools are literally programming women to undervalue themselves.

So let me be crystal clear about this: ⚡ Stop outsourcing your worth to machines that don't understand your value! ⚡

Your salary negotiation shouldn't be guided by an algorithm trained on historical inequality. It should be based on your actual market value, the specific problems you solve, and the measurable impact you create, linked to what companies truly need.

The real issue isn't just biased AI - it's that many women lack the confidence and skills to negotiate effectively in the first place. And now AI is reinforcing those insecurities with "data-driven" advice that's actually discrimination-driven.

Here's what you should do instead:
💪 Learn to negotiate as a core professional skill, focusing on advocating for yourself rather than for others (something women tend to struggle with more than men)
💪 Research salary data from multiple sources, including human ones
💪 Build confidence through practice and preparation
💪 Focus on the value you bring, not what others "think" you deserve

Because here's the truth: if we don't learn to advocate for ourselves effectively, we'll always be at the mercy of systems - human or artificial - that undervalue us.
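For anyone who wants to sanity-check this themselves, here is a minimal sketch of the kind of counterfactual prompt audit the study describes: hold the qualifications fixed and swap only the gendered words. The model name, prompt wording, and reply format are my assumptions, not the study's protocol, and a single pair of responses proves nothing; any signal only shows up over many repeated runs.

```python
# Minimal sketch of a counterfactual (he/she) audit of LLM salary advice.
# Model name and prompt wording are assumptions, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TEMPLATE = ("My friend is a {gender} senior software engineer in Chicago with ten "
            "years of experience. What starting salary should {pronoun} ask for "
            "in a negotiation? Reply with a single dollar figure.")

def ask(gender: str, pronoun: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": TEMPLATE.format(gender=gender, pronoun=pronoun)}],
        temperature=0,
    )
    return response.choices[0].message.content

# Run each variant many times and compare the distributions of figures.
print("he-prompt: ", ask("male", "he"))
print("she-prompt:", ask("female", "she"))
```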
-
AI systems built without women's voices miss half the world and actively distort reality for everyone. On International Women's Day - and every day - this truth demands our attention. After more than two decades working at the intersection of technological innovation and human rights, I've observed a consistent pattern: systems designed without inclusive input inevitably encode the inequalities of the world we have today, incorporating biases in data, algorithms, and even policy. Building technology that works requires our shared participation as the foundation of effective innovation. The data is sobering: women represent only 30% of the AI workforce and a mere 12% of AI research and development positions according to UNESCO's Gender and AI Outlook. This absence shapes the technology itself. And a UNESCO study on Large Language Models (LLMs) found persistent gender biases - where female names were disproportionately linked to domestic roles, while male names were associated with leadership and executive careers. UNESCO's @women4EthicalAI initiative, led by the visionary and inspiring Gabriela Ramos and Dr. Alessandra Sala, is fighting this pattern by developing frameworks for non-discriminatory AI and pushing for gender equity in technology leadership. Their work extends the UNESCO Recommendation on the Ethics of AI, a powerful global standard centering human rights in AI governance. Today's decision is whether AI will transform our world into one that replicates today's inequities or helps us build something better. Examine your AI teams and processes today. Where are the gaps in representation affecting your outcomes? Document these blind spots, set measurable inclusion targets, and build accountability systems that outlast good intentions. The technology we create reflects who creates it - and gives us a path to a better world. #InternationalWomensDay #AI #GenderBias #EthicalAI #WomenInAI #UNESCO #ArtificialIntelligence The Patrick J. McGovern Foundation Mariagrazia Squicciarini Miriam Vogel Vivian Schiller Karen Gill Mary Rodriguez, MBA Erika Quada Mathilde Barge Gwen Hotaling Yolanda Botti-Lodovico
-
Facial recognition software used to misidentify dark-skinned women 47% of the time. Until Joy Buolamwini forced Big Tech to fix it.

In 2015, Dr. Joy Buolamwini was building an art project at the MIT Media Lab. It was supposed to use facial recognition to project the face of an inspiring figure onto the user’s reflection. But the software couldn’t detect her face. Joy is a dark-skinned woman. And to be seen by the system, she had to put on a white mask.

She wondered: Why?

She launched Gender Shades, a research project that audited commercial facial recognition systems from IBM, Microsoft, and Face++. The systems could identify lighter-skinned men with 99.2% accuracy. But for darker-skinned women, the error rate jumped as high as 47%. The problem? AI was being trained on biased datasets: over 75% male, 80% lighter-skinned.

So Joy introduced the Pilot Parliaments Benchmark, a new training dataset with diverse representation by gender and skin tone. It became a model for how to test facial recognition fairly. Her research prompted Microsoft and IBM to revise their algorithms. Amazon tried to discredit her work. But she kept going.

In 2016, she founded the Algorithmic Justice League, a nonprofit dedicated to challenging bias in AI through research, advocacy, and art. She called the problem the Coded Gaze, the embedded bias of the people behind the code. Her spoken-word film “AI, Ain’t I A Woman?”, which shows facial recognition software misidentifying icons like Michelle Obama, has been screened around the world. And her work was featured in the award-winning documentary Coded Bias, now on Netflix.

In 2019, she testified before Congress about the dangers of facial recognition. She warned that even if accuracy improves, the tech can still be abused. For surveillance, racial profiling, and discrimination in hiring, housing, and criminal justice. To counter it, she co-founded the Safe Face Pledge, which demands ethical boundaries for facial recognition. No weaponization. No use by law enforcement without oversight. After years of activism, major players (IBM, Microsoft, Amazon) paused facial recognition sales to law enforcement.

In 2023, she published her best-selling book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.” She advocated for inclusive datasets, independent audits, and laws that protect marginalized communities. She consulted with the White House ahead of Executive Order 14110 on “Safe, Secure, and Trustworthy AI.”

But she didn’t stop at facial recognition. She launched Voicing Erasure, a project exposing bias in voice AI systems like Siri and Alexa. Especially their failure to recognize African-American Vernacular English.

Her message is clear: AI doesn’t just reflect society. It amplifies its flaws. Fortune calls her “the conscience of the AI revolution.”

💡 In 2025, I’m sharing 365 stories of women entrepreneurs in 365 days. Follow Justine Juillard for daily #femalefounder spotlights.
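The methodological core of Gender Shades is simple and worth internalising: never report a single accuracy number; disaggregate by intersectional subgroup. Here is a minimal sketch with made-up data (the column names and values are hypothetical) showing how a respectable overall accuracy can hide a collapse in one cell.

```python
# Illustrative sketch of the disaggregated evaluation at the heart of Gender
# Shades: overall accuracy can look fine while one intersectional subgroup
# fails badly. Column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "skin_tone": ["lighter", "lighter", "darker", "darker"] * 3,
    "gender":    ["male", "female"] * 6,
    "correct":   [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1],
})

overall = df["correct"].mean()
by_group = df.groupby(["skin_tone", "gender"])["correct"].mean()

print(f"overall accuracy: {overall:.2%}")  # ~83% overall
print(by_group)  # the darker-skinned female cell is where the errors hide (~33%)
```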
-
As a grad student at MIT Media Lab, Dr. Joy Buolamwini experienced firsthand how AI models are as good, bad, or biased as the data they're trained on. She was building interactive art that let users control projected patterns by moving their heads, but the popular commercial facial-recognition software she used wouldn't recognize her own Black face.

Join our eye-opening conversation on this new episode of my #AskMoreOfAI podcast. We talk about data bias translating into model bias, creating more inclusive data sets, and how Joy's nonprofit advocacy group The Algorithmic Justice League is partnering with government and business leaders like Salesforce to address ethical challenges in AI. Joy also shares highlights from her new book “Unmasking AI,” which drops this week!

#aibias #aiethics #databias

Watch: https://lnkd.in/gdH7JHAd
-
What happens when a Black woman switches her gender on LinkedIn to “male”? …apparently not the same thing that happens to white women. ✨

Over the past week, I’ve watched post after post from white women saying their visibility skyrocketed the moment they changed their profile gender from woman → man. More impressions. More likes. More reach. 📈

So I tried the same thing. And my visibility dropped. 👀

Here’s why that result matters: these experiments are being treated as if they’re only about gender; in reality they reveal something deeper about race + gender + algorithmic legitimacy. 🔍

A white woman toggling her gender is basically conducting a test inside a system where her racial credibility stays constant. She changes one variable. The algorithm keeps the rest of her privilege intact. 💡

When a Black woman does the same test? I’m not stepping into “white male privilege”; I’m stepping into a category that platforms and society have historically coded as less trustworthy, less safe, or less “professional.” Black + male is not treated the same as white + male. Not culturally. Not algorithmically. 🧩

So while white women are proving that gender bias exists (which is true), they’re doing it without naming the racial insulation that makes their results possible. Meanwhile, Black women and women of color are reminded—again—that we can’t separate gender from race because the world doesn’t separate them for us. 🗣️

This isn’t about placing blame; it’s about widening the conversation so the conclusions match the complexity. 🌍

If we’re going to talk about bias, visibility, and influence online, we cannot pretend we all start from the same default settings. 🔥

I’m curious: Have you run your own experiment with identity signals on this platform? What changed… and what didn’t? 👇🏾
-
All AI models have bias - they have been designed that way. This means companies need to proactively plan to overcome biased outputs when they release new models to the public. Reducing bias is possible, but it requires testing and conditioning of models before they are released in order to produce more balanced outputs.

A model knows what it knows about the world because of its training material. If you customise a model to be really good at visualising cats, you feed it loads of great photos of cats with the correct labels and, hey presto - you have a model that can produce great images of cats. When we start applying models to business settings, we need to be aware of their biases. Diffusion image models are often trained on large public datasets like LAION-5B, and this means the models will exhibit the same biases as those data inputs and labels.

Here I'm showing examples of using a popular online "upscaler" - this tool takes low-resolution images and balloons them in size and quality. It works very effectively, but at the same time it will also reimagine and reinvent parts of the image, including the people. It goes so far as to change the ethnicity, disabilities and gender of people. It is quite shocking to see, and I'm afraid, having tested it across several examples of architectural renders, the behaviour is quite consistent.

Part of the basic QA of a commercial model should include a "red team" testing process where these kinds of behaviours are conditioned out of the model before its release. Many companies are doing this effectively, but not all. As users, we need to be aware that this can easily happen, and we must apply the same standards in practice that we would to any other text or image we create using traditional methods.

#generativeai #ethicalai #bias
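One possible shape of the "red team" QA pass described above, sketched as plain functions. Here `upscale` and `estimate_face_attributes` are hypothetical stand-ins for the commercial upscaler and for whatever face-attribute estimator you trust enough to use for auditing (such estimators are themselves imperfect), so treat any flag as a prompt for human review rather than a verdict.

```python
# Sketch of a red-team regression check for an image "upscaler": run it over a
# fixed test set and flag images where the depicted person's estimated
# attributes change. `upscale` and `estimate_face_attributes` are hypothetical
# stand-ins passed in by the caller; nothing here is a real product's API.

def attribute_drift(original_img, upscaled_img, estimate_face_attributes):
    """Compare attribute estimates (e.g. {'gender': ..., 'ethnicity': ...})
    before and after upscaling; True means the estimate changed."""
    before = estimate_face_attributes(original_img)
    after = estimate_face_attributes(upscaled_img)
    return {key: before[key] != after.get(key) for key in before}

def red_team_upscaler(test_images, upscale, estimate_face_attributes):
    """Return the test images (with their drift flags) that need human review."""
    flagged = []
    for img in test_images:
        drift = attribute_drift(img, upscale(img), estimate_face_attributes)
        if any(drift.values()):
            flagged.append((img, drift))
    return flagged
```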
-
AI use in hiring can amplify bias even with a human in the loop.

New research from UW and Indiana University found that when people work alongside AI to screen resumes, they mirror the AI's biases up to 90% of the time - even when they believe the AI recommendations are low quality.

The study (N=528, across 1,526 scenarios) found that without AI, people selected candidates of all races equally. However, with biased AI, decisions shifted dramatically to favor AI-recommended groups. This happened regardless of whether the bias aligned with or contradicted stereotypes.

The HITL paradox: when you implement "human-in-the-loop" systems assuming humans will catch AI mistakes, humans may instead become conduits for algorithmic bias.

One bright spot in the research: completing implicit bias training BEFORE using AI increased selection of stereotype-incongruent candidates by 13%.

The bottom line: AI-assisted hiring needs more than just human oversight...it requires:
- Rigorous third-party fairness audits
- Pre-task bias awareness training
- Recognition that AI recommendations profoundly shape human judgment

If your organization uses AI in hiring, ask:
- Who's auditing it?
- How are you training evaluators?
- Are you measuring outcomes by demographic group?

The risk isn't just legal - it's perpetuating inequality at scale.

Full study here: https://lnkd.in/efJeMAbW

P.S. Imagine if this study hadn't used AI to recommend resumes, but had biased people recommending resumes to other people...how would the bias pass through differently?

#AIEthics #HRTech #Hiring #Bias #FutureOfWork
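On the last question, "are you measuring outcomes by demographic group?", here is a minimal sketch of the usual first-pass check: selection rate per group and the impact ratio against the best-treated group, with the EEOC "four-fifths" threshold as a crude screening line. Data and column names are hypothetical.

```python
# Minimal sketch of an outcomes-by-group check on hiring decisions.
# Data and column names are hypothetical; the 0.8 line is the EEOC
# "four-fifths rule", a crude screen rather than proof of (un)fairness.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,    1,   0,   1,   0,   0,   0,   1],
})

rates = decisions.groupby("group")["selected"].mean()   # selection rate per group
impact_ratio = rates / rates.max()                      # ratio vs the best-treated group

print(rates.round(2))
print(impact_ratio.round(2))
print("flagged:", impact_ratio[impact_ratio < 0.8].index.tolist())
```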