Women are changing their gender to "male" on LinkedIn to prove the algorithm is biased. And it's working.

Visibility shouldn't require women to sound less like ourselves. Yet here we are.

All over my feed this week, women are changing their gender markers to prove a point – and the results are loud, undeniable, and honestly painful. Engagement jumps. Comments triple. Reach explodes. The bias becomes visible in real time.

And I get it. It's important work. It quantifies what many of us have felt for years but couldn't "prove" in a way algorithms take seriously.

Here's the tension I can't shake: if we keep performing masculinity to be heard, what are we training the algorithm to believe about our real voices?

Because short-term proof comes with long-term consequences:

→ We risk signalling to the platform that feminine-coded communication isn't "valuable"
→ We dilute the diversity the algorithm should be learning to amplify
→ We risk becoming attached to the virality instead of the message
→ We slowly lose the nuance, softness, depth, and empathy many women naturally write with
→ We normalize the idea that adaptation is the only survival strategy

And that's a cost I'm not willing to ignore.

This isn't about calling out women experimenting. It's about asking a critical question: are we documenting the bias, or accidentally reinforcing it?

There has to be another way... a better way. One that strengthens our collective voice instead of bending it.

Rachael (whose article sparked this reflection) said something powerful: "Instead we could all find ten women's posts per day – follow them, engage, and amplify their voices."

And dare I add – reshare their content with the same energy we're giving this experiment? Maybe that's the real secret sauce.

Imagine that. Thousands of us intentionally raising the volume of women who are already writing with excellence, insight, and emotional intelligence.

We are not gaming the system. We are reshaping it – one amplified voice at a time. And perhaps it's time LinkedIn joined the conversation.

So here's my stance: we don't beat bias by becoming less of ourselves. We beat it by being so visible, amplified, united, and supported that the algorithm has no choice but to learn.

This conversation isn't about virality. It's about voice – and how we choose to protect it.

What do you think? Is there a better way forward than swapping genders just to be heard? I'd love to hear thoughtful perspectives – especially from women navigating this tension in real time.
Tech-Driven Workforce Diversity
Explore top LinkedIn content from expert professionals.
-
Facial recognition software used to misidentify dark-skinned women 47% of the time. Until Joy Buolamwini forced Big Tech to fix it.

In 2015, Dr. Joy Buolamwini was building an art project at the MIT Media Lab. It was supposed to use facial recognition to project the face of an inspiring figure onto the user's reflection. But the software couldn't detect her face.

Joy is a dark-skinned woman. And to be seen by the system, she had to put on a white mask. She wondered: why?

She launched Gender Shades, a research project that audited commercial facial recognition systems from IBM, Microsoft, and Face++. The systems could identify lighter-skinned men with 99.2% accuracy. But for darker-skinned women, the error rate jumped as high as 47%.

The problem? AI was being trained on biased datasets: over 75% male, 80% lighter-skinned.

So Joy introduced the Pilot Parliaments Benchmark, a new training dataset with diverse representation by gender and skin tone. It became a model for how to test facial recognition fairly.

Her research prompted Microsoft and IBM to revise their algorithms. Amazon tried to discredit her work. But she kept going.

In 2016, she founded the Algorithmic Justice League, a nonprofit dedicated to challenging bias in AI through research, advocacy, and art. She called the problem the Coded Gaze: the embedded bias of the people behind the code.

Her spoken-word film "AI, Ain't I A Woman?", which shows facial recognition software misidentifying icons like Michelle Obama, has been screened around the world. And her work was featured in the award-winning documentary Coded Bias, now on Netflix.

In 2019, she testified before Congress about the dangers of facial recognition. She warned that even if accuracy improves, the tech can still be abused – for surveillance, racial profiling, and discrimination in hiring, housing, and criminal justice.

To counter it, she co-founded the Safe Face Pledge, which demands ethical boundaries for facial recognition. No weaponization. No use by law enforcement without oversight.

After years of activism, major players (IBM, Microsoft, Amazon) paused facial recognition sales to law enforcement.

In 2023, she published her best-selling book "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." She advocated for inclusive datasets, independent audits, and laws that protect marginalized communities. She consulted with the White House ahead of Executive Order 14110 on "Safe, Secure, and Trustworthy AI."

But she didn't stop at facial recognition. She launched Voicing Erasure, a project exposing bias in voice AI systems like Siri and Alexa – especially their failure to recognize African-American Vernacular English.

Her message is clear: AI doesn't just reflect society. It amplifies its flaws. Fortune calls her "the conscience of the AI revolution."

💡 In 2025, I'm sharing 365 stories of women entrepreneurs in 365 days. Follow Justine Juillard for daily #femalefounder spotlights.
-
AI isn't just a powerful tool for accelerating sustainability work. It can also help us move faster in advancing human rights – and we're piloting AI models that do just that.

Amazon has hundreds of thousands of suppliers worldwide – that's a massive scope. So we're harnessing AI to keep pace with, prevent, and respond to human rights risks in our network. Here are two examples of how that's taking shape:

🔍 Smarter risk prediction: We developed an AI model that can analyze tens of thousands of historical social audits to identify patterns, spot warning signs, and flag high-risk suppliers – essentially helping us zoom in on what matters. The testing results were impressive: the tool successfully identified about 9 out of every 10 high-risk sites, with 85% overall accuracy.

⏱ Faster insights: It can take a human rights manager up to four hours to manually review a supplier audit report. But we developed an AI tool that processes a report in just minutes – identifying risks, rating their seriousness, and suggesting next steps. Early versions helped us process audit reports 65% faster – a remarkable difference!

It's important to note: these AI tools aren't replacing human decision-making. They're designed to support, enhance, and accelerate our work. Every AI recommendation gets reviewed by our experts – and their input actually helps improve the system over time.

We're still in early stages, but I'm inspired by the potential. On this #HumanRightsDay, I invite you to learn more about our work from Devex's comprehensive interview with Leigh Anne DeWine, our Director of Human Rights & Social Impact, who is making a great impact every day here at Amazon. Thanks to Leigh Anne and our entire Human Rights and Social Impact team for the incredibly critical work you do. 🙏 https://lnkd.in/gSWZAWFB
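A note for readers comparing the two numbers in that post: "9 out of every 10 high-risk sites" is a recall figure, while "85% overall accuracy" counts all classifications, so the two can differ. The post does not publish a confusion matrix, so the counts below are purely hypothetical, chosen only to be consistent with both reported figures:

```python
# Illustrative sketch: recall vs. overall accuracy for a risk classifier.
# The counts are invented (the post shares only the two headline figures).
def recall_and_accuracy(tp, fn, fp, tn):
    """Recall = share of truly high-risk sites the model flags;
    accuracy = share of all sites classified correctly."""
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return recall, accuracy

# Hypothetical audit of 1,000 supplier sites, 100 of them truly high-risk.
recall, accuracy = recall_and_accuracy(tp=90, fn=10, fp=140, tn=760)
print(recall)    # 0.9  -> "9 out of every 10 high-risk sites"
print(accuracy)  # 0.85 -> "85% overall accuracy"
```

Note how, in this hypothetical split, high recall coexists with 140 false alarms – a sensible trade-off when human experts review every flag anyway.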
-
One simple hack for every software agency to train their people and keep up with the AI revolution: get engaged in social impact projects, pro bono.

Seriously, I get this question so often when talking to software vendors: "I'd love to use our skills for good, but what do we get if we get involved?"

There's a lot to say – building a portfolio, go-to-market strategies, retaining talent. Did you know that 86% of Gen Zs and 89% of millennials say a sense of purpose is important to overall job satisfaction?

But one golden use case has emerged recently – a true win-win-win. What we've seen at Tech To The Rescue over the past five years is simple: social impact work isn't charity. It's how companies sharpen their edge.

When tech teams partner with nonprofits tackling crises – like Mercy Corps or ACAPS – they're not just donating skills. They're learning in ways no corporate client could teach them. They're designing for chaos, building for the underserved, and stretching their creativity to the limit.

For small and mid-sized companies, this work is a goldmine of tactical insight:

- Designing systems for low-bandwidth environments
- Creating tools for users with limited digital literacy
- Adapting platforms for multilingual emergency contexts

These aren't side projects. They're previews of the challenges companies will face as they scale into new markets. We call it R&D with purpose.

That's why every week, more than 10 software agencies join Tech To The Rescue. Today, three out of four of them treat AI upskilling as the key business driver for their engagement.

It's also why top minds join the movement to power up nonprofits with their expertise – just like Werner Vogels, Amazon's CTO, who mentors nonprofit CTOs directly, giving them access to the same leadership and technical playbook that powers Amazon Web Services (AWS). (All done under the Now Go Build CTO Fellowship, created in partnership with our AI for Changemakers program.)

My op-ed on this has just been published by Fast Company (thanks to the magnificent Jean Ekwa!). I'd love to hear your thoughts – please drop them below. 👏 Lars Peter Nissen, Alicia Morrison, Yevhen B.
-
I've been reflecting on how we often consider future skills, digital transformation, or STEM careers without addressing a hard truth: socioeconomic disadvantage continues to block millions from accessing opportunity. And in the UK, that disadvantage is often as simple – and as serious – as a lack of internet.

Here's what that looks like:

📉 1.5 million UK homes are without internet access. For many students, this means no online homework, no virtual STEM clubs, and no exposure to the digital skills needed for tomorrow's jobs.

🧪 STEM education is still uneven. Pupils from the most deprived areas are less likely to access advanced science and maths courses, and much less likely to pursue STEM careers.

🔌 Connectivity is key – and telecoms can help. A brilliant example? The National Databank, supported by Virgin Media O2 and Good Things Foundation. It's been called a "food bank for data," offering free mobile data, texts, and calls to people who can't afford connectivity. Many O2 stores across the UK now serve as data donation hubs – bringing digital access right into local communities.

🧠 The result? Students stay connected. Adults can retrain. Families can access services. And no one is locked out of opportunity because they can't afford data.

Tech and telecoms companies have a real role in levelling the playing field – not just in innovation, but in inclusion.

💬 What other examples have you seen of organisations using infrastructure for impact? Let's build a future where no potential is wasted because of a postcode.

#DigitalInclusion #NationalDatabank #STEMAccess #TechForGood #LevellingUp #UKTech #SocialMobility #Telecommunications #DigitalEquity #FutureOfWork #InclusionMatters
-
Access to justice organizations present unique opportunities for technology companies. Some thoughts on effectively engaging with this impactful sector:

1. Prioritize affordability: Develop flexible pricing models, including sliding scales based on organizational size or client volume. Consider offering pro bono licenses to qualifying nonprofits.

2. Streamline intake processes: Demonstrate how your case management system can reduce initial client screening time by 50%, or how your chatbot can triage inquiries, freeing up staff for complex cases.

3. Emphasize data privacy: Highlight robust anonymization features and compliance with domestic violence shelter confidentiality requirements. Detail your approach to handling sensitive immigration status information.

4. Design for accessibility: Create interfaces optimized for users with limited digital literacy. Ensure compatibility with screen readers and offer multilingual support for common languages in underserved communities.

5. Form community partnerships: Collaborate with bar associations and law schools to gather insights on unmet legal needs. This informs product development and builds credibility with potential clients.

6. Develop social impact metrics: Invest in analytics that quantify your technology's effect on case outcomes, time saved, or number of additional clients served. This data supports grant applications and impact reporting.

7. Address specific legal domains: Tailor solutions for high-need areas like eviction defense, debt collection, or public benefits appeals. Offer modules that incorporate relevant local laws and court procedures.

8. Facilitate knowledge sharing: Implement features that allow easy creation and distribution of know-your-rights materials or pro se resources, amplifying the reach of limited legal staff.

The stakes in this market extend far beyond profit margins. By developing tools that expand access to justice, tech companies have the potential to reduce inequality, prevent homelessness, protect domestic violence survivors, and strengthen the very fabric of civil society. Those who successfully navigate the unique challenges of this sector won't just capture market share – they'll play a pivotal role in fulfilling the promise of equal justice under law.

#legaltech #innovation #law #business #learning
-
The UK government just acknowledged something most digital transformation programmes quietly ignore: technology is not the constraint anymore. Access is.

Published on 24 March 2026, the Digital Inclusion Action Plan One Year On outlines progress on the government's commitment to digital access. It aims to ensure everyone, regardless of their circumstances, can get connected and go online safely and with confidence.

But 1.6 million people have no internet connection at all, and many more do not have the right device or the skills to use the internet at work and in life. People who are not online are often also disabled, older, or on lower incomes. Without internet access and digital skills, they end up paying more for bills, trapped earning less, or unable to easily get the support they need. That is a structural inequality being reinforced by every service that moves exclusively online without asking who gets left behind.

The plan is doing something other digital strategies rarely do. It explicitly commits to ensuring online services are simple to use, with offline options too, and to putting in place more trusted local help. That second part matters enormously. Most digital transformation programmes treat non-digital access as a temporary concession until everyone catches up. This plan treats it as a permanent design requirement.

The plan highlighted groups more likely to struggle, including low-income households, older people, disabled people, unemployed people, and some young people not in employment, education or training. These are precisely the groups who most depend on government services.

Building services that work beautifully for connected, confident users whilst excluding the most vulnerable is not digital transformation. It is digital displacement.

The real test of this plan is not whether it reaches the people already online. It is whether it changes how departments design services for the people who are not. How many of your services assume digital access that your most vulnerable users do not have?

#DigitalInclusion #GovTech #PublicSector
-
AI bias is NOT a bug. It's a feature we never wanted.

I learned this the hard way when our "fair" AI system failed every woman who applied. That was my wake-up call.

2025 isn't about whether AI has biases → it's about what we're doing to fix them.

❌ We can't fix AI bias with more biased data.

🔻 The solution? Curate like your ethics depend on it.

❇️ Diverse datasets reflecting ALL genders, races, communities
❇️ Data governance tools that actually govern
❇️ Quality control that goes beyond "clean enough"

I heard that one team spent 6 months cleaning data and saved 2 years of bias cleanup later. Pre-processing and post-processing are your best friends.

Technical solutions that actually solve things:
→ Bias detection tools, not just fancy dashboards
→ Fairness-aware algorithms, coded with intention
→ AI governance platforms that govern, not just monitor

We need systems that catch bias before it catches us. 👇

But here's what surprised me: the most effective solutions aren't technical → they're human.
- Diverse teams catch biases early.
- Ethicists at the design table.
- Social scientists in the code reviews.
- Red teams that actually attack assumptions.

Corporate accountability is coming. Ethical frameworks are evolving. Inclusive policies are becoming law. Tech companies will be held accountable for every bias, especially political ones.

→ Explainable AI that actually explains
→ Human oversight with real authority
→ Public education that creates informed users

𝘞𝘦 𝘤𝘢𝘯'𝘵 𝘩𝘪𝘥𝘦 𝘣𝘦𝘩𝘪𝘯𝘥 "𝘢𝘭𝘨𝘰𝘳𝘪𝘵𝘩𝘮𝘪𝘤 𝘤𝘰𝘮𝘱𝘭𝘦𝘹𝘪𝘵𝘺" 𝘢𝘯𝘺𝘮𝘰𝘳𝘦.

⚠️ Gender bias gets special attention: diverse datasets AND diverse teams. AI detecting gender pay gaps. Safety tools that actually protect victims. Women are watching. We're measuring.

The emerging trends that matter:
→ Explainable AI (XAI): making decisions understandable
→ User-centric design: for ALL users
→ Community engagement: not corporate tokenism
→ Synthetic data: creating unbiased training sets
→ Fairness-by-design: embedded from day one

We're reimagining how AI gets built.
- From the data up.
- From the team out.
- From the ethics in.

The companies that get this right will win. Because bias isn't just a technical problem. ➡️ It's a human rights issue.

What's the most surprising bias you've discovered in your work?
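For readers wondering what "bias detection tools" actually compute, many start from simple group fairness metrics. Here is a minimal sketch of one of the most common, demographic parity difference; the function names and the hiring data are invented for illustration, not taken from any tool mentioned in the post:

```python
# Minimal sketch of one common bias-detection metric: demographic parity
# difference, i.e. the gap in positive-outcome rates between groups.
# All data below is invented; a real audit would use the model's actual
# decisions and protected-attribute labels.

def selection_rate(decisions):
    """Fraction of applicants who received a positive decision (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups.
    0.0 means parity; larger values mean a bigger disparity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical screening decisions: 1 = advanced to interview, 0 = rejected.
men = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]    # 8 of 10 selected
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 3 of 10 selected

gap = demographic_parity_difference(men, women)
print(round(gap, 2))  # 0.5 -> a 50-point gap worth investigating
```

Demographic parity is deliberately crude – it ignores qualifications entirely – which is exactly why the post's point stands: a metric can flag a disparity, but diverse human teams are needed to judge whether it reflects bias.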
-
Last week, I had the privilege of joining the Tech To The Rescue AI for Changemakers accelerator program, focusing on disaster management. Together with fellow board members at Disaster Tech Lab, we dove deep into building AI strategies aligned with humanitarian goals.

We started by reassessing our purpose. For us at Disaster Tech Lab, it's about leveraging AI to enhance disaster preparedness, response, and recovery.

Key insights:

1️⃣ Define your purpose: Identify the specific problem your AI solves in disaster response. For us, it's improving decision-making in critical moments.
2️⃣ Craft your mission: Articulate how you'll achieve your purpose and who benefits. We're focused on equipping first responders with AI-driven insights.
3️⃣ Envision the future: We see a world where AI seamlessly integrates into global humanitarian efforts, dramatically improving response times and aid delivery.
4️⃣ Align strategy with purpose: We've developed a roadmap that includes partnering with key stakeholders in the humanitarian sector and setting clear milestones for our AI development.
5️⃣ Collaborate: One of the most valuable outcomes was connecting with other organizations. We're now exploring data-sharing initiatives to collectively advance AI in disaster management.

The experience reinforced that while AI is a powerful tool, its true potential in humanitarian work is realized when guided by a clear purpose and a collaborative spirit. As we continue this journey, we're more committed than ever to harnessing AI's power to serve humanity and create a more resilient world. 🌍💪

How is your organization aligning AI with its business goals to drive social impact? I'd love to hear your thoughts!

#AI #DisasterResponse #HumanitarianAid #TechForGood