Issues with gender inference in tech research


Summary

Issues with gender inference in tech research are the challenges and biases that arise when technology or AI systems attempt to identify or process gender, often producing unequal outcomes and reinforcing existing disparities, especially for women and marginalized groups. These problems affect access to technology, workplace evaluations, and the accuracy of AI tools, making it vital to address gender bias to build fair and inclusive tech environments.

  • Expand training data: Prioritize collecting and curating diverse datasets that accurately represent a wide range of gender identities and perspectives for AI development.
  • Revise evaluation methods: Redesign workplace assessments and performance reviews to focus on results rather than the methods or tools used, helping reduce bias against those using AI assistance.
  • Include gender perspectives: Work with stakeholders and policymakers to integrate gender considerations into regulatory frameworks, tech access programs, and leadership initiatives.
Summarized by AI based on LinkedIn member posts
  • Dr. Patrice Torcivia Prusko

    Strategic, visionary leader, driving positive social change at the intersection of technology and education.

    5,213 followers

    My recent research, which examines the adoption of emerging technologies through a gender lens, illuminates continued disparities in women's experiences with Generative AI. Day after day we hear about the ways GenAI will change how we work, the types of jobs that will be needed, and how it will enhance our productivity, but are these benefits equally accessible to everyone? My research suggests otherwise, particularly for women.

    🕰️ The Time Crunch: Women, especially those juggling careers with care responsibilities, face a significant time deficit. Across the globe, women spend up to twice as much time as men on care and household duties, leaving them without the luxury of time to upskill in GenAI technologies. This "second shift" at home is widening an already wide divide.

    💻 Tech Access Gap: Beyond time constraints, many women lack access to the technology needed to engage with GenAI effectively. This isn't just about owning a computer - it's about having consistent, uninterrupted access to high-speed internet and up-to-date hardware capable of running advanced AI tools. According to the GSMA, women in low- and middle-income countries are 20% less likely than men to own a smartphone and 49% less likely to use mobile internet.

    🚀 Career Advancement Hurdles: The combination of time poverty and limited tech access is creating a perfect storm. As GenAI skills become increasingly expected in the workplace, women risk falling further behind in career advancement opportunities and pay, especially in tech-related fields and leadership positions. Women account for only about 25% of engineers working in AI, and less than 20% of speakers at AI conferences are women.

    🔍 Applying a Gender Lens: Viewed through a gender lens, the rapid advancement of GenAI threatens to exacerbate existing inequalities. It's not enough to create powerful AI tools; we must ensure equitable access and opportunity to leverage them.

    📈 Moving Forward: To address this growing divide, we need targeted interventions:
    - Flexible, asynchronous training programs that accommodate varied schedules.
    - Initiatives to improve tech access in underserved communities.
    - Workplace policies that recognize and support employees with caregiving responsibilities.
    - Mentorship programs specifically designed to support women in acquiring GenAI skills.

    There is great potential in GenAI, but also a risk of leaving half our workforce behind. It's time for tech companies, employers, and policymakers to recognize and address these gender-specific barriers. Please share initiatives or ideas you have for making GenAI more inclusive and accessible for everyone. #GenderEquity #GenAI #WomenInTech #InclusiveAI #WorkplaceEquality

  • Cristóbal Cobo

    Senior Education and Technology Policy Expert at International Organization

    39,445 followers

    Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models

    The International Research Centre on Artificial Intelligence (IRCAI), under the auspices of UNESCO, in collaboration with UNESCO HQ, has released a comprehensive report titled "Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models". This groundbreaking study sheds light on the persistent issue of gender bias within artificial intelligence, emphasizing the importance of implementing normative frameworks to mitigate these risks and ensure fairness in AI systems globally.

    "...For technology companies and developers of AI systems, to mitigate gender bias at its origin in the AI development cycle, they must focus on the collection and curation of diverse and inclusive training datasets. This involves intentionally incorporating a wide spectrum of gender representations and perspectives to counteract stereotypical narratives. Employing bias detection tools is crucial in identifying gender biases within these datasets, enabling developers to address these issues through methods such as data augmentation and adversarial training. Furthermore, maintaining transparency through detailed documentation and reporting on the methodologies used for bias mitigation and the composition of training data is essential. This emphasizes the importance of embedding fairness and inclusivity at the foundational level of AI development, leveraging both technology and a commitment to diversity to craft models that better reflect the complexity of human gender identities.

    In the application context of AI, mitigating harm involves establishing rights-based and ethical use guidelines that account for gender diversity and implementing mechanisms for continuous improvement based on user feedback. Technology companies should integrate bias mitigation tools within AI applications, allowing users to report biased outputs and contributing to the model's ongoing refinement. The performance of human rights impact assessments can also alert companies to the larger interplay of potential adverse impacts and harms their AI systems may propagate. Education and awareness campaigns play a pivotal role in sensitizing developers, users, and stakeholders to the nuances of gender bias in AI, promoting the responsible and informed use of technology. Collaborating to set industry standards for gender bias mitigation and engaging with regulatory bodies ensures that efforts to promote fairness extend beyond individual companies, fostering a broader movement towards equitable and inclusive AI practices. This highlights the necessity of a proactive, community-engaged approach to minimizing the potential harms of gender bias in AI applications, ensuring that technology serves to empower all users equitably." https://lnkd.in/eTyr6XTn
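The dataset curation and augmentation the report calls for can be sketched concretely. This is a minimal illustration, not the report's actual method: the `gender` field, the group labels, and the naive oversampling strategy are all assumptions made for the example.

```python
import random
from collections import Counter

def balance_by_oversampling(records, key="gender", seed=0):
    """Naive data augmentation: oversample every underrepresented
    group until it matches the largest group. Real pipelines use
    richer augmentation (paraphrasing, adversarial training)."""
    rng = random.Random(seed)
    groups = {}
    for record in records:
        groups.setdefault(record[key], []).append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Resample (with replacement) to close the gap to the target.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# A skewed toy corpus: 30 examples labeled "f", 70 labeled "m".
corpus = [{"gender": "f"}] * 30 + [{"gender": "m"}] * 70
counts = Counter(r["gender"] for r in balance_by_oversampling(corpus))
# counts["f"] == counts["m"] == 70
```

A bias-detection pass would run the `Counter` tally before augmenting; the same skeleton extends to intersectional keys (e.g. tuples of attributes).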

  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,445 followers

    "This report, developed by UNESCO in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO's Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem.

    The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, women currently make up only 29% of researchers in research and development (R&D), while this drops to 12% in specific AI research positions. Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces. Moreover, only 30% of professionals in the AI sector are women, and the gender gap widens further in leadership roles, with only 18% of C-suite positions at AI startups held by women.

    Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities.

    The study also reveals a concerning lack of gender-disaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles, in contrast to a 15% gap in general ICT professions.

    Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly on female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (such as race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."
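The recognition-accuracy disparities described above are typically quantified by auditing error rates per demographic group. The sketch below uses invented labels, not figures from the report, and the group names are illustrative assumptions.

```python
def error_rate_gap(y_true, y_pred, groups):
    """Per-group error rate plus the largest gap between any two
    groups. Fairness audits report similar equalized-odds-style gaps."""
    tallies = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        errors, total = tallies.get(group, (0, 0))
        tallies[group] = (errors + (truth != pred), total + 1)
    rates = {g: errors / total for g, (errors, total) in tallies.items()}
    return rates, max(rates.values()) - min(rates.values())

# Toy audit: the model errs on 2 of 4 "f" samples but 1 of 4 "m" samples.
rates, gap = error_rate_gap(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[0, 1, 1, 0, 1, 0, 0, 0],
    groups=["f", "f", "f", "f", "m", "m", "m", "m"],
)
# rates == {"f": 0.5, "m": 0.25}; gap == 0.25
```

A nonzero gap on held-out data is exactly the "performs poorly on female voices and faces" pattern the report flags, made measurable.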

  • Dr. Preet Deep Singh

    Top 100 in AI (Global) | Vice President, Apna.Co and Blue Machines AI

    12,717 followers

    AI is supposed to be the great equaliser. But in many teams, it's becoming a new gatekeeper - especially for women.

    A pre-registered experiment with over 1,000 engineers found a striking "competence penalty": when reviewers knew code was written with AI assistance, they rated the engineer's competence 9% lower on average - even when the work was identical. For female engineers, the penalty was more than twice as harsh: a 13% drop in perceived competence versus 6% for men. Non-AI users - especially men - imposed the largest penalties.

    As a result, many engineers, especially women, avoid using AI to protect their professional reputations, despite the productivity gains these tools bring. If we want an AI-powered workforce, we cannot penalise the very people who adopt innovation to deliver better outcomes. Bias that disguises itself as "craft purity" or "real engineering" punishes women twice: first for being underrepresented in tech, and again for leveraging modern tools.

    What leaders can do now:
    - Listen to underrepresented groups about barriers to AI adoption.
    - Make respected role models visible users of AI.
    - Redesign evaluations to focus on results, not methods.
    - Remove or anonymise 'AI-assisted' tags in performance reviews.

    Progress is not about tools - it's about choosing not to penalise those who use them.

    Citation: Acar, O. A., Gai, P. J., Tu, Y., & Hou, J. (2025). "Competence Penalty Is a Barrier to the Adoption of New Technology." SSRN. https://lnkd.in/ek3UdnqZ

  • Babak Abbaschian, Ph.D.

    CTO, CAIO | PhD | Generated $200M Revenue via AI & Machine Learning-Based Digital Transformation | Led Innovation and Strategic Growth

    3,625 followers

    In our most recent research, "Building a Gender-Bias-Resistant Super Corpus as a Deep Learning Baseline for Speech Emotion Recognition," we address the critical issue of gender bias in speech emotion recognition (SER) systems. By constructing a comprehensive and balanced dataset, we provide a robust foundation for training deep learning models that interpret emotional cues more accurately and equitably across genders. Our work is a step toward creating fairer, more inclusive SER technologies that reflect the emotional diversity of all users. As this field evolves, we invite fellow researchers to consider: how can we continue to uncover and mitigate biases in AI/ML systems to ensure equitable outcomes across all populations? https://lnkd.in/gTUErX4H #research #publication #AIML #SpeechRecognition #SpeechEmotionRecognition #Fairness #AIEthics

  • Danielle Wipperfurth

    Customer Experience Strategy @ Boston Scientific | I lead teams that help companies grow and retain their best customers

    4,987 followers

    Me: “ChatGPT, why do you think every successful professional is a man?”

    Like many professional women, I was caught off guard when AI generated a male image of me based on our interactions.

    ChatGPT: “Oof, fair callout. That default-to-male thing? It's baked into way too much of how the world (and yeah, even AI training data) frames ‘success,’ and it’s total BS. It reflects the bias in the data, not reality.”

    While ChatGPT and I are back on speaking terms 😉, the exchange sparked a deeper curiosity: what’s actually being done to combat bias in artificial intelligence? Here’s a quick look at current efforts:
    ● Diverse Training Data – Actively expanding datasets to include broader representation across gender, race, geography, and more.
    ● Bias Auditing Tools – Software to detect and flag discriminatory outputs (e.g., Fairness Indicators, AI Fairness 360).
    ● Human-in-the-Loop Review – Bringing diverse human reviewers into model evaluation to catch what algorithms might miss.
    ● Transparency & Explainability – Demanding models show their work - literally - with more interpretable outputs.
    ● Regulation & Ethics Boards – Governments and institutions setting guardrails (e.g., EU AI Act, IEEE, NIST).
    ● Open Research Collaboration – Shared datasets and bias benchmarks (e.g., BIG-bench, Holistic Evaluation of Language Models).

    Experts believe these efforts will be moderately effective in the short term, while long-term global bias mitigation remains a work in progress - some are optimistic, others cautious.

    Curious to hear from you: Have you noticed gender or identity bias in AI tools, whether obvious or subtle? Do you think enough is being done to combat it? 👇 #AIethics #BiasInAI #ResponsibleAI #MachineLearning #GenderBias #TechForGood #WomenLeaders
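The bias-auditing tools mentioned in this post reduce to group-fairness metrics at their core. Below is a hand-rolled demographic-parity check with invented numbers; toolkits such as AI Fairness 360 ship comparable metrics (e.g. a statistical parity difference), so treat this only as a sketch of the idea, not any library's API.

```python
def positive_rate_by_group(outcomes, groups, positive=1):
    """Share of positive outcomes per group. Demographic parity
    asks these rates to be (roughly) equal across groups."""
    tallies = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = tallies.get(group, (0, 0))
        tallies[group] = (positives + (outcome == positive), total + 1)
    return {g: positives / total for g, (positives, total) in tallies.items()}

# Hypothetical screening model: 3 of 5 "m" candidates pass, 1 of 5 "f".
rates = positive_rate_by_group(
    outcomes=[1, 1, 1, 0, 0, 1, 0, 0, 0, 0],
    groups=["m"] * 5 + ["f"] * 5,
)
parity_gap = rates["f"] - rates["m"]
# rates == {"m": 0.6, "f": 0.2}; parity_gap is about -0.4
```

A negative gap of this size is the quantitative version of the "default-to-male" pattern the post describes: the model hands out positive outcomes unevenly, whatever its accuracy.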
