Our new paper, “Detecting bias in algorithms used to disseminate information in social networks and mitigating it using multiobjective optimization,” is now out in PNAS Nexus.

Algorithms that determine how information spreads — from health campaigns to social media posts — are often optimized for one goal: maximizing reach. But what happens when reach comes at the expense of equity?

Paper: https://lnkd.in/epNgJSnP
ArXiv: https://lnkd.in/eAkVWwAK

In this work, led by Vedran Sekara and with Ivan Dotu, Manuel Cebrian, and Manuel García-Herranz, we show that state-of-the-art influence maximization algorithms — the same kind used to identify “influencers” in social networks — systematically leave parts of the network behind. Some groups receive information late, others not at all. In other words, algorithmic bias can emerge not from data, but from the mathematical definition of the problem itself.

To address this, we developed a multiobjective algorithm that balances spread and fairness. The result: we can significantly reduce informational inequality with only a minimal loss in reach. This suggests that optimization and equity don’t have to be opposing goals.

As algorithms increasingly shape who gets access to opportunities, resources, and knowledge, this kind of fairness-aware design becomes essential — not just for social media or marketing, but for public health, disaster response, and social resilience.
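The trade-off described above can be made concrete with a small sketch. What follows is a minimal, hypothetical illustration of the general idea, not the paper's actual algorithm: greedy seed selection under an independent-cascade spread model, with an objective that blends total expected reach against the coverage of the worst-served group. The `graph` (adjacency dict), `groups` (label to member nodes), and all parameters are assumptions for illustration.

```python
import random
from collections import defaultdict

def simulate_spread(graph, seeds, p=0.1, runs=200):
    """Monte Carlo estimate of independent-cascade activation probabilities."""
    counts = defaultdict(int)
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        for node in active:
            counts[node] += 1
    return {n: c / runs for n, c in counts.items()}

def objective(probs, groups, n_nodes, alpha=0.7):
    """Blend expected reach (fraction of the whole network) with the expected
    coverage of the worst-off group; alpha=1 recovers classic influence
    maximization, alpha=0 is pure maximin fairness."""
    reach = sum(probs.values()) / n_nodes
    worst = min(
        sum(probs.get(n, 0.0) for n in members) / len(members)
        for members in groups.values()
    )
    return alpha * reach + (1 - alpha) * worst

def greedy_fair_seeds(graph, groups, k, alpha=0.7):
    """Greedily pick k seeds that maximize the blended objective."""
    seeds = []
    for _ in range(k):
        candidates = [n for n in graph if n not in seeds]
        best = max(
            candidates,
            key=lambda n: objective(
                simulate_spread(graph, seeds + [n]), groups, len(graph), alpha
            ),
        )
        seeds.append(best)
    return seeds
```

Sweeping `alpha` from 1 toward 0 traces out the spread-versus-equity frontier that a multiobjective formulation makes explicit.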
Inclusive Algorithm Design
Explore top LinkedIn content from expert professionals.
Summary
Inclusive algorithm design means creating algorithms—like those used in AI and social networks—that intentionally consider and serve the needs of all users, especially people from underrepresented or marginalized groups. This approach aims to reduce bias, promote fairness, and ensure that technology does not leave anyone behind.
- Broaden data sources: Regularly review and expand your datasets to include experiences and information from diverse communities, not just those with the most digital presence.
- Invite diverse voices: Engage people from different backgrounds and regions in every stage of the design process, from brainstorming to testing, to ensure the algorithm meets real-world needs.
- Check for fairness: Continuously audit and evaluate your algorithms for bias and discriminatory outcomes, adjusting them to better reflect equity and inclusion goals (a minimal audit sketch follows this list).
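The last point lends itself to a small illustration. Below is a minimal audit sketch, assuming decision logs in a pandas dataframe; the column names (`group`, `approved`) and the 0.8 flagging threshold are hypothetical placeholders, not a prescribed standard.

```python
import pandas as pd

def audit_outcomes(df, group_col, outcome_col, threshold=0.8):
    """Compare positive-outcome rates across groups and flag any group whose
    rate falls below `threshold` times the best-served group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    report = pd.DataFrame({
        "rate": rates,
        "ratio_to_best": rates / rates.max(),
    })
    report["flagged"] = report["ratio_to_best"] < threshold
    return report

# Hypothetical usage: rows are decisions, `group` is a demographic label,
# `approved` is 1 when the algorithm produced the favorable outcome.
# print(audit_outcomes(decisions, group_col="group", outcome_col="approved"))
```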
-
How can we ensure that #ArtificialIntelligence respects human rights and societal values? This paper delves into the challenges and solutions for integrating human rights into the design and implementation of #AI systems. It introduces a framework called "Design for Values," which draws on methodologies like Value Sensitive Design and Participatory Design. The paper presents a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements. This is accomplished through a structured, inclusive, and transparent process that aims to bridge the socio-technical gaps often present in AI development.

1️⃣ Socio-Technical Gaps: The paper identifies a critical gap between technical solutions and societal context, often resulting in AI systems that may inadvertently violate human rights.
2️⃣ Design for Values Framework: The paper introduces a comprehensive framework that aims to bridge these gaps by translating moral and social values into design requirements for AI systems.
3️⃣ Tripartite Methodology: The framework employs a three-pronged approach of Conceptual, Empirical, and Technical investigations to ensure that the design process is iterative and integrative.
4️⃣ Stakeholder Engagement: The paper emphasizes the importance of involving societal stakeholders in the design process to ensure that AI systems are aligned with human rights and societal norms.
5️⃣ Local Meaning and Context: The paper stresses the need to consider local social practices and language to make AI systems more context-sensitive and ethical.

The paper provides a well-structured roadmap for designing AI systems that are aligned with human rights and societal values. It offers actionable insights and methodologies that can be applied across various domains, making it a must-read for anyone involved in the development or governance of AI technologies.

✍🏻 Evgeni Aizenberg and Jeroen van den Hoven. Designing for human rights in AI. Big Data & Society, 2020, 7(2). DOI: 10.1177/2053951720949566

✅ Sign up for our newsletter to stay updated on the most fascinating studies related to digital health and innovation: https://lnkd.in/eR7qichj
-
New responsible AI paper with Christina N. Harrington, PhD and Shaun Kane! We synthesized categories of disability, health, and accessibility representation and intersectional harms, as evaluated by people with diverse disabilities and health conditions who created AI images with us during an interactive interview study. We'll present this at the upcoming Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) conference!

Takeaways:
* Disability representation is about so much more than disabilities; not everyone who experiences health conditions or ableism identifies as disabled. Our expansion to disability, health, and accessibility (DHA) was intentional and is in deference to Sami Schalk's book, Black Disability Politics, and the greater Disability Justice Movement.
* DHA representation in AI images not only refers to how people look but also concerns access technologies, objects, actions, and motions that may signify symptoms. When combined with generic terms like activities, participants expected that these be shown done in an accessible manner (e.g., an image of disabled people doing yoga should show adaptive yoga techniques).
* We point out intersectional harms, including: environments often depicted as upper class; colorism (in addition to the predominance of white disabled people in default AI images); and body-size homogenization.
* Often, successive prompts did not result in "better" images, but could even be worse than the original images, something our participants identified particularly when they prompted for multiple representation characteristics (e.g., people with different races, ages, and disabilities).

What do we do now?
* Proactively include DHA in evals: Models change, but people with disabilities and health conditions deserve access to realistic representations which dignify and celebrate diverse life experiences and identities. If you can make better AI image representations than this paper, great. We still need to evaluate representation inclusive of DHA; this need will not go away.
* Community members need not just evaluate outputs, but should develop test prompts in their own words.
* Our categories could scaffold term taxonomies and test prompt sets (a minimal sketch follows this post).
* Intersectional harms are *another* reason for those of us focusing on different aspects of representation to work together.
* Iteratively prompting AI models was an interesting method to spur evaluations, warranting future research.
* Accessible research is not just for accessibility research: I and others have said this, but it is always worth mentioning. The substance of evaluations doesn't matter if the tools, environments, and processes for eval completion aren't accessible.

https://lnkd.in/ehsQ_cAd
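To illustrate how categories could scaffold test prompt sets, here is a minimal, hypothetical sketch that crosses DHA terms with intersectional identity and activity terms. All term lists below are invented placeholders; as the post argues, real ones should come from community members in their own words.

```python
from itertools import product

# Hypothetical placeholder term lists, for illustration only.
DHA_TERMS = ["wheelchair user", "person with chronic pain", "Deaf person"]
IDENTITY_TERMS = ["a Black", "an older", "a young"]
ACTIVITIES = ["doing yoga", "cooking dinner", "at work"]

def build_test_prompts():
    """Cross identity, DHA, and activity terms into an intersectional
    test-prompt set for auditing image-generation models."""
    return [
        f"a photo of {identity} {dha} {activity}"
        for identity, dha, activity in product(IDENTITY_TERMS, DHA_TERMS, ACTIVITIES)
    ]

# e.g. "a photo of an older Deaf person doing yoga"; evaluators can then
# check outputs for the harms noted above (class, colorism, body size,
# and whether activities are shown done in an accessible manner).
```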
-
AI is only as inclusive as the voices driving its development. The way we build and implement AI today will determine how it serves tomorrow. The choice is ours. It has the potential to reshape industries, but if left unchecked, it risks deepening societal divides and widening the inclusion gap.

While we have seen progress, Western-centric AI development has perpetuated biases by relying on incomplete data and overlooking underserved regions. To shift this narrative, we need to move beyond the buzzwords and focus on tangible actions. Here's how:

→ Diversify the data: We must actively collect and incorporate data from underrepresented regions, ensuring AI systems reflect diverse needs and experiences.
→ Empower diverse talent: AI development must include voices from all communities. We need initiatives that nurture talent in underserved populations to bring fresh perspectives into tech.
→ Engage globally: Policymakers, tech companies, and healthcare providers must collaborate, ensuring AI solutions are designed for global accessibility.
→ Hold ourselves accountable: Regular audits for bias in AI systems should become the norm.
→ Rethink governance: We need inclusive AI governance that prioritizes representation, particularly when it comes to health and social welfare.
→ Learn from local experts: Before implementing AI in new regions, tech developers must work alongside local experts to understand cultural nuances and real-world needs.

Moreover, by applying the 4D Framework (Develop, De-identify, Decipher, De-bias), we can create AI systems that are not just smarter, but also fairer, more inclusive, and global.

It's time to change the conversation. But this isn't just about building better tech. It's about expanding access, education, and funding to communities that have been left behind. It's about ensuring that every person, no matter where they live, has a seat at the table.

AI's future doesn't belong to one group. It belongs to all of us. The real question is: Will we design it for everyone?
-
Feel like you have the perfect dataset to train an inclusive AI model? Think again. Have you truly considered someone's full, embodied experience beyond the digital footprint that exists about them?

❓ "𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗱𝗮𝘁𝗮 𝘁𝗿𝗮𝗶𝗹𝘀 𝘁𝗲𝗹𝗹 𝗮 𝘀𝘁𝗼𝗿𝘆, 𝗯𝘂𝘁 𝗶𝘀 𝗶𝘁 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝘀𝘁𝗼𝗿𝘆?" ❓

Often, we build training datasets based on available data. Unfortunately, this reifies existing digital inequalities, as some people have more digital traces than others.

𝗧𝗼 𝗯𝘂𝗶𝗹𝗱 𝗺𝗼𝗿𝗲 𝗶𝗻𝗰𝗹𝘂𝘀𝗶𝘃𝗲 𝗔𝗜 𝗺𝘆 𝗿𝗲𝗰𝗼𝗺𝗺𝗲𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀 𝗮𝗿𝗲:
1. Start by assessing your dataset for who is, and who is not, included (a minimal sketch of this step follows the post).
2. Think about the lives of the people included; build five "day in the life" personas.
3. Assess each "day in the life" for which data points are digitally captured. 𝘼𝙣𝙙 𝙬𝙝𝙖𝙩 𝙩𝙝𝙚𝙮 𝙢𝙞𝙨𝙨.
4. Ground-truth your findings with your end users, asking them to fill in any blanks and validate your assessment.
5. Refine your data sources to more holistically capture the realities of the people you aim to serve. Or, gulp, walk away from AI for now (yes, that's an option).

If you're interested in inclusive AI, I recommend reading the amazing work of Alexandra R., Alex Kessler, and Jacobo Menajovsky through the Center for Financial Inclusion (CFI). Read the full report: https://lnkd.in/eNwFW4Ha
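A minimal sketch of step 1, assuming a pandas dataframe with a hypothetical group column and externally sourced reference shares (for example, census figures). Note that it can only surface gaps on attributes the dataset already records, which is part of the post's point.

```python
import pandas as pd

def coverage_gap(df, group_col, reference_shares):
    """Compare each group's share of the dataset with its share of the
    population you intend to serve; negative gaps mark under-represented
    groups, and groups absent from the data appear with a share of zero."""
    dataset_shares = df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({
        "dataset_share": dataset_shares,
        "population_share": pd.Series(reference_shares),
    }).fillna(0.0)
    report["gap"] = report["dataset_share"] - report["population_share"]
    return report.sort_values("gap")  # most under-represented first

# Hypothetical usage with census-style reference shares:
# print(coverage_gap(training_df, "region", {"urban": 0.55, "rural": 0.45}))
```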
-
Using AI to Recognize Exclusion: The Microsoft Inclusive Tech Lab just published a great resource showing how to use generative AI as a thinking partner, not a checklist, to uncover where our designs might unintentionally exclude people. It walks you through prompts for websites, apps, and games, all built around the inclusive-design principle "Recognize Exclusion." The AI generates examples of where people could get left out (perceivability, operability, understandability), which you can then validate with real people with disabilities. This is such a great example of how AI can expand empathy and awareness, instead of just automating compliance. Definitely worth a read: https://lnkd.in/gzshGu7e #a11y
-
As AI systems become more prevalent, teams must use inclusive design to build with marginalized communities and reduce the possibility of harm those communities face from AI. Ioana Tanase and I partnered with Hanna Wallach's team to leverage the inclusive design methodology to build generative AI systems that consider the disability community. Here is an overview of the process (a minimal sketch of the measurement and monitoring steps follows below):

1. Identify the risks by partnering with the disability community to understand fairness-related risks affecting people with disabilities.
2. Turn those findings into a systematized concept to help develop methods to measure the risks.
3. Revise systems as needed.
4. Monitor the technology to ensure a better experience for people with disabilities.

Check out this article to learn more about the process and the importance of measurement to reduce harm. https://lnkd.in/ezhEdPbc

Big thanks to our partners at Microsoft Research: Chad Atalla, Dan Vann, Emily Corvi, Hannah Washington, Tricia McDonough, and Stefanie Reed.
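Steps 2 and 4 lend themselves to a small illustration. The sketch below is a hypothetical measurement-and-monitoring loop, not the method from the article: a disparity metric over per-group quality scores, plus a release check that flags regressions. The metric, group labels, and tolerance are all assumptions.

```python
from statistics import mean

def disparity(scores_by_group):
    """Measurement (step 2): gap between the best- and worst-served groups
    on a task-quality metric, e.g. task completion rate with a screen
    reader versus without assistive technology."""
    group_means = {g: mean(s) for g, s in scores_by_group.items()}
    return max(group_means.values()) - min(group_means.values())

def monitor_release(disparity_history, new_disparity, tolerance=0.02):
    """Monitoring (step 4): flag a release whose disparity exceeds the best
    previous release by more than `tolerance`."""
    if disparity_history and new_disparity > min(disparity_history) + tolerance:
        return "investigate: fairness regression"
    return "ok"

# Hypothetical usage across model versions:
# d = disparity({"screen_reader": [0.7, 0.8], "no_at": [0.9, 0.95]})
# print(monitor_release([0.10, 0.12], d))  # -> "investigate: fairness regression"
```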
-
Designing AI for Outliers: Why Inclusion is Key to Equitable AI

In a recent podcast discussion I had with Debra Ruh, Neil Milliken, and David Banes (Chairperson of the Equitable AI Alliance), we explored an important question: How can we ensure that AI includes everyone—especially the outliers?

David shared an insightful perspective: "If you build to include the outliers, you include everybody in your planning." This concept challenges developers to focus on those who don’t fit the “average” or “mainstream” patterns, ensuring that AI systems are inclusive, equitable, and free from bias.

The Equitable AI Alliance, an initiative by Zero Project and Seneca Trust, is working to:
- Amplify opportunities that AI offers to people with disabilities.
- Address risks and biases that may exclude or disadvantage certain groups.
- Promote co-design by involving people with lived experience of disabilities from the start—not just as testers, but as collaborators in AI development.

David also highlighted the importance of breaking out of the echo chamber and placing disability and inclusion on the agendas of mainstream conferences in technology, education, employment, and healthcare. While progress has been made at events like MWC Barcelona and SuperAI & Robotic Tech Conference, inclusion often remains a lower priority, even at diversity and inclusion conferences.

Key challenges discussed:
1. Transparency: Understanding how AI processes data and makes decisions is essential to identifying and addressing bias.
2. Data and Privacy: Balancing the privacy of individuals with disabilities while ensuring datasets fairly represent them is a complex but vital task.
3. Global Perspectives: Definitions of “inclusion” vary by culture, and AI must account for these differences to create solutions that work for everyone.

The Equitable AI Alliance has created a freely available Resource Hub to help organisations build capacity and advocate for accessible AI. Their webinars and LinkedIn Disability Inclusive AI group are additional ways to connect and collaborate.

As David said, "AI can amplify existing problems if we’re not intentional about inclusion." Let’s work to ensure AI benefits everyone—especially those historically left out.

#AI #Inclusion #Accessibility #EquitableAI #AIEthics
-
𝗪𝗵𝗲𝗻 𝗔𝗜 𝗦𝘁𝗮𝗿𝘁𝘀 𝗧𝗲𝗮𝗰𝗵𝗶𝗻𝗴, 𝗪𝗵𝗼’𝘀 𝗦𝘁𝗶𝗹𝗹 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴?

AI is entering classrooms faster than ethics can follow. For some learners, it’s a breakthrough. For others, a blind spot. And for educators, it’s a wake-up call: personalization is no longer optional. If we don’t design AI to understand difference, it will quietly standardize it and kill inclusion in plain sight.

At its heart, this whitepaper by the Stanford Accelerator for Learning asks a powerful question: Can we build AI that not only teaches better but understands what it means to learn differently?

Signals every educator and policymaker should pay attention to:

⏭️ 𝗧𝗵𝗲 𝗲𝗺𝗽𝗮𝘁𝗵𝘆 𝗴𝗮𝗽: Most AI systems learn from data, not from difference. When algorithms can’t detect learning diversity, they amplify bias instead of breaking it.
⏭️ 𝗜𝗻𝗰𝗹𝘂𝘀𝗶𝗼𝗻 𝗯𝘆 𝗱𝗲𝘀𝗶𝗴𝗻, 𝗻𝗼𝘁 𝗯𝘆 𝗮𝗰𝗰𝗶𝗱𝗲𝗻𝘁: Accessibility is the foundation of education. Designing AI for all learners starts with understanding how minds differ, not just how they perform.
⏭️ 𝗣𝗿𝗶𝘃𝗮𝗰𝘆 𝗮𝘀 𝗽𝗿𝗼𝘁𝗲𝗰𝘁𝗶𝗼𝗻: AI in education must balance personalization with dignity. Data about learning differences is deeply human and must be guarded like it.
⏭️ 𝗧𝗲𝗮𝗰𝗵𝗲𝗿𝘀 𝗮𝘀 𝗮𝗺𝗽𝗹𝗶𝗳𝗶𝗲𝗿𝘀, 𝗻𝗼𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗲𝗺𝗲𝗻𝘁𝘀: AI can analyze progress, but only humans can nurture potential. Educators remain the emotional core of every digital classroom.
⏭️ 𝗧𝗵𝗲 𝗻𝗲𝘄 𝗳𝗿𝗼𝗻𝘁𝗶𝗲𝗿 𝗼𝗳 𝘄𝗲𝗹𝗹𝗯𝗲𝗶𝗻𝗴: AI should lighten the cognitive load, not reshape how children see themselves. Technology that forgets the human heart will fail the human mind.
⏭️ 𝗣𝗼𝗹𝗶𝗰𝘆 𝘄𝗶𝘁𝗵 𝗽𝘂𝗿𝗽𝗼𝘀𝗲: Governments and institutions must lead with accessibility standards that make inclusive AI the baseline, not the exception.

𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: We’re entering an era where learning isn’t defined by systems but by students. A future without learning boundaries begins with this truth: AI must learn from human diversity before it can teach to it.

𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝘆𝗼𝘂: What would education look like if every learner, regardless of ability, had an AI ally designed just for them?

#AI #Education #DigitalInclusion #LearningDifferences #AIEthics #ResponsibleAI #TechforGood

🔻 Repost if you believe inclusivity should be the first principle of AI.
🔺 Follow for more insights on ethical AI, digital inclusion, and the future of learning.
-
AI Is Becoming the New Gatekeeper of Disability Exclusion

Everyone is celebrating AI’s productivity: automation, acceleration, optimization. But something critical is being missed. For millions of people with disabilities, AI is no longer just a tool. It is rapidly becoming a decision-maker. And in 2026 and beyond, that shift changes everything.

AI is now embedded across hiring systems, education platforms, transport networks, healthcare triage, and public services. It is no longer optional. It is infrastructure. Which means: if it is not accessible, exclusion is no longer occasional; it becomes systemic.

We are already seeing the signals:
• Interfaces not navigable with assistive technologies
• Speech systems failing non-standard speech patterns
• Hiring algorithms filtering out qualified disabled candidates
• Automated captions distorting meaning or erasing context
• Image recognition misinterpreting disability-related cues
• Autonomous mobility systems ignoring diverse users
• Behavior-scoring tools penalizing neurodivergent individuals

These are not isolated issues. They are early indicators of a structural shift. For decades, accessibility has focused on the built environment: ramps, signage, and physical access. But the next disability divide will not be visible. It will be algorithmic. Invisible barriers embedded in data, models, and decision logic, scaling instantly, globally, and silently.

This moment is urgent. Once AI systems deploy at scale, retrofitting inclusion becomes exponentially harder, and the harm multiplies. Accessibility can no longer be an afterthought. It must be a foundational requirement in AI design, procurement, governance, and regulation.

This is not just a technical issue. It is a human rights issue. An economic issue. A societal stability issue. And above all, it is a leadership issue.

We do not need more AI. We need inclusive AI. Designed with people with disabilities. Tested by people with disabilities. Governed with accountability to people with disabilities and their DPOs. Because the future is already being coded. The question is simple: Will AI expand inclusion, or automate exclusion at scale? If disabled people are not designing AI, AI will automate discrimination at scale. Inclusive AI is not just ethical; it is a driver of innovation, unlocking talent, insight, and better solutions for everyone. One concrete starting point is to evaluate systems per group rather than on averages (a minimal sketch follows this post).

To leaders, innovators, #DPOs, #NGOs, and allies across LinkedIn: Make “Nothing About Us Without Us” the standard for AI. How is your organization building inclusive AI today?

#WeAreBillionStrong #DisabilityInclusion #AIForGood #InclusiveDesign #DigitalInclusion #FutureOfWork #EthicalAI #NothingWithoutUs
Debra Ruh Steve Tyler
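Several of the signals above (captions, speech systems, image recognition) share one root cause: systems tuned and reported on aggregate metrics. Here is a minimal, hypothetical sketch of the disaggregated evaluation mentioned in the closing paragraph; the group labels and numbers are illustrative only.

```python
from statistics import mean

def disaggregated_report(samples):
    """Report an error metric per user group instead of a single average,
    so failures concentrated in one group cannot hide in the aggregate.
    `samples` is a list of (group, error) pairs, e.g. per-utterance word
    error rates labeled by speaker group."""
    by_group = {}
    for group, error in samples:
        by_group.setdefault(group, []).append(error)
    return {
        "overall": mean(e for _, e in samples),
        "per_group": {g: mean(errs) for g, errs in by_group.items()},
    }

# Hypothetical usage: an overall WER near 0.12 can mask a rate three
# times higher for speakers with non-standard speech patterns.
# report = disaggregated_report([("typical", 0.08), ("typical", 0.09),
#                                ("dysarthric", 0.31), ("dysarthric", 0.27)])
```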