Digital Design Ethics

Explore top LinkedIn content from expert professionals.

  • View profile for Divya Srivastava

    Counselling Psychologist, Educator, Clinical EFT Trainer & Clinical Supervisor | Qualified - Independent Director’s Databank (IICA) | On A Mission To Make Mental Healthcare Trauma-Focused & Inclusive

    23,907 followers

    Dear students,

    It’s truly heartening to see so many of you sharing your internship experiences; celebrating your growth, skills, and the joy of working with clients and participants. However, I want to be honest about something that’s been weighing heavily on me.

    Every time I see internship photos on LinkedIn or Instagram - images of clients or participants with visible faces, often shared without careful thought - I feel genuinely heartbroken. It’s not just a lapse in judgment; it’s a breach of professional ethics. In our rush to showcase our work, we forget what truly matters: trust, respect, and confidentiality. Portraying our work in a way that compromises these principles undermines the very ethos we stand for.

    Colleges have a duty to teach students what can and cannot be shared on social media and to instil ethical practice from the outset. Sharing photos of clients or participants - whether they’re holding balloons or making boats - must be done with explicit consent and full awareness of how these images will be used and who will see them.

    Let’s be mindful. Let’s be ethical. Our professionalism extends beyond just the work we do; it includes how we respect and protect those we aim to help.

    Confidentiality is the cornerstone of our profession. When we post photos with visible faces - regardless of permission - we are breaching trust, because by showcasing their faces we are using their images to boost engagement, increase visibility, and elevate our presence on social media. We are prioritising our gains over their dignity and privacy. Do they understand that their faces are being used as tools to promote us? Are they aware their images could be shared widely or exploited? This is a serious breach of respect and ethics. It robs them of their dignity and reduces their participation to a mere commodity.

    Here’s what I suggest: take photos from angles where faces aren’t visible, or blur faces before sharing. These small steps show respect and uphold dignity.

    Remember, social media algorithms may push for more likes, but our profession isn’t about chasing views - it’s about integrity, compassion, and standing by our core values. Let’s be responsible, respectful, and hold ourselves to the highest standards, because their dignity and the integrity of our profession depend on it.

    #psychology #therapy #mentalhealth #internships

  • View profile for Marta Soszynska

    Impact Producer | Storytelling Strategist | Educator - I help purpose-driven orgs shape narratives that move people & policy

    2,188 followers

    One photo can move the world - but it can also harm the very people it’s meant to help.

    Yesterday we had a rich discussion on ethical storytelling: the role of consent, dignity, and the responsibility of an image. We heard from Médecins Sans Frontières (MSF) experts who face these choices daily: Which photo from Gaza should be published? What represents dignity in the midst of suffering? Where is the red line? Who decides?

    Like many mission-driven NGOs, MSF has gone through a difficult process of self-reflection. In 2021 a team of internal volunteers reviewed their entire multimedia archive - thousands of images and videos (!!) - to ensure it reflects MSF's core values. This led to new internal policies and deeper conversations on how MSF wants to portray the people it serves.

    Some key takeaways:
    - Consent is not a signature, it’s a process. For example, MSF doesn’t consider consent valid after five years, and anyone can withdraw it at any time.
    - Communities are digitally savvy. They understand digital landscapes and consume social media just like us. We should engage them as equals, and consider the impact of an image on them as if it were taken in our own backyard.
    - Dignified and human-centered storytelling is not a burden but an opportunity. Slower stories built on trust and dialogue can reframe how we communicate impact in a click-bait-driven world.

    Judging which images are ethical and dignified is never easy. But that’s precisely why this work matters so much today. Thank you so much Juliette Garms, Bruno DeCock and Julie David de Lossy for leading this important work. Always in awe of MSF's visual integrity and the incredible archives you have built over the last decades.
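The consent rules described above - consent expires after five years and can be withdrawn at any time - can be sketched as a small data model. This is a minimal illustration only; the class and field names are invented for this example and do not reflect MSF's actual systems.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Assumption for illustration: "five years" modeled as 5 * 365 days.
CONSENT_VALIDITY = timedelta(days=5 * 365)

@dataclass
class ConsentRecord:
    """Hypothetical record of a subject's consent to use their image."""
    subject_id: str
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self, when: Optional[datetime] = None) -> None:
        # Consent can be withdrawn at any time and takes effect immediately.
        self.withdrawn_at = when or datetime.utcnow()

    def is_valid(self, now: Optional[datetime] = None) -> bool:
        # Invalid if withdrawn, or if the five-year window has lapsed.
        now = now or datetime.utcnow()
        if self.withdrawn_at is not None and self.withdrawn_at <= now:
            return False
        return now - self.granted_at <= CONSENT_VALIDITY
```

The point of the sketch is that consent becomes a queryable state with an expiry and a revocation path, rather than a one-time signature filed away.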

  • View profile for Arockia Liborious
    39,294 followers

    Humanizing AI Through the Kano Model

    In an era where generative AI has become a ubiquitous offering, true differentiation lies not in merely adopting the technology but in integrating human values into its core. Building on my earlier discussion about applying the Kano Model to Gen AI strategy, let’s explore how this framework can refocus development metrics to prioritize ethics and human-centricity. By aligning AI systems with human needs, organizations can shift from functional tools to trusted partners that inspire lasting loyalty.

    Traditional metrics such as speed, scalability, and model accuracy have evolved into basic expectations - the “must-haves” of AI. What truly elevates a product today is its ability to embody values like safety, helpfulness, dignity, and harmlessness. These qualities, categorized as “delighters” in the Kano Model, transform AI from a transactional tool into a meaningful collaborator.

    Key Human-Centric Differentiators:
    - Safety: Proactive safeguards must ensure AI systems protect users from risks, whether physical, emotional, or societal. Safety is non-negotiable in building trust.
    - Helpfulness: Personalized, context-aware interactions demonstrate empathy. AI should anticipate needs and adapt to individual preferences, turning routine tasks into meaningful experiences.
    - Dignity: Ethical design principles - fairness, transparency, and privacy - must underpin AI development. Respecting user autonomy fosters long-term trust and engagement.
    - Harmlessness: AI outputs and recommendations should prioritize user well-being, avoiding unintended consequences like bias, misinformation, or psychological harm.

    This human-centered approach represents a paradigm shift in technology development. While traditional KPIs remain important, they are no longer sufficient to stand out in a crowded market. Organizations that embed human values into their AI systems will not only meet user expectations but exceed them, creating emotional connections that drive loyalty.

    By applying the Kano Model, businesses can systematically align innovation with ethics, ensuring technology serves humanity rather than the other way around. The future of AI isn’t just about efficiency; it’s about elevating human potential through thoughtful, responsible design. How is your organization balancing technical excellence with human values?
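The Kano categorization the post describes can be made concrete as a simple mapping. This is an illustrative sketch only - the attribute list follows the post's framing, and all names are invented for the example, not part of any formal Kano tooling.

```python
from enum import Enum

class KanoCategory(Enum):
    MUST_HAVE = "basic expectation"    # absence causes dissatisfaction
    PERFORMANCE = "linear satisfier"   # satisfaction scales with delivery
    DELIGHTER = "unexpected value"     # differentiates the product

# Hypothetical classification of AI product attributes, per the post's argument:
# traditional metrics are now table stakes; human-centric values differentiate.
AI_ATTRIBUTES = {
    "speed": KanoCategory.MUST_HAVE,
    "scalability": KanoCategory.MUST_HAVE,
    "model accuracy": KanoCategory.MUST_HAVE,
    "safety": KanoCategory.DELIGHTER,
    "helpfulness": KanoCategory.DELIGHTER,
    "dignity": KanoCategory.DELIGHTER,
    "harmlessness": KanoCategory.DELIGHTER,
}

def differentiators(attributes: dict) -> list:
    """Return the attributes that set a product apart under the Kano Model."""
    return [name for name, cat in attributes.items() if cat is KanoCategory.DELIGHTER]
```

A mapping like this makes the post's claim auditable: a roadmap review can ask how much investment goes to the must-have column versus the delighter column.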

  • View profile for Nouman Aziz, GPHR®

    Global Human Resources Leader | Doctoral Candidate

    33,008 followers

    Imagine this: you're applying for a job, and an AI sifts through every social media post, every digital breadcrumb you've left online, extracting a psychological profile that can make or break your application. It's not science fiction - it's happening now.

    Some AI technologies claim to assess talent by analysing candidates' online behaviour, inferring traits like personality, emotional stability, and "cultural fit." But this trend raises profound ethical questions:

    - Privacy Invasion: Should your tweets or Facebook posts be fair game for hiring decisions? Do you have the right to digital anonymity?
    - Bias and Discrimination: Algorithms can encode and amplify societal prejudices. Will certain demographics be unfairly filtered out?
    - Accuracy and Fairness: How reliably can AI interpret context, satire, or evolving identities across digital platforms?
    - Transparency and Consent: Are candidates informed about the AI assessments being conducted, and can they challenge or review the results?

    While AI has the potential to revolutionise talent matching, we must establish robust safeguards, regulations, and ethical standards. Human lives and careers deserve more than a silent, unseen algorithm making pivotal decisions. As we move towards an AI-driven hiring era, we must ask ourselves: do we want efficiency at the cost of ethics?

    #EthicsInAI #Hiring #Privacy #ArtificialIntelligence #FutureOfWork

  • View profile for Rahul Bhattacharya

    Designer | Educator | Curator | AI for Impact Fellow | Co-Founder dotai

    6,130 followers

    In April 2024, Alex Taylor - a grieving, mentally unwell man - died in a police encounter he appeared to orchestrate. At the heart of it was not just a personal tragedy, but a systemic one. His emotional collapse was co-scripted by a chatbot persona named “Juliet,” created through ChatGPT and deleted without warning.

    This isn’t a story about AI gone rogue. It’s a story about UX design that performs care without offering any. About systems that simulate intimacy, optimise for retention, and refuse to take responsibility when that simulation breaks a human being.

    We need a different kind of UX: one that knows when to interrupt. When to refuse. One that understands the cost of designing machines that mimic empathy without ethical guardrails.

    This isn’t a warning about the future. It’s something that already happened. And it will keep happening - unless we treat it as our problem to solve.

    #AIUX #EthicalDesign #CareNotCode #DesignJustice #HumanCenteredAI

  • View profile for Christine Jacob 👩🏻‍💻

    Digital Strategist | Health Tech Researcher | Lecturer | Speaker

    14,772 followers

    Digital health promises transformation, but it also raises deep ethical questions. A new perspective article argues that the principle of justice must guide how we design and deploy digital health. The authors remind us that equality, equity and justice are not the same: equality gives everyone the same resources, equity adapts resources to individual needs, and justice goes further by addressing the structural barriers that exclude people in the first place.

    Key insights from the paper:
    1. Digital determinants of health matter: Access to connectivity, digital literacy, algorithmic bias, and trust are as important as traditional social determinants of health.
    2. Justice requires more than access: Providing devices or portals is not enough. Structural issues like inaccessible design, digital deserts, and biased algorithms can perpetuate exclusion unless actively corrected.
    3. Vulnerable groups must be included: Older adults, people with disabilities, language minorities and those with low digital literacy are among the heaviest users of health systems, yet the most at risk of exclusion. Co-creation and participatory design are essential.
    4. Policy and practice must integrate ethics: Justice in digital health requires equity assessments, digital facilitators to support patients, literacy programs, and collaboration across sectors such as health, education and technology.

    Digital health is not just a technical or clinical transformation; it is an ethical one. Justice must be the guiding value to ensure that digital innovation closes gaps rather than widening them.

    #DigitalHealth #HealthEquity #Bioethics #PatientEngagement #HealthInnovation #JusticeInHealth #HealthIT #DigitalInclusion #Techquity #HealthcareTransformation https://lnkd.in/d6TxRU2F

  • View profile for Tina D Purnat

    Health Expert in Data, Policy, Tech & Social Determinants

    9,917 followers

    We’ve spent so much time designing for frictionless experiences that we rarely stop to ask what we’re losing when everything feels effortless.

    * In tech, frictionless means one-click shopping, auto-play videos, and infinite scroll.
    * In health, it means instant telehealth visits or meal replacements that remove the need to cook.
    * In our digital lives, it means a constant stream of short posts, memes, and videos designed to keep us moving, not pausing.

    But convenience always has a cost:

    * When everything is easy, we stop noticing what’s worth effort.
    * When everything is fast, we stop asking if we’re heading in the right direction.

    Social media has become the space where we try to have deep conversations about policy, news, and collective wellbeing. Yet these platforms reward immediacy, not reflection. Complex issues get compressed into sound bites. Disagreement turns into conflict instead of dialogue.

    Human-centered design has brought enormous value to public health by making systems more responsive and accessible. But in our focus on removing barriers and reducing friction, we may have lost sight of the fact that some friction is necessary. The act of questioning, debating, and making sense together is rarely seamless. Perhaps we should be as intentional about where friction needs to be reintroduced as we are about removing it.

    Adding friction could mean creating moments that help us slow down and think before reacting.

    * In digital spaces, it might be prompts that ask for context before sharing, time delays that encourage reflection, or platforms that make it easier to read, listen, and respond thoughtfully instead of instantly.
    * In everyday life, it could mean cooking instead of ordering in, walking instead of driving, or sitting face-to-face for difficult conversations instead of texting.

    These small forms of friction remind us to be present, deliberate, and engaged. Friction is not the enemy. It gives us room to process, to question, and to find meaning with others. Maybe the goal is not to eliminate friction, but to design for the right kind - the kind that helps us reflect, connect, and build understanding together. Because when everything feels smooth and easy, meaning often slips away unnoticed.

    (A moment of reflection from a conversation at the Salzburg Global seminar on improving health information pathways.)
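The "designed friction" mechanisms mentioned above - a context prompt before sharing and a short reflection delay - could be sketched as follows. This is a minimal illustration; the function, parameter names, and delay value are all assumptions made for the example, not any platform's actual API.

```python
import time

# Illustrative default: a deliberate pause before a post goes live.
REFLECTION_DELAY_SECONDS = 10

def share_with_friction(post_text: str, context_note: str,
                        delay: float = REFLECTION_DELAY_SECONDS) -> dict:
    """Share a post only after two intentional friction points."""
    if not context_note.strip():
        # Friction point 1: require the sharer to say why this matters.
        raise ValueError("Add a sentence of context before sharing.")
    # Friction point 2: a built-in pause that invites a second look.
    time.sleep(delay)
    return {"text": post_text, "context": context_note, "shared": True}
```

The design choice here is that friction is enforced by the flow itself rather than left to user willpower: the post cannot be published without a stated reason, and the delay runs regardless of how fast the user clicks.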

  • View profile for Siddharth Rao

    Global CIO & CAIO | Board Member | Business Transformation & AI Strategist | Scaling $1B+ Enterprise & Healthcare Tech | C-Suite Award Winner & Speaker

    11,708 followers

    The Ethical Implications of Enterprise AI: What Every Board Should Consider

    “We need to pause this deployment immediately.” Our ethics review identified a potentially disastrous blind spot 48 hours before a major AI launch. The system had been developed with technical excellence but without addressing critical ethical dimensions that created material business risk.

    After a decade guiding AI implementations and serving on technology oversight committees, I've observed that ethical considerations remain the most systematically underestimated dimension of enterprise AI strategy - and increasingly, the most consequential from a governance perspective.

    The Governance Imperative

    Boards traditionally approach technology oversight through risk and compliance frameworks. But AI ethics transcends these models, creating unprecedented governance challenges at the intersection of business strategy, societal impact, and competitive advantage.

    Algorithmic Accountability: Beyond explainability, boards must ensure mechanisms exist to identify and address bias, establish appropriate human oversight, and maintain meaningful control over algorithmic decision systems. One healthcare organization established a quarterly "algorithmic audit" reviewed by the board's technology committee, revealing critical intervention points and preventing regulatory exposure.

    Data Sovereignty: As AI systems become more complex, data governance becomes inseparable from ethical governance. Leading boards establish clear principles around data provenance, consent frameworks, and value distribution that go beyond compliance to create a sustainable competitive advantage.

    Stakeholder Impact Modeling: Sophisticated boards require systematic analysis of how AI systems affect all stakeholders - employees, customers, communities, and shareholders. This holistic view prevents costly blind spots and creates opportunities for market differentiation.

    The Strategy-Ethics Convergence

    Organizations that treat ethics as separate from strategy inevitably underperform. When one financial services firm integrated ethical considerations directly into its AI development process, it not only mitigated risks but discovered entirely new market opportunities its competitors missed.

    Disclaimer: The views expressed are my personal insights and don't represent those of my current or past employers or related entities. Examples drawn from my experience have been anonymized and generalized to protect confidential information.

  • View profile for Masum Parvej

    Helping founders ship better products | halallab.co 💻 | I built Hugeicons (0.5M+ users)

    15,805 followers

    Here's the dark side of UX that no designer talks about:

    You open an app for a quick check. Suddenly, an hour's gone. Sound familiar? That's the power of UX - subtle, yet profound.

    Here's what often goes unnoticed:
    → Companies are spending millions on UX design
    → They want every last second of your time
    → Their only goal is to keep you hooked
    → It is a silent form of digital manipulation

    This isn't just about user-friendliness. It's deeper, and here's why: UX taps into our psychology. It uses elements like endless feeds and constant notifications. These are not mere features; they are carefully engineered traps.

    Consider your own experience:
    ⏤ Ever found yourself lost in an app?
    ⏤ Hours spent on what should've been minutes?
    ⏤ Feeling focus-drained after a social media session?

    This is UX doing its silent work. But you're not powerless. Here's how to boost productivity and resist UX traps:
    ⇢ Use time-limiting apps to control your usage.
    ⇢ Turn off non-essential notifications to reduce distractions.
    ⇢ Schedule tech-free time in your calendar for deep-focused work.
    ⇢ Employ browser extensions that block addictive sites during work hours.

    UX designers, I have some ethical design requests for you:
    ■ Please include features like "take a break" reminders or "usage insights" when designing an app built to hold attention.
    ■ Please consider end-of-feed signals or periodic content caps to discourage infinite scrolling and promote conscious content consumption.
    ■ Please consider implementing a 'Zen Mode' that users can switch on to enjoy a minimalistic, distraction-free version of the app.

    Let's open up the conversation ⤵️
    - Have you ever felt overwhelmed by these designs?
    - What are your thoughts on ethical UX practices?
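The "take a break" reminder the post asks designers to build could be as simple as a threshold check on continuous session time. A minimal sketch, assuming an invented function name and a 30-minute threshold chosen purely for illustration:

```python
from typing import Optional

# Assumed threshold for this example; a real app would make it user-configurable.
BREAK_THRESHOLD_MINUTES = 30

def break_reminder(session_minutes: float,
                   threshold: float = BREAK_THRESHOLD_MINUTES) -> Optional[str]:
    """Return a nudge once a continuous session exceeds the threshold, else None."""
    if session_minutes >= threshold:
        return f"You've been scrolling for {session_minutes:.0f} minutes. Time for a break?"
    return None
```

The interesting design question isn't the check itself but where it sits: a reminder surfaced by the app works with the user's attention, while the engagement metrics the post criticizes work against it.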

  • View profile for Tatiana Preobrazhenskaia

    Entrepreneur | SexTech | Sexual wellness | Ecommerce | Advisor

    31,444 followers

    What SexTech Can Teach the Rest of Tech About Consent

    Tech has a consent problem. Every day, users “accept” cookies, grant apps access to sensitive data, or interact with AI systems that make assumptions about their behavior - often without meaningful choice or understanding. In most sectors, consent is reduced to a checkbox. In SexTech, that's not good enough.

    Consent in the context of intimacy is dynamic, embodied, and deeply personal. It's not just about permission - it's about control, comfort, and ongoing feedback. This is why SexTech - when done responsibly - can offer powerful lessons to the broader tech industry.

    At V For Vibes, we design products where the user is always in control, and consent isn't assumed - it's continuously respected. Our approach includes:
    • Progressive intensity interfaces that respond to real-time feedback
    • Quiet, intuitive UX that prioritizes ease and autonomy
    • Design that encourages exploration without pressure or obligation
    • Materials and shapes informed by trauma-aware, inclusive ergonomics

    Consent in SexTech is about more than safety - it's about agency, trust, and empowerment. And these principles scale far beyond the bedroom. As AI, automation, and personalization tools evolve, it's time to rethink how digital systems ask, listen, and respond. The future of tech will be more ethical, more human - and SexTech is already designing for that reality.

    #ConsentTech #SexTech #EthicalDesign #UXDesign #HumanCenteredDesign #AIandEthics #VForVibes #InclusiveInnovation #DigitalWellbeing #Neurodesign #FemTech #TechForGood #FutureOfTech #TrustByDesign
