Too many examples are popping up of healthcare organizations ignoring ethics in the pursuit of innovation, risking harm to patients, the very people healthcare exists to support.

Numbers from a recent WHO report show that many countries lack ethical guidelines and risk assessments for AI in healthcare (https://lnkd.in/e7-fKYEr).

Studies have shown that hospitals are not validating models locally before deployment (https://lnkd.in/eD4dJccf), which means:
Risking bias
Reducing health equity
Risking patient safety

Digital health technologies also often fail to meet minimum clinical safety and legal requirements (https://lnkd.in/eHcQhkMe). That means healthcare organizations are implementing tools without confirming whether they are safe to use, again putting patients at risk.

These are not isolated cases. They are a trend, one in which ethics takes the backseat.

In the race for innovative solutions, it's essential to be aware of the ethical dilemmas that could undermine our progress. So, how do we ensure ethical deployment of AI? Here are 7 key aspects to get you going.

1️⃣ Start Ethical: Integrate ethical considerations from day one, prioritizing data security, patient well-being, and ethical standards.

2️⃣ Bias Awareness: Understand and address data and algorithmic biases to prevent skewed outcomes and safeguard patient care.

3️⃣ Guidelines for Ethical Data: Establish clear guidelines for ethical data collection, conducting regular audits to maintain integrity.

4️⃣ Transparency Matters: Ensure transparency and explainability of tools to build trust among stakeholders and encourage accountability.

5️⃣ Diverse Teams: Build diverse and ethically aware AI development teams to mitigate oversights in ethical decision-making. Include stakeholders such as:
Patients
Clinical staff
Administrative staff
Technology providers
Organizational leadership
AI solutions developers and data leads

6️⃣ Identify and Mitigate Risk: Identify and evaluate risks, such as potential adverse events. Are the risks proportionate to the benefits? Include strategies to mitigate the potential risks.

7️⃣ Continuous Monitoring: Regularly monitor for stability, output consistency, and ongoing performance, making sure that no patient groups are negatively impacted.

I don't want to live in a world where ignoring risk detection for patients is the norm.

Yes, sometimes the positive impact outshines the risk. But that does not make it okay to ignore the potential risks.

What are you doing to ensure ethical deployment of AI in your organization?
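The continuous-monitoring point above can be made concrete in code. A minimal sketch, not from the post, in which the group labels, the accuracy metric, and the 10-point disparity threshold are all illustrative assumptions:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per patient group from (group, correct) records."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(acc_by_group, max_gap=0.10):
    """Flag groups whose accuracy trails the best group by more than max_gap."""
    best = max(acc_by_group.values())
    return [g for g, a in acc_by_group.items() if best - a > max_gap]

# Synthetic audit data: model predictions labeled by patient group.
records = [("group_a", True)] * 90 + [("group_a", False)] * 10 \
        + [("group_b", True)] * 70 + [("group_b", False)] * 30

acc = subgroup_accuracy(records)   # group_a: 0.9, group_b: 0.7
print(flag_disparities(acc))       # group_b falls more than 0.10 behind
```

Running a check like this on every model update is one simple way to operationalize "no patient group is negatively impacted," rather than relying on a single aggregate metric.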
Ethical Data Reporting
-
Our paper on transparency reports for large language models has been accepted to AI Ethics and Society! We’ve also released transparency reports for 14 models. If you’ll be in San Jose on October 21, come see our talk on this work.

These transparency reports can help with:
🗂️ data provenance
⚖️ auditing & accountability
🌱 measuring environmental impact
🛑 evaluations of risk and harm
🌍 understanding how models are used

Mandatory transparency reporting is among the most common AI policy proposals, but there are few guidelines available describing how companies should actually do it. In February, we released our paper, “Foundation Model Transparency Reports,” where we proposed a framework for transparency reporting based on existing transparency reporting practices in pharmaceuticals, finance, and social media. We drew on the 100 transparency indicators from the Foundation Model Transparency Index to make each line item in the report concrete. At the time, no company had released a transparency report for its top AI model, so in providing an example we had to build a chimera transparency report with best practices drawn from 10 different companies.

In May, we published v1.1 of the Foundation Model Transparency Index, which includes transparency reports for 14 models, including OpenAI’s GPT-4, Anthropic’s Claude 3, Google’s Gemini 1.0 Ultra, and Meta’s Llama 2. The transparency reports are available as spreadsheets on our GitHub and in an interactive format on our website. We worked with companies to encourage them to disclose additional information about their most powerful AI models and were fairly successful – companies shared more than 200 new pieces of information, including potentially sensitive information about data, compute, and deployments.

🔗 Links to these resources in the comments below!
Thanks to my coauthors Rishi Bommasani, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang at Stanford Institute for Human-Centered Artificial Intelligence (HAI), MIT Media Lab, and Princeton Center for Information Technology Policy
-
What do we mean by data ethics, and why does it matter for responsible AI?

When Anna-Maria Martini and I conceptualized this series, we deliberately chose the term ethics, not law. When we talk about data ethics, we don’t mean abstract theory. We mean the practical principles, shaped by cultural norms and reflected in social and legal traditions, that guide how organizations act when rules alone do not provide a clear answer. This is often captured through the concept of reasonableness: How would a reasonable person expect an organization to act, given the competing values at stake?

Because ethics is never binary. It is the discipline of navigating trade-offs: privacy vs. personalization, speed vs. accuracy, efficiency vs. accountability, especially when leaders must decide under uncertainty.

AI amplifies the strengths and weaknesses of the data it relies on. That makes ethical deliberation foundational to:
➡️ Data quality, context, and representativeness
➡️ Fair and explainable outcomes
➡️ Reliable monitoring and auditability
➡️ Trust with customers, employees, and regulators

How can leaders make ethical deliberation actionable?
✅ Define a shared ethical frame: What does “reasonable” data use look like in your context? Which values matter most when trade-offs arise?
✅ Identify legal constraints: In some cases regulation does provide clear boundaries within which organizations are asked to operate. Where it does not, document the reasoning behind your choices.
✅ Define roles and responsibilities: Who decides what data to collect, which use cases are appropriate, and how boundaries are set?
✅ Integrate ethics into design: Bring privacy and governance into early discussions, including business strategy, technical evaluations, and vendor selection.
✅ Align functions around a shared framework: Ethics becomes operational when business, legal, and technical leaders make decisions based on the same set of assumptions.
Note that ethical deliberation needs to extend into the software development process in order to be effective. Business leaders might decide that they want to implement a solution; how a solution is implemented is, however, often left to engineers. Software engineers must thus also be trained in ethical deliberation, particularly when it comes to the development of AI systems, where the risks of implicit assumptions and hidden values loom large. An excellent academic discussion of the importance of ethical deliberation in software engineering by Dr. Jan Gogoll, Dr. Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner, and Julian Nida-Rümelin can be found here: https://bit.ly/452SkUZ. As the paper points out, "Since ethical deliberation requires a willingness to invest time and resources, a company has to encourage and support its engineers to consider ethical issues and discuss different ways to develop a product." This, too, is ultimately a leadership decision. #ResponsibleAI #DataEthics #AIGovernance #Leadership
-
📢 NEW REPORT: 👉 Trust but Verify: A Guide to Conducting Due Diligence When Leveraging Non-Traditional Data in the Public Interest

🤔 Non-traditional data—from social media data to #mobile usage patterns—can offer unique insights for tackling societal challenges like #disaster response, healthcare, and environmental protection.

🔎 However, alongside these opportunities come significant ethical, legal, and operational risks.

➡️ Our latest report introduces a six-step framework to help #datastewards of organizations navigate the complexities of responsible data collaboration.

💡 Key Sections:
✅ Why Non-Traditional Data Matters: Real-time responsiveness, filling data gaps in underserved areas, and enriching traditional datasets.
✅ The Need for Due Diligence: Managing risks like security, bias, and consent while respecting local norms and global regulations.
✅ A Step-by-Step Framework: From scoping and risk ranking to ongoing audits, the guide helps organizations ensure responsible collaboration.

🌟 This guide is essential for anyone working at the intersection of data, governance, and public good, whether you're in government, academia, private enterprise, or civil society.

🔗 Read "Trust but Verify: A Guide to Conducting Due Diligence When Leveraging Non-Traditional Data in the Public Interest" (✍️ co-authored by Sara Marcucci, Andrew Zahuranec, and Stefaan Verhulst): https://lnkd.in/eii4Zg3y
➡️ Blog (Executive Summary): https://lnkd.in/eytwRYiV
💻 Check out also our other tools and resources on data collaboration: https://lnkd.in/e3pEh3qn

#DataEthics #DataGovernance #PublicInterestTech #data4good #health #environment #publicpolicy
-
Everyone talks about “training AI.” But few focus on where the training data actually comes from.

OECD’s latest report “Mapping Relevant Data Collection Mechanisms for AI Training” reinforces a crucial point:
↳ The future of AI won’t just depend on smarter algorithms, but on more ethical data.

The #OECD taxonomy shows the complex web of data collection mechanisms powering today’s #AI models:
↳ From direct user interactions and voluntary data donations
↳ To open data initiatives, commercial data licensing, and
↳ Large-scale web scraping.

Now, why does this matter? Because how we collect data determines how much we can trust the AI systems built on it.

The report highlights three key takeaways:
↳ AI developers often combine multiple data sourcing methods simultaneously.
↳ Privacy, #datagovernance, and intellectual property rights must evolve alongside these practices.
↳ Emerging tools like Privacy-Enhancing Technologies (PETs) offer new ways to make AI training both innovative and responsible.

Let this be a core consideration in your AI strategy.

#DataGovernance #ResponsibleAI #EthicalAI #PrivacyTech #TrustworthyAI
-
📈 📲 The rapid growth of wearable and app-derived health data has outpaced our consent infrastructure. A new paper offers one of the clearest attempts to close that gap.

A perspective from Stefanie Brückner, Stephen Gilbert, & colleagues presents a thoughtful framework for responsible use of health app and wearable data in research. As funders and regulators expect stronger transparency and participant-centered governance, models like this will be important for future approval pathways and for the long-term sustainability of digital research.

Many EU-based efforts related to electronic health records are moving toward opt-out structures for secondary use. This may work for clinical data collected inside health systems but is not appropriate for data generated through wearables and consumer apps. #PGHD are created voluntarily, outside clinical care, and often on self-purchased devices. For this category, the European Data Protection Board has argued that explicit and informed consent is necessary. The framework proposed here is designed for that need.

The authors introduce a user-driven consent platform that gives individuals a consistent way to decide how their data are shared across apps, clinical systems, and research. As patient-generated data become central to public health, clinical trials, and population research globally, this work addresses a foundational gap.

Key themes:
🔐 Granular and revocable consent
Participants can specify which types of data can be used for personal care or research, update preferences at any time, and rely on pseudonymized identifiers.
📑 Alignment with governance structures
Standardized, informed, and revocable consent supports the General Data Protection Regulation and the emerging European Health Data Space, and it provides the clarity global regulators seek in real-world evidence.
🔗 Interoperability
The platform uses HL7 FHIR and open identity standards, enabling integration with electronic health records and digital health services. This supports international research and ethical data sharing.
🤝 A stronger foundation for trust
Transparent governance and clear communication are essential for long-term engagement and for high-quality datasets.

Open Access Paper 🔗 https://www.nature.com/articles/s41746-025-02147-3

At GSD Health Research we are building large-scale cohort studies that rely on participant-generated data, including wearable streams and patient-reported outcomes. Our work depends on trust and clarity. This perspective illustrates how consent infrastructure can support ethical real-world evidence and accelerate discovery in ways that respect the people who make research possible. Thank you to the full author team for a timely contribution.

#digitalhealth #clinicalresearch #realworlddata #datagovernance #PGHD
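The granular, revocable consent the paper describes can be pictured as a simple data structure. A minimal sketch, in which the class, field names, and data categories are invented for illustration (the platform's actual FHIR-based schema will differ):

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """One participant's consent preferences, keyed by a pseudonymized ID."""
    pseudonym_id: str
    # Maps a data category (e.g. "heart_rate") to the set of allowed purposes.
    permissions: dict = field(default_factory=dict)

    def grant(self, category, purposes):
        """Allow the given purposes for one data category."""
        self.permissions[category] = set(purposes)

    def revoke(self, category):
        """Withdraw consent for one data category entirely."""
        self.permissions.pop(category, None)

    def allows(self, category, purpose):
        """Check whether a specific use of a data category is permitted."""
        return purpose in self.permissions.get(category, set())

# A participant allows heart-rate data for research, then changes their mind.
record = ConsentRecord(pseudonym_id="p-8f3a")
record.grant("heart_rate", ["personal_care", "research"])
print(record.allows("heart_rate", "research"))  # True
record.revoke("heart_rate")
print(record.allows("heart_rate", "research"))  # False
```

The point of the sketch is the shape of the model: consent is per data type, per purpose, and can be withdrawn at any time, which is what distinguishes this approach from a one-time blanket signature.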
-
A reviewer once asked a student I mentor a brutal question: “Could someone reproduce your study from this section alone?”

She froze. So did I.

I realized many of us learn how to analyze data but not how to write a methods section that stands up to scrutiny. So I spent two weeks auditing 40 published papers and 20 drafts across fields. Here’s what I found—and what I now teach on day 1.

the 5 essentials that protect your credibility
1. research design: State qualitative, quantitative, or mixed—and why this design fits the question.
2. sampling strategy: Population, frame, technique (random/stratified/purposive), n, and rationale.
3. data collection: Who collected what, where, when, and with which instruments. Include reliability/validity.
4. data analysis: Preprocessing, software, exact tests or coding approach. Link each method to a specific RQ or hypothesis.
5. ethics and transparency: Approvals/consent, anonymization, preregistration (if any), and data/code availability.

the 6 risky shortcuts I keep seeing
– naming a design without justifying it
– reporting n but not how the sample was selected
– saying “we used a survey/interviews” without procedures or instruments
– listing software without methods or parameters
– “we analyzed the data” with no steps or links to RQs
– omitting ethics, limitations, or data access

the pattern is clear
safe: methods that someone else could rerun from your description alone
risky: vague labels, missing procedures, and unlinked analyses

the template I give every researcher
Design: [type] because [reason tied to RQ].
Sample: [frame], [technique], n = [size], power/logic = [brief].
Collection: [instrument/procedure], who/where/when = [details], quality = [reliability/validity].
Analysis: [software + methods + parameters] → answers [RQ/H].
Ethics: [approval/consent], data/code = [link/conditions], limits = [brief].

Paste this into your draft, fill the brackets, and your methods will pass the reproducibility test.
Save this post if your methodology is next on your writing list — and share it with a friend who’s revising. ——————————————————————— Follow me 👉 https://lnkd.in/d4b-t6b3 60k+ follow me here—but only a few read The Hybrid Researcher Be one of them 👉 https://lnkd.in/dMB8YJgm Connect on all platforms 👉 https://tr.ee/yEg4hY
-
Monitoring and evaluation hinge not only on data, but on the precision and ethics behind how that data is collected, analyzed, and applied.

This chapter of the IOM Monitoring and Evaluation Guidelines goes far beyond methodology checklists. It provides a robust and nuanced framework for data integrity in humanitarian and development settings, weaving together technical rigor, ethical obligations, and field-tested strategies. Whether designing a desk review or calculating the ideal sample size, it empowers M&E professionals to move from mechanical data collection toward strategic, context-aware decision-making.

– It covers ethical standards and how to handle political influence in evaluation work
– It explains how to plan, design, and adapt data collection tools for different contexts
– It outlines both qualitative and quantitative methods, including triangulation for enhanced credibility
– It walks through sampling strategies, measurement levels, and data management, cleaning, and analysis

This is not a procedural manual—it’s an essential reference for evaluators, researchers, and humanitarian professionals seeking to navigate complex environments with accountability, rigor, and relevance. With this guide in hand, M&E becomes more than a function—it becomes a disciplined commitment to truth, quality, and responsible impact.
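The "ideal sample size" calculation mentioned above is commonly done with Cochran's formula for estimating a proportion. A quick sketch for illustration; whether the IOM chapter prescribes this exact formula is an assumption, and the defaults shown are the standard 95%-confidence, ±5% margin choices:

```python
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's formula: minimum sample size for estimating a proportion.

    z: z-score for the desired confidence level (1.96 -> 95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: acceptable margin of error
    """
    return math.ceil(z**2 * p * (1 - p) / e**2)

def fpc_adjust(n0, population):
    """Finite population correction: shrink n0 for a small population."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_sample_size()      # 385 respondents at 95% confidence, +/-5%
print(n0)
print(fpc_adjust(n0, 2000))     # fewer needed when surveying a 2,000-person camp
```

In field settings the finite population correction matters: surveying a small displaced community requires far fewer interviews than the headline 385 suggests.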
-
We’re constantly told to "let the data speak for itself." And nothing seems more objective than a crisp bar chart, a sleek line graph, or a vibrant scatter plot. They appear to be pure, unadulterated truth, direct from the numbers.

But here’s the quiet secret: every single chart we create is a carefully constructed narrative, and we, the data practitioners, are its authors.

Think about it. We choose the scale. We pick the colors. We decide what to include, and perhaps more importantly, what to exclude. A slight adjustment to the Y-axis can turn a flat line into a dramatic spike. A strategic choice of segmenting data can highlight a trend that barely exists. We can make a molehill look like a mountain, or a mountain disappear entirely, all with the click of a mouse.

This isn't always malicious; often, it’s an unconscious bias or an attempt to simplify complexity. But it underscores a profound truth: data visualization is less about simply "showing" data and more about "telling a story" with data. And like any good storyteller, we have immense power to influence perception, to persuade, and yes, even to mislead.

The real challenge, and the true mark of an ethical data professional, isn't just in making a chart look pretty. It's in ensuring that the story it tells is honest, transparent, and reflective of the underlying reality, even when that reality isn't what stakeholders want to hear.

Because a picture might be worth a thousand words, but only if those words are true.

Let’s chart with integrity.

#TheInsightEdge #DataScience #DataVisualization #DataEthics
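The Y-axis effect described above is easy to quantify. A tiny sketch with invented numbers, showing how raising the axis baseline inflates the apparent difference between two nearly identical values:

```python
def visual_ratio(a, b, baseline=0.0):
    """Ratio of the drawn bar heights when the axis starts at `baseline`."""
    return (b - baseline) / (a - baseline)

# Two nearly identical values: 102 is only 2% larger than 100.
print(visual_ratio(100, 102, baseline=0))   # 1.02 -> bars look the same
print(visual_ratio(100, 102, baseline=99))  # 3.0  -> one bar looks 3x taller
```

Same data, different baseline: truncating the axis at 99 turns a 2% difference into a bar that appears three times taller. Nothing in the numbers changed, only the story the chart tells.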
-
If they're nodding silently… Something's very broken here!

Parents 𝑠ℎ𝑜𝑢𝑙𝑑𝑛'𝑡 need a PhD to read therapy reports.

But most reports look like this:
"Client demonstrated 73% accuracy across 3 consecutive probe sessions on tacting common items in the natural environment with varied SDs."

What they should actually say:
"Your child named 7 out of 10 everyday objects when asked 'What is this?' across 3 days this week."

Same data. Actually understandable.

I've reviewed hundreds of therapy reports. And here's what I see too often:
↝ Heavy jargon that excludes families.
↝ Graphs without context.
↝ Data points without meaning.
↝ Progress buried in technical language.

This isn't about credentials. It's about partnership.

When families can't understand the data, they can't:
↦ Celebrate meaningful wins
↦ Ask informed questions
↦ Reinforce skills at home
↦ Advocate effectively
↦ Make confident decisions

And that breaks collaboration.

Here's what clear progress tracking looks like:
↠ 𝐏𝐥𝐚𝐢𝐧 𝐥𝐚𝐧𝐠𝐮𝐚𝐠𝐞 𝐬𝐮𝐦𝐦𝐚𝐫𝐢𝐞𝐬. Jordan went from pointing to using single-word vocalizations to request approximately four of his preferred snack items in 1 month.
↠ 𝐕𝐢𝐬𝐮𝐚𝐥 𝐩𝐫𝐨𝐠𝐫𝐞𝐬𝐬 𝐨𝐯𝐞𝐫 𝐭𝐢𝐦𝐞. Simple line graphs showing skill growth week by week.
↠ 𝐂𝐨𝐧𝐭𝐞𝐱𝐭 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐧𝐮𝐦𝐛𝐞𝐫𝐬. "80% accuracy means Jordan is ready to practice this skill in new places."
↠ 𝐍𝐞𝐱𝐭 𝐬𝐭𝐞𝐩𝐬 𝐞𝐱𝐩𝐥𝐚𝐢𝐧𝐞𝐝 𝐜𝐥𝐞𝐚𝐫𝐥𝐲. "We're working on using these phrases with grandparents and at preschool."
↠ 𝐏𝐚𝐫𝐞𝐧𝐭-𝐟𝐫𝐢𝐞𝐧𝐝𝐥𝐲 𝐝𝐞𝐟𝐢𝐧𝐢𝐭𝐢𝐨𝐧𝐬. If technical terms are needed, define them once in simple language.

The goal isn't to avoid data. It's to make data accessible.

Because informed families are empowered families. And empowered families drive better outcomes.

What ethical reporting includes:
↝ Clear language that respects family intelligence.
↝ Data presented with context and meaning.
↝ Transparent tracking that shows what's working.
↝ Next steps that families can understand and support.

When we prioritize clarity, we prioritize partnership.
And that's when real progress happens. If you're a parent struggling to decode reports, ask your provider for plain language summaries. You deserve to understand your child's progress. If you're a provider, challenge yourself: Could a family member read this and feel informed? That's the standard. ______________________________________________________ DM me if you want to discuss creating family-centered reporting systems. 💖 ♻️ Repost to Reshape ✨ Follow Dr. Cécile Heinze ✨ ______________________________________________________ Disclaimer: The examples mentioned are general illustrations of reporting practices and do not reference any specific client or case without proper consent. #Neurodiversity #AutismSupport #TherapyTransparency #FamilyCenteredCare #DataLiteracy #ProgressTracking #InclusivePractice #ParentPartnership
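The jargon-to-plain-language translation above can even be drafted automatically as a starting point for a report. A toy sketch, where the function name and sentence template are assumptions rather than any clinical standard:

```python
def plain_summary(child, skill, correct, total, period):
    """Turn raw trial counts into a plain-language progress sentence."""
    return (f"{child} {skill} {correct} out of {total} times {period} "
            f"({correct / total:.0%} accuracy).")

print(plain_summary("Jordan", "named everyday objects correctly", 7, 10,
                    "across 3 days this week"))
```

A template like this forces the report to lead with counts a family can picture ("7 out of 10") while still preserving the percentage clinicians track; the clinician then adds the context sentence explaining what that percentage means for next steps.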