Research Ethics Frameworks

Summary

Research ethics frameworks are structured guidelines that help researchers ensure their work is conducted responsibly, transparently, and with respect for participants and data integrity. These frameworks are increasingly important as technologies like AI and digital health tools become central to research, raising new questions about honesty, consent, and accountability.

  • Prioritize clear disclosure: Always communicate openly about the use of AI tools, data sources, and research methods to maintain trust and transparency with all stakeholders.
  • Update consent practices: Adapt consent procedures so that participants understand exactly how their data will be used, especially when using wearable devices, apps, or AI-driven analysis.
  • Build ethics into workflows: Make ethics checks and documentation part of every research step, from project design to publication, to uphold values like fairness, honesty, and respect even in urgent or rapidly evolving situations.
Summarized by AI based on LinkedIn member posts
  • Shalini Rao

    Founder at Future Transformation and Trace Circle | Certified Independent Director | Sustainability | Circularity | Digital Product Passport | ESG | Net Zero | Emerging Technologies

    AI isn’t assisting science anymore. It’s authoring it. But what if the author has no conscience? It fakes citations. Rewrites findings. Drafts grants. All before you blink. This isn’t progress. It’s precision without principle. Truth now comes pre-trained. And peer review can’t keep up. We’re not streamlining science. We’re short-circuiting it. And without intervention, the tools don’t just drift; they distort the very idea of truth. The European Commission’s whitepaper isn’t just regulation. It’s a firewall for scientific integrity. For those funding, governing, or scaling AI in research, it’s the baseline for trust, accountability, and future-proof discovery. It’s a must-read. And a call to act now.

    🔸 Why These Guidelines Matter
    ➝ GenAI speeds discovery but magnifies risk.
    ➝ Disinformation and IP abuse are rising.
    ➝ Trust, transparency, and accountability are non-negotiable.

    🔸 Guiding Principles
    ➝ Reliability: Keep research solid and reproducible.
    ➝ Honesty: Always disclose AI use.
    ➝ Respect: Protect data, people, and systems.
    ➝ Accountability: Humans remain responsible.

    🔸 For Researchers
    ➝ Own every AI-supported output.
    ➝ Disclose tools used clearly.
    ➝ Don’t upload sensitive data.
    ➝ Cite properly. No plagiarism.
    ➝ Don’t use AI in reviews or evaluations.

    🔸 For Research Organisations
    ➝ Train everyone across roles.
    ➝ Encourage disclosure without fear.
    ➝ Track how AI is used internally.
    ➝ Offer secure, local GenAI tools.
    ➝ Build this into your ethics policies.

    🔸 For Funding Bodies
    ➝ Link funding to responsible AI use.
    ➝ Make disclosure a must.
    ➝ Ban AI in scientific reviews.
    ➝ Use GenAI responsibly in operations.
    ➝ Fund ethics training widely.

    🔸 Research Integrity
    ➝ Uphold ALLEA’s Code of Conduct: quality, transparency, fairness, societal responsibility.

    🔸 Trustworthy AI Pillars
    ➝ Respect human autonomy
    ➝ Prevent harm
    ➝ Ensure fairness
    ➝ Prioritise explicability
    ➝ Ensure oversight, privacy, and transparency

    🔸 Evolving Together
    ➝ These guidelines will evolve.
    ➝ Updates will track tech and policy shifts.
    ➝ Community input is welcome.

    🔸 Key Takeaways
    ➝ GenAI should support, not steer, research.
    ➝ Disclosure builds trust, not risk.
    ➝ Researchers, institutions, and funders must align.

    Bottom line: in research, credibility is everything. GenAI can support it, but only when used with care, clarity, and conscience.

    #AI #GenAI #AIinResearch #TrustworthyAI #EthicalAI #Research #Researchers

  • Dr Mike Perkins

    GenAI researcher | Head, Centre for Research & Innovation | Associate Professor

    🤔 Curious about using GenAI in your research but worried about crossing ethical lines? When do you need to disclose your use of these tools? Drawing on the European Code of Conduct for Research Integrity (ALLEA) principles, our latest preprint explores how GenAI tools might impact every stage of the research workflow, from initial proposal writing through to peer review. The team spans a broad range of disciplines and holds some very different views about GenAI, and we think this has been a key strength in helping us map out the key ethical challenges involved. We show:
    📊 Detailed analysis of ethical issues across 8 distinct research phases
    🚫 Critical evaluation of which tasks GenAI should (and shouldn't) be used for
    ✅ Evidence-based recommendations for maintaining research integrity while taking advantage of AI capabilities
    Many thanks to the whole team: Sonja Bjelobaba, Lorna Waddington, Tomas Foltynek, Sabuj Bhattacharyya and Debora Weber-Wulff for getting this out!
    #ResearchIntegrity #AcademicResearch #GenAI #ResearchEthics #HigherEd #ENAI

  • Katie Baca-Motes

    CEO & Co-Founder | GSD Health Research | Redefining Clinical Trials to Accelerate Breakthroughs in Women’s Health

    📈 📲 The rapid growth of wearable- and app-derived health data has outpaced our consent infrastructure. A new paper offers one of the clearest attempts to close that gap. A perspective from Stefanie Brückner, Stephen Gilbert, & colleagues presents a thoughtful framework for the responsible use of health app and wearable data in research. As funders and regulators expect stronger transparency and participant-centered governance, models like this will be important for future approval pathways and for the long-term sustainability of digital research.

    Many EU-based efforts related to electronic health records are moving toward opt-out structures for secondary use. This may work for clinical data collected inside health systems, but it is not appropriate for data generated through wearables and consumer apps. #PGHD are created voluntarily, outside clinical care, and often on self-purchased devices. For this category, the European Data Protection Board has argued that explicit and informed consent is necessary. The framework proposed here is designed for that need. The authors introduce a user-driven consent platform that gives individuals a consistent way to decide how their data are shared across apps, clinical systems, and research. As patient-generated data become central to public health, clinical trials, and population research globally, this work addresses a foundational gap.

    Key themes:
    🔐 Granular and revocable consent: Participants can specify which types of data can be used for personal care or research, update preferences at any time, and rely on pseudonymized identifiers.
    📑 Alignment with governance structures: Standardized, informed, and revocable consent supports the General Data Protection Regulation and the emerging European Health Data Space, and it provides the clarity global regulators seek in real-world evidence.
    🔗 Interoperability: The platform uses HL7 FHIR and open identity standards, enabling integration with electronic health records and digital health services. This supports international research and ethical data sharing.
    🤝 A stronger foundation for trust: Transparent governance and clear communication are essential for long-term engagement and for high-quality datasets.

    Open Access Paper 🔗 https://www.nature.com/articles/s41746-025-02147-3

    At GSD Health Research we are building large-scale cohort studies that rely on participant-generated data, including wearable streams and patient-reported outcomes. Our work depends on trust and clarity. This perspective illustrates how consent infrastructure can support ethical real-world evidence and accelerate discovery in ways that respect the people who make research possible. Thank you to the full author team for a timely contribution.
    #digitalhealth #clinicalresearch #realworlddata #datagovernance #PGHD
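The granular, revocable consent idea described above can be sketched in a few lines. This is a minimal illustration only, not the authors' platform: every name here (`ConsentRecord`, `grant`, `revoke`, `permits`) is hypothetical, and a real system would sit behind HL7 FHIR `Consent` resources and proper identity management.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One participant's consent preferences, keyed by a pseudonymized ID."""
    pseudonym: str                               # pseudonymized identifier, never a real name
    scopes: dict = field(default_factory=dict)   # data type -> set of permitted purposes
    history: list = field(default_factory=list)  # audit trail of every change

    def grant(self, data_type: str, purpose: str) -> None:
        """Permit one data type for one purpose, e.g. heart rate for research."""
        self.scopes.setdefault(data_type, set()).add(purpose)
        self._log("grant", data_type, purpose)

    def revoke(self, data_type: str, purpose: str) -> None:
        """Withdraw a previously granted permission at any time."""
        self.scopes.get(data_type, set()).discard(purpose)
        self._log("revoke", data_type, purpose)

    def permits(self, data_type: str, purpose: str) -> bool:
        """Check a proposed use against the participant's current preferences."""
        return purpose in self.scopes.get(data_type, set())

    def _log(self, action: str, data_type: str, purpose: str) -> None:
        self.history.append(
            (datetime.now(timezone.utc).isoformat(), action, data_type, purpose)
        )

# Usage: a participant shares heart-rate data for research, then changes their mind.
record = ConsentRecord(pseudonym="p-7f3a")
record.grant("heart_rate", "research")
assert record.permits("heart_rate", "research")
record.revoke("heart_rate", "research")
assert not record.permits("heart_rate", "research")
```

The point of the sketch is the shape of the data: permissions are per data type and per purpose rather than all-or-nothing, revocation is a first-class operation, and every change leaves an auditable trace.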

  • Szymon Machajewski

    AI Strategy & Governance Leader in Higher Education | EDSAFE AI Alliance Council | Federal S.A.F.E. by Design Contributor | EDUCAUSE Horizon Report Panelist & CG Chair Student Success Analytics | Responsible AI Adoption

    🔬 New NSF-funded guidelines for responsible AI use in STEM education research are out (Smith & McGill, 2026), with a little help from our friends! iacomputinged.org/graiser Fellow UIC colleague 🎓 Jeremy Riel was also at the table; great to see the University of Illinois Chicago represented in this work. Thanks Monica McGill, Dr. Julie M. Smith & Institute for Advancing Computing Education! The Guidelines for the Responsible Use of AI in STEM Education Research (National Science Foundation (NSF) Award No. 2519885) is one of the most actionable frameworks I've seen for researchers navigating AI right now.

    Five priorities stood out to me:
    🧠 Human critique first. AI is assistive; researchers must have the expertise to evaluate what it produces, not just accept it.
    🤔 Decide holistically. Ask whether AI is actually the best option for each task, not just the fastest.
    🔒 Protect participant data. Re-identification risks are growing as AI gets more powerful. Rethink how you share datasets.
    📋 Document AI use. Keep a record of which tools you used, when, and how, throughout the full research process.
    📣 Disclose AI use. In presentations and publications. Every time. The modified CRediT statement approach in this report is worth adopting.

    These aren't abstract ethics principles; they're operational decisions that belong in your research workflow before you open ChatGPT. The full report is open access. Worth sharing with your research teams. https://lnkd.in/gS7vhc3V
    #AIinEducation #STEMEducationResearch #ResponsibleAI #HigherEducation #NSF #EdResearch
    Great to work with you Justin Reich, Lisa Bosman, PhD, Aman Yadav, Megan Stubbs-Richardson and others!
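The "document AI use" and "disclose AI use" priorities above boil down to keeping a structured record from day one. A minimal sketch under assumed field names: the columns and function names are illustrative, not taken from the report, and a real log would likely live in a shared lab notebook or repository.

```python
import csv
import io
from datetime import date

# Columns for a simple AI-use log; the field names are illustrative.
FIELDS = ["date", "tool", "version", "research_phase",
          "task", "prompt_summary", "output_checked_by"]

def log_ai_use(log: list, **entry) -> None:
    """Append one AI-use record, refusing entries with missing fields."""
    missing = [f for f in FIELDS if f not in entry]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    log.append(entry)

def to_csv(log: list) -> str:
    """Render the log as CSV for inclusion in supplementary materials."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(log)
    return buf.getvalue()

# Usage: one record per AI-assisted task, noting who verified the output.
log = []
log_ai_use(
    log,
    date=str(date.today()),
    tool="ChatGPT",
    version="unknown",
    research_phase="literature review",
    task="summarize candidate papers",
    prompt_summary="asked for 3-sentence summaries of abstracts",
    output_checked_by="J. Smith",
)
```

The `output_checked_by` column operationalizes the "human critique first" priority: no record is complete until a named researcher has verified the output.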

  • Joao Monteiro

    Chief Editor at Nature Medicine | Health and clinical sciences publishing strategy | ICMJE member | Championing innovative and inclusive solutions for the big problems in medicine

    Research ethics and integrity challenges during pandemics are not unique, but they are vastly magnified during crises. Last week I had the privilege to join the PREPARED EU team at the UNESCO headquarters in Paris, to officially launch the PREPARED Code. In our editorial in Nature Medicine, we discuss how the code presents a much-needed framework to ensure that research during pandemics is trustworthy and accessible to all. Preparedness — developing the systems and tools to enable an effective response at all levels ahead of the next pandemic — is key. With that mindset, the PREPARED initiative, an effort funded by the European Commission, UK Research and Innovation and the Swiss State Secretariat for Education, Research and Innovation, developed the PREPARED Code. The aim of this code is to provide an ethics framework to support researchers, research ethics committees and research integrity offices throughout a pandemic. The code provides guidance across all biomedical and sociological disciplines, and it is presented through concise statements in clear, jargon-free language. Critically, PREPARED is a values-driven code, centered on fairness, respect, care and honesty — all values that resonate universally. It complements the @TRUST Code – A Global Code of Conduct for Equitable Research Partnerships — which is endorsed by Nature Medicine and the Nature Portfolio journals, and it is accompanied by a range of free training materials. Congratulations to everyone in the PREPARED team, under the steady leadership of Prof. Doris Schroeder! 
    #researchethics #researchintegrity #pandemicpreparedness #outbreaks #equitableresearch PREPARED EU | UNESCO | UCLan Cyprus | University of Central Lancashire | European & Developing Countries Clinical Trials Partnership (EDCTP) | Amsterdam UMC | Partners for Health and Development in Africa (PHDA) | Research and Information System for Developing Countries (RIS) | Finnish National Board on Research Integrity TENK | Trilateral Research | Seoul National University | Fudan University | ICLAIM Centre | University of the Witwatersrand | Vilniaus universitetas / Vilnius University | EUREC Office - European Network of Research Ethics Committees | Foundation Global Values Alliance

  • Tama Leaver

    Professor of Internet Studies at Curtin University

    The ASSOCIATION OF INTERNET RESEARCHERS (AoIR) has just released AoIR’s Risky Research Guide, crafted by a wonderful team led by Alice Marwick, which will be incredibly helpful to researchers planning to navigate, or currently navigating, risky research areas. Our release notes:

    We are delighted to share the publication of Risky Research: An AoIR Guide to Researcher Protection and Safety 2025, the culmination of over two years of collaborative effort by the AoIR Risky Research Working Group. We designed this report to directly address the increasing personal, institutional, and political risks faced by researchers around the globe. This includes researchers from marginalized and minoritized communities and people working on controversial, sensitive, or politically charged topics, from disinformation and extremism to LGBTQ+ rights, platform governance, and climate change. Drawing on the collective expertise and lived experiences of more than 30 international contributors, the guide provides:
    * A framework for identifying and assessing risk in scholarly research
    * Practical strategies for mitigating harms at individual, institutional, and community levels
    * Guidance for designing projects with risk in mind
    * Concrete recommendations for universities, supervisors, and departments
    * A curated set of tools, policies, and best practices for responding to harassment, surveillance, doxxing, and more
    The guide is designed to support researchers at all stages of their careers, including students, contingent faculty, and those conducting research in politically restrictive contexts. It builds on and extends AoIR’s long-standing commitment to ethics, care, and collective responsibility within internet research. Download the guide here: https://lnkd.in/ghYhzdKP