I used Google Forms for my bachelor’s research. And now I realize I shouldn’t have. Not because I was careless, but because I didn’t know better. None of us did.

In India, almost every psych or social work student I knew used Google Forms. It was free, easy, and accessible. We thought we were doing it right. But once I started my master’s in Germany, I noticed something strange: no one here uses Google Forms. Not even for tiny surveys.

Why? Google stores form responses on servers mostly located in the U.S., meaning researchers outside the U.S. have little control over where their participants’ data goes or how it’s protected. When you’re collecting personal or sensitive information, this lack of control becomes a serious ethical and sometimes legal concern.

That hit me hard. Back then, people trusted me with their stories. And I unknowingly put that trust at risk. I’m not sharing this to blame anyone. I’m sharing it because we’re often not taught what ethical research actually looks like.

So here’s what I wish someone had told me earlier: if you’re collecting data from people, especially in psychology or social work, privacy is not optional. There are a few alternatives available:

🔹 Zoho Survey: Free, Indian company, better data protection.
🔹 LimeSurvey: Open-source, widely used in academia.
🔹 Nextcloud Forms: Privacy-first, great if your institution supports it.
🔹 SurveySparrow: Also based in India. Good if you’re not collecting highly sensitive data.
🔹 Jotform: A form builder that feels like Google Forms but with more control. Just double-check where the data is stored.

And if you must use Google Forms:
• Be transparent: let participants know where their data will be stored
• Avoid collecting sensitive info
• Download and delete data from the platform ASAP

Research is not just about responses. It’s also about respecting the people who respond.
If you’re a student reading this, I hope it helps you take one step closer to doing research that’s not just smart, but safe.
User Research Participant Privacy Guidelines
Summary
User research participant privacy guidelines are standards and practices that protect the personal information and data of people who take part in research studies. These guidelines help ensure participants’ rights, safety, and trust are respected throughout the research process.
- Communicate clearly: Always explain to participants how their data will be used, stored, and protected before they agree to take part in a study.
- Secure data responsibly: Choose data collection tools and storage methods that keep sensitive information safe and comply with privacy laws.
- Review for risks: Take extra care to prevent both direct and indirect sharing of personal details, and avoid collecting information that could harm participants if disclosed or inferred.
A user interview starts well before anyone starts talking.

Earlier this week, a friend—a customer of a big UK fintech—told me about a UX research session where she was the participant. Let’s just say it wasn’t great. She logged into the video call to face 15 silent observers. Not exactly the setup for an honest chat about money. The weary researcher admitted it was his 12th call that day. He kicked off with generic questions she’d already answered on the screener: “What does your business do? How long have you been doing that?” Unsurprisingly, she didn’t reveal any deep insights about herself on that call.

I get that we’re all pushing for stakeholder buy-in, but don’t treat participants like lab rats. To me, every choice leading up to an interview shapes its success. I’m hands-on with every touchpoint. Here’s how I put that into practice:

✉️ The invite email
Craft it yourself—it’s part of the conversation. Make it feel personal (no excuses with CRM/AI tools), introduce yourself, explain why they’re well-positioned to help, and recognise the value of their time. If it reads like a marketing template, expect low conversion and lower-quality participants.

📋 The screener
Ask a reasonable number of relevant questions (don’t be cheeky and turn it into a survey). DO NOT repeat the same questions in the interview—if you couldn’t be bothered to read their answers, what does that signal? Refer back, show interest, and make them feel like they’re not just one of 35 participants this week.

✅ The consent form/info pack
This is a trust-building opportunity, not just a data protection formality. Use it to set expectations: who’ll be on the call, what you’ll be covering, and, if it’s a sensitive topic like money or health, let them know upfront. They should feel fully assured their data will be secure.

🖥️ Arrival on the call
Imagine how it feels to dial in and see multiple faces already on screen, with recording or AI transcription already on. I insist on one observer max.
If more people want to watch, gather them in a room or pay £30 for a broadcast tool (letting participants know, of course). At the very least, bring in observers after the participant’s had a warm welcome.

I’ll stop here! In the past, we’ve written guides on opening a research call and making participants comfortable; you can find these over on the Muir Wood & Co: Research and Strategy company page. I’d love to hear from other researchers—where do you go the extra mile to put participants at ease before the interview? And get in touch if you’d like feedback on your pre-interview process and materials.

(Cartoon by DALL·E because my iPad battery was flat!)
-
Ethical Considerations in Research ✅

Why Are Ethical Considerations Important?
• Protect participants from harm
• Maintain the integrity of the research process
• Build trust with the public and academic community
• Ensure compliance with institutional and legal standards

🔑 Key Ethical Principles
Here are the core principles you must address in your research:

1. Informed Consent
Participants must:
• Be fully informed about the purpose, procedures, risks, and benefits of the research
• Understand their participation is voluntary
• Have the opportunity to withdraw at any time without consequences
✅ Example: “All participants were provided with an informed consent form outlining the study’s objectives, procedures, and their rights, including the right to withdraw at any point.”

2. Confidentiality and Anonymity
• Ensure participant data is kept confidential
• Remove identifying details where possible (anonymization)
• Store data securely (e.g., encrypted files, password-protected systems)
✅ Example: “Participants’ names and identifying information were excluded from all reports, and data were stored securely in encrypted files.”

3. Avoidance of Harm
• Protect participants from physical, psychological, emotional, or social harm
• Screen for potential risks before the study begins
✅ Example: “The study design minimized psychological discomfort by avoiding sensitive or triggering questions. Support resources were provided in case distress occurred.”

4. Voluntary Participation
• Participation should be completely voluntary
• No coercion, pressure, or manipulation
• Particularly important for vulnerable populations (e.g., children, prisoners)
✅ Example: “Participation was entirely voluntary, and no incentives were used that might pressure individuals to take part.”

5. Ethical Approval
• Obtain approval from a recognized Ethics Review Board (ERB) or Institutional Review Board (IRB)
• Submit a detailed study protocol for review before data collection
✅ Example: “This research received ethical approval from the University Research Ethics Committee (Ref: 2025/101).”

6. Respect for Vulnerable Populations
If researching children, the elderly, refugees, etc., additional safeguards must be in place:
• Consent from guardians
• Simplified language
• Ongoing monitoring of participant well-being

7. Honesty and Integrity
• Report findings truthfully
• Do not falsify or manipulate data
• Acknowledge sources and avoid plagiarism
✅ Example: “All data were reported honestly, and no fabrication or manipulation was involved in the analysis.”
-
#Dissertation Season is Here—But Where Are the Research Ethics?

It’s that time of the year when my inbox (and probably yours too) is flooded with Google Forms for research participation. While I love seeing students and researchers actively collecting data, there’s something crucial missing in many of these forms: #ResearchEthics.

Before hitting ‘Send’, make sure your Google Form includes:

✔ Informed Consent: For minors (below 18 years): parental #consent + child’s #assent (both are required). For adults (18+): a clear consent form detailing participation terms.
✔ Full Researcher Identification: Mention your name, affiliation, #designation, and supervisor’s details (if applicable). Provide a contact email for participant queries.
✔ #Debriefing Statement: Participants should know why they are filling in the form and what will happen to their data. If deception is used, provide an explanation after participation.
✔ #Confidentiality & Data Protection: Anonymity vs. confidentiality: clearly state whether responses are anonymous or confidential. Ensure data storage is secure and inform participants about how long data will be retained.
✔ Right to #Withdraw: Participants should not feel obligated to complete the survey. Inform them that they can exit at any time without penalty.
✔ Fair #Compensation (if applicable): If incentives are offered, clarify eligibility, amount, and disbursement method.
✔ Avoid Leading or Loaded Questions: Ensure that your #survey design doesn’t influence responses or force participants into biased answers.

Ethical research isn’t just about ticking boxes—it’s about integrity, respect, and responsibility towards your participants. Let’s make sure our research meets the standards it truly deserves.
-
Scrubbing PII won’t stop an LLM from inferring it.

A good new paper shows a structural mismatch: most of us think privacy = “don’t input PII.” But LLMs can infer sensitive traits from context. Traditional PII scrubbers remove explicit mentions; they don’t block deduction. The authors call this inference-based privacy risk, distinct from memorized PII. This is a user study on implicit inference and human countermeasures.

▪️ Users are only slightly above chance at predicting when their text reveals PII.
▪️ When asked to rewrite text to block inference, success was only ~28% on average. We lack a mental model of how to block inference of PII.
▪️ Some attributes (e.g., location, relationships) are easier for humans to anticipate and block; others (like occupation) not so much.

Methodology: 240 American adults wrote short, everyday texts (e.g., about work or daily life). For each text, researchers measured whether models could infer traits (e.g., age, location, relationship status, income/occupation). Participants then tried to rewrite their text to prevent inference. Human rewrites were compared to LLM rewrites and a common PII sanitization approach.

Why this matters: there is a mental-model gap. Users expect storage risk (“don’t share your name”), while the hazard is inference (seemingly harmless details add up). As long as that gap persists, trust will lag. Where it matters to users, products need to make inference visible (show what’s being inferred) and preventable.

User strategies (if it matters):
▪️ Paraphrasing is mostly cosmetic, not effective. Inference stays easy for models.
▪️ Abstraction/generalization is much more effective (e.g., “a large city” vs. “New York City”). Of course, this can prevent value extraction (e.g., if you’re travelling to NYC and want advice about the city, you need to be specific).
▪️ Omission/deletion: drop the detail entirely if it’s not essential.
▪️ Ambiguity: “a colleague” vs. “VP”.

For builders designing for trust:
▪️ Inference cues in real time: flag likely trait inferences (“This sentence could reveal your job seniority”) with a “why” tooltip.
▪️ One-tap protective rewrites: offer autosuggestions that apply abstraction/omission/ambiguity, not just PII redaction.
▪️ Shift evaluation metrics: measure inference blocked, not just PII removed.
▪️ Policy + UX alignment: communicate clearly that privacy risk lives in combinations of details, not just explicit identifiers.
▪️ Privacy defaults: safe-by-default templating for common scenarios (support tickets, resumes, bios) where inference risk is high.

Overall, if we keep teaching users to avoid typing PII, we’ll keep missing the big risks. And that will snowball into trust problems. Teach and design for inference awareness.

Paper: Beyond PII: How Users Attempt to Estimate and Mitigate Implicit LLM Inference. By Synthia Wang, Sai Teja Peddinti, Nina Taft, Nick Feamster. https://lnkd.in/eZ4eA7Rj

#AI #Privacy #TrustworthyAI #AISecurity #HumanCenteredAI #ProductDesign #UX
-
New European Data Protection Board #Guidelines 1/2026 offer an important clarification for anyone handling personal data in #research.

Key #takeaways:
• Not everything called “research” qualifies as scientific research under the #GDPR. The #EDPB points to 6 indicators: method, ethics, transparency, independence, societal value, and contribution to knowledge.
• Further processing for scientific research is presumed compatible with the original purpose. But compatibility does not remove the need to confirm a valid legal basis.
• Storage can be longer for research purposes, but not indefinitely or vaguely. Future use must remain specific, foreseeable, and justified.
• Broad consent is possible when purposes are not yet fully known, provided strong safeguards and ethical standards are in place. Dynamic consent remains a strong complementary model.
• Scientific research can rely on different legal bases, including public interest and legitimate interest. Importantly, this is not limited to public bodies and can also apply to private actors where conditions are met.
• Special category data still requires extra care: a valid Article 9 route, suitable safeguards, and attention to national rules for health, genetic, and biometric data.
• #Transparency is not a one-off exercise. Long-running research requires ongoing updates when processing changes.
• Data subject rights still matter. Exceptions to erasure or objection are possible, but only under strict conditions and with appropriate safeguards.
• The core message is clear: research governance, accountability, and safeguards are not side issues; they are what make responsible data-driven research possible.

A useful step forward for research institutions, health actors, universities, and #AI teams working with personal data in Europe. https://lnkd.in/d-DTgiTc
-
Using AI in Research? Transparency Isn’t Optional.

As more researchers integrate AI tools for transcription, coding, or analysis, we’re also seeing a rise in participant concerns — and, increasingly, refusals — based on misconceptions about what AI actually does with their data. And honestly? Those concerns are valid. AI introduces new questions about privacy, data flow, and security. Participants deserve clarity, not jargon.

Here’s the approach I’ve been championing, grounded in the STRESS Framework™ (Sensitivity, Transparency, Responsibility, Ethics, Skepticism, Security):

🔍 Be transparent: Tell participants when AI is used, what it does and doesn’t do, and how long data is stored.
🛡️ Prioritize security: Use vetted tools, encryption, and clear deletion timelines.
🧭 Stay ethical: Participation should always be voluntary — misconceptions are an opportunity to clarify, not persuade.
🤝 Build trust: Explain that AI assists with tasks like transcription, but human researchers still verify and interpret everything.
📄 Document responsibly: Keep clear records of how AI is used, how decisions are made, and how risks are mitigated.

When participants understand the process, they’re more empowered — and our research becomes more ethical, transparent, and trustworthy.

If you’re looking to strengthen your own AI-use statements, consent materials, or research protocols, the STRESS Framework Assistant is an excellent tool to help you structure responsible AI documentation: 👉 https://lnkd.in/esFZEx34
-
The European Data Protection Board has released its Guidelines 1/2026 on the processing of personal data for scientific research purposes. The document clarifies how the GDPR applies when personal data is used for research and provides practical direction for controllers, processors, and supervisory authorities.

The guidelines explain that only genuinely scientific research benefits from the special GDPR provisions. Six key indicative factors help determine this: a methodical and systematic approach, adherence to ethical standards, verifiability and transparency, autonomy and independence, objectives aimed at society’s knowledge and wellbeing, and potential to contribute to scientific knowledge.

Further processing for scientific research is presumed compatible with the initial purpose. Controllers do not need to run a separate compatibility test but must still check lawfulness. Personal data can be stored for longer periods if needed for future research, provided appropriate safeguards are in place.

On consent, the guidelines permit broad consent covering a specific research area when purposes are not yet fully known. Dynamic consent is another option, where subjects consent to each project separately. Both approaches can be combined.

Public interest and legitimate interest are important legal bases. Both public and private entities can rely on them. Scientific research carries significant weight in the balancing test because it benefits society.

For special categories of data, controllers can rely on explicit consent, data manifestly made public, or derogations under Union or Member State law. Extra caution is needed for genetic, biometric, and health data.

Transparency must be maintained throughout long research periods. Controllers should update subjects when processing changes. The right to erasure has a specific exception for scientific research when erasure would impair research objectives.
The right to object can be rejected when processing is needed for public interest tasks.

When multiple entities are involved, responsibility must be clearly allocated. This matters for research protocols and public-private partnerships. Determining controller, joint controller, and processor roles ensures accountability.

Appropriate safeguards are mandatory. Anonymisation should be used where possible, pseudonymisation where not. Direct identification is allowed only when strictly necessary and proportionate. Other safeguards include ethical oversight, secure processing environments, privacy-enhancing technologies, confidentiality arrangements, and conditions for further use.

A copy of the document is enclosed.

#DataProtection #GDPR #ScientificResearch #EDPB #Privacy #DataGovernance #ResearchEthics #HealthData #ClinicalTrials #DataPrivacy #ComplianceMatters #EUlaw #PrivacyByDesign #Pseudonymisation #InformedConsent

P.S. This is for academic discussion only.
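Pseudonymisation, one of the safeguards mentioned above, is commonly implemented with keyed hashing: direct identifiers are replaced by opaque tokens that stay consistent across datasets but cannot be reversed without the key. A minimal sketch (the key name and function are illustrative; a real deployment also needs key governance, rotation, and access control):

```python
import hmac
import hashlib

# Assumption: this key is managed separately from the research dataset,
# e.g. by an institutional key service. Whoever holds it can re-link
# tokens to people, so it must never live next to the pseudonymised data.
PSEUDONYM_KEY = b"replace-with-a-securely-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Map a direct identifier (name, email, patient number) to a stable
    opaque token, keeping records linkable without exposing the person."""
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

token = pseudonymise("jane.doe@example.org")  # same input, same token
```

A keyed HMAC rather than a plain hash matters here: with a bare SHA-256, anyone could hash a guessed email and check for a match, which would defeat the safeguard.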
-
Ethical UX: Key Principles for the User Research Phase

In the rush to innovate, are we truly respecting our users? User research is the foundation of great UX design, but it’s also where ethical lines can be blurred—misleading participants, misusing data, or conducting biased research can lead to harmful consequences. Ethical UX isn’t just about following rules—it’s about building trust, transparency, and responsibility into the research process. When we engage with users, we must ensure that our methods are fair, inclusive, and respectful.

Key Ethical Principles in User Research

✅ Informed Consent – Always explain the purpose of the research, how the data will be used, and get clear consent before proceeding. Users should never feel pressured to participate.
✅ Privacy & Confidentiality – Protect user data at all costs. Anonymize responses when necessary and ensure data is stored securely. Never collect more data than needed.
✅ Avoid Bias – Frame questions neutrally to prevent leading participants toward a specific answer. Ensure diverse representation to avoid skewed insights.
✅ Respect Participants’ Time – Keep research sessions efficient and valuable. Avoid overburdening users with lengthy interviews or irrelevant questions.
✅ Diversity & Inclusivity – Ensure research includes people from various backgrounds, abilities, and perspectives. Excluding voices leads to exclusionary designs.
✅ Transparency & Honesty – Be upfront about the research objectives. Never manipulate data or mislead participants to fit business goals.
✅ Minimize Harm – Psychological safety matters. Avoid sensitive topics or scenarios that could cause distress or discomfort to users.
✅ The Importance of NDAs in Research – A Non-Disclosure Agreement (NDA) is crucial, whether you’re a freelancer, working with startups, or part of a large enterprise. It ensures that sensitive user insights, proprietary information, and project details are kept confidential.
Without an NDA, businesses risk data breaches, trust violations, and potential misuse of research findings.

💡 Why does this matter? Ethical UX is not just a practice—it’s a commitment to designing for people, not against them. When we integrate ethical principles into our research, we build products that empower users rather than exploit them.

Join the Movement for Ethical UX! At WorldUXForum, we champion Ethical UX—designing with integrity and respecting user choices. 🚀 Join us in building a transparent, ethical digital future. Let’s create experiences that truly serve users.

#UX #UXIndia #India #Design #Ethics #EthicalUX #UserResearch #UXEthics #WorldUXForum #UserExperience #HumanCenteredDesign #NDA #DataPrivacy
-
Product Discovery posts always forget to mention one thing that can actually get a company sued: *anyone who talks to users* is responsible for protecting their data.

It’s a bigger topic among UX researchers because ethics matter. And we don’t want to be yelled at by Legal. But anyone talking to and recording customers has to handle customer data the right way. Before running discovery work, you should actually *know how to handle customer data*.

Here’s how to protect user information in your Product Discovery work (and stay friends with your legal team):

1️⃣ Always ask for consent. Tell participants what the research is for, that you’d like to record, and how you will use their data and recordings.
2️⃣ Respect customers’ consent choices. If they don’t want to be recorded, then please... don’t record.
3️⃣ Hide customers’ contact details if you invite people from multiple teams to observe live sessions. Instead of inviting your customer and team members to the same calendar event, send 2 different invites: one event between you and the participant, a second between you and observers.
4️⃣ Anonymize participants in documentation. Use first names only and no contact details in any documents the full org can access. Keep a single file with participants’ personally identifiable information (PII), such as email addresses, and password-protect it. Limit access to the people who really need it. Keeping these details in one place makes them easy to delete if a participant asks you to.
5️⃣ Have a quick chat with any legal team member on staff. Check if you need to take extra steps beyond this list based on your team’s industry requirements.

--

LinkedIn feeds can be pretty cluttered. If you found this helpful, consider reposting ♻️ to help your followers find something they can actually use.

#startups #ux #productmanagement #productdiscovery #uxresearch
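The single-mapping-file idea in step 4️⃣ could be sketched like this in Python (the file name, ID format, and function name are placeholders; actual access restriction comes from your storage permissions, not from the code):

```python
import csv
import secrets

def register_participant(name: str, email: str, mapping_path: str) -> str:
    """Assign an opaque participant ID and append the PII to a single,
    access-restricted mapping file. Only the returned ID goes into notes
    and documents the wider org can see; deleting one row from the
    mapping honours a participant's removal request."""
    participant_id = "P-" + secrets.token_hex(4)  # e.g. "P-3fa9c21b"
    with open(mapping_path, "a", newline="") as f:
        csv.writer(f).writerow([participant_id, name, email])
    return participant_id
```

Usage: call it once per recruit, reference only the returned ID (plus a first name, per step 4️⃣) in shared research docs, and keep `mapping_path` in password-protected, access-limited storage.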