🌻 Designing For Trust and Confidence in AI (Google Doc) (https://smashed.by/trust), a free 1.5h deep dive into how trust emerges and how to design for autonomy, risk, confidence and guardrails — with all videos, slides and examples in one single place. Share with your friends and colleagues — no strings attached! ♻️

Google Doc (slides, videos, links): https://smashed.by/trust
All slides (PDF): https://lnkd.in/dsq2BAJJ
Full 1.5h video recording: https://lnkd.in/d72b66Qa
Zoom video backup: https://lnkd.in/dZJzCnZh

Key takeaways:
1. Trust doesn't emerge by default — it must be earned.
2. Trust means strongly believing, despite uncertainty.
3. It's when a system is competent, predictable and aligned.
4. It also means transparency about its limitations and capabilities.
5. AI feature retention often plummets due to a lack of confidence.
6. Trust isn't linear: it takes time to build and drops rapidly after failures.
7. Most products don't want users to fully rely on them → complacency.
8. Trust requires understanding + success moments + habit-building.
9. It thrives at the intersection of perceived value + low cognitive effort.
10. We need to "calibrate" trust to avoid over-reliance and aversion.
11. Transparency only builds trust if users can verify the output.
12. Users must feel in control: able to validate, shape and override output.
13. Users have low tolerance for mistakes if AI acts on their behalf.
14. High autonomy + high risk → human intervention is non-negotiable.
15. Start with human oversight, and increase autonomy as trust grows.
16. Perceived usefulness + ease of use are the primary drivers of AI adoption.
17. The biggest risk to effort is a blank page → it leads to open-intent paralysis.
18. Confidence builds through frequent use, not through "blind" trust.
19. Confidence scores are insufficient to help people make a decision.
20. AI might absorb cognition, but humans inherit the responsibility.

Design patterns:
1. Link to specific fragments, not general sources.
2. Show the distribution of opinions, not a final answer.
3. Use structured presets to help articulate complex intents.
4. Rely on buttons/filters for precise control and tweaking.
5. Show sandbox previews to help users understand outcomes.
6. For high-stakes scenarios, design approval steps and flows.
7. Explicitly label the assumptions made during processing.
8. Replace confidence scores with actions and requests for review.
9. Embed AI features into existing workflows where work happens.
10. Proactively ask for context around the task a user wants to do.
11. Reduce the effort of articulation with prompt builders/tasks.

Recorded by yours truly with the wonderful UX community last week. And a huge *thank you* to everybody sharing their work, findings and insights for all of us to use. 🙏🏼 🙏🏾 🙏🏾 ↓
User Experience and Data Privacy
Explore top LinkedIn content from expert professionals.
-
66% of AI users say data privacy is their top concern. What does that tell us?

Trust isn't just a feature. It's the foundation of AI's future. When breaches happen, the cost isn't measured in fines or headlines alone; it's measured in lost trust.

I recently spoke with a healthcare executive who shared a haunting story: after a data breach, patients stopped using their app, not because they didn't need the service, but because they no longer felt safe.

This isn't just about data. It's about people's lives: trust broken, confidence shattered.

Consider the October 2023 incident at 23andMe: unauthorized access exposed the genetic and personal information of 6.9 million users. Imagine seeing your most private data compromised.

At Deloitte, we've helped organizations turn privacy challenges into opportunities by embedding trust into their AI strategies. For example, we recently partnered with a global financial institution to design a privacy-by-design framework that not only met regulatory requirements but also restored customer confidence. The result? A 15% increase in customer engagement within six months.

How can leaders rebuild trust when it's lost?

✔️ Turn Privacy into Empowerment: Privacy isn't just about compliance. It's about empowering customers to own their data. When people feel in control, they trust more.

✔️ Proactively Protect Privacy: AI can do more than process data; it can safeguard it. Predictive privacy models can spot risks before they become problems, demonstrating your commitment to trust and innovation.

✔️ Lead with Ethics, Not Just Compliance: Collaborate with peers, regulators, and even competitors to set new privacy standards. Customers notice when you lead the charge for their protection.

✔️ Design for Anonymity: Techniques like differential privacy keep sensitive data safe while enabling innovation. Your customers shouldn't have to trade their privacy for progress.

Trust is fragile, but it's also resilient when leaders take responsibility. AI without trust isn't just limited; it's destined to fail.

How would you regain trust in this situation? Let's share and inspire each other 👇

#AI #DataPrivacy #Leadership #CustomerTrust #Ethics
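The post names differential privacy without showing the mechanism. As a minimal sketch (not Deloitte's implementation), here is the classic Laplace mechanism for releasing a noisy count; the function name `dp_count`, the example count, and the epsilon value are illustrative assumptions.

```python
import random

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise of scale sensitivity/epsilon.

    The difference of two i.i.d. exponential draws with rate
    epsilon/sensitivity follows a Laplace(0, sensitivity/epsilon) distribution.
    """
    rate = epsilon / sensitivity
    noise = random.expovariate(rate) - random.expovariate(rate)
    return true_count + noise

# One person joining or leaving a dataset changes a count by at most 1
# (sensitivity = 1), so noise at scale 1/epsilon masks any individual.
print(dp_count(true_count=1204, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; the analyst sees a useful aggregate while no single record can be inferred from the output.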
-
I used Google Forms for my bachelor's research. And now I realize I shouldn't have. Not because I was careless, but because I didn't know better. None of us did.

In India, almost every psych or social work student I knew used Google Forms. It was free, easy, and accessible. We thought we were doing it right. But once I started my master's in Germany, I noticed something strange: no one here uses Google Forms. Not even for tiny surveys.

Why? Google stores form responses on servers mostly located in the U.S., meaning researchers outside the U.S. have little control over where their participants' data goes or how it's protected. When you're collecting personal or sensitive information, this lack of control becomes a serious ethical, and sometimes legal, concern.

That hit me hard. Back then, people trusted me with their stories. And I unknowingly put that trust at risk. I'm not sharing this to blame anyone. I'm sharing it because we're often not taught what ethical research actually looks like.

So here's what I wish someone had told me earlier: if you're collecting data from people, especially in psychology or social work, privacy is not optional.

There are a few alternatives available:
🔹 Zoho Survey: Free, Indian company, better data protection.
🔹 LimeSurvey: Open-source, widely used in academia.
🔹 Nextcloud Forms: Privacy-first, great if your institution supports it.
🔹 SurveySparrow: Also based in India. Good if you're not collecting highly sensitive data.
🔹 Jotform: A form builder that feels like Google Forms but with more control. Just double-check where the data is stored.

And if you must use Google Forms:
• Be transparent: let participants know where their data will be stored.
• Avoid collecting sensitive info.
• Download and delete data from the platform ASAP.

Research is not just about responses. It's also about respecting the people who respond. If you're a student reading this, I hope it helps you take one step closer to doing research that's not just smart, but safe.
-
League of Legends went down 👩🏻💻👀 not because of a breach, but because a security certificate expired. Here's why that matters more than you think 🎮⏳

Not all security incidents involve hackers breaking in. Some happen because something quietly expires. This week, League of Legends experienced an outage after a security certificate expired, preventing players from connecting. No malware. No exploit. Just a missed renewal, and suddenly a global platform was offline.

This is a powerful reminder: availability is part of security.

Why it matters 🔍⚠️
Security certificates are foundational trust mechanisms. When they expire, systems don't just become "less secure"; they can stop working entirely. Yet certificate management is still treated as a background task, often manual, fragmented, or poorly monitored.

At scale, something as simple as an expired cert can:
• Break authentication flows
• Block secure connections
• Cause widespread outages
• Erode user trust instantly

How you can use this 🧠🛡️
If you're learning or working in cybersecurity, remember: defenders don't only protect against attackers, they protect against operational blind spots.

In your labs or real environments, ask yourself:
👉 Where are we relying on "set it and forget it" security controls?
👉 What would fail if a cert, key, or token silently expired tomorrow?
👉 Do we detect expiry before users feel it? (A minimal check is sketched below.)

Many real-world incidents aren't flashy breaches. They're quiet oversights with loud consequences. Security maturity isn't just about stopping attacks, it's about maintaining trust over time.

Would love to hear your thoughts in the comments 👇

🌟 Repost to share with your network 🌟

This is part of my ongoing Cyber News Bytes series, where I break down real-world security stories and the lessons behind them. 💡 Subscribe to my newsletter for weekly cybersecurity news & insights straight to your inbox: https://lnkd.in/e2NaVZZj
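To make "do we detect expiry before users feel it?" concrete, here is a minimal sketch of a TLS certificate expiry check using only Python's standard library. The host list and the 30-day renewal threshold are illustrative assumptions, not details from the Riot incident.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return the number of days until the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is formatted like 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in ["example.com"]:  # swap in your own endpoints
        days = cert_days_remaining(host)
        status = "OK" if days > 30 else "RENEW NOW"  # 30-day threshold is an assumption
        print(f"{host}: {days} days left [{status}]")
```

Run something like this on a schedule and alert on the threshold; the point is simply that expiry is observable long before users feel it.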
-
Trust is the real bottleneck to AI impact, not GPUs or models.

I went through the SAS Data and AI Impact Report. It is one of the clearest looks at what actually drives outcomes in the enterprise. Here is the short version. You can also find the complete report here: https://lnkd.in/d7XfVKNM

What the report highlights
• Generative AI usage is up, and agentic AI is rising, but traditional ML still underpins real production work.
• Most teams say they "trust" AI, yet many lack the governance, explainability, and monitoring needed to prove it. That gap lowers ROI.
• ROI improves when goals are value-focused. Customer experience, growth, resilience, and time to value outperform pure cost cutting.
• The biggest blockers are weak data foundations, inconsistent governance, and skills gaps.
• Maturity varies by industry, but leaders share the same pattern: centralized data, accountable governance, and an end-to-end AI lifecycle.

Why this helps enterprises
• It gives a benchmark. Use trust and impact indices to see where you stand and where to invest next.
• It links trust to hard results. Governance is not a checkbox. It is how you improve returns and reduce surprises.
• It focuses on foundations. Good data, clear policy, and lifecycle oversight beat ad hoc pilots.

My take
• Move from "save cost" to "create value." Prioritize customer experience, decision speed, and new revenue paths.
• Treat trust like an operating system. Build a reusable layer for governance, explainability, bias testing, evaluation, and monitoring. Use it across all use cases.
• Prepare for agentic AI with data work first. Consolidate data, define permissions, and track lineage. Agents will only be as good as the operating environment you give them.
• Invest in skills. Teach builders evaluation and safety. Teach business teams how to measure decision quality.
• Start small, measure fast, scale what works. Make ROI reviews a habit, not a milestone.

Why this matters now
AI has moved from pilots to core workflows. If trust lags, risk scales faster than value. If trust leads, value compounds. This report offers a practical map for leaders to shift from enthusiasm to impact.

If you lead data or AI in your company, block time with your team this week. Align on foundations, governance, and near-term value. Then execute.

#data #ai #agenticai #sas #theravitshow
-
The most misused discovery question on the planet: "How does that impact you personally?"

At best? Buyers roll their eyes. At worst? They lose trust and never call you back.

4 questions to use instead:

1. "How is that showing up in the business?"
After you explore a challenge with a buyer, try asking this. Usually, they answer with metrics and other issues that have financial impact. A good thing.

2. "Who else is impacted by that, and how?"
Asking how YOU are impacted (personally) feels like "too much." But when you ask how others are impacted, it's less intrusive. You're simply spot-checking the consequences across the business. Plus, this sets you up for multi-threading.

3. "What are some of the ripple effects this challenge is having on the business?"
This is similar to "what's the impact?", but the wording is varied. It's not what buyers expect, and it feels more sophisticated. The intent comes across as analyzing the health of the business, rather than manipulating the individual.

4. "I've found that most challenges like the one you're sharing with me create OTHER challenges somewhere else in the business. Do you see that happening here?"
Again, the intent comes across better. A "trusted advisor" asks questions like this, because it feels like you're helping them analyze their business, not extract personal pain you can lord over them later.

Takeaway: Asking "How does this impact you personally?" can have its place. But it's not for a first call. You can only get away with asking that after you've built trust. If you've done that, then fire away. Until then, try these four alternative questions first.
-
How To Handle Sensitive Information in Your Next AI Project

It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintain trust and comply with privacy regulations.

Here are 5 best practices to follow:

1. Identify and Classify Sensitive Data
Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as the GDPR or the California Consumer Privacy Act.

2. Minimize Data Exposure
Only share the necessary information with AI endpoints. For PII, such as names, addresses, or social security numbers, consider redacting this information before making API calls, especially if the data could be linked to sensitive applications, like healthcare or financial services. (A minimal redaction sketch follows this list.)

3. Avoid Sharing Highly Sensitive Information
Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

4. Implement Data Anonymization
When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

5. Regularly Review and Update Privacy Practices
Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.

Remember, safeguarding sensitive information is not just about compliance — it's about earning and keeping the trust of your users.
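As a minimal illustration of practice 2, here is a hypothetical regex-based redaction pass applied before an API call. The patterns cover only emails, US-style SSNs, and card-like digit runs; a production system would use a dedicated PII detection service, and `redact` and the sample prompt are invented for this sketch.

```python
import re

# Hypothetical patterns: emails, US-style SSNs, and card-like digit runs.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with typed placeholders before the API call."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

raw = "Contact jane.doe@example.com (SSN 123-45-6789) about her claim."
safe = redact(raw)
print(safe)  # this redacted string, not `raw`, is what goes to the AI endpoint
```

Typed placeholders like [EMAIL REDACTED] preserve enough structure for the model to reason about the text without ever seeing the underlying values.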
-
A hairdresser and a marketer walked into a bar. Hold on… haircuts and marketing? 🤔

Here's the reality: consumers are more aware than ever of how their data is used. User privacy is no longer a checkbox. It is a trust-building cornerstone for any online business. 88% of consumers say they won't share personal information unless they trust a brand.

Think about it: every time a user visits your website, they're making an active choice to trust you or not. They want to feel heard and respected. If you're not prioritizing their privacy preferences, you're risking their data AND their loyalty.

We've all been there: asked for a quick trim and got VERY short hair instead. Using consumers' data without consent is just like cutting hair you shouldn't cut. That horrible haircut ruined our mood for weeks. And a poor data privacy experience can drive customers straight to your competitors, leaving your shopping carts empty.

How do you avoid this pitfall?
- Listen to your users. Use consent and preference management tools such as Usercentrics to give customers full control of their data.
- Be transparent. Clearly communicate how you use their information and respect their choices.
- Build trust. When users feel secure about their data, they're more likely to engage with your brand.

Make sure your website isn't alienating users with poor data practices. Start by evaluating your current approach to data privacy by scanning your website for trackers.

Remember, respecting consumer choices isn't just an ethical practice. It's essential for long-term success in e-commerce. Focus on creating a digital environment where consumers feel valued and secure. Trust me, it will pay off! 💰
-
The TikTok privacy debate did not end with the US agreement. It has escalated.

TikTok has updated its US Privacy Policy. It is now one of the most aggressive data collection regimes of any mainstream consumer platform. It explicitly acknowledges the collection and processing of sensitive personal information under US state privacy laws. Named directly:
• Racial or ethnic origin.
• Religious or philosophical beliefs.
• Mental and physical health data.
• Sexual orientation.
• Transgender or nonbinary status.
• Citizenship or immigration status.
• Precise location data.

The policy goes further. TikTok is collecting far more than what users consciously share. Under the updated policy, it gathers what you provide, what it observes automatically, and what it receives from third parties. That includes account details and identity verification documents, private messages, drafts and unpublished content, AI prompts and interactions, clipboard content, purchase and payment data, contact lists and social graphs, and an extensive set of technical signals such as device identifiers, keystroke patterns, battery state, audio configurations, and activity tracked across devices. This is not incidental data leakage. It is formalized, permitted, and documented.

Images and video are treated as analyzable environments. TikTok states that it "identifies objects and scenery, detects faces and other body parts, extracts spoken words, and collects metadata describing how, when, where, and by whom content was created." Post a photo near the Golden Gate Bridge and you are not just sharing a moment. You are generating structured data about place, time, environment, and your body. Photos and videos are not just content. They are raw material for computer vision, biometric analysis, and location inference.

TikTok will use all of the collected data, and it maintains the right to sell all of it to interested third parties, from vendors to the federal government.

Leaders must act on this immediately. Privacy policies are not background reading. They are power documents. When they change, accountability shifts with them.

If you are a user, a parent, a school, a youth-facing organization, a nonprofit, or a public institution that uses TikTok as a communications channel, this update changes the governance calculus. Engagement is not a neutral act. It carries serious legal and ethical obligations tied to data protection, duty of care, and institutional risk.

The new policy deserves a close reading. At this stage of platform power and scale of data collection, policy literacy is a governance responsibility, not a personal preference.

Read the policy here: https://lnkd.in/ejbm8THx