Last night I was doing some diligence on a company that recently filed for bankruptcy, just browsing their website to get a sense of the assets. No forms. No sign-up. Just a passive visit.

Minutes later, I got an email. Not to my personal Gmail. Not to anything I've ever typed into their site. It went to an industry email account linked to research work, a shared inbox that forwards to multiple people. No one gave them permission. Yet there it was.

How? Most likely a mix of analytics platforms (Klaviyo, GA4, Meta, etc.) matching my browser session via cookie syncing, hashed email cross-referencing, or session fingerprinting, all made possible because I was logged into a Google session and browsing with Chrome. I didn't opt in. But the systems behind the scenes decided I had.

Now imagine if that hadn't been a jewelry company but something more sensitive: financial, legal, medical, even porn. No action, just curiosity, and suddenly your info is in their CRM. Maybe even forwarded to others.

This isn't some theoretical privacy debate. It happened last night. And while this might be technically allowed under U.S. law, it's exactly the kind of behavior that gave rise to GDPR in Europe. If we keep going down this road of automating outreach based on passive signals, without consent, we're going to see real backlash: regulatory, reputational, and otherwise.

As retailers, we live in customer data. It's powerful. But there's a line between using data to serve customers and using data to trap them. The first builds trust. The second breaks it. Just because you can doesn't mean you should. That's true in life, and it's especially true in marketing.
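For anyone unfamiliar with the mechanics, here is a minimal sketch of how hashed-email matching can connect a passive visit to a CRM record. The identity graph, contact ID, and email address below are hypothetical stand-ins, but the core trick (normalizing and hashing an address so that two parties can match records without ever exchanging the raw email) is how these pipelines typically work.

```python
import hashlib

def hashed_email_key(email: str) -> str:
    """Normalize and hash an email the way identity vendors typically do."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical identity graph: hashed emails the vendor already holds,
# collected from past sign-ups, list uploads, and cookie-sync partners.
identity_graph = {
    hashed_email_key("research-team@example.org"): "contact_48213",
}

def resolve_visitor(cookie_synced_hash: str) -> str | None:
    """Match a hash observed during a passive page view to a CRM contact."""
    return identity_graph.get(cookie_synced_hash)

# A passive visit arrives carrying a hash obtained via cookie syncing:
observed = hashed_email_key("research-team@example.org")
print(resolve_visitor(observed))  # -> "contact_48213": the email follows
```

The unsettling part is visible right in the sketch: the site never needs you to type anything. It only needs a partner who already holds the same hash.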
Why passive data mining erodes trust
Summary
Passive data mining refers to collecting information about people without their knowledge or consent, often in the background as they use websites or apps. This practice can erode trust because users feel their privacy is violated and worry about how their data is being used.
- Promote transparency: Inform users clearly about what data is being collected and how it will be used to prevent misunderstandings.
- Prioritize consent: Always ask for explicit permission before gathering or using personal data, giving people control over their information (a minimal sketch of what this looks like in code follows this list).
- Build accountability: Establish visible and enforceable policies that show users their privacy concerns are taken seriously and their data is protected.
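To make the consent point concrete, here is a minimal sketch of a consent-gated event wrapper; `ConsentStore` and `send_to_pipeline` are hypothetical stand-ins for whatever consent-management and analytics plumbing a real stack uses.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentStore:
    """Records which purposes each user has explicitly opted into."""
    grants: dict[str, set[str]] = field(default_factory=dict)

    def grant(self, user_id: str, purpose: str) -> None:
        self.grants.setdefault(user_id, set()).add(purpose)

    def allows(self, user_id: str, purpose: str) -> bool:
        return purpose in self.grants.get(user_id, set())

consent = ConsentStore()

def send_to_pipeline(user_id: str, event: dict) -> None:
    """Hypothetical analytics sink; a real one would ship the event out."""
    print(f"recorded for {user_id}: {event}")

def track_event(user_id: str, event: dict, purpose: str = "analytics") -> None:
    """Drop the event unless the user has opted into this purpose."""
    if not consent.allows(user_id, purpose):
        return  # no consent, no collection: the default is silence
    send_to_pipeline(user_id, event)

track_event("u1", {"page": "/pricing"})   # silently dropped
consent.grant("u1", "analytics")          # explicit opt-in
track_event("u1", {"page": "/pricing"})   # now recorded
```

The design choice worth copying is the default: with no recorded opt-in, the event is dropped, not queued for later.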
**When Zero Trust Meets Zero Consent:** Zscaler's CEO recently stated that the company leverages its massive volumes of customer logs, including structured and unstructured data such as full URLs, to train AI models.

**Zscaler's brand promise = "Zero Trust."** Using customer logs for AI training without ironclad, customer-controlled boundaries contradicts the trust model.

From a cybersecurity professional's perspective, this raises serious concerns. Customer logs are not just "metadata." They often contain sensitive footprints of an organization's activities: internal applications, credentials embedded in query strings, healthcare portals, financial systems, even project names that reveal strategy. Treating these as raw material for AI training blurs the line between securing data and exploiting it.

There are three fundamental problems here:

1. **Purpose Limitation:** Customers adopt platforms like Zscaler for protection, not to become a dataset for someone else's AI. Without explicit consent, this risks crossing into regulatory non-compliance under GDPR, HIPAA, PCI DSS, and beyond.
2. **Data Leakage Risk:** Modern AI models are vulnerable to inversion and extraction attacks. Training on sensitive logs could inadvertently make critical information retrievable through the very models designed to help secure enterprises.
3. **Trust Contradiction:** "Zero Trust" as a philosophy is about minimizing exposure and enforcing least privilege. Mining customer logs for AI purposes runs counter to that principle.

AI has enormous potential to strengthen security, but how we train these models matters. Ethical guardrails, provable anonymization, customer control, and transparency are non-negotiable.

**Security vendors must ask themselves: are we empowering our customers, or are we quietly eroding the trust they've placed in us?**

#ZeroTrust #AI #ZScaler #CyberSecurity #AISecurity
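To make the query-string risk concrete, here is a minimal sketch of the kind of redaction a vendor would need before logs could even be considered for secondary use. The parameter denylist is illustrative, not exhaustive, and string redaction alone falls far short of the provable anonymization the post calls for.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative denylist: parameter names that commonly carry secrets.
SENSITIVE_PARAMS = {"token", "session", "api_key", "password", "email", "ssn"}

def scrub_url(url: str) -> str:
    """Redact sensitive query parameters before a URL leaves the log store."""
    parts = urlsplit(url)
    cleaned = [
        (k, "REDACTED" if k.lower() in SENSITIVE_PARAMS else v)
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
    ]
    return urlunsplit(parts._replace(query=urlencode(cleaned)))

print(scrub_url("https://hr.internal.example.com/portal?user=jdoe&token=eyJhbGci"))
# https://hr.internal.example.com/portal?user=jdoe&token=REDACTED
```

Note that even the scrubbed URL still leaks the internal hostname and the username, which is exactly why "we anonymize the logs" needs to be demonstrated, not asserted.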
Walking through New York, it's impossible not to notice the cameras. They're on corners, in the subway, outside apartment buildings, and seemingly everywhere. Most of us assume they're there to keep an eye out for trouble, and then, because we're not causing trouble, we forget about them. What we don't see is just how much else is being captured, stored, and analyzed in the background.

The #NYPD is compiling vast databases that go far beyond street cameras. Social media posts, license plate readers, cellphones seized during stops, even the online activity of our teenage children who have never committed a crime are being swept into a system that tracks and predicts(!) where people go and who they know. Once a young person is flagged, it can mean near-constant monitoring, repeated questioning by authorities, and potentially lasting consequences that have little to do with actual public safety.

In Europe, lawmakers have moved to limit this kind of #data collection, recognizing that privacy and transparency are essential to democratic life. In the United States, we are moving in the opposite direction. #Surveillance is quickly becoming the default. When public institutions lean too heavily on secret surveillance, they risk eroding the kind of trust communities depend on for real safety. https://lnkd.in/eyaew6Rf
People turn to AI tools with some of their most personal questions. Financial decisions, health concerns, workplace problems. The reasonable assumption is that those conversations stay private. A new lawsuit against Perplexity AI suggests that assumption deserves much closer scrutiny.

🔍 The claim is serious. A lawsuit filed in US federal court alleges that Perplexity embedded tracking technology that operates in the background, quietly routing user activity data to Meta and Google whenever someone opens the app. The lawsuit goes further, alleging the same tracking continues even in incognito mode. Perplexity has said it has not been served with any lawsuit matching that description and cannot verify the claims. The companies named have denied wrongdoing. But the allegations alone raise questions that the AI industry cannot afford to dismiss.

👀 This is not an isolated incident for Perplexity. The company has faced a string of legal disputes over the past year, including claims over the unauthorised use of media content and the use of Reddit user data for AI training. A pattern of legal challenges around data practices is not the same as proven misconduct. But it does signal where accountability gaps exist and where scrutiny is overdue.

⚖️ The broader issue is trust, and how quickly it erodes. Users are not reading the fine print on data agreements before asking an AI tool for help navigating a difficult situation. They are operating on a reasonable expectation of privacy that the industry has so far been left largely to define for itself. That is precisely where governance frameworks need to be explicit, enforceable, and visible to the people most affected.
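For readers who want to check this kind of claim rather than take anyone's word for it, the usual approach is to route the app through an intercepting proxy and look at where traffic actually goes. Here is a minimal sketch of the triage step, assuming you already have a list of captured request URLs exported from such a proxy; the app domain and captured URLs below are hypothetical examples.

```python
from collections import Counter
from urllib.parse import urlsplit

# Hypothetical capture: outbound request URLs exported from a proxy session.
captured_requests = [
    "https://api.example-ai-app.com/v1/query",
    "https://graph.facebook.com/v19.0/app_events",
    "https://www.google-analytics.com/g/collect?en=page_view",
    "https://graph.facebook.com/v19.0/app_events",
]

# Domains the app itself is expected to talk to (assumed, for illustration).
FIRST_PARTY_SUFFIXES = ("example-ai-app.com",)

def third_party_hosts(urls: list[str]) -> Counter:
    """Count requests to hosts outside the app's own domains."""
    counts: Counter = Counter()
    for url in urls:
        host = urlsplit(url).hostname or ""
        if not host.endswith(FIRST_PARTY_SUFFIXES):
            counts[host] += 1
    return counts

for host, n in third_party_hosts(captured_requests).most_common():
    print(f"{host}: {n} request(s)")
# graph.facebook.com: 2 request(s)
# www.google-analytics.com: 1 request(s)
```

A host count proves nothing about intent on its own, but it turns "trust us" into a conversation about specific, observable traffic.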