Online Privacy Tools

Explore top LinkedIn content from expert professionals.

  • View profile for Barbara C.

    Board & C-suite advisor | AI strategy, growth, transformation | Cloud, IoT, SaaS | Former CMO & MD | Ex-AWS, Orange

    15,099 followers

AI reaches a milestone: privacy by design at scale

    Google AI and DeepMind have announced VaultGemma, a 1B-parameter, open-weight model trained entirely with differential privacy (DP).

    Why does this matter? Most large LLMs carry inherent privacy risks: they can memorise and reproduce fragments of their training data. A serious issue if it’s a patient record, bank detail, or private correspondence. VaultGemma's training method - DP-SGD, which limits how much influence any datapoint has and adds noise to blur details - ensures that no single piece of personal data included in the training could later be exposed (a minimal sketch of the mechanism follows below). The result: a mathematical guarantee of privacy, the strongest ever achieved at this scale.

    The opportunities
    In healthcare, finance, and government, the implications are immediate:
    🔸 Hospitals can analyse patient data without risking disclosure.
    🔸 Banks can detect fraud or assess credit risk within GDPR rules.
    🔸 Governments can train models on citizen data while meeting privacy-by-design requirements.
    In each case, sensitive data shifts from a liability to an asset that can drive innovation.

    The challenges
    1️⃣ Performance: VaultGemma is less accurate than the frontier LLMs, closer to the performance of GPT-3.5. This is the cost of stronger privacy: trading short-term capability for long-term protection.
    2️⃣ Jurisdiction: The model guarantees privacy, but not sovereignty. Built by an American provider, it remains subject to U.S. law. Under the CLOUD Act, American authorities can compel access even to data hosted abroad.

    How this compares
    💠 Gemini has strong capability and multimodality, but privacy protections rest on corporate policy.
    💠 ChatGPT-5 leads in performance, but is closed & under U.S. jurisdiction.
    💠 Claude is positioned as “safety-first,” yet its privacy controls are policy-based, not mathematical.
    By contrast, VaultGemma offers provable privacy. The trade-off is weaker performance and continued U.S. jurisdiction - but it moves the conversation from “trust us” to “prove it.”

    Leaders now have a wider choice for adopting AI:
    ✔️ Privacy-first models: trade accuracy for provable privacy. Suited to highly regulated sectors and SMEs needing compliance. Lower cost, limited customisation, under U.S. law.
    ✔️ Frontier LLMs: cutting-edge capability at scale. Privacy rests on policy, with jurisdiction split across U.S., Chinese, or EU law. Highest-priced via usage-based APIs, but with the broadest ecosystems and integrations.
    ✔️ Sovereign alternatives: slower today, but with greater control of data and law. Could adopt privacy-by-design methods like VaultGemma's, though these require heavy upfront investment. Higher initial cost, offset by customisation and long-term resilience.

    AI has reached a milestone: privacy by design is possible at scale. Leaders need to balance trust, compliance, performance, and control in their choices.

    #AI #ResponsibleAI #DataPrivacy #DigitalSovereignty #Boardroom
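
    To make the mechanism concrete, here is a minimal sketch of the idea behind DP-SGD in PyTorch: clip each example's gradient so no single datapoint has outsized influence, then add calibrated Gaussian noise. The tiny training step, `clip_norm`, and `noise_multiplier` are illustrative placeholders, not VaultGemma's actual training recipe.

    ```python
    # Minimal DP-SGD step: per-example gradient clipping + Gaussian noise.
    # Hyperparameters are illustrative, not VaultGemma's configuration.
    import torch

    def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1,
                    clip_norm=1.0, noise_multiplier=1.1):
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]

        # 1) Compute and clip each example's gradient individually, so no
        #    single datapoint can move the model by more than clip_norm.
        for x, y in zip(batch_x, batch_y):
            model.zero_grad()
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            loss.backward()
            total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
            scale = min(1.0, clip_norm / (total_norm.item() + 1e-6))
            for s, p in zip(summed, params):
                s += p.grad * scale

        # 2) Add noise calibrated to the clipping bound, then average:
        #    the noise is what blurs any individual's contribution.
        with torch.no_grad():
            for p, s in zip(params, summed):
                noise = torch.randn_like(s) * noise_multiplier * clip_norm
                p -= lr * (s + noise) / len(batch_x)
    ```

    In practice, libraries such as Opacus automate this per-example clipping and also track the resulting privacy budget across training.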

  • View profile for Bob Carver

    CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. Top Cybersecurity Voice

    52,731 followers

Apple CPU Flaw May Let Hackers Steal Your Data: 8 Ways To Stay Safe

    Security researchers have uncovered vulnerabilities in modern Apple CPUs that could let hackers extract sensitive information directly from your web browser. These attacks, known as FLOP and SLAP, exploit Apple's speculative execution—a feature designed to speed up processing—causing the CPU to reveal confidential data before correcting itself. This means that just by opening the wrong website, your Gmail inbox, Amazon order history, Google Maps location, or even your iCloud calendar events could be exposed to cybercriminals. Even worse, these attacks can happen remotely, without requiring any downloads, malware, or physical access to your device.

    1. Consider Disabling JavaScript For Untrusted Websites
    The FLOP and SLAP attacks rely on JavaScript running in your web browser. Temporarily disabling JavaScript in Safari or Chrome can help mitigate the risk. However, be aware that many websites rely on JavaScript for functionality, so this might impact your browsing experience.
    In Safari: Open Settings > Safari > Advanced, then disable JavaScript. (Note: This may break some website functionality.)
    In Chrome: Use extensions like NoScript or uBlock Origin to selectively block JavaScript on untrusted sites.

    2. Keep Your Browser And Operating System Updated
    Make sure you:
    Regularly update macOS and iOS by enabling automatic updates (a small script to verify this on macOS follows below).
    Keep Safari and Chrome updated to their latest versions, as browser vendors may introduce mitigations before Apple releases a CPU-level fix.

    3. Use A Privacy-Focused Browser
    Browsers like Brave, DuckDuckGo, and Firefox focus on privacy and security, providing additional layers of protection against tracking and browser-based attacks.

    4. Enable Strict Privacy And Security Settings
    Enhance your browser security by:
    Blocking third-party cookies.
    Using private browsing mode to limit data exposure.
    Enabling enhanced tracking protection (available in Firefox and Brave).

    Please see the article for additional suggestions: https://lnkd.in/gx_AMHt4

    #cybersecurity #Apple #FLOP #SLAP
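
    Since the practical defence in point 2 is keeping automatic updates switched on, here is a small sketch that checks the relevant macOS preferences via `defaults read`. The preference keys reflect recent macOS versions and may differ on yours; treat this as a quick self-audit, not an official Apple tool.

    ```python
    # Quick check that macOS automatic updates are enabled.
    # Key names vary across macOS versions -- verify on your system.
    import subprocess

    DOMAIN = "/Library/Preferences/com.apple.SoftwareUpdate"
    KEYS = ["AutomaticCheckEnabled", "AutomaticDownload",
            "AutomaticallyInstallMacOSUpdates", "CriticalUpdateInstall"]

    for key in KEYS:
        result = subprocess.run(["defaults", "read", DOMAIN, key],
                                capture_output=True, text=True)
        # `defaults` returns non-zero when the key is not set at all.
        value = result.stdout.strip() if result.returncode == 0 else "not set"
        print(f"{key}: {'enabled' if value == '1' else value}")
    ```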

  • View profile for Odia Kagan

    CDPO, CIPP/E/US, CIPM, FIP, GDPRP, PLS, Partner, Chair of Data Privacy Compliance and International Privacy at Fox Rothschild LLP

    24,664 followers

Regulators are coming after your tracking pixels.

    In the US, we are currently handling numerous pixel lawsuits and working with clients on compliance with wiretapping statutes, state privacy laws, and HIPAA in connection with pixel deployment. Now, Tobias Judin 🏳️🌈 and Datatilsynet in Norway are going after these too, with an investigation uncovering that websites often share sensitive information through their pixels without knowing it.

    6 points that apply in the US as well:
    🔹 Identify which tracking pixels, cookies, and other tracking tools your service uses; especially ones that use the info for their own purposes (this could be a "sale," or completely prohibited in the US if the data is sensitive). A first-pass scan is sketched below.
    🔹 Browsing data can be sensitive. Consider the types of people who use your service and what inferences can be drawn about them, directly or indirectly, based on their browsing history.
    🔹 Trackers on websites that target children are especially difficult because they require parental consent for deployment. In the US this has been enforced under COPPA.
    🔹 You need to give people a choice about the trackers. In the EU, this is pure consent; in the US this can be an opt-out unless the data is sensitive.
    🔹 You must provide accurate and understandable information about what the tracking tools do, and how they affect the individual and their privacy, as publicly as possible. This should be just-in-time but also in your privacy disclosures.
    🔹 You are responsible for the trackers on your website, even if your particular use of them is innocent. You will generally be the one facing enforcement.

    https://lnkd.in/ef83G5XR

    pic by ChatGPT
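
    As a rough starting point for the first item, the sketch below fetches a page and flags references to well-known pixel hosts. The host list is a tiny illustrative sample, not a tracker database, and pixels injected at runtime by JavaScript won't appear in raw HTML, so treat this as a first pass rather than an audit.

    ```python
    # First-pass scan for well-known tracking-pixel hosts in a page's HTML.
    # The host list is a small illustrative sample only.
    import re
    import urllib.request

    KNOWN_TRACKER_HOSTS = [
        "facebook.com/tr",          # Meta pixel endpoint
        "google-analytics.com",     # Google Analytics
        "googletagmanager.com",     # Google Tag Manager
        "snap.licdn.com",           # LinkedIn Insight Tag
        "static.ads-twitter.com",   # X/Twitter pixel
    ]

    def find_trackers(url: str) -> list[str]:
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        # Collect every src/href reference, then flag known tracker hosts.
        refs = re.findall(r'(?:src|href)=["\']([^"\']+)', html)
        return sorted({ref for ref in refs
                       for host in KNOWN_TRACKER_HOSTS if host in ref})

    if __name__ == "__main__":
        for hit in find_trackers("https://example.com"):
            print("possible tracking pixel:", hit)
    ```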

  • View profile for Rajeev Mamidanna Patro

    Fixing what Tech founders miss out - Brand Strategy, Market Positioning & Unified Messaging | Build your foundation in 90 days

    7,736 followers

The browser is often the easiest entry point for cyber threats.

    Here’s the best thing to help secure your browser: Remote Browser Isolation (RBI).

    WHY: RBI addresses vulnerabilities of traditional browser usage like:
    → Zero-hour phishing
    → Malware
    → Credential theft
    → Session hijacking
    → Browser extension vulnerabilities

    HOW (a toy sketch of the sanitisation idea follows below):
    → RBI separates users’ browsing activities from your network
    → Sessions are contained in a secure, remote environment
    → Web content is converted to safe code before it reaches the user
    → Only sanitized content is displayed, blocking potential threats
    → All this happens through your existing browser

    How CISOs should IMPLEMENT RBI:
    → Do an as-is browser posture assessment
    → Pilot RBI with key departments
    → Integrate RBI with proxies, firewalls, & security policies
    → Use browsing forensics to enhance your RBI strategy
    → Roll it out organization-wide once you're sure

    Are you securing your browser enough? If not, my team can guide you through a tailored approach with the Menlo Security Inc. RBI solution.

    P.S. How many times have you sensed that things were wrong while browsing or downloading? Share your experience in the comments!

    ----
    Hi! I’m Rajeev Mamidanna. I help CISOs strengthen their cybersecurity strategies + build authority on LinkedIn.
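
    For intuition only, here is a toy sketch of the "converted to safe code" step: a local proxy fetches the page remotely and strips active content before anything reaches the user's browser. Real RBI products (this is not Menlo Security's implementation) render pages in isolated cloud containers and stream a safe representation instead.

    ```python
    # Toy illustration of the RBI idea: fetch remotely, strip active
    # content, serve only sanitized markup. Crude by design -- a real RBI
    # product isolates the full rendering, not just <script> tags.
    import re
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def sanitize(html: str) -> str:
        # Drop script blocks and double-quoted inline event handlers.
        html = re.sub(r"(?is)<script.*?</script>", "", html)
        html = re.sub(r'(?i)\son\w+\s*=\s*"[^"]*"', "", html)
        return html

    class IsolatingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expects a full URL in the path, e.g. /https://example.com
            target = self.path.lstrip("/")
            raw = urllib.request.urlopen(target).read().decode("utf-8", "replace")
            body = sanitize(raw).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), IsolatingProxy).serve_forever()
    ```

    Try it by browsing to http://127.0.0.1:8080/https://example.com; the page arrives with its scripts already removed.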

  • View profile for Eugene Kaspersky

    CEO at Kaspersky

    45,689 followers

Location-broker data leak & the ballad of privacy

    So, a company called Gravy Analytics – a location-data broker – was hacked and suffered a major leak. But what does a “location data broker” do? These companies basically trade our data (yeah, yours and mine) received from mobile apps, ad networks, smart devices – even cars. So Gravy collected it, someone stole it, and now it’s out there.

    There were no names or IDs in the leak; however, it appears that with a little digital wizardry, hackers can de-anonymize real people – uncovering home addresses, workplaces, favorite shopping spots, and more. Only a slice of the stolen data has become public so far (the whole database appears to be massive), but yes – it covers the whole world.

    What can you do to decrease your geolocation footprint?
    1️⃣ Be picky with app permissions. Don’t grant location access unless it’s absolutely necessary.
    2️⃣ Tighten up your privacy settings. Limit data-sharing in the apps you use.
    3️⃣ Block background location tracking.
    4️⃣ Ditch unused apps. Fewer apps – fewer problems.
    5️⃣ Kill your ad ID. Disable it on iOS, or delete it on Android.
    6️⃣ Use anti-tracking tools.

    Let’s be real: online privacy isn’t something to be optimistic about. But that doesn’t mean ditching basic digital hygiene is a good idea.

    More about the story, as well as practical steps to protect your data – here: https://kas.pr/c99m

  • The Office of the Australian Information Commissioner has published the "Privacy Foundations Self-Assessment Tool" to help businesses evaluate and strengthen their privacy practices. This tool is designed for organizations that may not have in-house privacy expertise but want to establish or improve how they handle personal information.

    The tool is structured as a questionnaire and an action-planning section that can be used to create a Privacy Management Plan. It covers key #privacy principles and offers actionable recommendations across core areas of privacy management, including:
    - Accountability and assigning responsibility for privacy oversight.
    - Transparency through clear external-facing privacy notices and policies.
    - Privacy and #cybersecurity training for staff.
    - Processes for identifying and managing privacy risks in new projects.
    - Assessing third-party service providers handling personal data.
    - Data minimization practices and consent management for sensitive information.
    - Tracking and managing use and disclosure of personal data.
    - Ensuring opt-out options are provided and honored in direct marketing.
    - Maintaining an up-to-date inventory of personal data holdings.
    - Cybersecurity and data breach response.
    - Secure disposal or de-identification of data when no longer needed.
    - Responding to privacy complaints and individual rights requests.

    This self-assessment provides a maturity score based on the responses to the questionnaire, plus tailored recommendations to support next steps (a toy illustration of the questionnaire-to-score idea follows below).
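
    The OAIC's actual scoring model isn't reproduced here, so purely as a hypothetical illustration of how questionnaire answers can roll up into a maturity score and action items (the questions, answers, and weights below are invented for the example):

    ```python
    # Hypothetical questionnaire-to-maturity-score roll-up; not the OAIC's
    # actual scoring model. Questions and weights are invented examples.
    ANSWER_POINTS = {"yes": 2, "partially": 1, "no": 0}

    responses = {
        "A privacy officer is assigned": "yes",
        "External privacy policy is current": "partially",
        "Staff receive privacy training": "no",
        "Third-party providers are assessed": "partially",
    }

    earned = sum(ANSWER_POINTS[a] for a in responses.values())
    possible = 2 * len(responses)
    print(f"Maturity score: {round(100 * earned / possible)}/100")

    # Anything short of "yes" becomes an item for the management plan.
    for question, answer in responses.items():
        if answer != "yes":
            print(f"Action item: {question.lower()}")
    ```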

  • View profile for Laura Belmont

    GC @ The L Suite (TechGC) I Open Sourcing the GC Function

    4,409 followers

I've been waiting for this for my whole life . . . or at least for the last few years of the GenAI boom.

    In a win for AI governance enforced at the systems-architecture level, OpenAI just released an open-weight model called Privacy Filter. This is a specialized model designed to detect and mask PII that you can run locally. So instead of just having a policy telling employees "don't put PII in the LLM," you can now build this filter into your actual workflow to enforce that rule programmatically (a sketch of where such a filter slots in follows below).

    A few notes:
    🪪 It's released under the permissive Apache 2.0 License, meaning you can download, trial, and run it without onboarding and paying for a new tool.
    💻 It's small enough to run on consumer hardware (i.e., a laptop).
    🔒 It knows the difference between public information that should be preserved and private data that needs to be masked, across 8 specific categories.

    Are others as excited about this as I am? I'm excited to test this out (with dummy data, of course!).

    More here >> https://lnkd.in/eT-V3EfR
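
    The released model's exact interface isn't described in this post, so the sketch below uses plain regex rules as a crude stand-in filter just to show where the step slots into a workflow: mask PII before a prompt ever reaches an LLM. In practice you would replace `mask_pii` with a call to the locally run open-weight model.

    ```python
    # Mask PII before the prompt leaves the machine. The regex rules are
    # a crude stand-in for the actual filter model, which would replace
    # mask_pii() in a real pipeline.
    import re

    PII_PATTERNS = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
        "PHONE": r"\+?\d[\d\s().-]{7,}\d",
        "SSN":   r"\b\d{3}-\d{2}-\d{4}\b",
    }

    def mask_pii(text: str) -> str:
        for label, pattern in PII_PATTERNS.items():
            text = re.sub(pattern, f"[{label}]", text)
        return text

    prompt = "Draft a reply to jane.doe@example.com, phone +1 (555) 010-7788."
    safe_prompt = mask_pii(prompt)   # masked before any LLM sees it
    print(safe_prompt)
    ```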

  • View profile for Omoniyi Ipaye

    People Operations Leader · HR Tech, Automation & Global Compliance · EMEA & Beyond | Consensys · Ex-Deel · Maersk

    6,976 followers

If you are pasting employee or any private data into any AI tool in your organization, you don’t have an AI strategy. What you have is a data-leak strategy.

    HR teams are rushing to use AI for everything: drafting policies, analyzing survey data, writing performance reviews. But there is one uncomfortable truth: every time you copy and paste employee data into a public AI tool, you have lost control of the data, and you do not know who is reviewing the information.

    That’s why I made this week’s video:
    👉 How to build a privacy-first AI stack, for HR or any team concerned with privacy, using open-source tools you can run on your own computer. No subscriptions. No external servers. No data leaving your device.

    In 3 minutes, I break down:
    🔹 How to run models locally using Ollama (a minimal sketch follows below)
    🔹 How to use open models that work like ChatGPT
    🔹 How to deploy this in your team.

    If you’ve ever wondered, “Can we use AI safely without compromising on privacy?”, this is one of the ways you can.

    #AIinHR #HRCompliance #PeopleOps #FutureOfWork #PrivacyByDesign
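
    Here is a minimal sketch of the local-only pattern: prompt a model served by Ollama on your own machine through its local HTTP API, so nothing leaves the device. It assumes Ollama is running and a model has been pulled (`llama3` below is just an example name; swap in whatever you run locally).

    ```python
    # Query a locally served Ollama model; no data leaves the device.
    # Assumes `ollama pull llama3` (or another model) has been run.
    import json
    import urllib.request

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        payload = json.dumps({"model": model, "prompt": prompt,
                              "stream": False}).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",   # Ollama's local endpoint
            data=payload, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local_model(
        "Summarise these survey themes without naming any employee: ..."))
    ```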

  • View profile for Ronni K. Gothard Christiansen

    Technical Privacy Engineer & CEO @ AesirX.io | First-Party Consent & Analytics solutions for global compliance.

    9,581 followers

Analysis of the "Cookies, Identifiers and Other Data That Google Silently Stores on Android Handsets" Study

    This study, conducted by D.J. Leith from Trinity College Dublin, investigates the data stored on Android devices by pre-installed Google apps, including Google Play Services and the Google Play Store. The findings raise significant privacy concerns related to user consent, data tracking, and compliance with EU privacy regulations (GDPR & e-Privacy Directive).

    Potential Legal and Privacy Implications

    Violation of the EU e-Privacy Directive
    - Article 5(3) of the e-Privacy Directive requires explicit user consent before storing or accessing any data on user devices.
    - No consent is sought for any of the cookies or identifiers stored by Google.
    - No opt-out mechanism is provided, meaning users have no control over this tracking.

    Potential GDPR Violations
    - The Google Android ID, DSID, NID, and other identifiers likely count as personal data under GDPR.
    - Google’s lack of transparency about the use of these identifiers conflicts with GDPR’s principles of lawfulness, fairness, and transparency.
    - Processing of sensitive data (e.g., sexual orientation inferred via Play Store ad tracking on "gay dating apps") requires explicit consent under GDPR Article 9.
    - Google automatically logging users into multiple apps without consent could violate GDPR’s purpose limitation principle.

    What This Means for Users
    - Even if you factory-reset your Android device and don’t use Google apps, tracking still happens.
    - Google is automatically logging users into multiple services, collecting telemetry data, and storing tracking identifiers without consent.
    - The study suggests Google may be violating both the GDPR and the EU e-Privacy Directive.

    This study provides strong technical evidence that Google is storing personal data without user consent and in a manner that may violate EU privacy laws. The lack of transparency and opt-out options is particularly concerning. If regulators take action, this could lead to major legal consequences for Google, similar to past GDPR fines. For now, however, Android users remain heavily tracked unless they take active measures to limit Google’s data collection.

    Notice: Since the study was published, Google has announced that fingerprinting is now applied across all its devices and services, meaning the potential impact of Google’s data-collection abuses is now unparalleled, making Google one of the most prolific data collectors on the planet.

    Direct link to the study: https://lnkd.in/gXj2fr2c

    #Privacy #GDPR #DataProtection #ePrivacy #GoogleTracking #AndroidPrivacy #UserConsent #BigTech #CyberSecurity #TechRegulation #SurveillanceEconomy #DigitalRights #TechEthics

  • View profile for Pallavi Bansal

    Assistant Professor at Bennett University | PhD from Erasmus University Rotterdam | LSE Post-Graduate

    4,564 followers

Imagine saving a random contact years ago—and now they can track your location just because you ordered dinner on Zomato.

    Zomato’s “Friend Recommendations” feature just gave me a mini existential crisis. I never gave the app access to my contacts, never synced anything, never chose to “follow” anyone—and yet, a bunch of random people from my phonebook were listed as “friends.” I could see their food choices, recommendations, and even my brother’s activity—someone who swears he’s never officially recommended a single dish on the app. I could even see recommendations from my 3rd-floor neighbour, or someone whose contact I saved 15 years ago but don’t even remember who they are.

    So perhaps our actions—like ratings or just ordering frequently—are being interpreted as recommendations and shown to others. So basically I can see where they all order from, and perhaps where they live? Perhaps. How?

    Welcome to the eerie world of data triangulation and invisible profiling. Even passive behavior—like ordering food without leaving a review—is being interpreted, tagged, and shared under the veil of “social recommendations.”

    This isn’t just about food anymore. It’s a reminder of how platforms construct detailed behavioral profiles from seemingly innocuous actions. It’s also a reminder of how transparency, consent, and user agency remain alarmingly vague in our digital ecosystems.

    Scary? Yes. Surprising? Sadly, not anymore.

    #DigitalPrivacy #AlgorithmicProfiling #TechEthics #Zomato #SurveillanceEconomy #DataTransparency
