Managing Privacy in Developer Workflows

Explore top LinkedIn content from expert professionals.

Summary

Managing privacy in developer workflows means building privacy and data protection directly into the way software and AI systems are created and managed, rather than treating privacy as an afterthought. This approach helps organizations protect sensitive information, meet legal requirements, and create trustworthy technologies from the ground up.

  • Integrate privacy early: Map out how personal data moves through your systems during every stage of development, identifying where it enters, changes, and might be stored or leaked.
  • Automate controls: Use tools and AI to automate privacy checks and manage data access, so privacy protection happens naturally in your daily workflow without slowing down progress.
  • Assign clear ownership: Make sure every system and dataset has a designated person responsible for privacy, so there’s always someone accountable for safeguarding information and addressing issues quickly.
Summarized by AI based on LinkedIn member posts
  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    Isabel Barberá: "This document provides practical guidance and tools for developers and users of Large Language Model (LLM) based systems to manage privacy risks associated with these technologies. The risk management methodology outlined in this document is designed to help developers and users systematically identify, assess, and mitigate privacy and data protection risks, supporting the responsible development and deployment of LLM systems. This guidance also supports the requirements of the GDPR Article 25 Data protection by design and by default and Article 32 Security of processing by offering technical and organizational measures to help ensure an appropriate level of security and data protection. However, the guidance is not intended to replace a Data Protection Impact Assessment (DPIA) as required under Article 35 of the GDPR. Instead, it complements the DPIA process by addressing privacy risks specific to LLM systems, thereby enhancing the robustness of such assessments. Guidance for Readers > For Developers: Use this guidance to integrate privacy risk management into the development lifecycle and deployment of your LLM based systems, from understanding data flows to how to implement risk identification and mitigation measures. > For Users: Refer to this document to evaluate the privacy risks associated with LLM systems you plan to deploy and use, helping you adopt responsible practices and protect individuals’ privacy. " >For Decision-makers: The structured methodology and use case examples will help you assess the compliance of LLM systems and make informed risk-based decision" European Data Protection Board

  • Nick Abrahams

    Futurist, International Keynote Speaker, AI Pioneer, 8-Figure Founder, Adjunct Professor, 2 x Best-selling Author & LinkedIn Top Voice in Tech

    If you are an organisation using AI or you are an AI developer, the Australian privacy regulator has just published some vital information about AI and your privacy obligations. Here is a summary of the new guides for businesses published today by the Office of the Australian Information Commissioner, which articulate how Australian privacy law applies to AI and set out the regulator’s expectations. The first guide aims to help businesses comply with their privacy obligations when using commercially available AI products and to select an appropriate product. The second provides privacy guidance to developers using personal information to train generative AI models.

    GUIDE ONE: Guidance on privacy and the use of commercially available AI products. Top five takeaways:
    * Privacy obligations will apply to any personal information input into an AI system, as well as the output data generated by AI (where it contains personal information).
    * Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI.
    * If AI systems are used to generate or infer personal information, including images, this is a collection of personal information and must comply with APP 3 (which deals with collection of personal information).
    * If personal information is being input into an AI system, APP 6 requires entities to only use or disclose the information for the primary purpose for which it was collected.
    * As a matter of best practice, the OAIC recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools.

    GUIDE TWO: Guidance on privacy and developing and training generative AI models. Top five takeaways:
    * Developers must take reasonable steps to ensure accuracy in generative AI models.
    * Just because data is publicly available or otherwise accessible does not mean it can legally be used to train or fine-tune generative AI models or systems.
    * Developers must take particular care with sensitive information, which generally requires consent to be collected.
    * Where developers are seeking to use personal information that they already hold for the purpose of training an AI model, and this was not a primary purpose of collection, they need to carefully consider their privacy obligations.
    * Where a developer cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, to avoid regulatory risk they should seek consent for that use and/or offer individuals a meaningful and informed ability to opt out of such a use.

    https://lnkd.in/gX_FrtS9

  • Marc Maiffret

    Chief Technology Officer at BeyondTrust

    Since the ’90s I’ve built, shipped, and occasionally exploited just about every kind of identity control. We’re now pretty good at building gates around privilege, but not nearly as good at removing it once the job is done. This hurts in 2025. Privileged access no longer lives only with well-defined admin accounts. It threads through every developer workflow, CI/CD script, SaaS connector, and microservice. The result: standing privilege is inevitable, an orphaned token here, a break-glass account there, quietly turning into “forever creds.”

    Here’s what’s working in the field:
    → One JIT policy engine that spans cloud, SaaS, and on-prem - no more cloud-specific silos.
     ↳ Same approval workflow everywhere, so nobody bypasses “the one tricky platform.”
     ↳ Central log stream = single source of truth for auditors and threat hunters.
    → Bundle-based access: server + DB + repo granted (and revoked) as one unit.
     ↳ Devs get everything they need in one click - no shadow roles spun up on the side.
     ↳ When the bundle expires, all linked privileges disappear, killing stragglers.
    → Continuous discovery & auto-kill for any threat that slips through #1 or #2.
     ↳ Scan surfaces for compromised creds, role drifts, and partially off-boarded accounts.
     ↳ Privilege paths are ranked by risk so teams can cut off the dangerous ones first.

    Killing standing privilege isn’t a tech mystery anymore, it’s an operational discipline. What else would you put on the “modern PAM” checklist?
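
    To make "bundle-based access" concrete, here is a minimal, hypothetical sketch of the pattern the post describes: linked grants are issued as one unit with a hard expiry, and a scheduled sweeper revokes them together so nothing survives as standing privilege. The resource names and the grant/revoke hooks are illustrative, not any particular PAM product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AccessBundle:
    """Server + DB + repo access granted, and revoked, as one unit."""
    user: str
    resources: list[str]                      # e.g. ["ssh:build-01", "db:orders-ro", "repo:payments"]
    expires_at: datetime
    granted: list[str] = field(default_factory=list)

    def grant(self) -> None:
        for resource in self.resources:
            # In a real system this would call your IdP / cloud provider APIs.
            print(f"GRANT  {resource} -> {self.user} (until {self.expires_at:%H:%M %Z})")
            self.granted.append(resource)

    def revoke_if_expired(self, now: datetime) -> bool:
        """Revoke every linked privilege together so no stragglers survive."""
        if now < self.expires_at:
            return False
        for resource in self.granted:
            print(f"REVOKE {resource} -> {self.user}")
        self.granted.clear()
        return True

bundle = AccessBundle(
    user="dev.alice",
    resources=["ssh:build-01", "db:orders-ro", "repo:payments"],
    expires_at=datetime.now(timezone.utc) + timedelta(hours=4),
)
bundle.grant()
# A scheduled job would call this so expired bundles never become "forever creds".
bundle.revoke_if_expired(datetime.now(timezone.utc))
```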

  • Pradeep Sanyal

    AI Leader | Scaling AI from Pilot to Production | Chief AI Officer | Agentic Systems | AI Operating model, Governance, Adoption

    Privacy isn’t a policy layer in AI. It’s a design constraint. The new EDPB guidance on LLMs doesn’t just outline risks. It gives builders, buyers, and decision-makers a usable blueprint for engineering privacy - not just documenting it.

    The key shift?
    → Yesterday: Protect inputs
    → Today: Audit the entire pipeline
    → Tomorrow: Design for privacy observability at runtime

    The real risk isn’t malicious intent. It’s silent propagation through opaque systems. In most LLM systems, sensitive data leaks not because someone intended harm but because no one mapped the flows, tested outputs, or scoped where memory could resurface prior inputs. This guidance helps close that gap. And here’s how to apply it:

    For Developers:
    • Map how personal data enters, transforms, and persists
    • Identify points of memorization, retention, or leakage
    • Use the framework to embed mitigation into each phase: pretraining, fine-tuning, inference, RAG, feedback

    For Users & Deployers:
    • Don’t treat LLMs as black boxes. Ask if data is stored, recalled, or used to retrain
    • Evaluate vendor claims with structured questions from the report
    • Build internal governance that tracks model behaviors over time

    For Decision-Makers & Risk Owners:
    • Use this to complement your DPIAs with LLM-specific threat modeling
    • Shift privacy thinking from legal compliance to architectural accountability
    • Set organizational standards for “commercial-safe” LLM usage

    This isn’t about slowing innovation. It’s about future-proofing it. Because the next phase of AI scale won’t just be powered by better models. It will be constrained and enabled by how seriously we engineer for trust. Thanks European Data Protection Board, Isabel Barberá. H/T Peter Slattery, PhD
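
    As one way to act on "map how personal data enters, transforms, and persists", here is a small illustrative sketch (not from the EDPB guidance) of a checkpoint that flags obvious personal data at each pipeline stage before it is stored or passed on. The regexes are deliberately crude; a real system would use a dedicated PII detector and cover many more identifier types.

```python
import re

# Illustrative only: flag obvious personal data at each LLM pipeline stage
# (ingestion, RAG retrieval, inference output, feedback logging) before it
# persists, building a concrete map of where such data actually surfaces.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def audit_stage(stage: str, text: str) -> list[str]:
    """Return findings like 'rag_retrieval: email' for later review."""
    return [f"{stage}: {kind}" for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

flow_map: list[str] = []
flow_map += audit_stage("rag_retrieval", "Contact Jane at jane.doe@example.com about her claim.")
flow_map += audit_stage("inference_output", "The claim was approved last week.")
print(flow_map)   # ['rag_retrieval: email'] -- personal data surfaced in retrieval, not in the output
```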

  • Pádraig O'Leary, Ph.D.

    Co-founding CEO at Trustworks🟡 - Privacy and AI Governance.

    The story I keep hearing: privacy teams buried in manual reviews, endless forms, and disconnected tools. All necessary, but none of it strategic. It’s compliance as admin, not as leadership. This happens because of a persistent context gap — privacy teams know what data exists, but not the why. That missing context drives the over-reliance on assessments, long review cycles, and duplicated work across teams. If I were leading a privacy function today, here’s what I’d prioritise 👇

    1/ Close the context gap before assessing. Stop triggering assessments just to find answers. There is emerging AI tooling to connect data from projects, systems, and vendors.
    2/ Automate vendor and contract triage. Let AI run first-line checks for sub-processors, liability, and transfer risks, freeing teams to focus on the outliers.
    3/ Build DSR operations you can trust. Automate and track every action and time-to-closure.
    4/ Make accountability visible. Assign clear owners for systems, datasets, and escalation paths, ensuring human oversight remains in place.
    5/ Embed privacy where work happens. Governance shouldn’t live in isolation. Bring privacy and AI checks directly into development, procurement, and project workflows so compliance becomes a natural outcome of collaboration.

    In my recent conversation with Sergio Maldonado on the Masters of Privacy podcast, we discussed how modern privacy teams can close this gap and move from maintenance to impact. The future of privacy operations isn’t about more assessments. It’s about context-aware programs where automation and AI provide the foundation, and built-in know-how provides confidence.

    Listen to the full episode:
    Spotify: https://lnkd.in/d4CXx47J
    YouTube: https://lnkd.in/dd2jpbeH
    Apple Podcasts: https://lnkd.in/dRD4dkMV
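
    Point 4 ("make accountability visible") can be as simple as a machine-readable ownership registry checked in CI. A hypothetical sketch; the system names, fields, and teams are illustrative, and in practice this might live in a data catalog or GRC tool.

```python
# Hypothetical ownership registry: every system and dataset gets a named
# privacy owner and an escalation path; a CI check fails loudly when an
# entry lands without one.
REGISTRY = {
    "crm-prod":         {"owner": "jane.doe", "escalation": "privacy-office", "data": ["contact", "billing"]},
    "analytics-lake":   {"owner": "sam.lee",  "escalation": "dpo",            "data": ["usage-events"]},
    "ml-feature-store": {"owner": "",         "escalation": "dpo",            "data": ["behavioural"]},
}

def unowned(registry: dict) -> list[str]:
    """Surface systems with no accountable owner so gaps are visible, not silent."""
    return [name for name, meta in registry.items() if not meta.get("owner")]

missing = unowned(REGISTRY)
if missing:
    raise SystemExit(f"Systems without a privacy owner: {missing}")
```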

  • Zinet Kemal, M.S.c

    Protecting kids & families from cyber threats • Senior Cloud Security Engineer • TEDx Speaker • Multi-award winning cybersecurity practitioner • Author • Instructor AIGP • SecAI+ • CCSK • CISA • AWS Security speciality

    Most developers think .env files are safe. They’re not. Especially in the age of AI coding agents. Modern AI assistants & coding agents scan entire repos to understand context, including configuration files & environment variables. That means secrets like API keys, tokens, & database credentials stored in .env files can end up inside the model’s working context. Once those secrets enter the AI’s context, they can potentially leak through:
    • prompts
    • logs
    • tool calls
    • external model processing pipelines

    The problem? These systems need broad access to code, configs, & documentation to perform tasks, creating new paths for credential exposure. .env files were never designed for autonomous systems reading entire codebases. As organizations experiment with AI agents, coding copilots, & autonomous workflows, security teams should rethink how secrets are handled. Some practical steps:
    + Use a secrets manager instead of plaintext .env files
    + Apply least privilege to AI agents
    + Restrict what files agents can access
    + Monitor agent actions & tool usage

    Agentic AI is powerful. But any file an agent can read is effectively part of its prompt. And that means your secrets might be too. P.s. repost ♻️ #AIAgents #AISecurity #cybersecurity
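
    For the first step in that list, one possible shape looks like the sketch below, using AWS Secrets Manager via boto3 as an example backend; the secret name and region are placeholders. The point is that the credential is fetched at runtime and held only in memory, so there is no .env file for a coding agent to pull into its context.

```python
import boto3

def get_db_password(secret_id: str = "prod/orders-db/password") -> str:
    """Fetch a secret at runtime instead of reading it from a plaintext .env file."""
    # Secret name and region are placeholders; other secret managers work the same way.
    client = boto3.client("secretsmanager", region_name="us-east-1")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]   # keep it in memory only; never write it to disk or logs

# Usage (requires AWS credentials and an existing secret):
# password = get_db_password()
```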

  • There is often confusion between customer secrets and developer secrets. Understanding this distinction is vital for implementing effective security measures. Customer Secrets—such as API keys for third-party services—are not the same as Production Secrets, which are typically managed by DevOps teams using secret managers for production environments and their underlying infrastructure services. While production secrets are essential for the backend system to operate, customer secrets require a different approach due to their unique scale and sensitivity, and the fact that they are dynamic (not a static list of secrets like the one you have once your production is up). Here’s why treating these two types of secrets differently is crucial:
    → Risk of exposure: Storing customer secrets in traditional (SQL) databases in plaintext can lead to significant vulnerabilities. If a breach occurs, the impact can be severe, affecting both the security and privacy of your customers in other applications they use.
    → Different management: Customer secrets are another piece of highly sensitive data coming from your customers and therefore require a richer data model, not a flat key-value collection. Also, you shouldn’t mix your production secrets with your customer secrets; they should be separated and isolated.
    → Cost: Many existing solutions expect a low number of secrets and either cap that number or come with high costs and latency issues at the scale required for customer secrets.
    → Zero-trust secret flow: By offloading the management of these sensitive assets to Piiano Vault, you can ensure that even in the event of a breach, your customers’ secrets remain secure. We use a proxy that ensures secrets never leave the vault and can only move forward to an authorized service provider.

    Does your organization recognize the difference between customer and production secrets?
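
    A generic illustration of the "no plaintext customer secrets in your SQL database" point (this is not Piiano Vault's API, just the underlying idea): customer secrets are encrypted with a key kept outside the application database, so a dump of the table alone reveals nothing usable.

```python
from cryptography.fernet import Fernet

# In practice the key would come from a KMS or a dedicated vault service,
# never from the same database that stores the ciphertext.
encryption_key = Fernet.generate_key()
vault = Fernet(encryption_key)

def store_customer_secret(table: dict, customer_id: str, api_key: str) -> None:
    table[customer_id] = vault.encrypt(api_key.encode())   # only ciphertext reaches the DB row

def use_customer_secret(table: dict, customer_id: str) -> str:
    return vault.decrypt(table[customer_id]).decode()      # decrypted just-in-time, ideally behind a proxy

customer_secrets: dict[str, bytes] = {}
store_customer_secret(customer_secrets, "cust-123", "sk_live_example")
assert use_customer_secret(customer_secrets, "cust-123") == "sk_live_example"
```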

  • Prayson Wilfred Daniel

    🐉 Principal Data Scientist @ Norlys | _42

    💁🏾♂️ Have you accidentally committed an API key or sensitive credential to GitHub? Did you know GitHub has been called a goldmine for free API keys? Mistakes happen, and the stakes are high. Protecting sensitive data is no longer optional, it’s essential.

    Make sure your .env file is .gitignore-d to keep it out of source control and away from miners’ prying eyes. A .env file is a great start for keeping credentials and API keys organized, but it’s not foolproof. Even with .gitignore, relying solely on .env files can leave room for human error or exposure.

    Is there a better solution? Yeap! Password managers and secrets management tools. Platforms like 1Password or HashiCorp Vault provide an extra layer of security. By integrating these tools into your workflow, you ensure secrets are fetched securely on demand and never live in plaintext. This approach enhances security, minimizes risks, and empowers developers to focus on building great software, safely and confidently. #programming #python
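
    A minimal sketch of the "fetched securely on demand" idea with HashiCorp Vault's Python client (hvac). The address, token source, and secret path are placeholders, and in practice the Vault token itself would come from a short-lived auth method (AppRole, OIDC, etc.) rather than a static environment variable.

```python
import os

import hvac  # HashiCorp Vault client: pip install hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ["VAULT_TOKEN"],          # placeholder; prefer AppRole/OIDC login over static tokens
)

# Read a key from the KV v2 secrets engine; the path and key name are placeholders.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/api")
api_key = secret["data"]["data"]["api_key"]   # lives only in memory, never in a .env file
```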

  • Shing Lyu

    ☁️ Software Architect at BUUT | 💯☁️12x AWS Certified | 🦀 Rust book and video course author

    🚀 Just shipped a new CLI tool and wrote about the "vibe coding" experience! I needed to share documents with AI tools but wanted to strip out personal info first. Existing solutions were either cloud-based (privacy concerns) or didn't fit my UNIX pipe workflows. So I built a PII Anonymizer CLI using Microsoft's Presidio library + some serious AI-assisted development:
    ✅ Fully offline - no data leaves your machine
    ✅ NLP-based detection
    ✅ Multi-language support
    ✅ Perfect for UNIX pipes: `cat file.pdf | markitdown | anonymize > clean.md`

    The real experiment? I used GitHub Copilot + Claude 3.7, then evolved to Cline + Claude 4. Even pushed it to auto-commit code changes (wild!). Key insight: AI coding tools excel at small, focused utilities. I went from idea to working tool in ~30 minutes. That's the "30-minute tool revolution" - AI removes the friction that used to kill good ideas before they started.

    🔗 Read the full story: https://lnkd.in/e5ktH_tq
    ⚡ Try the tool: https://lnkd.in/eQp5HeM6

    #VibeCoding #AI #CLI #Privacy #DeveloperTools #Microsoft #Presidio
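
    For readers who want to see the general shape of this pattern (a generic sketch, not the author's tool), a pipe-friendly Presidio anonymizer can be as small as the script below; it assumes the presidio-analyzer and presidio-anonymizer packages and an installed spaCy English model.

```python
import sys

from presidio_analyzer import AnalyzerEngine        # pip install presidio-analyzer
from presidio_anonymizer import AnonymizerEngine    # pip install presidio-anonymizer

def main() -> None:
    """Read text from stdin, replace detected PII, write the result to stdout.

    Runs fully offline, so it can sit in a UNIX pipe, e.g.:
        cat notes.txt | python anonymize.py > clean.txt
    """
    text = sys.stdin.read()
    results = AnalyzerEngine().analyze(text=text, language="en")   # NLP-based entity detection
    anonymized = AnonymizerEngine().anonymize(text=text, analyzer_results=results)
    sys.stdout.write(anonymized.text)

if __name__ == "__main__":
    main()
```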

  • Craig McLuckie

    Founder and CEO

    Even though MCP often sits on familiar OAuth flows, there’s one important invariant: every call to an MCP server is made by an agent. That gives you a single, well-defined control point to extend your existing policy-as-code and authorization models to be explicitly agent-aware. Two practical principles we lean into with customers:

    Principle 1. Give agents less power than the humans they represent. In most setups, agents act on behalf of users and carry user identity. That’s convenient—but risky. A solution is to exchange and descope tokens so agents only get the minimum permissions required for their task. Example: The AWS MCP has a lot of tools. Allow a developer to use all those tools, but put guardrails on how their coding agent can use them. Stand up a token exchange that maps internal IDP tokens (e.g. Okta) to an AWS IDP token for the developer (using the identity federation features of Okta and AWS), but descope the token to be read-only.

    Principle 2. Extend your authorization model—don’t replace it. In most organizations, agents work on behalf of users, so the authorization flow should still rely on the user’s claims. But it’s reasonable (and often necessary) to add agent-specific authorization policies to the workflow. Example: If you want a read-only version of an MCP server, but it isn’t practical to descope user claims, you can set up a policy to constrain the specific use of given tools.

    Enterprises need to start defining org-wide authorization policies that augment existing policies, and that’s where Stacklok is extending the policy-as-code boundary to include tool calling. A lot of the early attention on MCP focused on how it introduced security challenges. We see MCP’s potential to unlock new security solutions for agentic workflows.
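
    To illustrate Principle 2 in the simplest possible terms (this is not Stacklok's or any MCP SDK's actual API), an agent-aware policy check in front of an MCP server could look roughly like the following: the user's claims remain the baseline, and calls made by an agent are additionally constrained to a read-only subset of tools. Tool names and scopes are illustrative.

```python
# Hypothetical agent-aware policy check sitting in front of an MCP server.
READ_ONLY_TOOLS = {"s3.list_buckets", "ec2.describe_instances", "logs.get_log_events"}

def authorize_tool_call(tool: str, caller_is_agent: bool, user_scopes: set[str]) -> bool:
    if "aws:use" not in user_scopes:                  # the human's claims are still the baseline
        return False
    if caller_is_agent and tool not in READ_ONLY_TOOLS:
        return False                                  # agents get less power than the humans they represent
    return True

assert authorize_tool_call("ec2.describe_instances", caller_is_agent=True, user_scopes={"aws:use"})
assert not authorize_tool_call("ec2.terminate_instances", caller_is_agent=True, user_scopes={"aws:use"})
```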
