How to Involve Developers in Security


Summary

Involving developers in security means making sure software creators understand and address cybersecurity risks throughout the development process, rather than leaving security as a final step. By integrating security practices into everyday coding and collaboration, teams can create safer applications while maintaining efficiency.

  • Share clear guidelines: Provide easy-to-follow security standards and documentation before development begins, so everyone knows what’s expected.
  • Automate feedback loops: Set up tools that automatically check for vulnerabilities during coding and give fast, actionable results to developers.
  • Frame risk with context: Explain security concerns in relation to how real systems work, focusing on the impact and likelihood of threats instead of treating every issue as urgent.
Summarized by AI based on LinkedIn member posts
  • View profile for saed

    Senior Security Engineer at Google, Kubestronaut🏆 | Opinions are my very own

    78,244 followers

Security doesn't depend on Dev for vulnerabilities to exist. But:

    1. Dev depends on Security for compliance sign-off.
    2. Ops depends on Security for deployment approvals.
    3. Product depends on Security for feature releases.
    4. Business depends on Security for customer trust.

    The entire delivery pipeline hinges on how Security operates. Yet some Security teams treat developer experience like it's not their problem: slow approval processes that take days, unclear requirements that change mid-sprint, manual checks that could be automated, security gates that block without clear remediation paths. "We found issues" without explaining what or how to fix. "This can't go to production" without alternative solutions. "That's not secure" without documented standards.

    Then they wonder why developers route around security controls. Why shadow IT emerges. Why technical debt piles up. Why vulnerabilities slip through.

    Here's what actually works:

    1. Clear security guidelines before development starts.
    2. Automated security checks in the CI/CD pipeline.
    3. Fast feedback loops with actionable results.
    4. Self-service tools that don't require security approval for every change.
    5. Documentation that developers can actually follow.
    6. Risk-based prioritisation instead of blocking everything.

    Security should enable delivery, not prevent it. Your job isn't to say no. It's to show developers how to say yes securely. Build guardrails, not roadblocks. Automate gates, don't add manual checkpoints. Provide tools, not tickets.

    When Security becomes a bottleneck, the business moves on without you. When Security enables velocity, you become indispensable. The best Security teams make secure development the path of least resistance. They understand that developer experience is security's problem too. Because if it's hard to do securely, people will do it insecurely. Make security easy, fast, and clear. Or watch your controls get bypassed.
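    The "automated checks with fast, actionable feedback" idea above can be sketched as a small CI gate. This is a minimal illustration, not any specific vendor's tool: the finding format (`rule`, `file`, `severity`, `remediation` keys) is a hypothetical scanner output, and the policy of blocking only on high/critical findings is one possible risk-based choice.

    ```python
    # Sketch of a CI security gate: block only actionable high-severity
    # findings, and always print a remediation pointer so developers get
    # a clear path to "yes" instead of a bare "no".
    FAIL_SEVERITIES = {"critical", "high"}

    def evaluate_findings(findings):
        """Return (passed, messages). Every message carries a fix hint;
        only critical/high findings block the build."""
        messages = []
        blocking = 0
        for f in findings:
            sev = f.get("severity", "unknown").lower()
            hint = f.get("remediation", "see the team's secure-coding guide")
            messages.append(f"[{sev}] {f['rule']} in {f['file']} -- fix: {hint}")
            if sev in FAIL_SEVERITIES:
                blocking += 1
        return blocking == 0, messages

    if __name__ == "__main__":
        findings = [
            {"rule": "sql-injection", "file": "api/users.py",
             "severity": "high", "remediation": "use parameterized queries"},
            {"rule": "debug-enabled", "file": "settings.py", "severity": "low"},
        ]
        passed, msgs = evaluate_findings(findings)
        print("\n".join(msgs))
        print("PASS" if passed else "FAIL: blocking issues found")
    ```

    Wired into a pipeline, a non-zero exit on `FAIL` becomes the automated gate; the printed hints are the "actionable results" the post calls for.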

  • View profile for Dr. Gurpreet Singh

    🚀 Driving Cloud Strategy & Digital Transformation | 🤝 Leading GRC, InfoSec & Compliance | 💡Thought Leader for Future Leaders | 🏆 Award-Winning CTO/CISO | 🌎 Helping Businesses Win in Tech

    13,584 followers

🔐 Secure Coding Training Isn't About Checklists. It's About Survival.

    In 2023, a LastPass engineer's use of an outdated SDK led to a breach exposing 33M passwords. Cost: $25M+ in damages. The root cause? A developer trained on "general best practices" but not on company-specific attack vectors. Training isn't a compliance checkbox. It's your last line of defense.

    💥 Why Most Training Fails (And How to Fix It)

    🔥 Generic Content ≠ Real Threats
    Example: A module on "SQLi basics" won't stop a dev from misconfiguring your GraphQL API.
    ✅ Fix: Build custom labs replicating your tech stack's weaknesses (e.g., AWS Lambda permissions, Kubernetes secrets).

    🔥 Annual Workshops Are Obsolete
    The average app's attack surface changes every 8 weeks (OWASP 2024).
    ✅ Fix: Monthly micro-trainings beat yearly marathons.

    🔥 Boring = Ignored
    Shopify's "Hack the Stack" program lets devs exploit vulnerabilities in a sandboxed clone of their production environment. Retention: 89%.

    🚀 Actionable Steps for Leaders
    1️⃣ Ditch theoretical lectures. Use platforms like Secure Code Warrior for hands-on, language-specific drills (Python, JS, Go).
    2️⃣ Gamify consequences. 🎯 Reward devs who find flaws with bonuses or public recognition. ⚠️ Penalize repeat offenders (e.g., hardcoding secrets) with "Security Shadowing" sessions.
    3️⃣ Track impact, not completion rates. Adobe measures success by the drop in vulnerabilities per 1k lines of code post-training.

    ⚖️ The Controversy Is Right… and Wrong
    Critics aren't wrong that most programs suck. But the fix isn't less training; it's better training. 📈 Example: Google's "SSRF for Go Developers" module reduced misconfigurations by 72% in 6 months.

    Does your org still use training modules from the pre-API economy era?

    #SecureCoding #DevSecOps #Cybersecurity #TechTraining #SoftwareDevelopment
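    The "repeat offenders hardcoding secrets" problem above is one of the few training topics that is easy to back with tooling. Below is a minimal pre-commit-style check, purely illustrative: the two patterns are a tiny sample, not a real rule set (dedicated tools such as gitleaks ship far more), and the function name is my own.

    ```python
    import re

    # Illustrative secret patterns: an AWS access key ID shape, plus a
    # generic "api_key/secret/token = '<long literal>'" assignment.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),
        re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
    ]

    def find_secrets(text):
        """Return (line_number, matched_text) pairs for suspect lines."""
        hits = []
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pat in SECRET_PATTERNS:
                m = pat.search(line)
                if m:
                    hits.append((lineno, m.group(0)))
                    break
        return hits
    ```

    Run against staged diffs in a pre-commit hook, a non-empty result blocks the commit, which turns a training point into an enforced guardrail.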

  • View profile for Cameron W.

    Product Security Leader | Director of AppSec & Security Engineering | DevSecOps & CI/CD Security | Co-lead OWASP SPVS | Co-host of Coffee, Chaos & ProdSec Podcast | Advisor

    4,826 followers

Security friction with developers usually comes from how risk gets framed, not from people not caring.

    I still hear security folks say developers do not care about security, or worse, that they are careless or dumb. I have not seen that hold up in real teams. I came from writing code before moving into AppSec, and the developers I worked with cared deeply about quality, uptime, and not being the person who caused an incident. What they pushed back on was urgency without context.

    This tension shows up when security treats every finding as an emergency. A vulnerability shows up in a scan and the message becomes simple and absolute: fix this now or we will get breached. When the response to any disagreement is "you do not care about security," trust breaks fast.

    What has worked for me is grounding risk in how systems actually behave in production. Here is how I approach it in practice:

    - Start with exploitability, not severity labels. Is there a path from the internet to the code, or is this buried behind auth, network controls, and feature flags?
    - Talk through compensating controls that already exist. Rate limiting, WAF rules, service-to-service auth, monitoring, and rollback plans all matter.
    - Be clear about likelihood and blast radius. A low-likelihood issue in a low-impact service should not be framed the same as a reachable auth bypass.
    - Separate urgent fixes from planned work. Some issues need a hotfix. Most belong in normal backlog flow with a clear owner and timeline.

    When you frame security this way, developers stop feeling accused and start engaging. The conversation shifts from "you must fix this now" to "here is the risk, here is what reduces it today, and here is what we should improve next." Security is not binary; systems are layered and risk is contextual.

    Where do you see these conversations breaking down most in your org?
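    The framing steps above (exploitability, compensating controls, likelihood and blast radius, hotfix vs. backlog) can be sketched as a toy scoring function. The weights and the score-6 hotfix threshold are illustrative assumptions, not a standard; the point is only that context multiplies down a raw severity.

    ```python
    # Context-aware risk framing: start from a scanner severity, then
    # discount for reachability, auth barriers, and compensating controls.
    # All numeric weights below are illustrative assumptions.
    BASE = {"critical": 9.0, "high": 7.0, "medium": 4.0, "low": 1.5}

    def contextual_risk(severity, internet_reachable, behind_auth, controls):
        """Return (score, disposition), disposition in {'hotfix', 'backlog'}."""
        score = BASE[severity]
        if not internet_reachable:
            score *= 0.5   # no direct path from the internet to the code
        if behind_auth:
            score *= 0.7   # attacker needs valid credentials first
        # WAF rules, rate limiting, monitoring etc. each shave risk off,
        # floored so controls never drive the score to zero.
        score *= max(0.3, 1 - 0.15 * len(controls))
        return round(score, 1), "hotfix" if score >= 6 else "backlog"
    ```

    A reachable critical with no mitigations stays a hotfix; a high finding buried behind auth with a WAF and rate limiting lands in the backlog with an owner, which is exactly the separation the post argues for.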

  • View profile for Shawn Wallack

    Follow me for unconventional Agile, AI, and Project Management opinions and insights shared with humor.

    9,587 followers

Evil User Stories: Think Like the Enemy

    User stories are a cornerstone of Agile development. They're a concise way to capture the perspective and goals of our users. But what if we flipped the narrative and considered what we DON'T want? "Evil user stories" allow teams to simulate the motivations and methods of malicious actors. These narratives aren't just thought experiments; they're a practical tool to enhance cybersecurity awareness, identify vulnerabilities, and inspire developers to anticipate and mitigate real threats.

    Enter the Evil User Story (EUS)

    An EUS assumes the persona of a malicious actor (e.g., hacker, disgruntled employee, cybercriminal). By discussing their goals and methods, teams can expose security gaps and reinforce defenses.

    Sample Evil Scenarios

    #1: MFA Bypasser
    As a hacker, I want to bypass multi-factor authentication, so I can gain unauthorized access to sensitive data.
    Countermeasure: Deploy adaptive MFA using risk-based analysis to detect suspicious login attempts, avoiding exposure of PII, regulatory fines, reputational damage, and financial losses.

    #2: Data Exfiltrator
    As an insider threat actor, I want to download customer data from a poorly monitored database, so I can sell it on the dark web.
    Countermeasure: Monitor access logs and enforce robust data loss prevention (DLP) policies to avoid reputational damage, compliance penalties, and erosion of customer trust.

    #3: Ransomware Deployer
    As a ransomware developer, I want to encrypt an entire corporate network, so I can demand payment in cryptocurrency.
    Countermeasure: Implement comprehensive backup strategies and endpoint protection to avoid business interruptions, financial losses, and brand harm.

    #4: Saboteur
    As a disgruntled employee, I want to introduce malicious code into production, so I can disrupt operations and harm the company's reputation.
    Countermeasure: Enforce strict access controls and conduct thorough code reviews to avoid prolonged downtime and loss of customer trust.

    #5: Corporate Spy
    As a competitor-sponsored hacker, I want to infiltrate R&D systems, so I can steal trade secrets for a competitive edge.
    Countermeasure: Segment networks and use advanced threat detection techniques to avoid loss of IP and market advantage.

    #6: Social Engineer
    As a social engineer, I want to impersonate a trusted vendor to access internal systems, so I can escalate privileges.
    Countermeasure: Verify vendor access and enforce least privilege principles to avoid broad internal compromise.

    Defense Through Adversarial Insight

    Evil user stories push teams to think like adversaries, uncovering vulnerabilities, strengthening defenses, and enhancing threat modeling. This adversarial perspective fosters a creative, security-first mindset, helping developers address vulnerabilities during development and improve system resilience. It's more fun to play the bad guy than be the victim. So, if you were the villain, how would you attack, and how would you stop yourself?
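    Teams that track work in a backlog can make evil user stories first-class items rather than slideware. Here is one possible shape for that, a sketch with field names of my own choosing, not a standard schema, using the post's MFA Bypasser scenario as the example.

    ```python
    from dataclasses import dataclass, field

    # An evil user story as a backlog-ready record: the attacker narrative
    # plus the countermeasures that close it. Field names are illustrative.
    @dataclass
    class EvilUserStory:
        persona: str              # "hacker", "disgruntled employee", ...
        goal: str                 # what the attacker wants to do
        motivation: str           # why ("sell it on the dark web")
        countermeasures: list = field(default_factory=list)

        def as_story(self):
            """Render in the classic As-a/I-want/so-I-can template."""
            return (f"As a {self.persona}, I want to {self.goal}, "
                    f"so I can {self.motivation}.")

    mfa_bypass = EvilUserStory(
        persona="hacker",
        goal="bypass multi-factor authentication",
        motivation="gain unauthorized access to sensitive data",
        countermeasures=["adaptive MFA with risk-based login analysis"],
    )
    ```

    Rendering each story with `as_story()` during refinement keeps the adversarial framing in the same template the team already uses for regular user stories.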

  • View profile for Daniel Hooper

    CISO | Cybersecurity Startup Advisor | Investor | Career Mentor

    7,399 followers

Just ship it! Test in production... It'll be ok!

    Shipping secure software at high velocity is a challenge that many smaller, fast-paced, tech-forward companies face. When you're building and deploying your own software in-house, every day counts, and often the time between development and release can feel like it's shrinking. In my experience working in these environments, balancing speed and security requires a more dynamic approach that often ends up with things happening in parallel.

    One key area where I've seen significant success is the use of automated security testing within Continuous Integration and Continuous Deployment (CI/CD) pipelines. Essentially, this means that every time developers push new code, security checks are built right into the process, running automatically. This gives a baseline level of confidence that the code is free from known issues before it even reaches production. Automated tools can scan for common vulnerabilities, ensuring that security testing isn't an afterthought but an integral part of the development lifecycle. This approach can identify and resolve potential problems early on, while still moving quickly.

    Another great tool in the arsenal is the Software Bill of Materials (SBOM). Think of it like an ingredient list for the software. In fast-paced environments, it's common to reuse code, pull in external libraries, or leverage open-source solutions to speed up development. While this helps accelerate delivery, it can also introduce risks. The SBOM tracks all the components that go into software, so teams know exactly what they're working with. If a vulnerability is discovered in an external library, teams can quickly identify whether they're using that component and take action before it becomes a problem.

    Finally, access control and code integrity monitoring play a vital role in ensuring that code is not just shipping fast, but shipping securely. Not every developer should have access to every piece of code, and this isn't just about preventing malicious behavior; it's about protecting the integrity of the system. Segregation of duties between teams allows us to set appropriate guardrails, limiting access where necessary and ensuring that changes are reviewed by the right people before being merged. Having checks and balances in place keeps the code clean and reduces the risk of unauthorized changes making their way into production.

    What I've learned over the years is that shipping secure software at high speed requires security to be baked into the process, not bolted on at the end (says every security person ever). With automated testing, clear visibility into what goes into your software, and a structured approach to access control, you can maintain the velocity of your team while still keeping security front and center.

    #founders #startup #devops #cicd #sbom #iam #cybersecurity #security #ciso
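    The SBOM "ingredient list" workflow described above, checking your component inventory against a vulnerability feed, can be sketched in a few lines. The data shapes are deliberately simplified assumptions (real SBOMs use formats like CycloneDX or SPDX, and advisories come from feeds such as OSV); only the Log4Shell advisory ID in the example is a real CVE.

    ```python
    # Given a simplified SBOM component list and a known-vulnerable
    # (name, version) -> advisory mapping, report affected components.
    def affected_components(sbom_components, vulnerable):
        """sbom_components: [{'name': ..., 'version': ...}, ...]
        vulnerable: {(name, version): advisory_id}."""
        hits = []
        for comp in sbom_components:
            key = (comp["name"], comp["version"])
            if key in vulnerable:
                hits.append({**comp, "advisory": vulnerable[key]})
        return hits

    sbom = [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "requests", "version": "2.32.0"},
    ]
    advisories = {("log4j-core", "2.14.1"): "CVE-2021-44228"}
    ```

    When a new advisory lands, this lookup answers "are we even using that component?" in seconds instead of a codebase-wide search, which is the whole operational value of keeping an SBOM.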

  • View profile for Dustin Lehr

    Co-founder, Chief Product & Technology Officer at Katilyst | vCISO | IANS Faculty | Keynote Speaker | Thought Leader | Community Builder | Security Champion Champion | Software Engineer at heart

    8,976 followers

As an application security leader, I admit I've thrown too many invalid results over the wall at the developers. It's embarrassing, because as a developer myself, part of the reason I wanted to join the cybersecurity industry was to share my perspective and be an advocate for developers, having experienced plenty of negative interactions with security teams in the past.

    Here's what I faced that led me to the decisions I made:

    - Leadership (and self-imposed) pressure to show early wins and value from my efforts, which should have been resisted by setting clearer expectations.
    - Promises from the chosen tool vendor about result accuracy that should have been better validated and triaged before sharing with the devs.
    - Security team resource constraints that led me to rely too much on the developers' time and expertise as opposed to my team's.
    - Compliance requirements that went only as far as showing scan results, without regard for effective remediation.

    The decisions were made carefully, methodically, and logically, but they led me down the wrong path. All I can say is that I have clear firsthand lessons from all of this, and I now have a very strong and compelling case for the strategy I employ based on this experience. The developers' time is the lifeblood of a successful software product/solution.

    I believe we in the cybersecurity industry need to help developers as much as possible and pursue the following, with the assistance of AI/LLMs:

    - More accurate security scan tool results through technology advancements
    - At least initial "low-hanging fruit" triage efforts from the security team
    - Tuning and phasing the rollout of tool scan results, starting with the result classes most likely to be accurate
    - Accurate codebase mapping for refactor and fix recommendations
    - Automated exploit examples to help build a strong case for fixing
    - Severity/prioritization that takes into account the "full picture" of the environment through asset inventory and an understanding of compensating controls

    #securitychampions #securityculture #securityawareness #applicationsecurity #productsecurity #softwaresecurity #gamification #proactivesecurity
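    The "tune and phase the rollout, starting with the result classes most likely to be accurate" point above can be sketched as a simple split. The per-rule true-positive rates and the 0.75 threshold are hypothetical numbers standing in for whatever a team measures from its own past triage.

    ```python
    # Phase scan results: forward only finding classes whose measured
    # true-positive rate clears a threshold; the rest stay with the
    # security team for triage. All rates below are hypothetical.
    TRUE_POSITIVE_RATE = {
        "hardcoded-secret": 0.95,
        "sql-injection": 0.80,
        "weak-crypto": 0.60,
        "taint-analysis-generic": 0.25,
    }

    def phase_findings(findings, min_accuracy=0.75):
        """Split findings into (forward_to_devs, security_team_triage).
        Unknown rule classes default to 0.0 and stay with security."""
        forward, triage = [], []
        for f in findings:
            rate = TRUE_POSITIVE_RATE.get(f["rule"], 0.0)
            (forward if rate >= min_accuracy else triage).append(f)
        return forward, triage
    ```

    As the security team triages the noisier classes and their measured accuracy improves, those classes cross the threshold and graduate to direct developer delivery, which is the phased rollout the post describes.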

  • View profile for Rajdeep Saha

    Founder - Stealth EdTech Startup | Bestselling Author & Educator | Former Principal Solutions Architect @AWS | YouTuber (100K+) | Public Speaker

    56,443 followers

Security ≠ Slower Releases.

    If your security process blocks shipping, you don't need less security; you need better systems. Last month at KCD DC, Rodrigo Bersa and I shared how teams can scale securely without losing velocity. Quick summary 👇

    The bottleneck (we've all seen it): Dev teams own apps. SecOps owns infra and guardrails. Every new cluster, RDS instance, or policy becomes a ticket. Manual reviews pile up. Velocity dies.

    What works instead: a Kubernetes-centric platform that bakes security into the golden path. Four moves that change the game:

    1. Policy as Code: Enforce security at the door with Kyverno/OPA (admission policies, RBAC, network policies, Pod Security Standards). No policy, no deploy.
    2. GitOps as an Immutability Firewall: Flux/Argo CD keep clusters drift-free. If someone changes the cluster directly, Git reconciles it back, so consistency is the default.
    3. Shift Left: Catch issues in dev, not prod. Fixing a prod security issue can be orders of magnitude more expensive. Test your security before someone else does.
    4. Aggregated Modules (Blueprints): Package app + infra + security together. Developers get a simple, self-service template; SecOps stops being the bottleneck; velocity goes up.

    If this resonates, watch the full talk + Q&A here 👉 https://lnkd.in/gJ2fCFcj

    Big thanks to KCD Washington DC for hosting an amazing event.

    ----
    Get byte-sized tips on career switch, cloud, AI, system design, behavioral, and interviews in a weekly newsletter (FREE): https://lnkd.in/eG7XdHmN

    #kubernetes #aws #kcd
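    To make the "no policy, no deploy" admission idea concrete, here is a hand-rolled toy check over a simplified pod manifest. Real policy engines like Kyverno or OPA evaluate declarative policies against full Kubernetes resources; this sketch only illustrates the gate's shape, and the two rules (no privileged containers, no running as root) echo common Pod Security Standards restrictions.

    ```python
    # Toy admission-style policy: reject a simplified pod manifest that
    # runs privileged or as root (UID 0). Illustrative only; real
    # enforcement belongs in an admission controller such as Kyverno/OPA.
    def admit(pod):
        """Return (allowed, reasons) for a simplified pod manifest dict."""
        reasons = []
        for c in pod.get("spec", {}).get("containers", []):
            sc = c.get("securityContext", {})
            if sc.get("privileged"):
                reasons.append(f"container {c['name']}: privileged mode denied")
            if sc.get("runAsUser", 0) == 0:
                reasons.append(f"container {c['name']}: must not run as root")
        return len(reasons) == 0, reasons

    good = {"spec": {"containers": [
        {"name": "app", "securityContext": {"runAsUser": 1000}}]}}
    bad = {"spec": {"containers": [
        {"name": "app", "securityContext": {"privileged": True}}]}}
    ```

    The key property, mirrored from real admission policies, is that a denial returns machine-readable reasons, so the developer gets a remediation path instead of a silent block.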
