Stock markets are panicking about AI coding assistants replacing software companies. $2 trillion wiped off software market caps in days. Indian IT companies alone lost $50 billion. But almost nobody is talking about the security debt crisis we are creating with these assistants.

We are writing code 56% faster. We are also breaking our architecture 153% faster.

Copilot. Cursor. Q. These aren't just "tools." They are privileged agents. We are granting them deep access to file systems, shells, credentials, and codebases. We are letting them execute commands with the developer's own permissions. But we are protecting them with security models that are probabilistic, not deterministic.

Let's look at what researchers have actually demonstrated recently:
- Workspace hijacking: tools manipulated into executing arbitrary system commands via simple "pre-planning" steps.
- Data exfiltration: hidden tricks in rendered content (such as SVGs) used to bypass security and leak repo secrets.
- Prompt injection: malicious instructions hidden in READMEs or white-text comments that rewrite your configuration or steal API keys.
- Hallucinated dependencies: assistants confidently recommending packages that don't exist, or worse, installing malicious ones.

The scary part? These tools execute with your permissions. When a coding assistant is weaponized by a hidden comment, the attack surface isn't the tool. It's the trust model.

Stop treating these as productivity add-ons. Start treating them as privileged access endpoints. Build your policy enforcement pipeline before you onboard these tools, not after a breach.

If you are an engineering leader, you need three controls now:
1. Semantic context filtering. Adopt a "shift left" approach: filter credentials and PII before the codebase is exposed to the model. Data-first security means the secret never reaches the assistant.
2. Hardened MCP gateways. To combat vulnerabilities like CVE-2025-6514, you cannot allow direct external connections. Use model routers and sanctioned registries to govern tool access.
3. Real-time anomaly monitoring. Detect sudden requests for security-sensitive code. This is often the only way to catch prompt injection attempts before a workstation is compromised.

The question is not whether AI coding assistants are useful. The question is whether you are treating code as a sovereign asset or just a byproduct of speed.

What controls has your team implemented for AI assistants? Follow Vinod Bijlani for more insights
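The first control, semantic context filtering, boils down to redacting secrets before any code leaves the workstation. A minimal sketch of that idea, assuming a small set of regex detectors (the patterns and the `[REDACTED:...]` marker format are illustrative, not a complete secret taxonomy):

```python
# Minimal sketch of "the secret never reaches the assistant": redact likely
# credentials and PII from source text before it is sent to a model.
# The patterns below are illustrative, not an exhaustive secret taxonomy.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key":    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "email":          re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def filter_context(text: str) -> tuple[str, list[str]]:
    """Redact matches in place; return the safe text plus the rule names that fired."""
    hits = []
    for name, pattern in PATTERNS.items():
        text, count = pattern.subn(f"[REDACTED:{name}]", text)
        if count:
            hits.append(name)
    return text, hits

snippet = 'client = S3("AKIAIOSFODNN7EXAMPLE")  # contact ops@example.com'
safe, hits = filter_context(snippet)
print(safe)  # the key and the email are replaced with [REDACTED:...] markers
```

A real deployment would sit in the IDE plugin or gateway layer, so unredacted context can never reach the model endpoint.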
Protecting Software Development Workflows in 2025
Summary
Protecting software development workflows in 2025 means putting safeguards in place to defend the process of creating, updating, and deploying software from threats like cyber attacks, insider risks, and inadvertent data leaks—especially as AI tools and automation reshape how teams work. It's about treating the code and the systems around it as valuable assets, not just focusing on productivity, and building controls that keep up with the speed and complexity of modern development.
- Control access wisely: Set up role-based controls and regularly review who has privileged access, making sure permissions are based on job function and quickly removed if they're no longer needed.
- Monitor and audit: Track developer actions and system changes in real time, using automated tools and maintaining clear logs so you can spot unusual behavior or security gaps early.
- Secure the supply chain: Pin exact software versions, enforce lockfiles, and scan for tampered dependencies to prevent malicious code from sneaking in through trusted packages or automated workflows.
TRUE STORY: A trusted developer embedded a "kill switch" that locked out thousands of corporate users worldwide—triggered the moment his credentials were revoked. The cost? Hundreds of thousands in damages. The lesson? Insider threats from privileged users are real, and they're escalating.

🧾 Case Summary
In August 2025, Davis Lu, a former software developer at a large corporation, was sentenced to four years in federal prison for deploying malicious code across his employer's network. See https://lnkd.in/edJggBKu. After a corporate restructuring reduced his access, Lu planted sabotage scripts, including a "kill switch" that activated when his account was disabled. The code crashed servers, deleted coworker profiles, and locked out thousands of users globally. His actions caused extensive disruption and financial loss, and his digital footprint revealed deliberate planning to evade detection.

✅ Help Prevent Cyber Sabotage from a Privileged Insider
1. Implement Role-Based Access Controls (RBAC): Limit access to sensitive systems based on job function. No single employee should hold unchecked privileges.
2. Conduct Regular Privilege Audits: Review who has elevated access—and why—at least quarterly. Remove dormant or unnecessary accounts promptly.
3. Monitor for Anomalous Behavior: Use behavioral analytics to flag unusual activity like privilege escalation, mass deletions, or off-hours access.
4. Enforce Code Review and Change Management: Require peer review and approval for all code deployments, especially in production environments.
5. Deploy Insider Threat Detection Tools: Invest in platforms that correlate user behavior, access logs, and system changes to identify risks early.
6. Establish a Clear Offboarding Protocol: Disable access in a controlled sequence. Monitor systems closely during and after termination events.
7. Encrypt and Log Developer Actions: Maintain immutable logs of code changes and admin actions. Encryption helps ensure integrity; logging helps ensure accountability.
8. Foster a Culture of Transparency and Respect: Many insider threats stem from resentment or perceived injustice. Proactive communication and fair treatment matter.
9. Engage Legal and Cyber Teams Early: Legal counsel should be looped in on high-risk terminations, especially those involving privileged users.
10. Build Relationships with Law Enforcement: The FBI encourages proactive engagement to mitigate insider threats. Don't wait until it's too late.

What other recommendations would you add? Please feel free to include them in the comments.
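The "immutable logs" in control 7 are commonly built as a hash chain: each entry commits to the hash of the previous one, so editing or deleting any record breaks verification of everything after it. A minimal sketch of that idea, not a production audit system:

```python
# Sketch of an append-only, tamper-evident audit log: each entry's hash
# covers the previous entry's hash, so retroactive edits are detectable.
import hashlib
import json

def append(log: list[dict], actor: str, action: str) -> None:
    """Add an entry whose hash covers its content and the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edited, inserted, or dropped entry fails."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["prev"] != prev or hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append(log, "dev1", "deploy build 42")
append(log, "admin", "disable account dev1")
assert verify(log)
log[0]["action"] = "deploy build 41"  # tampering with history...
assert not verify(log)                # ...is detected
```

In practice the chain head would be anchored somewhere the insider cannot write, such as a separate log service or WORM storage.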
-
Shift-Left Security Isn't Slowing You Down—Your Bug Backlog Is

The 2017 Equifax breach stemmed from a vulnerability that could've been caught during coding—not in a pentest. Fast-forward to 2024: 78% of critical flaws are still found post-deployment (Veracode Report). Shift-left isn't a buzzword. It's a $20M lesson.

Myth: "Security-first coding delays launches."
Reality: Teams using shift-left practices fix bugs 11x faster (Snyk, 2024).

How Top Teams Hack Security Into Velocity:
1. Code With Guardrails: Netflix embeds security rules directly into IDEs. Example: auto-reject code with eval() functions; flag hardcoded secrets as you type.
2. Automate the Boring Stuff: Spotify's "Security Champions" program trains devs via gamified labs (think: Capture the Flag for SQLi).
3. Shift-Left ≠ Shift-Blame: Adobe's DevSecOps teams measure "Time to Fix" instead of "Bugs Found"—rewarding collaboration over finger-pointing.

The Controversy Is Missing the Point: Yes, adding SAST tools to your CI/CD pipeline might add 2 hours to sprint cycles. But fixing a single prod exploit post-launch takes 40+ hours (and your CISO's sanity).

Actionable Steps:
-> Tool Stack: Start with Snyk, Checkmarx, or GitGuardian. They plug into existing workflows.
-> Training: Require 1 security PR review per dev monthly.
-> Metrics: Track "Escaped Vulnerabilities" (bugs found post-commit) to prove ROI.

If your devs see security as a bottleneck, your process is broken—not their mindset. Is "shift-left" a blocker or an enabler in your org? Be honest.

#DevSecOps #ShiftLeft #Cybersecurity #SoftwareDevelopment #Tech
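The guardrails in step 1 amount to pattern checks that run in the IDE or a pre-commit hook. A toy sketch of that mechanism (the two rules and the severity labels are illustrative assumptions; real teams would wire this through SAST tools like the ones listed above):

```python
# Toy guardrail scanner in the spirit of step 1: reject eval() outright and
# flag hardcoded secrets at commit time. Rules are illustrative, not complete.
import re

RULES = [
    ("reject", re.compile(r"\beval\s*\(")),                                # dynamic eval
    ("flag",   re.compile(r"(?i)(password|api_key|secret)\s*=\s*['\"]")),  # literal secret
]

def scan(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, severity, offending_line) for each rule hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for severity, pattern in RULES:
            if pattern.search(line):
                findings.append((lineno, severity, line.strip()))
    return findings

code = 'api_key = "sk-123"\nresult = eval(user_input)\n'
print(scan(code))  # one "flag" finding on line 1, one "reject" on line 2
```

Hooked into pre-commit, a non-empty "reject" list would fail the commit, which is the "escaped vulnerabilities" metric pushed to zero at the cheapest possible point.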
-
🚨 In the AI era, software moves at machine speed. So do supply chain attacks.

The npm axios compromise, the enormously popular JavaScript HTTP client with over 300 million weekly downloads, is a sharp reminder of what has changed. This was not typo-squatting. Not a fake package. Not a random dependency buried deep in the graph. This was compromise through a trusted path in the real software supply chain.

That is the point leaders need to internalize. The problem is no longer just whether developers write secure code. It is whether the systems, packages, and automation they rely on can still be trusted when software is being assembled, shipped, and updated at machine speed.

A short exposure window is all it takes. One compromised package. One CI run. One developer machine. One production workflow. That is enough.

A few things every engineering and security leader should be driving right now:
1. Pin exact versions. Stop relying on loose defaults.
2. Enforce lockfiles and deterministic builds in CI/CD.
3. Block install scripts wherever they are not explicitly required.
4. Scan continuously for malicious and tampered dependencies, not just known vulnerabilities.
5. If you were exposed, assume compromise. Isolate, rebuild, and rotate secrets. Do not just patch and move on.

Software supply chain security is no longer a developer hygiene issue. It is a leadership issue. It is operational resilience. It is trust. And increasingly, it is board level. The teams that get ahead here will not be the ones reacting fastest after the next incident. They will be the ones that built the controls before it happened.

For security and engineering leaders: what is the single control you trust most right now against this class of attack?

#SupplyChainSecurity #OpenSourceSecurity #DevSecOps #Cybersecurity #npm Snyk
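Points 1 and 3 above are mechanically checkable. A rough sketch against npm's standard package.json and .npmrc files (the helper names, sample manifest, and the "loose range" regex are illustrative assumptions, not a complete audit):

```python
# Sketch: flag loose semver ranges in package.json and check that npm install
# scripts are disabled via .npmrc. The regex is a rough approximation of
# "not an exact pin"; a real audit would also verify the lockfile.
import json
import re

LOOSE = re.compile(r"^[\^~]|^\*$|^latest$|[<>x]")  # ^1.2.3, ~1.2.3, *, latest, ranges

def loose_deps(package_json: str) -> list[str]:
    """Return 'name@range' for every dependency not pinned to an exact version."""
    manifest = json.loads(package_json)
    flagged = []
    for section in ("dependencies", "devDependencies"):
        for name, rng in manifest.get(section, {}).items():
            if LOOSE.search(rng):
                flagged.append(f"{name}@{rng}")
    return flagged

def install_scripts_blocked(npmrc_text: str) -> bool:
    """True if .npmrc sets ignore-scripts=true, blocking postinstall hooks."""
    return any(line.strip() == "ignore-scripts=true" for line in npmrc_text.splitlines())

pkg = '{"dependencies": {"axios": "^1.7.0", "left-pad": "1.3.0"}}'
print(loose_deps(pkg))                          # the caret range is flagged
print(install_scripts_blocked("ignore-scripts=true\n"))
```

Point 2 then follows from using the lockfile-enforcing install path in CI (for npm, `npm ci` fails when the lockfile and manifest disagree).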
-
GenAI is exploding inside companies, and so are accidental data leaks.

From my CrowdStrike years I learned a hard truth: most security failures start as UX failures: unclear choices, over-permissive defaults, silent errors.

↳ In 2025 I keep seeing the same pattern in enterprise rollouts:
➤ Teams are running around 66 GenAI apps per org, with a handful in the high-risk bucket. GenAI now drives a meaningful share of DLP incidents, and it is rising fast.

The gap is not the model; the gap is the interface. Your interface is a security control. Design it like one.

↳ A human-centered GenAI safety checklist:
➤ Data boundaries in the flow: show exactly what fields go to the model; give a quick "exclude sensitive data" toggle.
➤ Identity awareness: display "You are acting as: Role," enforce least privilege on agent tools, and add a "review before run" gate for high-impact actions.
➤ Provenance by default: label AI-written content, show source files and the last tool run, and make a "why this" explainer one click away.
➤ Safe defaults: workspace knowledge off by default, paste-clean for PII, auto-redact on copy and export.
➤ Injection hygiene: scan prompts and tool outputs; block on detection with clear, teachable microcopy.
➤ Sandboxes and rate limits: ship agents in "safe mode"; cap actions until confidence and telemetry are proven.
➤ Auditability in the UI: a "view logs" link near any agent action, showing who did what, when, and with which data.
➤ Consent that travels: per-feature "exclude from model training," persistent across chat, docs, and dashboards.
➤ Error states that help: explain what failed, what was protected, and how to complete the task safely.

↳ How I implement this with product teams:
➤ Map workflows and stakeholders: where does sensitive data actually move?
➤ Audit readiness: roles, data classes, and risk moments in the UI.
➤ Scan tools and vendors: approve a short list with clear policies.
➤ Trial small experiments: measure incident rate and task completion time.
➤ Embed into operations: instrument everything, upgrade defaults, train humans.
➤ Repeat and scale: retire what does not earn trust.

Useful over shiny is still the rule. Great UX is not just delight; it is defense in depth.

Read the report by Palo Alto Networks and share it with your network. Follow Rose B. for human-centered AI and practical UX research.
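The "review before run" gate from the checklist can be expressed as a tiny policy function: high-impact agent actions require explicit human confirmation, everything else runs directly. A sketch where the action classification and the approver callback are illustrative assumptions:

```python
# Sketch of a "review before run" gate for agent actions: anything in the
# high-impact set needs human sign-off. The action names and the approver
# callback are illustrative, not a real product API.
from typing import Callable

HIGH_IMPACT = {"delete", "export", "share_external", "run_shell"}

def gated_run(action: str, execute: Callable[[], str],
              approve: Callable[[str], bool]) -> str:
    """Run low-impact actions directly; require approval for high-impact ones."""
    if action in HIGH_IMPACT and not approve(action):
        return f"blocked: {action} requires review"
    return execute()

# "Safe mode": an approver that denies everything until telemetry earns trust
print(gated_run("summarize", lambda: "3-line summary", lambda a: False))  # runs
print(gated_run("delete", lambda: "deleted 120 files", lambda a: False))  # blocked
```

The same shape covers the checklist's sandbox idea: ship with a deny-all approver, then relax it per action class as audit logs accumulate.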
-
Two weeks ago, I wrote that attackers no longer need code review approval. They just need automation to run.

This week's follow-up is even more direct: a lot of the recent GitHub Actions attacks were not exotic zero-days. They were basic workflow hygiene failures — mutable action tags, unsafe use of untrusted inputs, and over-privileged tokens. Those are exactly the kinds of issues disciplined policy-as-code scanning should catch before a pipeline ever runs.

This is why supply chain security has to start with policy-as-code discipline and hygiene in CI/CD:
- Review workflow files like production code.
- Pin third-party actions by SHA.
- Default tokens to least privilege.
- Treat PR metadata, comments, and other untrusted inputs as hostile.
- Enforce these checks continuously, not occasionally.

Attackers are not winning because of magical new zero-days. They are winning because basic CI/CD and software supply chain security hygiene is still inconsistent. Recent GitHub Actions attacks exploited workflow misconfigurations that should never make it to runtime: mutable tags, unsafe interpolation of untrusted input, and over-privileged tokens. Policy-as-code guardrails can catch many of these issues early and turn fragile pipelines into governed ones.

In 2026, supply chain security starts with workflow hygiene. If it runs in CI/CD, it needs guardrails.

https://lnkd.in/gz2ktNBF
#RSA #policyascode #softwaresupplychainsecurity
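"Pin third-party actions by SHA" is a natural policy-as-code rule. A minimal sketch that flags any `uses:` reference in a workflow file not pinned to a full 40-character commit SHA (a real policy engine would parse the YAML properly and cover more cases):

```python
# Minimal policy-as-code check: flag GitHub Actions `uses:` references that
# point at a mutable tag or branch instead of a full commit SHA.
import re

USES = re.compile(r"^\s*(?:-\s*)?uses:\s*([^\s#]+)")
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_yaml: str) -> list[str]:
    """Return every third-party action reference with a mutable version."""
    flagged = []
    for line in workflow_yaml.splitlines():
        m = USES.match(line)
        if not m:
            continue
        ref = m.group(1)
        if ref.startswith("./") or ref.startswith("docker://"):
            continue  # local and container actions: out of scope for this sketch
        _, _, version = ref.partition("@")
        if not FULL_SHA.match(version):
            flagged.append(ref)
    return flagged

workflow = """
jobs:
  build:
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
      - uses: tj-actions/changed-files@v44
"""
print(unpinned_actions(workflow))  # only the tag-pinned action is flagged
```

Run as a required CI check on workflow changes, this turns "review workflow files like production code" from a norm into an enforced gate.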
-
Heads up, developers and TPRM folks! A recent supply chain attack targeted the tj-actions/changed-files GitHub Action before March 14, 2025, leading to potential secret leaks from public repositories. Malicious code injected into CI workflows dumped CI runner memory, exposing workflow secrets in logs. While the immediate threat is mitigated, the risk of cached actions and already-leaked secrets remains.

What you need to do NOW:
> Review whether you are using tj-actions/changed-files.
> Rotate potentially exposed secrets.
> Replace the compromised action with a safer alternative.
> Reach out to your vendors and make sure they have a mitigation plan in place.

Read more about this critical issue and how to protect your repositories in this blog post from Wiz: https://lnkd.in/gUEsGx5V

#github #security #supplychain #devsecops #cybersecurity #tprm
-
The future of AppSec isn't about chasing bugs or drowning in alerts. It's about capturing intent, governing design, and enabling every contributor (human or AI) to build securely by default.

AI-native development is evolving fast, and traditional AppSec can't keep up unless we rethink it. We're moving toward a model built on composable, collaborative security layers throughout the SDLC. Here's what that looks like:
1. Secure Design: proactive security starts before code is written.
2. Secure-by-Default Components: building in security at the component level, so "probabilistic" dev doesn't derail us.
3. AI-Assisted Development with Guardrails: using generative AI that writes code both faster and safer.
4. Automated Remediation: fixing issues automatically or with minimal dev effort, from refactors to patch management.

This article dives deeper into our thesis: https://lnkd.in/gZT3c6XS

I had a lot of fun writing this article. It started as a series of conversations with friends, advisors, and colleagues including C. Coolidge, Michael Coates, Andrew Peterson, Kelley Mak, Frank Wang, Jerry Hoff, and Hunter Korn. A special thanks to Rami McCarthy and James Berthoty for feedback on the drafts. Working with deep thinkers like them is what keeps this exciting.

Curious how others are embedding secure guardrails into AI-first dev workflows. How are you making sure "secure by design" isn't just a tagline?
-
🚨 Suspicious Tag Change in AWS's GitHub Action: What Happened and Why It Matters

On August 4, 2025, something unusual happened in the popular aws-actions/configure-aws-credentials repo (used by 225k+ projects): the v4.3.0 release tag was created… deleted… and re-created to point at a different commit – all in a few hours.

🔍 Why that's a red flag
Semantic version tags are normally not changed and should ideally be immutable. Changing one after release can signal a supply chain attack – attackers have done this before to insert malicious code into existing tags.

🛡 StepSecurity Artifact Monitor caught it instantly
Our monitoring flagged the tag movement within minutes. Investigation showed it was not an attack – AWS had pulled a broken release and reissued it with a fix. But the pattern was identical to real compromises we've seen:
· tj-actions/changed-files (Mar 2025) – attackers gained write access to the repository of a widely used GitHub Action (tj-actions/changed-files), then updated multiple version tags to reference a malicious commit.
· reviewdog/action-setup (Mar 2025) – in a related attack, tags of the reviewdog/action-setup action were briefly pointed to a malicious commit containing a base64-encoded payload designed to steal secrets.

📌 The lesson
Even harmless changes to existing semantic version tags look exactly like attacks. If you're not monitoring tags, you might miss the compromised ones until it's too late.

✅ How Artifact Monitor and Workflow Run Policies Protect Our Customers
StepSecurity Artifact Monitor continuously watches popular GitHub Actions for suspicious tag changes — the kind that could indicate a supply chain compromise. Here's what this means for our customers:
· Early warning – detects anomalous tag movements within minutes so incidents are investigated before they spread.
· Automatic protection – if a tag change is confirmed malicious, the Action is immediately added to our Compromised Actions List.
· Workflow blocking – customers with the Compromised Actions Workflow Run Policy enabled have those Actions automatically blocked in their CI/CD pipelines, acting like a 24/7 on-call engineer safeguarding every workflow from malicious code execution.

💡 This isn't the only suspicious tag change we've caught. In our blog post (link in comments), we share more examples and explain exactly how StepSecurity Artifact Monitor and Workflow Run Policies work to protect your software supply chain.
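At its core, tag-movement detection of the kind described here reduces to comparing snapshots of a repository's tag-to-commit mapping over time. A minimal sketch; the snapshot source (for example, polling a Git host's API) and the sample SHAs are assumed:

```python
# Sketch of tag-movement detection: keep a baseline of tag -> commit SHA and
# alert when an existing tag points at a new commit. New tags are normal;
# re-pointed tags are the anomaly worth investigating.

def moved_tags(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Tags present in both snapshots whose commit SHA changed."""
    return sorted(
        tag for tag, sha in current.items()
        if tag in baseline and baseline[tag] != sha
    )

baseline = {"v4.2.1": "aaa111", "v4.3.0": "bbb222"}
current  = {"v4.2.1": "aaa111", "v4.3.0": "ccc333", "v4.3.1": "ddd444"}
print(moved_tags(baseline, current))  # ['v4.3.0'] – a re-pointed tag
```

As the post notes, a moved tag is not proof of compromise (AWS's was a legitimate reissue), so the alert should trigger investigation rather than automatic blame.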