Open Source Software Trends

Explore top LinkedIn content from expert professionals.

  • Swami Sivasubramanian (Influencer)

    VP, AWS Agentic AI

    189,954 followers

    We’ve seen customers experience this pattern: teams ask an AI agent to fix a bug, and the agent refactors three helper functions, adds defensive null checks everywhere, and rewrites code that worked fine. The core problem is that developers and the agent aren't working with the same boundary between what to fix and what to leave alone.

    We built Kiro's bug-fixing workflow around something we call property-aware code evolution. Every bug fix has dual intent: fix the buggy behavior surgically, and preserve everything else. But how does this work in practice? How does Kiro know which is which?

    Kiro first proposes a bug condition (the scenarios it believes trigger the bug) and a postcondition (what should happen instead, absent the bug). Based on these, Kiro creates two testable properties: the fix property, which checks that the fixed code works correctly on buggy inputs, and the preservation property, which ensures behavior is preserved everywhere else. You can iterate with Kiro over both properties until you’re comfortable with the agent’s hypothesis.

    Once that’s in place, Kiro first tests both properties against the unfixed code. Fix-property tests should fail, reproducing the bug exactly where predicted. Preservation tests should pass, capturing baseline behavior for the non-buggy scenarios. After gathering these results on the unfixed code, Kiro applies a fix and retests both properties. If the fix worked, both kinds of property tests should now pass, letting us know that we fixed the bug without changing anything else.

    Because all of this is backed by property-based tests, Kiro generates and runs hundreds of input variations that cover many edge cases to narrow down the problem and test the fix comprehensively. This approach gives teams the confidence to let Kiro work more autonomously without sacrificing understanding of what it’s doing to solve the problem.

    Our team dives into property-aware code evolution in this blog. Learn how to use agents to fix complex bugs more reliably with Kiro ➡️ https://lnkd.in/gWZkBcVX
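The fix/preservation split described above can be sketched with plain property-style tests. Everything below is invented for illustration (a toy pricing function with an off-by-one discount bug); it is not Kiro's actual implementation, only the shape of the workflow: both properties run against the unfixed code first, then again after the fix.

```python
import random

UNIT = 100  # price per unit, in cents (invented for this sketch)

def price_buggy(qty):
    # Bug: the 20% bulk discount meant for qty >= 10 also fires at qty == 9
    return qty * UNIT * (0.8 if qty >= 9 else 1.0)

def price_fixed(qty):
    # Surgical fix: discount threshold corrected to 10
    return qty * UNIT * (0.8 if qty >= 10 else 1.0)

# Bug condition: inputs the agent believes trigger the bug
def bug_condition(qty):
    return qty == 9

# Fix property: on buggy inputs, the code matches the postcondition
def fix_property(fn, qty):
    return fn(qty) == qty * UNIT  # no discount below 10 units

# Preservation property: everywhere else, behavior is unchanged
def preservation_property(fn, qty):
    return fn(qty) == price_buggy(qty)

samples = [random.randint(0, 50) for _ in range(500)]
non_buggy = [q for q in samples if not bug_condition(q)]

# Step 1: run both properties against the UNFIXED code.
assert not fix_property(price_buggy, 9)                     # fails -> bug reproduced
assert all(preservation_property(price_buggy, q) for q in non_buggy)  # baseline holds

# Step 2: retest after the fix -- both should now pass.
assert fix_property(price_fixed, 9)
assert all(preservation_property(price_fixed, q) for q in non_buggy)
print("bug fixed; everything else preserved")
```

Generating the inputs randomly is what makes this property-based rather than example-based: hundreds of sampled quantities probe the preservation claim, while the bug condition pins the fix claim to the predicted trigger.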

  • Open source is running into a new challenge: AI can now generate code faster than maintainers can review it. I call this Slopen Source. Coding agents make it easy to produce pull requests, but they do not bear responsibility for maintaining the codebase, understanding architectural decisions, or protecting long-term quality. The burden shifts to maintainers, who must verify and support code that contributors may not fully understand. In this article, I explore how this shift is affecting contribution dynamics and maintainer burnout, and how some ecosystems and platforms are starting to respond with new norms, contributor verification, stronger testing expectations, and tools designed to reduce automated pull request noise. What are you seeing in open source to improve this?

  • Saanya Ojha (Influencer)

    Partner at Bain Capital Ventures

    80,178 followers

    For the past two years, the AI-in-code narrative has been about creation: auto-complete, copilots, and agents that promise to ship apps in minutes. For the first time, the story is expanding to include repair.

    This week, Google DeepMind launched CodeMender, an autonomous AI agent that hunts down vulnerabilities, drafts patches, tests them, critiques itself, and submits fixes to open-source repos. In its early phase, CodeMender has upstreamed 72 security fixes - some in codebases spanning millions of lines, the kind of work that would take human teams months. In other words: we’re teaching machines not just to write, but to atone.

    Historically - by which I mean, like, last year - cybersecurity was a human sport: a contest of builders and breakers, patchers and penetrators. Now, both sides are automating.
    - Attackers fine-tune LLMs to find zero-days, turning them into exploit copilots.
    - Defenders deploy repair agents to find and fix them.
    The result is an arms race between autonomous systems, unfolding at speeds far beyond human review cycles. Imagine the future: bugs and fixes flying past each other in the night, too fast for any human to follow. Security as algorithmic speed chess.

    And that sets up the deeper question CodeMender raises: What happens when software starts fixing itself? If an AI can autonomously detect and patch vulnerabilities, we edge toward self-healing infrastructure. But autonomy introduces new fragilities:
    ▪️ Adversarial corruption. An attacker could poison the model’s feedback loop, tricking its “critique agents” into approving malicious code. The line between “defender” and “attack surface” is one bad update away.
    ▪️ Human deskilling. Overreliance breeds amnesia: “It’s fine, CodeMender will fix it” is a dangerous cultural default.
    ▪️ Accountability black holes. If an AI-generated patch breaks production or causes a breach, who holds the bag - the developer, the model, or Google? Your Chief Risk Officer wants to know.

    And yet, doing nothing isn’t safer. We are already drowning in insecure code - much of it written by humans on deadlines and LLMs on vibes. The attack surface has outgrown human capacity to defend it.

    CodeMender represents more than automated patching. It’s a prototype for reflexive software - systems that monitor and adapt their own health. It works two ways:
    → Reactively, patching known vulnerabilities before they’re exploited.
    → Proactively, refactoring brittle code to eliminate entire classes of vulnerabilities before they occur.

    That’s not just “AI for cybersecurity.” That’s AI as immune system - a distributed intelligence layer quietly testing, healing, and hardening the world’s codebase. Autonomy in generation led us to creation at scale. Autonomy in repair might just lead us to resilience at scale. In an age where more software is written by models than by people, self-healing becomes survival - the only way to keep the lights on in a digital world built faster than it can be understood.

  • Shivam Gupta

    Helping founders win with AI, social media marketing, and personal branding | Favikon Top 30 Creator in India | Trusted by 800+ brands

    62,663 followers

    We ran a retrospective specifically on our PR review process. Asked the team one question: "What part of code review do you find least valuable?"

    Every single person said some version of the same thing: chasing down AI findings that turned out to be nothing. Or worse - spending an hour proving a real bug was real, only to fix it in 10 minutes. The time ratio was backwards. More time proving than fixing.

    We mapped it out. The pattern was consistent: AI flags issue → developer cannot reproduce → developer deprioritizes → finding sits → sometimes a bug ships. Nobody on the team was being lazy. The incentives were just wrong. If verification is expensive and uncertain, rational developers save it for when they have slack. Which is never.

    CodeAnt AI just launched Steps of Reproduction inside PRs. The finding comes with the trigger conditions, the execution path, the proof. Verification goes from "30-minute investigation" to "2-minute confirmation."

    The retrospective basically wrote the product roadmap for them. Every complaint pointed to the same gap. They closed it.
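To make the economics concrete, here is a minimal sketch of a finding that ships with its own reproduction steps. The `Finding` class, its field names, and the example values are all invented for illustration; this is not CodeAnt AI's actual schema, just the idea that a trigger, an execution path, and a repro command turn an open-ended investigation into a quick confirmation.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    rule: str                      # what was flagged
    file: str
    line: int
    trigger: str = ""              # input/state that provokes the bug
    execution_path: list = field(default_factory=list)  # call chain to the fault
    repro_command: str = ""        # one command that demonstrates it

    def review_effort(self) -> str:
        # With a concrete trigger and a runnable repro, the reviewer
        # confirms; without them, the reviewer must investigate.
        return "confirm" if self.trigger and self.repro_command else "investigate"

# Hypothetical flagged issue, with reproduction steps attached
finding = Finding(
    rule="null-deref",
    file="billing/invoice.py",
    line=88,
    trigger="invoice.customer is None when created via bulk import",
    execution_path=["bulk_import()", "create_invoice()", "invoice.customer.email"],
    repro_command="pytest tests/test_bulk_import.py::test_no_customer -x",
)
print(finding.review_effort())  # → confirm
```

The same finding without `trigger` and `repro_command` would land in the "investigate" bucket, which is exactly the state where findings sit and bugs occasionally ship.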

  • Clem Delangue 🤗 (Influencer)

    Co-founder & CEO at Hugging Face

    302,459 followers

    Important report: "Stopping Big Tech from becoming Big AI"

    "Open source AI has an important role to play in countering a lack of interoperability and access, and fostering innovation, by lowering barriers to entry, particularly for smaller and less well-resourced actors. Building on open source platforms, developers can create customized AI models and applications without having to make massive investments in computing power, data and other inputs. Open source also supports critical public interest research on the safety and trustworthiness of AI – for example, ensuring that researchers have access to foundation models or their training data, in order to carry out assessments of harmful biases." https://lnkd.in/emzD6rUy

  • Diksha Dutta

    Head of Growth | Podcast Host | Published Author

    12,015 followers

    I’ve been reflecting on my conversation with Nader Dabit, currently building developer communities at Eigen Labs, and formerly with Amazon Web Services (AWS) and The Graph. What struck me most was how many of his insights we have been actively applying while building the developer + founder community at soonami.io GmbH. Here are the top takeaways I’ve been leaning on 👇

    1/ Building a developer community is a marathon, not a sprint. Developers want to go where there’s traction, but traction doesn’t happen overnight. It takes time, trust, and a lot of value creation.

    2/ Transparency builds trust. Be open about the trade-offs of your platform. No tech is perfect. Developers appreciate honesty over hype. If they know what they’re working with, they can make informed decisions.

    3/ Help developers whether they use your product or not. The best DevRel teams provide value beyond their own ecosystem. Answer questions, share knowledge, and be part of the broader developer journey. This goodwill always comes back.

    4/ Meet developers where they are. Not every developer is hanging out on Twitter. Find them in Discord, Telegram, GitHub, hackathons, or niche forums. Engage where they feel comfortable, not where it's easiest for you.

    5/ Hackathons: not just about numbers, but long-term impact. Instead of attracting bounty hunters who leave after a quick win, structure your hackathons to support serious builders. Offer milestone-based funding, mentorship, and ecosystem support.

    6/ Long-term DevRel isn’t about short-term metrics. It's not just about tracking engagement. It’s about relationship-building over months (or years). DevRel should create a ripple effect - one great project inspires others.

    7/ Cross-functional collaboration is key. Building a developer community isn’t just a DevRel task. Marketing, engineering, and leadership must align to provide the best support for developers.

    8/ One strong builder > 100 inactive users. It’s not about quantity. Even if just one project from your hackathon or community scales, it can change the entire ecosystem.

    9/ Want to break into DevRel? Here’s Nader’s advice:
    🔹 Deeply understand the product
    🔹 Build relationships with internal teams
    🔹 Focus on providing genuine value

    10/ Final takeaway: Developer communities thrive on authenticity, support, and long-term thinking. It’s not about pushing a product, it’s about empowering people to build.

    What’s your biggest takeaway from this? Let’s discuss!

  • Brian Douglas

    Co-Founder | CEO at the Paper Compute Company

    6,893 followers

    In the age of AI-native development, code review is becoming the most important part of generative AI workflows. As models improve, the workload we hand off to agents is increasing, but this trend has us losing valuable context in our codebase.

    The real power of AI code review isn't just catching bugs faster; it's preserving institutional knowledge that would otherwise vanish. When an agent generates code, the review process becomes your team's opportunity to understand why decisions were made, what patterns are emerging, and how different parts of your system connect. Without this review step, you're essentially accepting black-box contributions.

    My approach to AI code review treats these interactions as knowledge transfer opportunities. Whether the code comes from your teammates or from agents, the review workflow ensures context flows both ways: the AI learns your standards, and your team learns what the AI is building. This is especially critical as we move toward more autonomous coding agents that can handle entire features.

    The shift from "AI writes code faster" to "AI helps us review and understand code at scale" might be the most important evolution in developer tooling right now.

    #AICodeReview #DeveloperExperience #OpenSource

  • Rock Lambros (Influencer)

    Securing Agentic AI @ Zenity | RockCyber | Cybersecurity | Board, CxO, Startup, PE & VC Advisor | CISO | CAIO | QTE | AIGP | Author | OWASP AI Exchange, GenAI & Agentic AI | Security Tinkerer | Tiki Tribe

    21,397 followers

    Your AI agent just pushed 47 security patches. How many did you actually review?

    Google DeepMind launched CodeMender last month. OpenAI followed with Aardvark. Both promise to identify and fix vulnerabilities autonomously. There are key architectural differences between the two.

    CodeMender combines static analysis, fuzzing, SMT solvers, and LLM reasoning. It validates fixes through differential testing before any human sees them. DeepMind reports 72 accepted patches across open-source projects.

    Aardvark takes a different path. It's LLM-first. The agent threat-models your repo, scans commits, validates exploitability in a sandbox, then generates patches. OpenAI claims 92% recall on test repos and 10 disclosed CVEs.

    Both sound great until you think about what they're actually doing. These agents write code probabilistically. They generate fixes based on learned patterns, not deterministic logic. You get speed. You get coverage. But you also get vibe coding at scale.

    Anyone who's ever vibe-coded knows that new bugs often emerge, or previously fixed bugs often magically reappear, when you use AI to fix errors in the code. And they aren't always obvious. It's subtle logic errors that pass your CI because the agent wrote tests that match its own flawed assumptions. It's the gap between "this looks right" and "this is provably right."

    Program analysis can verify properties. Fuzzing can stress edge cases. But an LLM? It's guessing with high confidence. CodeMender layers validation on top of generation. That's better. But both tools still rely on probabilistic code synthesis, and both require human review as the last line of defense. Humans can't keep pace with autonomous agents. Not at scale.

    You want deterministic verification for code that patches security vulnerabilities. Anything less adds more security debt to the pile.

    The question isn't whether these tools are useful. They are. The question is whether your organization has the testing rigor to catch what they miss.

    Do you trust probabilistic code generation to patch your production vulnerabilities?

    👉 Follow for more AI and cybersecurity insights with the occasional rant

    #AIgovernance #cybersecurity #AppSec #VibeCoding
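Differential testing, which the post credits CodeMender with using to validate fixes before a human sees them, fits in a few lines: run the original and the patched version over many inputs and audit every disagreement. The vulnerable/patched pair below is an invented stand-in (an 8-bit wraparound bug), not DeepMind's code; the point is that the divergence set, not the patch diff, is what a reviewer or critique agent must explain.

```python
import random

def original(x):
    # "Vulnerable" stand-in: arithmetic silently wraps at 8 bits
    return (x * 3) % 256

def patched(x):
    # Agent's patch: widens the arithmetic, removing the wraparound
    return x * 3

def diff_test(f, g, inputs):
    """Return every input on which the two versions disagree."""
    return [x for x in inputs if f(x) != g(x)]

inputs = [random.randint(0, 1000) for _ in range(2000)]
divergences = diff_test(original, patched, inputs)

# Audit step: every disagreement must be explained by the intended fix.
# Here, the versions may only differ where the old code wrapped (x*3 >= 256);
# any other divergence would mean the patch changed unrelated behavior.
assert all(x * 3 >= 256 for x in divergences)
print(f"{len(divergences)} divergences, all explained by the intended fix")
```

This is the deterministic backstop the post argues for: the patch itself may come from probabilistic synthesis, but the divergence check either passes or it does not.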
