GitHub's Copilot CLI just got smarter, and the logic behind it is worth understanding.

A new experimental feature called Rubber Duck adds a second AI model from a different model family to review your coding agent's work at key checkpoints: after planning, after complex implementations, and after writing tests. The idea? A model from a different AI family, trained differently, catches blind spots that the primary model might consistently miss.

Early results on SWE-Bench Pro show Claude Sonnet 4.6 + Rubber Duck closing 74.7% of the performance gap between Sonnet and Opus, at a lower cost than running Opus solo.

The bigger takeaway: the question for development teams may no longer be "which model is best?" It may be "which two models work best together?"

Worth a look if your team is evaluating AI tooling for complex, multi-file development work. https://lnkd.in/giSrfXjj

#GitHub #GitHubCopilot #DevOps #CodingAgents #AITools #SoftwareDevelopment #DeveloperProductivity
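The checkpoint loop described above can be sketched in a few lines of Python. Everything here is hypothetical: `primary_model` and `reviewer_model` are stand-ins for real API calls to models from two different families, and the checkpoint names simply mirror the post's description (plan, implementation, tests).

```python
# Minimal sketch of a cross-family "rubber duck" review loop.
# The two model functions are stubs standing in for real API calls.

CHECKPOINTS = ("plan", "implementation", "tests")

def primary_model(task: str) -> str:
    """Stub for the primary coding agent."""
    return f"draft output for: {task}"

def reviewer_model(artifact: str) -> dict:
    """Stub for the second-family reviewer; returns a verdict."""
    # A real reviewer would critique the artifact; here we approve
    # anything non-empty just to keep the sketch self-contained.
    return {"approved": bool(artifact), "notes": []}

def run_with_review(task: str) -> dict:
    """Run each checkpoint, consulting the reviewer before moving on."""
    results = {}
    for stage in CHECKPOINTS:
        artifact = primary_model(f"{stage} for {task}")
        verdict = reviewer_model(artifact)
        if not verdict["approved"]:
            # In a real loop the primary model would revise here.
            artifact = primary_model(f"revised {stage} for {task}")
        results[stage] = artifact
    return results

results = run_with_review("add pagination to the API")
print(list(results))  # the three reviewed checkpoints, in order
```

The design point is the separation: the reviewer never writes code, it only gates progress between stages, which is why a cheaper second model can still close part of the quality gap.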
GitHub Copilot CLI gets smarter with Rubber Duck AI model
More Relevant Posts
[New Blog Post] The Real Value of GitHub Copilot Rubber Duck

The next step for AI coding is not more generation; it is better judgement. That is why GitHub Copilot Rubber Duck is interesting: it is not just more AI in the workflow, but a second opinion that challenges the plan, the implementation, or the tests.

Read more here: https://lnkd.in/eq2v3x7f

#GitHubCopilot #GitHub #AIEngineering #PlatformEngineering #DeveloperExperience #DevOps #SoftwareEngineering
GitHub CLI Telemetry Defaults Impact Developer Tools and Open-Source Governance

DevOps Insight Apr 15–22, 2026: GitHub CLI telemetry defaults, Copilot sign-up pause, Grafana's free AI assistant, and Ruby Central turmoil.

📅 Coverage period: Apr 17 - Apr 23, 2026

Read the full analysis 👇 https://lnkd.in/g6bJt2sn

#TechNews #TechnologyTrends #DeveloperToolsAndSoftwareEngineering #DevOps #SoftwareDevelopment #Programming
Day 2: The Secret Behind Docker. It Is All About Images

Yesterday, we ran our first container. But today, a bigger question comes up: where do containers actually come from?

The answer: Docker images. And honestly, this is where Docker really starts to make sense. In Day 2 of #20DaysOfDocker, we break down the concept that powers everything in Docker. No fluff. Just clarity.

What you will learn:
- Why Docker images are read-only blueprints
- How images are built using layers (this is a game-changer)
- How versioning works (and why tags matter more than you think)
- Where images live (Docker Hub & registries)

The "aha" moment: every image is made of layers. Each layer is a small change; each change is cached, reusable, and efficient. That's why Docker is fast. That's why it scales.

1.) Hands-on (because theory isn't enough):
- Pull real images (ubuntu, nginx, python)
- Explore sizes and layers
- Remove images and clean your system
- Set up your Docker Hub account

2.) Quick insights you don't want to miss:
- Images are immutable (they never change)
- Containers add a writable layer on top
- Every image has a unique SHA256 ID
- Everything is optimized for speed and reuse

3.) By the end of Day 2, you'll understand:
- What Docker images really are
- How layers work behind the scenes
- How to pull, inspect, and manage images
- How registries and repositories fit together
- How to choose the right images (like a pro)

If Day 1 was "run a container", Day 2 is "understand what's actually happening." And that's when beginners become real Docker users.

Start Day 2 here: https://lnkd.in/dtVn3ieP

Let's keep building. One layer at a time. 🐳

Do not forget to star the repo.

#Docker #DevOps #LearningInPublic #OpenSource #BackendDevelopment #CloudComputing #TechCommunity
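The "every image has a unique SHA256 ID" point is just content addressing, and the mechanism can be seen with nothing but the Python standard library. This toy sketch hashes made-up "layer" contents the way a registry hashes real layer tarballs (the byte strings here are illustrative, not real layer data):

```python
import hashlib

def layer_digest(content: bytes) -> str:
    """Content-address a layer: identical bytes -> identical ID."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

base = layer_digest(b"FROM ubuntu filesystem ...")
app = layer_digest(b"COPY app.py /app/ ...")

# The same content always yields the same digest, which is why an
# unchanged base layer can be cached and shared between images.
assert layer_digest(b"FROM ubuntu filesystem ...") == base

# Any change, however small, produces a different digest, so only
# the modified layers need to be rebuilt or re-pulled.
assert layer_digest(b"FROM ubuntu filesystem ..!") != base

print(base[:19], app[:19])
```

This is the same property that makes layer caching safe: Docker can trust a cached layer precisely because its ID is derived from its content.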
We've trusted Git for everything: clean versioning, easy collaboration, and quick rollbacks. But when I started building real ML projects, I realized Git alone wasn't enough.

Git works great for software development, but in ML, data broke everything. Massive datasets, model weights, constantly changing labels, and scattered experiments made versioning a nightmare. Git LFS was expensive, S3 buckets felt disconnected, and reproducibility became painful.

That's when I discovered DagsHub, a "GitHub for Data Science" that neatly combines Git + DVC + MLflow in one platform. I finally got:
- Reliable versioning for large datasets (no more LFS headaches)
- Built-in experiment tracking
- Free remote storage + model registry

I tested it on a project containing audio, images, and tabular data. I ended up tracking 3GB+ of data while keeping my Git repository under 50KB. Clean, reproducible, and actually enjoyable.

Want the full story with setup steps, DVC commands, MLflow integration, and key learnings?
👉 Read the complete post here: https://lnkd.in/gdM-ERPk

#MLOps #AIOps #DevOps #MachineLearning #ProductionAI #AI
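The "3GB+ of data, sub-50KB repo" split works because DVC commits tiny pointer files to Git while the data itself lives in remote storage. Below is a simplified, hand-rolled sketch of that pointer-file idea, not DVC's actual implementation (real DVC writes `.dvc` YAML files and uses MD5 by default; the filenames here are made up):

```python
import hashlib
import json
import os
import tempfile

def make_pointer(data_path: str) -> dict:
    """Build a tiny, Git-friendly record describing a large file."""
    with open(data_path, "rb") as f:
        blob = f.read()
    return {
        "path": os.path.basename(data_path),
        "size": len(blob),
        "md5": hashlib.md5(blob).hexdigest(),  # DVC's default hash
    }

def verify(data_path: str, pointer: dict) -> bool:
    """Check that the local data still matches the committed pointer."""
    return make_pointer(data_path)["md5"] == pointer["md5"]

# Demo: a small stand-in for a multi-gigabyte dataset file.
with tempfile.TemporaryDirectory() as d:
    data = os.path.join(d, "train.csv")
    with open(data, "wb") as f:
        f.write(b"label,feature\n1,0.5\n" * 1000)
    ptr = make_pointer(data)
    print(json.dumps(ptr))    # this ~100-byte record is what Git tracks
    assert verify(data, ptr)  # the bulky file stays out of the repo
```

Git versions the pointer; the blob it describes can live anywhere (S3, DagsHub storage, a shared drive), which is exactly why the repository stays small.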
GitHub's coding agent doesn't write code for you. It exposes whether your workflow deserves automation.

Is your repository clean enough for background execution? Can your team define tasks precisely enough for an agent to act on them without constant correction? Most engineering teams answer "yes" instinctively. The agent will answer honestly.

The real friction isn't adoption. It's that GitHub's own documentation lists explicit constraints: one pull request per task, repository-scoped execution, vulnerability to prompt injection, blockable by repository rules. That is not a limitation to work around. It is a mirror held up to your current process quality.

The Invisible Tax pattern shows up here. Teams treat AI tooling as a patch for unclear ownership and weak review discipline. Because the agent inherits whatever mess exists in the repo, output quality degrades fast, and blame lands on the tool rather than the workflow. I've watched engineering leaders approve AI tooling budgets before auditing whether their task definitions are specific enough for a human to execute without a follow-up meeting, let alone an agent.

- Repository hygiene determines agent reliability before any prompt is written
- Review discipline must exist before background execution adds volume
- Access controls and security considerations are non-negotiable, not post-launch tasks
- AI accelerates a good workflow; it compounds a broken one

The threshold most teams skip: what task-clarity standard must exist before agent-assisted work produces net positive output? That number varies, and few teams have defined it.

The missing piece is ownership. Who is accountable when an agent-opened pull request introduces a regression nobody caught? A clean workflow beats a clever tool. Process quality trumps tooling ambition.

Let's audit one repository your team would assign to an agent first, and assess honestly whether the task boundaries and review gates are ready for it.
#AIStrategy #SoftwareEngineering #ProductLeadership by Dr. Hernani Costa, CEO & Founder of First AI Movers part of Core Ventures
One developer just replaced a $15k/mo dev team with a free GitHub repo. Here's why founders should care.

Everything Claude Code just crossed 160k GitHub stars. Fastest growing dev tool repo in history. The creator won an Anthropic hackathon by building a full product solo in 8 hours. Then he open-sourced everything:
→ 38 specialized AI agents
→ 156 skills
→ 72 commands
And a system that learns your coding patterns over time.

Most founders won't read past that because it sounds like a developer story. It's not. This is a cost structure story. The average startup pays $8-15k/month for 3-4 developers. One person with this setup reports shipping at the same speed for $20/month in API costs. Even if the real number is half that, the math changes how you think about building a product.

Here's what actually matters for non-technical founders:

It learns your patterns. Normal AI coding tools start from scratch every session. This one remembers how your codebase works, what conventions your team follows, and gets better the longer you use it. After 2-3 weeks it writes code in your team's style automatically.

It has a built-in security scanner. 1,282 tests that catch leaked API keys, misconfigurations, and vulnerabilities before they become expensive problems. One command. Most founders have no idea their AI coding setup is a security risk.

It works across tools:
→ Claude Code
→ Cursor
→ Codex
One config that works everywhere. Your team doesn't have to pick one tool and commit.

Why this matters even if you're not building software: the cost of building AI-powered workflows just dropped again. If you've been thinking about building an internal tool, an enrichment pipeline, a custom agent, or any AI workflow for your business, the barrier to entry keeps getting lower.

Every month the gap widens between founders who understand what AI tooling can do now and founders who are still hiring the way they did in 2024. This repo isn't the point. The trend is.
The cost of building just collapsed again. And it's not coming back up.
Wild to see AI demand pushing others to pause sign-ups—meanwhile GitLab keeps teams shipping with built-in SDLC agents, choice of models, and $24/month in promo AI credits per Ultimate user so you don’t have to hit the brakes on innovation.
Dev teams: I noticed other providers are hitting walls; GitLab is happy to serve you. Start free and receive $24/month in free promo AI credits with every Ultimate user. You get CLI, IDE, and UX access to built-in full SDLC agents, Claude and Codex, plus your own custom agents, and your choice of models and model providers so you never get stuck without service. https://lnkd.in/ejKa7dGV
GitHub just changed the way we build software, and most people are still sleeping on it.

There's a new tab quietly rolling out in repositories: 👉 "Agents". And it signals something big.

We're no longer just using AI to write code. We're starting to delegate engineering work to it. Let that sink in.

Not: "write this function"
But: "fix this bug, run tests, and open a PR"

And an AI agent actually goes and does it. In the background. End-to-end.

The new GitHub Copilot Agent workflow is shifting development from:
🧠 Human writes code
🤖 AI assists
to:
🧠 Human defines task
🤖 AI executes task

The "Agents" tab is basically:
• A control panel for AI workers inside your repo
• A place to track what your AI is doing
• A dashboard for autonomous dev workflows
• A bridge between ideas and working pull requests

And here's the uncomfortable truth: we are moving from coding as execution to coding as direction.

The best engineers in this new world won't be the ones who type the fastest. They'll be the ones who can:
• break problems down clearly
• define instructions precisely
• manage AI systems like teammates

Software engineering is quietly shifting from writing code to orchestrating agents that write it for you. And GitHub didn't announce it loudly. They just shipped it. We're not in "AI-assisted development" anymore. We're entering AI-operated development.

The question is no longer "Can you code?" It's "Can you lead machines that code?" Because the future repo won't just have developers. It will have agents working alongside them.

#AI #Agent #Devs #github #git #tech #learners #update #trending #programming #backend #frontend #mobile #machinelearning #ml
🐳 Not just "docker run": I built a production-ready container.

Most people learn Docker by running containers. I learned it by designing one from scratch, with constraints that actually matter in real-world systems.

🔧 What I Built
As part of a DevOps assignment, I containerized a Flask application with a focus on best practices and real-world readiness:
• Used a lightweight base image (python:3.11-slim) for efficiency
• Ensured the container runs as a non-root user for security
• Structured layers carefully to optimize build caching
• Added a .dockerignore to reduce unnecessary context
• Exposed the application correctly on port 5000
• Designed it to integrate seamlessly with multi-service environments
All of this wasn't just implementation; it was intentional engineering.

⚙️ Beyond a Single Container
I extended this into a multi-container setup using Docker Compose:
• Flask app + Redis service
• Health checks to ensure service readiness
• depends_on with proper startup sequencing
• Named volumes for persistence
Because in real systems, containers don't live alone; they collaborate.

🧠 What Changed in My Thinking
Docker stopped being a tool and started feeling like a system design problem in disguise. Every decision had a trade-off:
• Smaller image vs build complexity
• Security vs convenience
• Layer caching vs readability
• Stateless containers vs persistent data
Even something as simple as combining RUN commands impacts image size and efficiency, concepts that directly affect production systems.

💡 Biggest Takeaway
A good Dockerfile is not about making the app run. It's about making it run securely, efficiently, and predictably anywhere.

If you're learning DevOps, don't stop at tutorials. Build something that forces you to think about why things are done a certain way. That's where the real learning begins.

#Docker #DevOps #Containerization #CI_CD #BackendDevelopment #CloudComputing #SystemDesign #Flask #Python #LearningInPublic
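The checklist above maps almost one-to-one onto Dockerfile instructions. Here is a minimal sketch under those stated constraints; the filenames (`requirements.txt`, `app.py`) and the `appuser` name are assumptions for illustration, not taken from the post:

```dockerfile
# Lightweight base image keeps the final image small
FROM python:3.11-slim

WORKDIR /app

# Copy the dependency list first so this layer stays cached
# until requirements.txt actually changes (the layer-ordering point)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code changes most often, so it goes last
COPY . .

# Run as an unprivileged user instead of root
RUN useradd --create-home appuser
USER appuser

EXPOSE 5000
CMD ["python", "app.py"]
```

A `.dockerignore` excluding things like `.git`, `__pycache__`, and local virtualenvs keeps the build context small, which is the point the post makes about reducing unnecessary context.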
GitHub Copilot Pulls Drawstring On Tighter Developer Usage Limits:

GitHub Copilot, the AI-powered code completion tool, is tightening its usage limits for developers. Due to the surge in its popularity among software engineers, GitHub has implemented stricter controls to ensure the tool is used effectively and judiciously. This move acknowledges the vast potential of AI in enhancing coding efficiency while balancing the need for responsible usage.

The adjustments are designed to foster a more sustainable development environment. By limiting the extent of Copilot's code generation, GitHub aims to encourage developers to engage more deeply with their coding processes rather than relying solely on automated suggestions. This strategic pivot could improve software quality and maintainability as developers become more hands-on.

GitHub's decision also reflects a broader trend in the DevOps community, where reliance on automation tools is continually being reassessed. As organizations seek productivity gains, balancing automation with active developer engagement is becoming crucial, and questions of code authenticity and ownership are prompting discussions about how generative AI tools should fit into the software development lifecycle.

Developers and organizations alike must navigate the fine line between leveraging AI-driven tools and maintaining the human element in coding practices. GitHub's new strategy aims not just at refining Copilot's use but also at shaping the future landscape of coding in the DevOps arena.

Read more: https://lnkd.in/gS4FjVB5

⚡ Supercharge your DevOps expertise! Join our community for cutting-edge discussions and insights.