DevOps Days Buenos Aires always shows what teams are actually dealing with in production. Less theory, more “this broke, here’s what we learned.” Our team was on the ground last week, talking to engineers, hearing real GitOps pain, and seeing how people are handling scale, visibility, and tool sprawl in practice. AI came up in a lot of talks. Not as hype, but as an operational challenge: how do you run it on top of existing infra without adding even more complexity? That’s exactly the problem space we’re building in with Kunobi: visual GitOps without breaking how teams already work. Adding a short clip below to give a feel for the atmosphere 👇 #DevOpsDays #Kubernetes #GitOps #PlatformEngineering #DevOps #CloudNative #DevOpsDaysBuenosAires #AI #AIInfrastructure #MLOps
-
Why Governance Determines Whether Agentic AI Accelerates or Stalls Engineering: AI coding tools offer rapid productivity gains, but "governance debt" often slows delivery. Learn how to embed risk-based controls and auditability into agentic AI workflows to scale engineering capacity. Read more: https://lnkd.in/gnFSPp3B 📚 Expand your DevOps knowledge! Join our community for continuous learning and skill development.
-
AI agents can take action. That changes everything - and most teams aren't ready for it. That's why we started Humans in the Loop - a video series for engineers and DevOps teams figuring out the agentic AI era without losing control of their systems. The first episode is out. Andrey Devyatkin and Fernando Gonçalves set the stage: what agentic AI actually means, why context is everything in infrastructure troubleshooting, and what tools like Cursor, MCP, Claude Code, and Amazon Q CLI mean for DevOps engineers today. https://lnkd.in/eiSbrP7Y
Agentic AI in DevOps Explained: Tools, Context, and What Changes Next
https://www.youtube.com/
-
We are making a series of videos explaining concepts that are critical to understand when it comes to agentic AI and its application in DevOps. If you are just getting started with agentic AI, these episodes are for you. We will transition to more advanced topics as we finish establishing a shared glossary and a common understanding. Stay tuned for more!
-
AI is showing up everywhere in DevOps—but rarely as a connected system. Code, testing, deployment, and ops are evolving with AI, but the real challenge is how they come together in production. On April 25, Harness and The AI Collective are bringing together practitioners shipping AI across the lifecycle and navigating real production complexity. Register to hear what actually works when AI moves into production: https://lnkd.in/gZZsJvKt
-
AI coding tools are undeniably speeding up development, but the rest of the delivery process is lagging behind. The 2026 State of DevOps Modernization Report from Harness compiles insights from 700 engineers, revealing that while AI is solving many challenges, it's also introducing new ones. 📄 Dive into the full report to explore AI's impact on DevOps: https://lnkd.in/eYu9PZNx
-
AI won't replace your CI/CD pipeline. But it will act as your smartest gatekeeper. 🤖 Integrating AI into DevOps isn't just a buzzword; it's solving real, everyday engineering bottlenecks: ⚡ Predictive Test Selection: Instead of waiting an hour for a massive test suite to run, machine learning models analyze the specific lines of code you changed and only run the relevant tests. Hours drop to minutes. 🧠 Intelligent Code Reviews: The potential here is massive. Instead of just checking syntax, AI can review a pull request for context and logic flaws, and output a plain-English summary of the impact before a human ever looks at it. 🛡️ Automated Canary Analysis: Tying your observability stack (like Prometheus) into an AI analyzer means deployments can automatically roll back the second a metric spikes, long before an engineer gets paged. For aspiring DevOps engineers, my advice is this: your declarative Git repo and your pipeline are still the unshakeable source of truth. AI is simply your high-speed navigator. Are any of your teams actively using AI tools inside your CI/CD workflows yet (like predictive testing or log summarization)? Let's hear what's actually working in the wild! 👇 #DevOps #CICD #ArtificialIntelligence #CloudEngineering
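The automated canary analysis idea above can be sketched as a small watch loop: poll a Prometheus error-rate metric for the canary, compare it against the stable baseline, and trigger a rollback the moment it spikes. This is a minimal illustration, not a production analyzer — the Prometheus URL, metric query, threshold, and `kubectl` rollback target are all hypothetical and would need adapting to a real stack.

```python
import json
import subprocess
import time
import urllib.parse
import urllib.request

# Hypothetical endpoint and query; adjust to your Prometheus setup.
PROM_URL = "http://prometheus:9090/api/v1/query"
CANARY_5XX_QUERY = 'sum(rate(http_requests_total{status=~"5..",track="canary"}[1m]))'


def fetch_error_rate(prom_url: str = PROM_URL) -> float:
    """Query Prometheus' HTTP API for the canary's current 5xx rate (req/s)."""
    url = prom_url + "?" + urllib.parse.urlencode({"query": CANARY_5XX_QUERY})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    results = data["data"]["result"]
    return float(results[0]["value"][1]) if results else 0.0


def should_rollback(canary_rate: float, baseline_rate: float,
                    tolerance: float = 2.0) -> bool:
    """Flag a rollback when the canary's error rate exceeds the stable
    baseline by more than `tolerance`x (a deliberately simple ratio check)."""
    if baseline_rate == 0.0:
        # Clean baseline: any sustained error rate on the canary is suspect.
        return canary_rate > 0.1
    return canary_rate / baseline_rate > tolerance


def watch_canary(poll_seconds: int = 30, baseline_rate: float = 0.05) -> None:
    """Poll the metric; roll back and stop the moment it spikes."""
    while True:
        if should_rollback(fetch_error_rate(), baseline_rate):
            subprocess.run(
                ["kubectl", "rollout", "undo", "deploy/my-service"],
                check=True,
            )
            break
        time.sleep(poll_seconds)
```

A real system would add a minimum observation window, multiple metrics (latency, saturation), and statistical comparison rather than a raw ratio — that is essentially what tools like automated canary analyzers do for you.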
-
AI coding agents are going to double your commit volume by the end of the year. But they can’t give you a trustworthy answer to the one question that actually matters: “Are we good to ship?” Today, that answer still means jumping across CI tools, scanners, tickets, dashboards, and Slack threads to piece together a best guess. That breaks down fast when AI starts doubling your commit volume. We built the DevOps Agent Kit to change that. It’s an open-source (Apache 2.0) starter kit that connects your AI agent to the full context of your DevOps stack (CI runs, security findings, release workflows, feature flags) so you get a real, verifiable answer, not a hallucination. If AI is going to double your output, your DevOps needs to keep up. Start here ➡️ https://lnkd.in/et5_bh3t
-
I have spent the better part of twenty years in marketing, watching every major platform shift go through a similar cycle. Mobile. Cloud. Containers. SaaS. The thing we are all living through now with AI. The pattern repeats every single time, and it is almost embarrassing how predictable it is. The teams that move early, who connect their tools to the new substrate before everyone else, end up with a compounding advantage that the late majority never catches up to. The teams that wait for the dust to settle find out the dust never settles, it just moves somewhere else. Agents are becoming the new interface for software delivery. That is no longer a prediction, that is a deployment decision you are either making or avoiding. The real question is whether your agent can see your pipelines, or whether it is still writing code in a vacuum while your team runs the same forty-minute scavenger hunt every single Friday afternoon. The DevOps Agent Kit gives you the scaffolding to stop running that scavenger hunt. Clone it. Connect it to the tools you already have like Anthropic Claude or Cursor. Make it yours. Then tell us what you built, because we genuinely want to see the skills your team comes up with once they start treating the whole pipeline as a first-class input to the agent. We are at the beginning of something big here!