Folks, is DevOps fit for purpose in an AI world? Spoiler: yes, but with a caveat — only if we move from pipelines to systems of intent. Martin Fowler, Patrick Debois, Gene Kim et al. gave us the cultural foundation; AI now lets DevOps anticipate, self-optimize, and translate business goals into safe, executable plans.

Read the full article here: 👉 https://lnkd.in/eA7bwHGB

All based on themes from my new book, A Brief History of Engineering.

#DevOps #AI #SystemsOfIntent #SRE #EngineeringLeadership

OTTRA Limited
MLOps best practices for scaling AI products

Did you know that up to 85% of AI projects never make it to production? The culprit isn't a lack of innovative models, but rather the difficulty of scaling and maintaining them. That's where MLOps comes in: your key to unlocking AI's true potential.

MLOps isn't just DevOps for machine learning; it's a cultural and engineering shift focused on automating and monitoring the entire AI lifecycle. Think streamlined deployments and continuous integration/continuous delivery…
In a world of AI hype, the highest ROI often comes from the unsexy fundamentals. Boards want to talk about AI agents; very few understand the importance of Everything as Code (EaC) or CI/CD pipeline consistency. But here is the reality: AI is a force multiplier. If your DevOps practices are inefficient, AI will simply multiply that inefficiency.

ROI is found in:
– Knowing exactly how and why a change was made.
– Adding or refining steps in your pipeline without breaking the system.
– Ensuring modifications are only done with permission.

Learn why the foundation matters more than the hype: https://lnkd.in/gZrneKd8

#SoftwareStrategy #DevOps #EaC #DigitalTransformation #TrilityConsulting #ROI
The Three Ways of DevOps weren't written for AI. But they might as well have been.

The Phoenix Project came out in 2013. Volume 3 of the graphic novel adaptation just dropped. And the chaos that fictional Bill Palmer was drowning in looks remarkably familiar to what's unfolding in AI transformations today. Different technology, same organizational physics.

IT Revolution published a piece recently connecting the Three Ways to AI adoption. It's worth the read (see link in comments). Here's what stuck with me:

𝑭𝒍𝒐𝒘 breaks when AI lives in a silo. You can have the most capable model in the world, but if it isn't wired into how work actually moves through your organization, it's just a very expensive proof of concept waiting for a strategy.

𝑭𝒆𝒆𝒅𝒃𝒂𝒄𝒌 becomes your safety net when the system moves faster than your ability to supervise it. AI drifts. It hallucinates. The organizations winning aren't the ones with the best models. They're the ones who know within minutes when something goes sideways.

𝑪𝒐𝒏𝒕𝒊𝒏𝒖𝒐𝒖𝒔 𝑳𝒆𝒂𝒓𝒏𝒊𝒏𝒈 𝒂𝒏𝒅 𝑬𝒙𝒑𝒆𝒓𝒊𝒎𝒆𝒏𝒕𝒂𝒕𝒊𝒐𝒏 means small bets and honest retrospectives. Not a single massive implementation and a prayer.

Bill figured this out the hard way at Parts Unlimited. Thirteen years later, those same principles are the difference between AI transformations that stick and ones that stall. The Three Ways were never technology-specific. AI just proved they're timeless.

#DigitalTransformation #AI #DevOps #ThePhoenixProject #Leadership
You can't optimize what you can't measure. codeburn is a terminal dashboard that shows exactly what your AI coding tools are costing. I share tools like this every week in DevOps Bulletin. #devops #ai #opensource
Day 2 of learning AI + DevOps 🚀

Today I explored RAG (Retrieval-Augmented Generation). One problem with LLMs is that they don't know your data and can give generic answers. RAG solves this by first retrieving relevant information, then using it to generate the answer.

Simple flow:
- Convert the question into an embedding
- Search for similar data
- Send that data to the model
- Generate a better answer

Think of it like this: instead of answering from memory, the system looks at its notes first.

This is used in:
- chatbots with company data
- internal tools
- knowledge assistants

Feels like this is where AI becomes actually useful in real systems.

#AI #DevOps #LearningInPublic
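The retrieval step of that flow can be sketched in stdlib Python. This is a toy illustration, not a real implementation: `embed` here is a bag-of-words counter standing in for a learned embedding model, and a plain list of documents stands in for a vector database:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # Real RAG systems use a learned embedding model here.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    # Embed the question and rank documents by similarity.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]
# The retrieved context, plus the question, would then be sent to the model.
context = retrieve("Can I get a refund on returns?", docs)
```

The last two steps (send to model, generate answer) are where the LLM call would go; everything before that is just search.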
Everyone talks about MLOps scaling ML. But what if your sophisticated pipelines are actually slowing down your best ML engineers and killing your competitive edge? The dirty secret of MLOps no one wants to admit.

I've seen teams drown in process, building elaborate MLOps systems before even validating their core ML models. The result? A beautiful, automated pipeline delivering… nothing of value, but very efficiently! 😬

Think about it: are your engineers spending more time configuring YAML files than experimenting with new algorithms? Are your rigid deployment processes making it harder to iterate quickly?

MLOps is crucial, but it should enable experimentation, not stifle it. Don't let perfect be the enemy of good, especially in the early stages. Find the balance between governance and agility.

What's your take? How do you ensure MLOps drives innovation rather than killing it? Share your experiences!

#MLOps #AI #MachineLearning #DevOps #EngineeringLeadership #CloudEngineering #Innovation #MLDeployment #AIStrategy #Solopreneur #FounderLife #Intuz
Day 6 of learning AI + DevOps 🚀

Today I connected all the pieces and understood how a RAG system works end-to-end. There are two main parts:

👉 Data preparation (offline):
- load data
- split it into chunks
- convert the chunks to embeddings
- store them in a vector database

👉 Query time (online):
- the user asks a question
- convert it into an embedding
- search the vector database
- retrieve the relevant chunks
- send them to the model
- generate the answer

It's like combining:
→ search (to find relevant info)
→ AI (to generate the answer)

Feels great to finally see the full picture of how modern AI systems are built. Next step: building a small project around this.

#AI #DevOps #LearningInPublic
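The two phases above can be sketched in plain Python. Everything here is a stand-in: `embed` is a toy bag-of-words counter rather than an embedding model, the index is a list rather than a vector database, and the final prompt is returned instead of being sent to an LLM:

```python
import math
import re
from collections import Counter

# --- Offline: data preparation ---
def chunk(text: str, size: int = 8) -> list[str]:
    # Naive fixed-size word chunks; real pipelines use smarter splitters.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems call an embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def build_index(documents: list[str]) -> list[tuple[Counter, str]]:
    # Stand-in for a vector database: (embedding, chunk) pairs.
    return [(embed(c), c) for doc in documents for c in chunk(doc)]

# --- Online: query time ---
def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question: str, index: list[tuple[Counter, str]], top_k: int = 2) -> str:
    # Embed the question, search the "vector database", and assemble
    # the prompt that would be sent to the model.
    q = embed(question)
    hits = sorted(index, key=lambda e: cosine(q, e[0]), reverse=True)[:top_k]
    context = "\n".join(c for _, c in hits)
    return f"Context:\n{context}\n\nQuestion: {question}"

index = build_index(
    ["Refunds are accepted within 30 days of purchase with a valid receipt."]
)
prompt = answer("How long are refunds accepted?", index, top_k=1)
```

The offline part runs once per data update; the online part runs on every query, which is why vector search speed matters in real systems.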
AI is not a shortcut to skip thinking. It can absolutely create new forms of toil: debugging bad output, chasing hallucinations, and wrestling with messy prompts.

For me, the goal is not "use AI for everything". It is "be ruthless about where AI removes repetitive work and where human judgment is non-negotiable."

In my DevOps work, I happily use AI to draft Terraform modules from a clear spec, sketch Kubernetes manifests, propose CI pipeline steps, or generate first-pass runbooks and documentation. That removes a lot of copy-paste, boilerplate, and blank-page pain.

But I do not trust AI to design production architectures, approve security controls, tune SLOs, or make incident decisions at 2 a.m. Those moments need context, tradeoff awareness, and accountability that you cannot outsource.

The mindset shift is to stop arguing about AI as ideology and start thinking in terms of leverage and feedback loops: use AI where you can quickly review, correct, and learn from it, and keep humans firmly in the loop where mistakes are expensive and subtle.

#devops #platformengineering #ai
🚀 AI can generate code at lightning speed — but your CI/CD pipeline might be slowing everything down.

In this session with Arne Blankerts, you'll learn how to adapt your delivery pipeline to AI-accelerated development:

🧠 Understand why LLM-generated code can overwhelm traditional CI/CD
💪 Strengthen early feedback loops and shift signals left
🔨 Turn your pipeline back into a driver of speed, not a bottleneck

⚡ Learn how to keep up with modern development workflows and make your CI/CD process fit for the age of AI.

📅 Tuesday, June 9th, 2026 | 🕘 13:45 - 14:30 | webinale | 📍 Berlin

👉 Check out the session: https://lnkd.in/dEYi8E_t

#webinale #AIDevelopment #LLM #CICD #DevOps #SoftwareDevelopment
In my recent projects, I've been exploring ways to deploy PyTorch models efficiently in production, and TorchServe has been a game-changer.

TorchServe is a framework designed to serve PyTorch models at scale. It allows you to deploy trained models as APIs quickly, handle multiple models at once, and manage versions seamlessly.

What I found particularly valuable:
- Easy model deployment: package and serve models without writing complex serving code
- Scalability: handles high volumes of requests efficiently
- Multi-model support: serve multiple models simultaneously and manage versions
- Metrics & logging: built-in monitoring for model performance in production

For companies building AI products, TorchServe makes it much easier to move from research to production without heavy DevOps overhead.

I'm curious—how are others deploying PyTorch models in production? Are you using TorchServe, or do you prefer custom solutions?

#AI #MachineLearning #PyTorch #TorchServe #MLOps #ModelDeployment
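As a rough sketch of what querying a deployed model looks like: TorchServe exposes predictions over REST at `/predictions/<model_name>` (port 8080 by default). The model name `my_model` and the payload shape below are placeholders, and this assumes a TorchServe instance is already running with that model registered:

```python
import json
import urllib.request

def inference_url(host: str, model_name: str, port: int = 8080) -> str:
    # TorchServe's inference API serves predictions at /predictions/<model_name>.
    return f"http://{host}:{port}/predictions/{model_name}"

def predict(host: str, model_name: str, payload: dict) -> dict:
    # POST a JSON payload to the model endpoint and decode the response.
    req = urllib.request.Request(
        inference_url(host, model_name),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# With a running server and a registered model, a call would look like:
#   predict("localhost", "my_model", {"data": [1.0, 2.0]})
```

The point of the framework is that this endpoint, batching, versioning, and metrics come for free once the model archive is registered, instead of being hand-rolled per model.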