Vibecoding as a DevOps Engineer

As we usher in the era of "vibecoding" and agentic, LLM-driven development, I believe there are certain things that DevOps/Platform/Cloud/SRE engineers have to take into consideration to make the best use of this trend. The fundamental nature of coding is changing: the software life cycle is becoming much shorter, and we as engineers are no longer going through every line of code. Vibecoding tools are primarily built for SWEs, but what works for an application developer might not work for a DevOps engineer. Here are some best practices that can be useful for everyone.

Statically Typed vs Dynamically Typed

Traditionally, infra engineers preferred Python for its ease of use and readability; transitioning from Bash scripting, Python made a lot of sense. But it also has shortcomings, being a dynamically typed, interpreted language. Now that we are no longer writing code ourselves, we often do not review every change the model makes on our behalf. As a project grows in scope and we become complacent, there is a real risk of introducing bugs in production. Due to the nature of IaC, the blast radius is bigger than that of a single application or microservice.

Since we are no longer writing the code ourselves, it is better to use stricter, compiled languages like Go for infrastructure-related tasks wherever possible. Go offers other benefits as well, but even setting those aside, the transition is worth it for the type safety alone.
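To make the compile-time argument concrete, here is a minimal sketch (the `BucketConfig` type and its fields are hypothetical, not from any real provider SDK) of how strict typing catches the kind of argument-swapping mistake an LLM can slip into a PR:

```go
package main

import "fmt"

// RetentionDays is a distinct type, so a raw int or a string pulled
// from untyped config cannot be passed by accident.
type RetentionDays int

// BucketConfig models an illustrative storage-bucket definition.
// Every field is strictly typed, so an LLM-generated call site that
// swaps arguments or passes the wrong kind of value fails to compile
// instead of failing in production.
type BucketConfig struct {
	Name      string
	Versioned bool
	Retention RetentionDays
}

func NewBucketConfig(name string, versioned bool, retention RetentionDays) BucketConfig {
	return BucketConfig{Name: name, Versioned: versioned, Retention: retention}
}

func main() {
	cfg := NewBucketConfig("audit-logs", true, RetentionDays(90))
	fmt.Printf("%s versioned=%v retention=%dd\n", cfg.Name, cfg.Versioned, cfg.Retention)

	// The equivalent mistake in dynamically typed Python would only
	// surface at runtime, if at all:
	// NewBucketConfig(90, "audit-logs", True)  // here, a compile error.
}
```

The point is not the struct itself but where the failure happens: a mis-ordered call never makes it past `go build`, which is exactly the safety net you want when you are not reading every generated line.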

Test-Driven Approach

TDD is a long-standing practice in the software world, but in the infra world it is not common, because the code we write is often simple and does not change much. Since we are no longer writing the code ourselves, it is now a best practice to write unit tests and run them for every LLM-generated PR so that we can deploy with confidence. This includes, but is not limited to, unit tests for automation, backends, IaC, and Helm charts. These tests help us catch errors before deploying.
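As a sketch of what this looks like for infra automation, suppose a pipeline has a helper that validates resource names before anything is provisioned (the function name and naming convention below are illustrative assumptions, not from the article). Table-driven checks in the Go style, run on every PR, would look like:

```go
package main

import (
	"fmt"
	"regexp"
)

// validName encodes an illustrative team-env-component convention:
// lowercase alphanumerics and hyphens, at least three segments.
var validName = regexp.MustCompile(`^[a-z][a-z0-9]*(-[a-z0-9]+){2,}$`)

// ValidateResourceName reports whether a proposed resource name follows
// the hypothetical convention. Running checks like this on every
// LLM-generated PR catches bad names before anything reaches the cloud.
func ValidateResourceName(name string) bool {
	return len(name) <= 40 && validName.MatchString(name)
}

func main() {
	// Table-driven cases, the same shape you would put in a _test.go file.
	cases := []struct {
		name string
		ok   bool
	}{
		{"payments-prod-db", true},
		{"Payments-prod-db", false}, // uppercase not allowed
		{"payments", false},         // missing env/component segments
		{"", false},
	}
	for _, c := range cases {
		got := ValidateResourceName(c.name)
		fmt.Printf("%-20q valid=%v (want %v)\n", c.name, got, c.ok)
	}
}
```

The table-driven shape matters here: when the model adds a new naming rule, it can also add a row to the table, and CI tells you immediately whether the two agree.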

Context Management

Since an LLM does not remember anything between calls, every follow-up question grows the context and thereby increases input/output token costs. To avoid this, do not write single-file code that spans thousands of lines; modularize and organize code into appropriate folders. Be specific about the task at hand and provide file names in the prompt. Having solid README, task, and implementation files helps. I have found Kiro superior to Antigravity in this respect, since Kiro is spec-driven. Models also tend to include a lot of comments in the code for readability; be cognizant of this and manage comments as necessary.

Design Patterns over Syntax

We all know that requirements drive the design patterns we implement, but in the past we often had to settle for simpler patterns due to practical limitations like steep learning curves and lack of time. Now that we can write code with minimal effort and no longer have to learn syntax, we can take the next step and implement patterns like concurrency and event-driven architecture, as well as practices like observability-as-code and policy-as-code. I believe significant improvements can be made, while saving costs, by adopting these practices.
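Concurrency is a good example of a pattern that used to feel like overkill for ops scripts but is now cheap to adopt. Below is a minimal worker-pool sketch (the targets and the probe logic are stand-ins, assumed for illustration) that fans health checks out across goroutines instead of looping sequentially:

```go
package main

import (
	"fmt"
	"sync"
)

// checkResult models the outcome of one simulated infra check.
type checkResult struct {
	target  string
	healthy bool
}

// runChecks fans targets out to `workers` goroutines and collects the
// results over a channel: the classic Go worker-pool pattern.
func runChecks(targets []string, workers int) []checkResult {
	jobs := make(chan string)
	results := make(chan checkResult)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range jobs {
				// Stand-in for a real probe (HTTP ping, port check, ...).
				results <- checkResult{target: t, healthy: t != "legacy-vm"}
			}
		}()
	}
	go func() {
		for _, t := range targets {
			jobs <- t
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	var out []checkResult
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	targets := []string{"api-gw", "db-primary", "legacy-vm", "cache"}
	for _, r := range runChecks(targets, 3) {
		fmt.Printf("%-10s healthy=%v\n", r.target, r.healthy)
	}
}
```

With a sequential loop, total wall time is the sum of all probe latencies; with the pool it is roughly the slowest batch, which is exactly the kind of improvement that was rarely worth the hand-written boilerplate before.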

Conclusion

Reading and writing code is, and always will be, a revered skill. As the fundamental nature of writing code changes, so must our approach to it.

TL;DR

As LLM-driven development becomes standard, DevOps engineers should: use statically typed languages like Go instead of Python to catch AI-generated errors at compile time; adopt test-driven development with unit tests for all infrastructure code; modularize code into smaller files to reduce LLM context costs; and leverage AI to implement sophisticated design patterns previously too time-consuming to adopt. Everything as code.

Vibecoding highlights how AI-driven infrastructure code can break whole stacks if mismanaged. At ConnectiveOne we saw this early on: letting an agent write Terraform without clear patterns caused cascading outages. We addressed it by defining strict templates for resource creation, embedding infrastructure tests into CI/CD, and requiring human review on AI-generated changes. Vibecoding is powerful only when you wrap it with safeguards.


I agree with all of this, but my gut reaction was still "you couldn't pry Python out of my cold, dead hands." Introducing Go (or any new primary language) would mean a massive overhaul of existing tooling, pipelines, and operational muscle memory, at a time when we're expected to move faster, not pause for a multi-year refactor. Instead, I think the pragmatic middle ground is to make Python more LLM-friendly:

– Tell your agent to generate tests by default
– Use validation and typing tools like mypy (and friends) as guardrails
– Be explicit and repetitive in documentation to remind the agent, and future humans, what types and contracts are expected

Strong typing is a huge advantage for vibecoding, but so is respecting the reality of mature ecosystems and delivery pressure.


AI performs best when guided by clear rules, well-structured prompts, and proper curation. It may know a lot, but outcomes improve significantly when the goal is defined precisely. “Vibe coding” works well when there’s upfront planning with the tool. Even when things break, AI can help diagnose and fix issues quickly. The key is to enjoy coding while learning how to prompt and guide tools effectively to get the right results.

Perfectly said

Yes, vibe coding is democratizing coding.
