The Reality of Vibe Coding in Software Engineering
These days, there’s a growing discussion around “vibe coding” and how large language models (LLMs) can be used in software engineering or even whether they should be used at all.
Many people have started using them and discovered significant potential, while others strongly dislike the approach and remain skeptical of its value.
For me, I’ve identified several key points for using them regularly and efficiently.
1. Let It Handle “Manual” Work
The solution to a task is the result of a thinking process, and implementation is the translation of that solution into code — though in practice, thinking and coding continuously influence each other. For small, well-decoupled tasks, manual code editing is often routine and time-consuming, and in many cases the first iteration can be assisted by a generative AI.
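As a concrete illustration of the kind of small, well-decoupled task meant here (the types and fields are hypothetical, chosen only for the example): field-by-field mapping of raw records into a typed structure is easy to specify, tedious to type out, and a good candidate for an AI-drafted first iteration.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str

def from_record(record: dict) -> User:
    # Routine, mechanical translation: the "solution" (which fields map where,
    # and how to normalize them) is already decided; only typing remains.
    return User(
        id=int(record["id"]),
        name=record["name"].strip(),
        email=record["email"].lower(),
    )
```

The decoupling is what makes this safe to delegate: the function has a narrow contract that is easy to review and test in isolation.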
2. It Applies Best Practices by Default
Well-prompted models tend to produce code aligned with common best practices: clean structure, naming conventions, modularization, and sometimes even design patterns. However, “best practice” is contextual — what works in isolation may not fit a system’s constraints, legacy decisions, or performance requirements. At the same time, they can also adapt solutions to an existing codebase when given sufficient context.
3. It Introduces Review Overhead
Generative AI doesn’t remove work—it shifts it. The time saved in writing code is often reinvested in reviewing it. Engineers must validate correctness, ensure alignment with architecture, and check for subtle issues.
In many cases, reviewing generated code requires more focus than reviewing human-written code, because assumptions and hidden errors are harder to spot.
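A hypothetical example of the kind of subtle flaw such a review must catch: code that reads naturally and passes a quick glance, yet hides a classic Python pitfall (a mutable default argument shared across calls).

```python
# Plausible-looking code with a hidden bug: the default list is created
# once and shared across every call that omits the argument.
def collect(item, bucket=[]):
    bucket.append(item)
    return bucket

# What a careful review turns it into:
def collect_fixed(item, bucket=None):
    if bucket is None:
        bucket = []  # a fresh list per call, as the reader would expect
    bucket.append(item)
    return bucket
```

Calling `collect(1)` and then `collect(2)` yields `[1, 2]` from the second call, because both calls mutate the same shared default, while `collect_fixed` behaves as expected.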
4. The Right Prompt Saves Significant Time
Prompt quality directly impacts output quality. A vague request produces generic code; a precise prompt produces something much closer to production-ready.
Effective prompts typically include the relevant context, explicit constraints, and the expected shape of the output.
Investing time in crafting prompts often saves far more time downstream.
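To make the contrast tangible, here is a rough sketch (the helper and the example task are hypothetical, not any particular tool’s API) of a vague request versus a structured prompt assembled from task, context, constraints, and expected output:

```python
def build_prompt(task, context, constraints, output_format):
    """Assemble a structured prompt from the pieces a model needs."""
    return "\n\n".join([
        f"Task: {task}",
        f"Context: {context}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Expected output: {output_format}",
    ])

# A vague request that invites generic code:
vague = "Write a function to parse dates."

# A precise prompt that gets much closer to production-ready output:
precise = build_prompt(
    task="Write a Python function parse_date(s) returning datetime.date.",
    context="Input comes from CSV exports; formats are ISO 8601 or DD/MM/YYYY.",
    constraints=["No third-party dependencies", "Raise ValueError on bad input"],
    output_format="A single function with type hints and a short docstring.",
)
```

The structure matters more than the exact wording: each section removes a class of assumptions the model would otherwise have to guess at.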
5. Domain Knowledge Is Critical
Generative AI amplifies existing expertise—it doesn’t replace it. Engineers with strong domain knowledge can guide the model, detect mistakes, and refine outputs efficiently.
Without that foundation, it becomes difficult to judge whether the generated solution is correct, efficient, or even relevant.
6. Without Expertise, It Can Be Painful
For engineers lacking subject-matter expertise, generative AI can actually slow things down. Outputs may look convincing but contain subtle flaws, leading to confusion and rework.
This creates a false sense of progress—code is produced quickly, but understanding lags behind.
7. Codebase Familiarity Remains a Major Challenge
Because the AI writes the code, engineers build less hands-on familiarity with the codebase, which leads to larger problems in the long term.
Without strong codebase knowledge it is difficult to review generated changes, judge whether they fit the existing architecture, or debug issues when they surface.
This makes codebase knowledge more—not less—important in an AI-assisted workflow.
Conclusion
To use generative AI effectively in software engineering, engineers need to focus on strengthening core skills rather than relying solely on automation: domain knowledge, familiarity with the codebase, and careful review of everything the model produces.
Finally, some good recent thinking on the topic: https://cekrem.github.io/posts/programming-as-theory-building-naur/ (in particular, check out the original article the blog author is reviewing).