Code Fast, No-Code Slow
Gen AI has the advantage of learning from vast pools of code and performs well on established use cases and problem sets. However, there are still areas where it has a lot of catching up to do, and evaluation test sets could be designed specifically to cover these gaps.
No-Code Tools:
When using no-code tools, if widgets or nodes change across versions and their configuration options are no longer the same, Gen AI becomes just as lost as a business user would be.
Solutions Covering Libraries from Multiple Providers:
Fine-tuning with HF models and datasets, Unsloth, Colab, and CUDA: version compatibility across these is a vexing, fast-changing topic, and Gen AI struggles to keep up. For instance, Gemini needed several iterations to generate a Colab notebook for a basic use case involving Unsloth, HF models, and datasets. When I asked why it was struggling, it provided a rational explanation: evolving libraries, Colab environment variability, a focus on individual errors rather than the larger goal, and more.
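One way to reduce this churn is to pin versions explicitly and fail fast when the environment drifts, rather than letting Gen AI chase one library error at a time. Below is a minimal sketch of that idea for a Colab-style setup; the specific version numbers are illustrative assumptions, not tested pins, so check the Unsloth and HF release notes for a combination known to work together.

```python
# Minimal sketch: pin versions, then verify before fine-tuning.
# The pins below are hypothetical examples, not a known-good set.

# In a Colab cell, install pinned versions instead of "latest":
# !pip install "torch==2.3.0" "transformers==4.43.0" "datasets==2.20.0" unsloth

import importlib.metadata as md

EXPECTED = {  # hypothetical pins; replace with a verified combination
    "torch": "2.3.0",
    "transformers": "4.43.0",
    "datasets": "2.20.0",
}

def check_versions(expected: dict) -> None:
    """Compare installed package versions against the expected pins."""
    mismatches = []
    for pkg, want in expected.items():
        try:
            have = md.version(pkg)
        except md.PackageNotFoundError:
            mismatches.append(f"{pkg}: not installed (want {want})")
            continue
        if have != want:
            mismatches.append(f"{pkg}: have {have}, want {want}")
    if mismatches:
        # Surface all drift at once instead of failing one error at a time
        raise RuntimeError("Environment drift detected:\n" + "\n".join(mismatches))

check_versions(EXPECTED)
print("All pinned packages match; safe to proceed with fine-tuning.")
```

This does not make the compatibility problem go away, but it turns a vague "the notebook broke again" into a concrete, one-line diagnosis that either you or the Gen AI can act on.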
Suggestions:
Please share your thoughts on this. It almost feels like there are good days and bad days for vibe coding.
#VibeCoding is, of course, a good way for anyone to get a start on a topic and perhaps produce a PoC/MVP at best. Beyond that, once the modern-day dynamics of DevOps, multiple libraries, dependencies, etc. kick in, I'm not so sure #VibeCoding will make the cut. These are early days, and things can change rapidly.