My personal takeaways on AI agentic coding - KSeF download script

I decided to do some vibe-coding and share some thoughts. Of course, I want neither to repeat the overall hype about AI taking over all software engineering work nor to deny the revolution. So this is a short note from a person with a strong engineering background who does not do a lot of coding on a daily basis. And this is just my experience. If you want to hear comments from our software engineers at apptimia, you can check one of their recent blog posts.

At the same time, the goal was to create something useful for the community. Have a look at my GitHub repo: https://github.com/mieszkomularczyk/ksefnotifier. It is a simple tool for quickly mirroring KSeF invoices (KSeF is Poland's central e-invoicing system) for your company, creating PDF visualizations, and running a simple workflow in your company's accounting. See README.md for details. I kept it simple, so even people with little software engineering experience can make good use of it. The README is available in both Polish and English. We actually use this script in our company, even though bookkeeping systems offer seamless integration with KSeF.

Finally, I wrote this article myself. I have not run it through an LLM, even for English style correction, simply because I respect any potential readers like you, and the internet is already full of AI-generated stuff. There is one exception: the feature photo of this article is an AI-generated image based on my LinkedIn profile photo.

Choosing a tool and AI startups

There is a multitude of tools you can use for AI agentic coding: Claude Code, OpenAI Codex, Gemini Code Assist, Grok. These are from the big players, each using its own LLM (Google, OpenAI, Anthropic, xAI). You can use the same models through other tools such as GitHub Copilot in VSCode, Cursor (a fork of VSCode), and so on. Each has its own subscription model.

On top of that, I see tens or hundreds of startups offering "improved, focused" AI coding agents, again with their own payment plans on top of the previously mentioned LLMs. Like hundreds of other AI startups, they usually offer an integration layer between a well-known LLM and a specific use-case scenario. Do we need all, or even 90%, of them at all?

Never mind; since I pay for ChatGPT Plus, I went with Codex to avoid paying a new subscription for similar features.

My observations along the way

The initial outcome came really quickly. However, a few things were clear right from the beginning:

  • It takes time: each of my prompts took a while to execute, followed by the tests run by the agent. In total I had to run 63 prompts, each taking some time to process, sometimes several minutes (why so many? Read on).
  • One needs to have software engineering background to get good results.
  • One gets something working quickly, but then it takes lots of time to make things right.

I wrote a long prompt, gave links to the KSeF API documentation, and instructed the agent that I wanted it done in Python. First, the agent did not create a virtual environment (venv) and installed a couple of libraries globally. It took a couple of prompts to revert that and do it right (including the creation of requirements.txt; we want other users to be able to install the tool quickly, so a list of libraries for pip is nice to have, right?).

Then there was quite a struggle with KSeF authorization. The agent failed to create working code, even after a couple of attempts. I checked the docs, and indeed the auth flow is a somewhat longer chain of negotiations, not just a typical connection with a pre-generated access token. I "told" the agent to study the authorization section in detail and pay attention to the slightly more complicated procedure. That helped.
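For readers curious what that chain looks like: the sketch below paraphrases the token-based flow described in the KSeF docs. The function names are mine and the field names are illustrative, not lifted from my repo; the actual HTTP calls, the XML session request, and the ministry's RSA public key are left out.

```python
# Illustrative sketch of the KSeF token-based session flow (field names
# paraphrased from the docs; treat the details as assumptions, not gospel).
def challenge_request_body(nip: str) -> dict:
    """Step 1: request an authorization challenge for the company's NIP."""
    return {"contextIdentifier": {"type": "onip", "identifier": nip}}

def token_plaintext(api_token: str, challenge_timestamp_ms: int) -> bytes:
    """Step 2: the challenge response carries a timestamp; the API token is
    combined with it and RSA-encrypted with the ministry's public key before
    the encrypted blob is sent back to open a session (step 3, omitted)."""
    return f"{api_token}|{challenge_timestamp_ms}".encode()
```

The point is not the exact payloads but the shape of the flow: challenge, then cryptographic proof, then session token, which is more than an agent expects from a "download invoices via REST" prompt.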

After that, there were a couple of edge cases where the script did not work properly. Again, it was quick to generate something working, but it took much longer to get things right. The logic of the input parameters was wrong, with many redundant parameters created out of nowhere. The longest fight was with proper tracking of downloaded.txt (see the README for details of what it does). I started doing my own testing and again had to give 4-5 prompts pointing out not-so-sophisticated edge cases to correct.
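The tracking pattern itself is simple; the agent mostly stumbled on when to record an invoice as done. A minimal sketch of the idea (my own illustration, not the repo's actual code): keep one invoice ID per line, skip IDs already seen, and append only after a successful download, so reruns are safe.

```python
from pathlib import Path

def load_downloaded(path: Path) -> set[str]:
    """Read the set of already-mirrored invoice IDs (one per line)."""
    if not path.exists():
        return set()
    return {line.strip() for line in path.read_text().splitlines() if line.strip()}

def mark_downloaded(path: Path, invoice_id: str) -> None:
    """Append an ID only after its download succeeded."""
    with path.open("a", encoding="utf-8") as f:
        f.write(invoice_id + "\n")

def sync(invoice_ids, path: Path, download) -> None:
    done = load_downloaded(path)
    for inv in invoice_ids:
        if inv in done:
            continue              # already mirrored; reruns stay idempotent
        download(inv)             # may raise; we record only on success
        mark_downloaded(path, inv)
        done.add(inv)
```

The subtlety the agent kept missing is exactly the ordering in `sync`: marking before the download succeeds silently loses invoices on a crash.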

Finally, there were many refinements I wanted to make to polish the product. Every time, the execution took quite some time, even though my changes were simple. Meanwhile, the agent forgot a couple of times to update requirements.txt, even when it had added new libraries to the local venv.

What went really well at the end was the creation of the README. I gave quite long instructions and there were a few back-and-forth "discussions". Still, it saved me from doing what many software engineers hate: writing clean, nice instructions.

Conclusions

Though this was only about two fairly simple scripts, I spent a few hours making the product the way I wanted. Yes, of course, the overall gain was significant: I would otherwise have spent a few days coding it (including testing and checking out things I do not deal with so frequently). And I would probably never have started the project otherwise, because of my limited time.

My key take-aways are:

  • You can create a working app or product without any programming knowledge, but only experienced software engineers can create real, production-grade software. In more complex systems, they will need to do traditional coding anyway.
  • Agents are weak at creating proper architecture, unless you tell them what it should be; but again, you need prior experience to steer them.
  • You need to understand how the code and systems work, and again, you gain that knowledge through prior experience developing software on your own.

Will AI fully replace software engineers?

I doubt it. Of course, it will replace:

  1. Certain profiles of software engineers, especially those with poor product design and creation skills who used to work from requirements defined in high detail. You can call them juniors, but I think software engineers will need to re-profile themselves. We may need fewer coders but more engineers with product knowledge.
  2. Some tasks done by software engineers; they will need to focus on different things and use AI as a tool for better productivity. This is nothing out of the ordinary, just normal evolution. 25 years ago, I did not have IDEs like VSCode with all of their great plugins, code browsing, and completion. There weren't that many great libraries available to use instantly; today there is an open-source library to tackle almost any problem. So AI is a natural, evolutionary tool for making things quicker.

What worries me, however, is the degradation of competences among engineers. Use it or lose it. Currently, we have senior engineers using AI tools, but how do we train the next generation of engineers, not prompters? If they don't train their brains on solving basic problems (which comes with experience), will they be able to leap into solving complex problems before getting acquainted with the basics of software engineering?

I would compare it to airline pilots. Today's aircraft have sophisticated autopilot and assistance systems. Some passenger planes, with proper airport infrastructure, can auto-land. However, only about 1% of landings are done this way; most pilots prefer to land manually to keep their skills sharp. On the other hand, would you fly with a pilot who last landed manually 10 years ago, or did it just a couple of times in a simulator at flight school?

To end, my question is: will you trust software created mostly by prompting an AI coding agent? And will we later have enough software engineers who understand how it all works?

Vibe-coding is useful for exploration, but it breaks down once the model starts making architectural choices without a clear boundary on scope, tests, and ownership. The pattern I keep seeing is strong local progress followed by hidden integration debt, especially around state, edge cases, and refactors across files. Curious where you landed on the handoff point between AI-driven iteration and disciplined engineering review.


I like the term vibe-coding. There's something about working with code that really depends on catching the right flow (especially when it's not just about logic but actually getting stuff done).

The honest postmortems are the most useful in this space. What was the one constraint that mattered most: tests, a spec, or tooling stability?


Great article! I mostly agree. But the majority of people would have doubted just 10 years ago that something like today's LLMs would ever exist. If the current pace of progress continues, it's hard for me to imagine most of us won't be replaced within the next 10 years. If AI is combined with robotics, physical work will be taken over as well. It's a real risk IMHO.

