Anton Rodzevich’s Post

I've been building a side project: a web-based combat tracker for a custom TTRPG. You can check out the repo here: https://lnkd.in/dZrM-mhe. I ran the full delivery loop, requirements through tests, while tightening my agentic pipelines so they could run on trial-tier models and still land close to what I'd get from heavier ones. The bet was that clearer prompts and smaller scopes would beat burning tokens, and that's where most of the learning actually happened.

On the app itself: I drafted and refined requirements and scope in markdown in the repo (requirements-done, backlog notes) so changes could be checked against written intent, and I used those pipelines to turn ideas into small, agent-ready stories.

For design, Stitch let me iterate on layout and tone early; screens were then built as Flask templates and static assets so they still matched real routes, forms, and Socket.IO events.

The stack is Flask + SQLAlchemy + SQLite, with Socket.IO for live updates. I added pytest where it helped, browser automation only where it paid off, and a one-command DB init so a fresh clone isn't blocked on missing tables. (Rough sketches of that wiring and a matching test are at the end of this post.)

The Python backend is mine line by line, with AI used in a teaching/review mode rather than "write the app for me" mode, which for me beat a generic paid course.

This isn't evidence that agents replace engineers. It's one more example of using AI as leverage on a loop you still own.

If you're trying something similar, the README and branch layout are meant to read without insider context, and you're welcome to reuse the Skills in the repo if they help. If you're using Cursor or similar tools, the practical suggestion is the same: treat AI as leverage on that loop, not as a substitute for thinking.

#Python #Flask #Cursor #AgenticAI #OpenSource #TTRPG
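
For anyone curious what that wiring looks like, here's a minimal sketch of the Flask + SQLAlchemy + SQLite + Socket.IO shape plus a one-command DB init. Every name in it (Combatant, hp_changed, tracker.db, the route) is an illustrative stand-in, not the repo's actual code:

```python
# Minimal sketch of the stack described above. All names here
# (Combatant, hp_changed, tracker.db) are illustrative stand-ins,
# not the repo's actual models or events.
import os

from flask import Flask, request
from flask_sqlalchemy import SQLAlchemy
from flask_socketio import SocketIO

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get(
    "DATABASE_URL", "sqlite:///tracker.db"
)
db = SQLAlchemy(app)
socketio = SocketIO(app)


class Combatant(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)
    hp = db.Column(db.Integer, default=0)


@app.cli.command("init-db")
def init_db():
    """One-command DB init: `flask --app app init-db` on a fresh clone."""
    db.create_all()
    print("Database initialized.")


@app.post("/combatants/<int:cid>/hp")
def update_hp(cid):
    combatant = db.get_or_404(Combatant, cid)
    combatant.hp = int(request.form["hp"])
    db.session.commit()
    # Broadcast the change so every open tracker view updates live.
    socketio.emit("hp_changed", {"id": cid, "hp": combatant.hp})
    return {"id": cid, "hp": combatant.hp}


if __name__ == "__main__":
    socketio.run(app, debug=True)
```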
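
And a matching pytest sketch, again against the hypothetical app above (assumed to live in app.py), not the repo's real tests. Flask-SQLAlchemy pins an in-memory SQLite database to a shared connection, so the test client sees the seeded row:

```python
# Pytest sketch for the hypothetical app above; the repo's real tests
# will target its actual routes. Assumes the sketch is saved as app.py.
import os

os.environ["DATABASE_URL"] = "sqlite:///:memory:"  # set before importing app

import pytest

from app import Combatant, app, db


@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.app_context():
        db.create_all()
        db.session.add(Combatant(name="Goblin", hp=7))
        db.session.commit()
        yield app.test_client()
        db.drop_all()


def test_update_hp_persists_and_returns_json(client):
    resp = client.post("/combatants/1/hp", data={"hp": 3})
    assert resp.status_code == 200
    assert resp.get_json() == {"id": 1, "hp": 3}
```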
