I wanted to develop a RAG-based chatbot where users can upload a PDF and chat about its content. My goal was to use the Cursor AI code editor to generate the codebase from a single prompt. Here’s what I learned from the experience.
- The most important thing is how you write the prompt. I tried multiple versions, from simple to very detailed, and saw how each affected the codebase and debugging. A clear, detailed prompt made a huge difference.
- In the first few tries, the code needed a lot of debugging. I used Cursor to troubleshoot by copying errors and applying its fixes. Sometimes, this felt endless, and I gave up on certain issues.
- In later attempts, I debugged manually, which was tough since I didn’t write the code myself. It took time to understand the flow.
- In the final rounds, I focused on writing better prompts instead of fixing issues. This paid off: I ran into minimal problems (mostly just port conflicts or Python dependency conflicts).
I now have a very basic working RAG chatbot where I can upload a PDF and query it. It’s simple, handling one document at a time and only PDFs, but it’s a start and can be improved.
- A better prompt leads to better code. My final prompt was detailed, clear, and gave specific instructions.
- You can either generate the whole codebase at once or build it step-by-step. For example, start with code for PDF extraction, then add chunking, vectorization, and retrieval logic once each part works.
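The step-by-step approach above (extraction, then chunking, vectorization, and retrieval) can be sketched in plain Python. This is a minimal illustration, not the code Cursor generated for me: the PDF-extraction step is replaced by a hard-coded string, and the bag-of-words vectorizer and cosine similarity stand in for a real embedding model and vector store.

```python
import math
from collections import Counter

def chunk_text(text, chunk_size=40, overlap=10):
    """Split text into overlapping word chunks."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

def vectorize(text):
    """Bag-of-words term counts (a stand-in for an embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    """Rank chunks by similarity to the query and return the best matches."""
    qv = vectorize(query)
    ranked = sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)
    return ranked[:top_k]

# In the real chatbot, `document` would come from a PDF extraction
# library (e.g. pypdf) and the retrieved chunks would be passed to an
# LLM as context for answering the user's question.
document = "Refunds are handled by the support team within five business days."
chunks = chunk_text(document, chunk_size=8, overlap=2)
top = retrieve("how are refunds handled", chunks, top_k=1)
```

Building and verifying each of these functions separately, as suggested above, makes it much easier to tell which stage a bug lives in than debugging a whole generated codebase at once.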
- Writing a good prompt isn’t just about clear language. Knowing the technology behind what you’re building helps you give better instructions.