📃 𝗔𝘀𝗸𝗶𝗳𝘆𝗣𝗗𝗙 - 𝗣𝗗𝗙 𝗔𝗜 𝗰𝗵𝗮𝘁

Lately, I've been really interested in how Retrieval-Augmented Generation (RAG) actually works behind the scenes. So, to get hands-on experience, I decided to build AskifyPDF, an app that lets me chat with my PDFs.

---

Here's a quick breakdown of how everything works:

1. When you upload a PDF, the React frontend securely pushes it to Supabase Storage.
2. A FastAPI backend immediately downloads the file, extracts the text, and divides it into overlapping semantic chunks, preserving the original page number for every single chunk.
3. These chunks are converted into high-dimensional vector embeddings (via local LLM inference) and upserted into a Pinecone vector database.
4. When you type a query, the backend embeds your question, runs a similarity search against Pinecone, and isolates the most relevant passages from that specific document.
5. The retrieved context is fed into a locally running Mistral LLM with strict instructions to answer only based on the text provided.
6. The AI generates the answer along with structured citations. Back on the frontend, these citations become interactive buttons: click one, and the PDF viewer instantly jumps to the exact source page so you can verify the AI's claims yourself.

💻 Stack: React (Vite), FastAPI, Supabase, Pinecone, local Mistral (Ollama).

---

Overall, it was fun building this tiny project! I'll be experimenting more and adding fun features later.

---

#RAG #ReactJS #Python #MachineLearning
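The chunking in step 2 can be sketched in a few lines. This is a minimal illustration, not the actual AskifyPDF code: I'm assuming character-based sizes and a `(page_number, text)` input shape, and the helper name `chunk_pages` is my own.

```python
def chunk_pages(pages, chunk_size=500, overlap=100):
    """Split per-page text into overlapping chunks, tagging each with its page.

    `pages` is a list of (page_number, text) tuples. Sizes are in characters
    here for simplicity; a real pipeline might count tokens instead.
    """
    chunks = []
    step = chunk_size - overlap  # how far each new chunk advances
    for page_num, text in pages:
        for start in range(0, max(len(text), 1), step):
            piece = text[start:start + chunk_size]
            if piece.strip():
                # Keep the page number with every chunk so citations can
                # point back to the exact source page later.
                chunks.append({"page": page_num, "text": piece})
    return chunks
```

Keeping the page number attached to each chunk is what makes the clickable citations in step 6 possible.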
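The similarity search in step 4 is handled by Pinecone at scale, but the core idea can be shown in miniature: score every stored chunk vector against the query vector by cosine similarity and keep the top k. The tiny 3-dimensional vectors below are purely illustrative; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """index: list of (chunk_id, vector). Returns the k most similar chunk ids."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [chunk_id for chunk_id, _ in scored[:k]]
```

A vector database does exactly this ranking, just with approximate-nearest-neighbor indexes so it stays fast over millions of vectors.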
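Step 5's "answer only from the text provided" comes down to how the prompt is assembled. The post doesn't show AskifyPDF's actual wording, so this is just one plausible way to ground the model and get page-tagged citations back; the `[p. N]` marker format is my own assumption.

```python
def build_prompt(question, chunks):
    """Assemble retrieved chunks (each {"page": n, "text": ...}) into a
    grounded prompt. Page markers let the model cite sources like [p. 3]."""
    context = "\n\n".join(f"[p. {c['page']}] {c['text']}" for c in chunks)
    return (
        "Answer the question using ONLY the context below. "
        "Cite the page for every claim, e.g. [p. 3]. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The resulting string would then be sent to the local Mistral model (e.g. through Ollama), and the `[p. N]` markers in its reply can be parsed into the clickable citation buttons.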
Really cool build!! 👏🏻👏🏻
It's amazing! 👏👏👏👏 Just wanted to know: how did you make the video?