I've been building TerpNav's backend without leaning on AI, and it's been significantly harder. That was the point.

I migrated from Flask to Django and built the backend with Django REST Framework.

What I have right now: clean URL routing, API endpoints serving data, API keys kept in .env files, and a project structure I understand end to end because I built it line by line.

What I don't have yet: a real database, authentication, rate limiting, HTTPS config, or environment-based secrets management. The data currently lives in flat JSON files. That's the honest state of the project.

But here's what I've mapped out next and actually understand well enough to implement:
• PostgreSQL with Django models (replacing the JSON files)
• Token authentication via DRF
• Rate limiting with django-ratelimit
• Secrets managed through environment variables
• Deployment behind HTTPS

The hardest part isn't the code. It's slowing down enough to understand what I'm actually building. Every concept that AI used to abstract away is something I now have to research, break, and fix myself. That's the trade-off I made, and it's worth it.

Code is available on my GitHub.

#Django #Python #FullStackDevelopment #TerpNav #BuildInPublic
Building TerpNav Backend with Django and No AI
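The planned JSON-to-PostgreSQL move can be staged: first a loader that validates and normalizes the flat JSON into rows, then a one-shot `Model.objects.bulk_create()` inside a Django data migration. A minimal sketch of the loader step only — the field names (`name`, `lat`, `lon`) are hypothetical, not TerpNav's actual schema:

```python
import json
from pathlib import Path

# Hypothetical required fields for a TerpNav location record
REQUIRED = {"name", "lat", "lon"}

def load_locations(path):
    """Read the flat JSON file and normalize each record into a row dict,
    ready for a future Location.objects.bulk_create() on the Django side."""
    records = json.loads(Path(path).read_text())
    rows = []
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {sorted(missing)}")
        rows.append({
            "name": rec["name"].strip(),
            "lat": float(rec["lat"]),   # JSON may store these as strings
            "lon": float(rec["lon"]),
        })
    return rows
```

From there, a `RunPython` migration could do `Location.objects.bulk_create(Location(**r) for r in rows)` — `Location` being a hypothetical model standing in for whatever the JSON files currently hold.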
More Relevant Posts
Day 62 of 100 Days of AI — 🔗 Everything is Connected

Today was one of those days where everything clicks into place. Every piece of the AI Newsletter is now talking to each other.

Cloudflare Worker scrapes the sources on schedule → sends raw content to the Python backend → LangChain agents curate, synthesize, and rewrite → Jinja2 renders the final HTML email → Resend delivers it to subscribers.

Next.js handles the frontend — clean landing page, subscribe form, all connected to the backend. OpenRouter powers the LLM calls. LangGraph orchestrates the agent pipeline. FastAPI holds everything together in the middle.

Seven different services. One clean flow. Zero manual steps.

The part that felt most satisfying: watching a raw scraped article go all the way through the pipeline and come out the other end as a polished newsletter section. Automatically. No human involved. That's what 62 days of building toward this feels like.

Next: production deployment — real subscribers, the newsletter goes live.

#100DaysOfAI #BuildInPublic #LangChain #LangGraph #CloudflareWorkers #FastAPI #NextJS #AIEngineering #Newsletter #OpenRouter #Python
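Stripped of the individual services, a flow like this is plain function composition — each stage's output is the next stage's input. A toy sketch with every stage stubbed (none of this is the actual newsletter code; the real version wires Cloudflare Workers, LangChain, Jinja2, and Resend into these slots):

```python
def scrape(sources):
    # stand-in for the scheduled Cloudflare Worker
    return [f"raw:{s}" for s in sources]

def curate(raw_items):
    # stand-in for the LangChain agents that rewrite content
    return [item.replace("raw:", "story:") for item in raw_items]

def render(stories):
    # stand-in for the Jinja2 HTML email template
    return "<ul>" + "".join(f"<li>{s}</li>" for s in stories) + "</ul>"

def deliver(html):
    # stand-in for the Resend API call
    return {"status": "sent", "body": html}

def run_pipeline(sources):
    # the whole "seven services, one flow" idea in one line
    return deliver(render(curate(scrape(sources))))
```

The payoff of keeping stages this cleanly separated is exactly what the post describes: any one service can be swapped without touching the others.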
Excited to share my latest project: an AI-powered document query system.

Over the past few weeks, I built a full-stack application that lets users upload documents and interact with them using natural language — basically turning static files into something you can actually talk to.

One thing I really focused on was getting the pipeline right. Instead of writing everything in one script, I used LangGraph to structure the RAG pipeline as a stateful workflow. This helped me clearly separate each step — document parsing, chunking, embeddings, vector search, and final response generation. The biggest advantage? It made the system way easier to debug, extend, and scale. Plus, handling more complex queries feels much cleaner when the state is properly managed.

Under the hood, the system combines semantic search with LLM reasoning to return context-aware answers instead of generic responses.

Tech stack I worked with:
• Backend: Python, FastAPI, Uvicorn
• AI/ML: LangChain, LangGraph, OpenAI
• Vector DB: Qdrant
• Database: PostgreSQL
• Frontend: React (Vite) + Tailwind CSS
• Infra: Docker & Docker Compose

Would love to hear if others are experimenting with LangGraph or RAG pipelines — always open to learning and improving!

Git repo: https://lnkd.in/gUJMvxRs

#AI #MachineLearning #RAG #LangChain #LangGraph #FastAPI #React #Docker #Qdrant #PostgreSQL
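The stateful-workflow idea is worth seeing framework-free: each node reads and extends one shared state dict, which is roughly what a LangGraph `StateGraph` threads between nodes. A toy sketch with stand-in node bodies — no real parsing, embeddings, or Qdrant calls, just the shape of the state flow:

```python
def parse(state):
    state["text"] = state["document"].strip()
    return state

def chunk(state):
    # fixed-width chunks; real pipelines use smarter splitting
    state["chunks"] = [state["text"][i:i + 20]
                       for i in range(0, len(state["text"]), 20)]
    return state

def embed(state):
    # toy "embedding": a bag of lowercase words per chunk
    state["index"] = [set(c.lower().split()) for c in state["chunks"]]
    return state

def search(state):
    # toy similarity: word overlap with the query
    q = set(state["query"].lower().split())
    scores = [len(q & words) for words in state["index"]]
    state["context"] = state["chunks"][scores.index(max(scores))]
    return state

def generate(state):
    # stand-in for the LLM call that would consume the context
    state["answer"] = f"Based on: {state['context']!r}"
    return state

def run(document, query):
    state = {"document": document, "query": query}
    for node in (parse, chunk, embed, search, generate):
        state = node(state)  # every node sees the full accumulated state
    return state
```

Because every intermediate value lives in one inspectable state object, debugging a bad answer means printing the state after any node — which is the ease-of-debugging benefit the post attributes to the LangGraph approach.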
🚀 FastAPI vs REST API — What's the Difference?

Many developers confuse FastAPI with REST API, but they are not the same thing 👇

🔹 REST API (Architectural Style)
REST (Representational State Transfer) is an architectural style for designing APIs. It defines how clients and servers communicate over HTTP using methods like GET, POST, PUT, and DELETE.
✔️ Language-agnostic
✔️ Widely adopted standard
✔️ Focuses on structure & principles

🔹 FastAPI (Framework)
FastAPI is a modern Python framework used to build APIs, often following REST principles.
✔️ Built with Python 🐍
✔️ High performance (comparable to Node.js & Go)
✔️ Automatic API docs (Swagger UI)
✔️ Async support out of the box
✔️ Data validation using Pydantic

⚖️ Key Difference
👉 REST is how you design APIs
👉 FastAPI is a tool to implement APIs

💡 In simple terms: you can build a REST API using FastAPI, Django, Express, or any framework — FastAPI is just one of the fastest and most developer-friendly options today.

🔥 When to Choose FastAPI?
- Building high-performance APIs
- Working within the Python ecosystem
- Needing auto docs & validation
- Creating AI/ML backend services

📌 Final thought: REST gives you the blueprint 🏗️ FastAPI helps you build it faster ⚡

#FastAPI #RESTAPI #Python #WebDevelopment #BackendDevelopment #API #SoftwareEngineering #Coding #Developers #Tech
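The "REST is design, FastAPI is implementation" point can be shown with no framework at all: at its core, REST is resources addressed by path with behavior selected by HTTP method. A toy dispatch table — paths, handlers, and the in-memory store are all hypothetical:

```python
# A tiny in-memory resource and three handlers
ITEMS = {1: "coffee"}

def list_items():
    return (200, list(ITEMS.values()))

def create_item(body):
    ITEMS[max(ITEMS, default=0) + 1] = body
    return (201, body)

# The REST part: (method, path) pairs select behavior on a resource
ROUTES = {
    ("GET",  "/items"): lambda req: list_items(),
    ("POST", "/items"): lambda req: create_item(req["body"]),
}

def dispatch(method, path, req=None):
    handler = ROUTES.get((method, path))
    if handler is None:
        return (404, "not found")
    return handler(req or {})
```

FastAPI, Django, and Express all reduce to a far better version of this table — plus the parsing, validation, async handling, and auto docs that make them worth using.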
#Day_24/100: Last day of polish on HERVEX. Here's what changed from my original vision.

14 project days ago I started with a simple idea: build an API where you give an AI a goal and it figures out how to accomplish it. The idea didn't change. Almost everything else did.

I was going to use LangChain from day one. I skipped it entirely — I wanted to understand every layer before any framework hid it from me. The planner, executor, memory, and aggregator are all custom.

I was going to use PostgreSQL. I chose MongoDB — agent outputs are unstructured by nature, and forcing a rigid schema would have meant migrations every phase.

I was going to call it "Autonomous AI Agent API". I renamed it HERVEX — derived from my own name, Heritage — built to sound like something that executes precisely and without hesitation.

The frontend didn't happen. That's not failure — that's scope discipline.

What HERVEX v1.0.0 actually is: you submit a goal. It plans, executes, searches the web, reasons with memory, and returns one complete result. No hand-holding.

Stack: Python · FastAPI · Groq · Celery · Redis · MongoDB · Tavily

What building this taught me:
→ Architecture decisions made early are the ones you live with longest
→ Scope discipline is not failure
→ Building in public creates accountability you can't manufacture
→ Understanding a system deeply before abstracting it makes you a better engineer

HERVEX is done. On to the next build. 🚀

#BuildingInPublic #AgenticAI #Python #FastAPI #BackendEngineering #HERVEX #100DaysOfCode #ProjectDay14
Most people use FastAPI the same way they used Flask. That's the problem. Here's what building actual production backends with FastAPI taught me:

Depends() is not just for auth. It's how you wire up DB sessions, rate limiters, feature flags, and service layers. If your route handlers are doing business logic, your architecture is already wrong.

async def means nothing if you're still blocking. Calling sync SQLAlchemy or requests inside an async function blocks the entire event loop. Use asyncpg, SQLAlchemy 2.0 async, or run_in_executor for CPU-bound work. Async is a commitment, not a label.

Pydantic schemas are not your ORM models. Keep them separate. Your DB structure will change; your API contract should not change with it.

BackgroundTasks will silently lose your jobs. If the process dies, the tasks die with it. For emails, webhooks, and anything that actually matters, use Celery + Redis or ARQ. Don't find this out in production.

Lifespan over startup/shutdown. The asynccontextmanager lifespan pattern is cleaner, more testable, and composable. If you're still using the old @app.on_event decorator, you're behind.

The framework is solid. Most teams just never go beyond the tutorial defaults.

What's a FastAPI pattern your team actually uses in prod? Curious to hear.

#FastAPI #Python #BackendDevelopment #APIDesign
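The lifespan point deserves a closer look, because the pattern is just `contextlib.asynccontextmanager` from the stdlib. A framework-free sketch with hypothetical setup/teardown names — in FastAPI you'd write the same shape and pass it as `FastAPI(lifespan=lifespan)`:

```python
import asyncio
from contextlib import asynccontextmanager

EVENTS = []  # records the order of events so the pattern is visible

@asynccontextmanager
async def lifespan(app):
    # startup: acquire long-lived resources (DB pool, HTTP client, ...)
    app["pool"] = "fake-connection-pool"
    EVENTS.append("startup")
    try:
        yield  # the application serves requests while suspended here
    finally:
        # shutdown: release resources, even if serving raised
        app.pop("pool")
        EVENTS.append("shutdown")

async def serve_one_request(app):
    # stand-in for a request handler using the shared resource
    EVENTS.append(f"handled-with-{app['pool']}")

async def main():
    app = {}
    async with lifespan(app):
        await serve_one_request(app)

asyncio.run(main())
```

The `try/finally` around the `yield` is why this beats paired `@app.on_event("startup")`/`("shutdown")` handlers: setup and teardown live in one function, share local variables, and teardown is guaranteed to run.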
One instruction file doesn't scale.

The moment your codebase has a Python service and a TypeScript frontend and a Go worker, a single CLAUDE.md becomes either too generic to be useful or too bloated to trust.

Scoped context solves this the way filesystems already do — by nesting. Org-level rules wrap user-level rules wrap project-level rules wrap directory-level rules. The agent reads whichever scope it's working inside, the same way a developer picks up conventions walking into a new folder.

Example: the org says "never commit .env files." The project says "use Zod for validation." The ./src/api/ directory says "return JSON, validate schema." The agent sees all three, cleanly composed.

The trade-off is discoverability. When rules live in four places, it's harder to answer "what does the agent actually see right now?" Good tooling here isn't optional — it's the whole pattern.

Treat context as a tree, not a file.

How are you organizing rules across a multi-language codebase?

#AI #AgenticAI #SoftwareArchitecture #DeveloperTools #Clausey
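Scope resolution here is just a prefix walk from the root down to the file being edited, collecting rules outermost-first. A sketch using the post's own three example rules — the paths and rule storage are hypothetical, not any real tool's format:

```python
# Rules keyed by path prefix: "" is the org scope, then project,
# then directory scopes nested below it.
RULES = {
    "":             ["never commit .env files"],        # org level
    "proj":         ["use Zod for validation"],         # project level
    "proj/src/api": ["return JSON, validate schema"],   # directory level
}

def effective_rules(path):
    """Collect every rule scope on the way down to `path`, outermost first."""
    parts = path.split("/")
    scopes = [""] + ["/".join(parts[:i]) for i in range(1, len(parts) + 1)]
    rules = []
    for scope in scopes:
        rules.extend(RULES.get(scope, []))
    return rules
```

A file deep in the API directory sees all three scopes composed; a file at the project root sees only the org and project rules — which is also how a "what does the agent see right now?" debugging command could be built.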
Built a full AI Agent system on Django. Tool System, Agent Loop, Multi-Agent, Streaming, and RAG — single project, unified architecture.

𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗮𝗹 𝗗𝗲𝗰𝗶𝘀𝗶𝗼𝗻𝘀:

→ Agent Loop: a while loop that checks whether the LLM returned a function call or a text response. If it's a function call, execute the tool, append the result to context, and send it back to the LLM. Repeat until it returns text.

→ Tool System: Strategy pattern on top of a BaseTool abstract class. Each tool implements name, description, parameters, and execute(). ToolRegistry handles central registration — adding a new tool = 1 class + 1 line of register.

→ Multi-Agent: an inter-agent communication layer. A Researcher agent gathers data, a Validator agent verifies it, and a Reporter agent formats the output. Each agent runs independently with its own system prompt and tool set.

→ Streaming: token-level real-time delivery via SSE (Server-Sent Events) — StreamingHttpResponse on the Django side, EventSource on the frontend.

→ RAG Pipeline: chunk documents, convert to embeddings, index in a vector DB. On a user query, run similarity search to pull the most relevant chunks and inject them as context for the LLM.

→ Memory: persistent conversation history via Conversation & Message models. The agent carries prior context into the LLM's context window.

Stack: Django + DRF / Gemini API Function Calling / SQLite + Vector DB

#AIAgents #Django #Python #LLM #RAG #Gemini #MultiAgent
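A condensed sketch of the Tool System and Agent Loop as described — `BaseTool`, a registry, and a while loop — with the Gemini function-calling API replaced by a stub that requests one tool and then answers. All names here are illustrative, not the project's actual code:

```python
from abc import ABC, abstractmethod

class BaseTool(ABC):
    name = ""
    description = ""

    @abstractmethod
    def execute(self, **kwargs): ...

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, tool):
        self._tools[tool.name] = tool

    def execute(self, name, **kwargs):
        return self._tools[name].execute(**kwargs)

class EchoTool(BaseTool):
    name = "echo"
    description = "Echo the input back."

    def execute(self, text):
        return f"echo:{text}"

registry = ToolRegistry()
registry.register(EchoTool())   # adding a tool = 1 class + 1 register line

def fake_llm(context):
    # stand-in for a Gemini function-calling response: request a tool
    # once, then return plain text built from the tool result
    if not any(m["role"] == "tool" for m in context):
        return {"function_call": {"name": "echo", "args": {"text": "hi"}}}
    return {"text": f"done ({context[-1]['content']})"}

def agent_loop(goal):
    context = [{"role": "user", "content": goal}]
    while True:
        reply = fake_llm(context)
        if "function_call" in reply:        # tool requested: run it,
            call = reply["function_call"]   # append result, loop again
            result = registry.execute(call["name"], **call["args"])
            context.append({"role": "tool", "content": result})
        else:                               # plain text: loop terminates
            return reply["text"]
```

The loop terminates exactly as the post says: the moment the model stops asking for tools and returns text.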
𝐂𝐥𝐚𝐮𝐝𝐞 𝐂𝐨𝐝𝐞 has native OpenTelemetry (check links). 𝐂𝐮𝐫𝐬𝐨𝐫 does not, but it has something else: a hooks system. And the community has already built the bridge.

The Cursor hooks system fires pre/post events for every agent action — tool calls, shell commands, MCP server calls, file edits, session lifecycle. It was designed for governance tooling. Turns out it's also exactly what you need to emit OTLP spans.

I evaluated two implementations:
→ 𝐜𝐮𝐫𝐬𝐨𝐫-𝐨𝐭𝐞𝐥-𝐡𝐨𝐨𝐤 (by 𝐋𝐚𝐧𝐠𝐆𝐮𝐚𝐫𝐝) — Python, single setup script, any OTLP backend, built-in data masking, GenAI semantic conventions
→ 𝐜𝐮𝐫𝐬𝐨𝐫-𝐥𝐚𝐧𝐠𝐟𝐮𝐬𝐞 (by 𝐧𝐚𝐨𝐮𝐟𝐚𝐥𝐞𝐥𝐡) — JS/npm, Langfuse-specific, more setup steps, no privacy controls

𝐜𝐮𝐫𝐬𝐨𝐫-𝐨𝐭𝐞𝐥-𝐡𝐨𝐨𝐤 is the one I'd recommend — and not just for the simpler install. It uses the same CNCF GenAI semantic conventions that Claude Code's native OTEL uses. That matters when you're eventually pulling data from both tools into the same backend.

𝗪𝗛𝗔𝗧 𝗬𝗢𝗨'𝗟𝗟 𝗦𝗘𝗘 𝗢𝗡𝗖𝗘 𝗜𝗧'𝗦 𝗥𝗨𝗡𝗡𝗜𝗡𝗚
1. 🔧 𝗖𝘂𝗿𝘀𝗼𝗿 𝗿𝘂𝗹𝗲𝘀 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲𝗻𝗲𝘀𝘀 — which .cursorrules produce clean single-pass completions, and which trigger expensive multi-turn correction loops
2. 🔌 𝗠𝗖𝗣 𝗰𝗮𝗹𝗹 𝗽𝗮𝘁𝘁𝗲𝗿𝗻𝘀 — which servers get invoked, how often, and whether any add latency without value
3. 💰 𝗖𝗼𝘀𝘁 𝘃𝗶𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆 — token burn by session, by tool, by model
4. 🔄 𝗜𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁 𝘀𝗶𝗴𝗻𝗮𝗹 — Cursor rules are your tuning lever; telemetry shows you where to pull it

The bigger picture: once traces from Claude Code and Cursor land in the same Langfuse instance, you can start building evaluation loops — comparing tools, prompts, and agent behaviors on real data. That's the next step. More on that soon. Start collecting traces now, privately, even locally, so you have the data to evaluate once the tooling matures.

Have you tried 𝐜𝐮𝐫𝐬𝐨𝐫-𝐨𝐭𝐞𝐥-𝐡𝐨𝐨𝐤 or 𝐜𝐮𝐫𝐬𝐨𝐫-𝐥𝐚𝐧𝐠𝐟𝐮𝐬𝐞? Curious what you're using, and how you're making use of it!

#Cursor #OpenTelemetry #Langfuse #AIObservability #DevTools #CodingAssistant

(Links in first comment)
🚀 Introducing ALGO_TRACKER.AI – bridging machine learning with static code analysis for Python.

As software systems scale, quantifying technical debt and maintainability becomes crucial. Traditional rules-based linters often miss the complex interplay of metrics that defines genuine code risk.

To address this, I built ALGO_TRACKER.AI, an intelligent auditor that moves beyond rigid rules. It leverages a trained XGBoost model to analyze static code metrics (LOC, cyclomatic complexity, Halstead metrics) recursively fetched from any public Python repository via the GitHub API.

The goal is simple: give developers and tech leads a predictive, probability-based "Bullish" (clean/maintainable) or "Bearish" (high technical debt) rating for their codebase.

Key features:
🔹 Deep recursive scanning of Python (.py) files using GitHub's /git/trees API
🔹 Static metric extraction (Radon/Lizard) to quantify complexity
🔹 Intelligent risk prediction using an optimized XGBoost classifier

Tech stack:
⚛️ Frontend: React, Tailwind CSS (deployed on Netlify)
⚡ Backend: FastAPI (Python), deployed on Railway
🤖 Machine learning: scikit-learn & XGBoost

Check out the working prototype here: https://lnkd.in/g2tVERcH

#MachineLearning #SoftwareEngineering #Python #FastAPI #ReactJS #FullStack #ArtificialIntelligence #Innovation
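For a sense of what the metric-extraction step involves, here is a toy version using only the stdlib `ast` module. The real project uses Radon/Lizard, and this approximation of cyclomatic complexity (1 + branch points) is far cruder than theirs — it's a sketch of the idea, not their pipeline:

```python
import ast

# Node types counted as branch points for the toy complexity estimate
BRANCHES = (ast.If, ast.For, ast.While, ast.Try,
            ast.With, ast.BoolOp, ast.ExceptHandler)

def metrics(source):
    """Extract a few static metrics from Python source text."""
    tree = ast.parse(source)
    return {
        # non-blank lines of code
        "loc": len([l for l in source.splitlines() if l.strip()]),
        "functions": sum(isinstance(n, ast.FunctionDef)
                         for n in ast.walk(tree)),
        # crude cyclomatic complexity: 1 + number of branch points
        "cyclomatic": 1 + sum(isinstance(n, BRANCHES)
                              for n in ast.walk(tree)),
    }
```

A feature vector like this, computed per file across a repository, is the kind of input an XGBoost classifier could then map to a clean/debt-heavy rating.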
🚀 Scrapling: A Game-Changer in Web Scraping

I explored D4Vinci/Scrapling, and it stands out as a modern, adaptive web scraping framework built for real-world use cases.

💡 Why it matters:
🧠 Auto-adapts to website structure changes
🕷️ Supports static + dynamic + anti-bot pages
⚡ Built for scalable crawling
🤖 AI-ready for RAG and agent workflows

🔥 It bridges traditional scraping with modern AI data pipelines.

https://lnkd.in/gpzAZNP8

#WebScraping #AI #Python #Automation #DataEngineering #OpenSource