Most beginners start a Flask project like this:

app.py
routes.py
models.py

Everything works… until the project grows. Then suddenly the codebase becomes messy:
• Business logic inside routes
• Database queries everywhere
• Hard to test
• Hard to scale

So I built something to solve this problem.

🚀 Flask Scalable Template

A production-ready Flask project structure designed to keep applications clean and maintainable as they grow. Instead of putting everything in one place, the project separates responsibilities:

services → business logic
middleware → request/response middleware
models → database layer
utils → reusable helpers
config → environment configuration

Why this matters:
• Cleaner architecture
• Easier testing
• Better maintainability
• Scales well for larger projects

This template is especially helpful for developers who struggle with "Where should I put this file?" in Flask projects. Building it helped me understand how backend applications are structured in real-world projects.

If you're learning Flask or building backend APIs, this might help you start with a better architecture. I'd love to hear your feedback or suggestions. GitHub link in the comments 👇

#Flask #Python #BackendDevelopment #OpenSource #WebDevelopment
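A minimal sketch of the separation the template describes: the route stays thin and only translates HTTP, while the service holds the business logic. Module and function names here are illustrative, not taken from the template itself.

```python
# Sketch of the routes/services split: the service knows nothing
# about Flask, so it can be unit-tested without a running server.
from flask import Flask, jsonify


# services/user_service.py -- business logic lives here
def get_user_summary(user_id: int) -> dict:
    # A real service would call the models layer (database queries);
    # the in-memory dict stands in for that here.
    users = {1: {"name": "Ada", "orders": 3}}
    user = users.get(user_id)
    if user is None:
        raise LookupError(f"user {user_id} not found")
    return {"name": user["name"], "order_count": user["orders"]}


# app.py -- thin route: translate HTTP <-> service calls, nothing more
def create_app() -> Flask:
    app = Flask(__name__)

    @app.route("/users/<int:user_id>")
    def user_summary(user_id: int):
        try:
            return jsonify(get_user_summary(user_id))
        except LookupError:
            return jsonify({"error": "not found"}), 404

    return app
```

Because the service raises a plain `LookupError` instead of returning an HTTP response, the same function can later back a CLI, a background job, or a different framework without changes.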
Flask Scalable Template for Clean Architecture
🚀 Built a Production-Grade RAG System with Spring Boot

Most GenAI projects stop at "chatbot with PDFs". I wanted to go deeper and build something closer to how real backend systems work. Over the past few days, I designed and implemented a Spring Boot-based RAG system that ingests and queries knowledge across multiple sources.

🔧 What I built:

1️⃣ Ingestion Layer: multi-source connectors for PDFs, wiki documents, and PostgreSQL, with incremental updates.
2️⃣ Chunking Strategy (critical for RAG): PDFs → fixed-size pragmatic chunks; wikis → semantic sections; DB → row-level chunking.
3️⃣ Embeddings + Vector Store: Ollama embeddings; Redis vector store with rich metadata (source, timestamps, access rules).
4️⃣ Retrieval Pipeline: semantic search + metadata filtering; supports domain constraints (date ranges, visibility rules).
5️⃣ Prompt Orchestration: grounding rules + structured prompts; source-aware responses with citations (reduced hallucination).
6️⃣ Knowledge Lifecycle Management: admin APIs to ingest/delete sources independently, with no need for full re-indexing.

💡 Key Learnings:
✔️ Chunking strategy matters more than model choice
✔️ Metadata design = production-grade retrieval
✔️ Local models (Ollama) are great for cost-efficient development
✔️ RAG systems are fundamentally backend + data problems, not just AI

🧱 Tech Stack: Java 17 • Spring Boot 3.5 • Spring AI • Redis • PostgreSQL • Ollama

🔗 Repo: https://lnkd.in/dR5drAU3

Next:
→ Adding a React UI for interactive querying
→ Writing a deep-dive on RAG system design & trade-offs

#SpringBoot #RAG #AIEngineering #Backend #Java #SystemDesign
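The fixed-size chunking mentioned in step 2️⃣ can be sketched in a few lines. This is an illustrative, character-based version with made-up sizes; real pipelines often chunk by tokens and the post's system is in Java, not Python.

```python
# Fixed-size chunking with overlap, as used for PDFs in the pipeline
# above. The overlap means a sentence cut at one chunk boundary still
# appears whole in the neighboring chunk.

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap  # advance less than `size` so chunks overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break  # the last chunk already reached the end of the text
    return chunks
```

The trade-off to tune is overlap size: too small and boundary sentences get split, too large and you pay for the same tokens twice at embedding and retrieval time.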
⚙️ Day 3 — Backend finally started making sense…

Spent the last couple of days setting up the structure. Earlier I used to think backend was just writing functions. Now I see it more like a flow:

user → request → Flask → response

So far I've built:
✔ Flask app with a proper folder structure
✔ Routes using Blueprints (to keep code modular)
✔ Basic pages like login and dashboard

I also started understanding what actually happens when we hit a URL:
→ Browser sends a request
→ Flask matches the route
→ The view function executes
→ render_template returns HTML
→ The page loads in the UI

This small flow cleared a lot of confusion for me. Before this, I was just writing code; now I understand why things work. I never thought something this simple could clear so many doubts.

Next: adding a database (SQLAlchemy) + authentication (Flask-Login). Feels like I'm finally moving from "learning" → "building".

If you're learning backend, try drawing the flow once — it really helps. What concept became clear for you only after visualizing it? 👀

#flask #backenddevelopment #learninginpublic

This is the simple flow I followed while building (attached below 👇)
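The request → route → function → response flow above can be watched end-to-end with Flask's built-in test client. The Blueprint mirrors the post's setup; the route path and returned string are illustrative.

```python
# request -> Flask matches route -> function executes -> response.
# Uses a Blueprint, as in the post, so routes stay modular.
from flask import Flask, Blueprint

pages = Blueprint("pages", __name__)


@pages.route("/dashboard")
def dashboard():
    # A real view would call render_template("dashboard.html");
    # returning a string keeps this sketch self-contained.
    return "<h1>Dashboard</h1>"


app = Flask(__name__)
app.register_blueprint(pages)
```

With the test client, `client.get("/dashboard")` exercises exactly the flow in the diagram, and an unknown path shows the other branch: no matching route, so Flask answers 404 before any view function runs.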
From Dataset to Deployed API: Movie Recommendation System 🎬🚀

I'm really enjoying the journey of bridging the gap between ML and full-stack development. From data preprocessing → model building → API → containerization → deployment → web integration, this project helped me understand the complete workflow of deploying a machine learning system.

Tech Stack: Python (scikit-learn) | FastAPI | Docker | .NET Core | Vue.js

The Workflow:

Dataset: Used the TMDB dataset to train a movie recommendation model.
Data Processing: Performed preprocessing, selected relevant fields, and generated tags for each movie.
Text Vectorization: Applied Bag of Words to convert text into vectors.
Text Cleaning: Implemented stop-word removal and stemming to improve feature quality.
Similarity Calculation: Used cosine similarity to identify the closest movies and generate recommendations.
API Development: Exposed the model through a FastAPI endpoint.
Containerization: Containerized the application using Docker.
Deployment: Pushed the Docker image to Docker Hub and served the model.
Frontend & Web Integration: Built the web interface using ASP.NET Core MVC with Vue.js.

Want to practice with my model? If you are learning frontend, backend, or API integration and want a working ML model to test your skills, you can pull my trained model directly from Docker Hub and start building your own interface around it:

👉 docker pull hamza1086/movie-recommendation-model

#MachineLearning #DataScience #DotNet #Docker #FastAPI #VueJS #WebDev
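The core of the recommendation step above, bag-of-words vectors compared with cosine similarity, fits in a few lines of stdlib Python. The tiny "tag" corpus below is made up for illustration; the real project builds tags from TMDB metadata and uses scikit-learn's vectorizer.

```python
# Stdlib-only sketch of bag-of-words + cosine similarity, the same
# idea the project implements with scikit-learn at scale.
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def recommend(title: str, tags: dict[str, str], k: int = 2) -> list[str]:
    # One word-count vector per movie, built from its tag string.
    vectors = {t: Counter(doc.lower().split()) for t, doc in tags.items()}
    query = vectors[title]
    scored = sorted(
        ((cosine(query, v), t) for t, v in vectors.items() if t != title),
        reverse=True,
    )
    return [t for _, t in scored[:k]]
```

Cosine similarity compares direction rather than magnitude, which is why a movie with a long tag list does not automatically dominate one with a short list.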
Your FastAPI backend is fast to build. But is it fast to run?

Most developers find out the answer at the worst possible moment: when real users hit it at the same time. Endpoints slow down. Requests pile up. Users drop off. Not because the code is wrong, but because it is blocking.

Here is what blocking actually looks like in production. Your user hits an endpoint. FastAPI calls the database. That query takes 200ms. During those 200ms that worker is frozen. Not slow. Frozen. Every other request sits in a queue waiting for that one query to finish. 100 users hit your API at the same time: user 1 gets served, users 2 to 100 wait in line. That is sync. That is blocking I/O.

FastAPI was built to avoid exactly that. With async/await, while your database query runs in the background, your server is already picking up the next request. And the next. And the next. 200ms of database wait becomes invisible to every other user.

In real backend terms:

SYNC — blocks:

```python
def get_orders(user_id: int):
    return db.query(user_id)
```

ASYNC — non-blocking:

```python
async def get_orders(user_id: int):
    return await db.query(user_id)
```

Same logic. Same database. Same server. But now 100 users get served in roughly the time it used to take to serve 1.

This matters even more when your endpoints call external services:
1. Payment gateway: ~300ms wait
2. AI model response: 2 to 3 seconds wait
3. Email service: ~500ms wait

With sync, every user feels every millisecond of every one of those waits. With async, none of them do.

FastAPI gives you non-blocking I/O natively. No extra setup. No plugins. No workarounds. Just write async, add await, and let FastAPI handle the rest. Your backend was already fast to build. Now make it fast to run.

Are you using async endpoints in your FastAPI projects? 👇

#FastAPI #Python #BackendDevelopment #AsyncProgramming #SoftwareEngineering #APIDesign #PythonDeveloper #WebDevelopment #TechIn2026 #BuildInPublic
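The "100 users in the time of 1" claim is easy to verify with plain asyncio, which is what FastAPI uses under the hood. Here `asyncio.sleep` stands in for the 200ms database call; the handler name mirrors the post's example, and the timings are illustrative.

```python
# Demonstrates overlapping waits: many simulated I/O calls complete
# in roughly the time of one, because none of them blocks the loop.
import asyncio
import time


async def get_orders(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # simulated 100ms database query
    return {"user_id": user_id, "orders": []}


async def serve_many(n: int) -> float:
    """Serve n concurrent requests and return the total elapsed time."""
    start = time.monotonic()
    await asyncio.gather(*(get_orders(i) for i in range(n)))
    return time.monotonic() - start
```

Running `asyncio.run(serve_many(100))` finishes in roughly 0.1s rather than 10s, because all hundred waits overlap. The caveat the post skips: this only works when the database driver itself is async; an async `def` wrapped around a blocking driver still freezes the event loop.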
🚀 Flask vs FastAPI

If you have worked with Python for backend development, you have probably come across Flask and FastAPI. Both are powerful, but they serve slightly different purposes depending on your use case.

🔹 Flask is a lightweight and flexible micro-framework. It has been around for years and has a huge community, and you get full control over how you structure your application. However, that flexibility comes at a cost: you often need to write more boilerplate code and manage things like validation and async handling manually.

🔹 FastAPI, on the other hand, is relatively newer but built for modern APIs. It leverages async programming and type hints, making it fast and developer-friendly.

⚡ Why is FastAPI faster?
FastAPI is built on Starlette (for async support) and Pydantic (for data validation). Asynchronous request handling lets it process many concurrent requests without blocking the server.

🐢 Why is Flask slower?
Flask is primarily synchronous. While you can use async with Flask, it is not its core strength, and for high-concurrency applications this can become a bottleneck.

🧠 When to use Flask?
1. Small to medium projects
2. Simple APIs or web apps
3. When you need flexibility and full control

⚡ When to use FastAPI?
1. High-performance APIs
2. Microservices architecture
3. Real-time or async-heavy applications
4. When you want automatic validation and documentation

Summary: Flask is like a blank canvas — simple and flexible. FastAPI is like a smart toolkit — optimized and ready for scale. Both are great; the choice depends on your project needs, not just speed.

#Python #FastAPI #Flask #BackendDevelopment #WebDevelopment #APIDesign #SoftwareEngineering #Programming #Developers #TechCommunity #CodingLife #LearnToCode #AsyncProgramming
Most token optimization tools look at your config: unused skills, bloated system prompts, model routing. Tokenomics looks at something different: how you actually work.

Roni Koren Kurtberg ran both approaches side by side on 240 Claude Code sessions and found completely different problems. Config audits catch the overhead you pay before typing a word. Behavioral analysis catches the patterns you repeat session after session without realizing it.

43% context snowball rate. 645 redundant file reads. 44% of sessions with unbounded bash output. No settings file will fix those. You need to see them first.

Great post with real numbers, not theory.
I burned through 1.63 BILLION tokens in 30 days on Claude Code. ~$850 in API costs.

Before you panic: 90% of those tokens were cached context replays at $0.30-$0.50/MTok. The real damage? 4.9M output tokens at $15-25/MTok, plus cache writes. That's where the money actually goes.

I ran two tools to understand where it all went.

Tokenomics (https://lnkd.in/dBrFv_jE by Gal Naor) analyzed 240 sessions:
🏆 Context snowball: 43% of sessions had runaway growth. One project ballooned 27x. ~277M tokens wasted.
📖 File re-reads: unchanged files re-read 645 times. One file was read 44 times; it never changed.
🔍 42 consecutive file reads in one session dumped into context. Could've used a subagent.
💻 Bash bloat: 44% of sessions had unbounded output. "Here Claude, hold this 10,000 lines."
👻 Phantom MCP: one server loaded across 240 sessions, used in 0.
🧠 Opus overkill: 7% of sessions used Opus for single-file edits. 5x the cost, identical results.

Token Optimizer (https://lnkd.in/dYViXfNi) audited my config (134 sessions, 1.36B tokens):
• 65.1% of tokens (886M) went to Opus on routine work. Haiku: 0.2%.
• 76K tokens/session on 61 unscoped rule files. C++, Swift, and Go rules in a Java project.
• 45 duplicate skills; 46 of 47 never used in 30 days.
• 118 plugin skills, only ~10 relevant to my stack.
• BASH_MAX_OUTPUT_LENGTH not set: 2,450 bash calls dumping unlimited output.

Overhead: ~49K tokens before typing a word. Full potential: ~90K tokens/session eliminated + a 40-50% cost reduction.

This week I'm cleaning up and implementing both tools' suggestions. I'll post a follow-up with real savings and whether output quality improved or took a hit. Run both tools; each gives you different insights.

#ClaudeCode #AI #DeveloperTools #TokenOptimization #LLM
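The "redundant file reads" finding above is the kind of behavioral pattern a simple counter over a session log can surface. The event format here (a list of tool-name/path pairs) is invented for the sketch; real analyzers parse Claude Code's actual session logs.

```python
# Illustrative detector for the redundant-read pattern: count how
# many times each file was read beyond the first in one session.
from collections import Counter


def redundant_reads(events: list[tuple[str, str]]) -> dict[str, int]:
    reads = Counter(path for tool, path in events if tool == "Read")
    # n reads of the same unchanged file means n - 1 were redundant.
    return {path: n - 1 for path, n in reads.items() if n > 1}
```

A real version would also check file modification times, since re-reading a file that actually changed is not waste.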
Waking up to the news that the Claude Code CLI source code got exposed.

What happened?
- A file named cli.js.map was included in version 2.1.88 of the @anthropic-ai/claude-code package.
- This ".map" file was meant for internal debugging. It pointed directly to a ZIP archive of the original, un-minified TypeScript code sitting in Anthropic's cloud storage.
- The scale: approximately 512,000 lines of code and 1,900 files were exposed.

How could this have been avoided?
- The developer (or auto-generated AI code) missed excluding ".map" files in ".npmignore".
- When a developer runs npm publish, the tool consults a file called .npmignore. If the developer forgets to tell npm to ignore .map files, they get uploaded to the public registry along with the app.

One key lesson I took personally: moving forward, move away from JS frameworks toward more defensively packaged languages for backend work.

⏺ The JS framework trade-off: the main reason companies stay with JS is development speed. It is faster to prototype in TypeScript. However, as the Claude Code leak showed, that speed comes at the cost of total exposure if one developer makes a single mistake in a config file.

What is the user impact?
While no user data was leaked, the code revealed several of Anthropic's internal projects and strategies. Competitors can now see exactly how Anthropic solved complex problems like agent orchestration and long-term memory management, which were previously closely guarded secrets.

What should Claude users do?
If you're using version 2.1.88, update immediately to version 2.1.89 or higher. Anthropic has already pulled the leaky version from the registry, but the code itself is now widely mirrored across the internet; the repo was taken down only after it had already been mirrored. Check the Claude Code mirror repo link in the comments. Developers have already started porting the code to Rust, and the mirror repo gained 77K+ stars within 12 hours.

Saravanan Gnanaguru
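The fix described above comes down to keeping debug artifacts out of the published package. A minimal example (illustrative, not Anthropic's actual configuration):

```
# .npmignore: keep source maps and other debug artifacts
# out of the package uploaded by `npm publish`
*.map
```

Alternatively, the "files" field in package.json acts as an allow-list of what gets published, and running `npm pack --dry-run` prints exactly which files a publish would include, which makes a cheap pre-publish check.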
Your Django app is lying to you.

Every slow query in production? Logged as this:

[WARNING] query took 1,847ms

That's it. No SQL. No plan. No cause. Just a number mocking you. And you're expected to fix it. How? Reproduce it locally? Good luck — your dev DB has 200 rows; prod has 4 million. Guess the index? Maybe. Probably wrong. Wait for it to happen again and stare harder? That is what most teams actually do.

I got tired of this. So I built a 40-line interceptor that runs in production. Every slow query now logs this automatically:
→ Exact SQL
→ Execution time
→ Full EXPLAIN ANALYZE output
→ Buffer hits, seq scans, nested loops — all of it

Before I even open Slack.

How it works:
→ Hooks Django at the cursor level via the connection_created signal
→ Times every query with monotonic_ns (zero drift)
→ If a query is slow, fires EXPLAIN ANALYZE on a separate connection, never touching your active transaction
→ Emits structured JSON, straight into your log pipeline

No dependencies. No middleware. No debug toolbar. No "works on my machine."

The rule I live by now: you cannot fix what you cannot see in production. Not in dev. Not in staging. In production.

#django #python #postgres #backend #softwareengineering
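The timing core of such an interceptor can be sketched without Django at all. The wrapper signature below mirrors the shape of Django's `execute_wrapper` database-instrumentation hook (`execute, sql, params, many, context`), which is one way to attach this kind of logic; the threshold is illustrative and the EXPLAIN step is reduced to a log entry.

```python
# Stdlib sketch of the slow-query timing logic. In Django you would
# install a wrapper like this on each connection; here it is a plain
# callable so the mechanism is visible and testable.
import time

SLOW_MS = 100  # illustrative threshold
slow_log: list[dict] = []


def timing_wrapper(execute, sql, params, many, context):
    start = time.monotonic_ns()
    try:
        return execute(sql, params, many, context)
    finally:
        elapsed_ms = (time.monotonic_ns() - start) / 1e6
        if elapsed_ms >= SLOW_MS:
            # The real interceptor would run EXPLAIN ANALYZE on a
            # separate connection here and emit structured JSON.
            slow_log.append({"sql": sql, "ms": round(elapsed_ms, 1)})
```

The `try/finally` matters: the query is timed and logged even when it raises, which is exactly when you most want the SQL in your logs.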
People always ask: what's your stack? So here it is — every tool I used and why I chose it.

For the backend, I went with FastAPI on Python 3.11. I've used Flask before, but FastAPI's async support and automatic API docs saved me hours of boilerplate. Pair that with SQLAlchemy 2.0's async ORM and I had a clean, typed backend in days, not weeks.

For the database, Neon: serverless PostgreSQL. The free tier is genuinely useful, it scales automatically, and the connection string just works. No managing servers. No backups to configure.

For Redis, Upstash. Same idea: serverless, no infrastructure to babysit. It handles the task queue and response caching.

Celery manages all the background work: scraping, classifying, generating AI solutions. It runs on a schedule, completely separate from the web server.

On the frontend, Next.js 14 with TypeScript. The App Router took some getting used to, but server components made the initial page loads much faster. TanStack React Query handles all the data fetching, with caching and background updates built in.

The whole thing deploys automatically: Railway for the backend, Vercel for the frontend. Every push to GitHub triggers both. No DevOps headaches. No manual deployments. Just push and it works.

#WebDevelopment #Python #NextJS #FastAPI #DevOps
From CLI to Web — My Journey Building a SmartCashier System

I recently completed a project where I transformed a simple CLI-based cashier system into a fully functional web application using Flask and SQLAlchemy.

At first, the system was built on Python dictionaries and lists. It worked, but had major limitations:
❌ No persistent data
❌ No user interface
❌ Not scalable

So I decided to take it to the next level. I rebuilt the entire system into a web app with:
- User authentication (register & login)
- Full CRUD product management
- A transaction system with real business logic
- Transaction history tracking
- Database integration using SQLAlchemy

One of the most exciting parts of this project was implementing real-world cashier logic:
✔️ Dual discount system (quantity- and price-based)
✔️ Automatic best-discount selection
✔️ Stock validation before purchase
✔️ Automatic stock update after each transaction
✔️ Tax calculation (PPN 11%)

I also prepared the project for deployment on Render, making it production-ready.

What I learned from this project:
- How to transform a CLI app into a web system
- How to design backend architecture with Flask
- How to use an ORM (SQLAlchemy) effectively
- How to implement real business logic
- How to deploy applications

This project helped me understand how real systems are built — not just code, but logic, structure, and scalability.

🔗 GitHub: https://lnkd.in/gGUwgSQF
📖 Medium article: https://lnkd.in/garJqQ7D

If you're learning programming, here's my advice: start simple → build CLI → move to web → deploy 🚀

#Python #Flask #SQLAlchemy #WebDevelopment #FullStack #BackendDeveloper #PortfolioProject #LearningJourney #DataScience
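The cashier logic described above (dual discounts with automatic best-discount selection, stock validation, and 11% PPN) can be sketched in plain Python. The discount thresholds and rates here are illustrative assumptions, not the project's actual rules.

```python
# Sketch of the cashier rules: pick the better of two discounts,
# validate stock before the sale, apply PPN, and update stock.
PPN_RATE = 0.11  # Indonesian VAT, as in the post


def best_discount(qty: int, unit_price: float) -> float:
    subtotal = qty * unit_price
    # Illustrative rules: 10% off for bulk, 5% off for big tickets.
    qty_discount = 0.10 * subtotal if qty >= 10 else 0.0
    price_discount = 0.05 * subtotal if subtotal >= 100_000 else 0.0
    return max(qty_discount, price_discount)  # auto-select the better one


def checkout(qty: int, unit_price: float, stock: int) -> dict:
    if qty > stock:
        raise ValueError("insufficient stock")  # validate before purchase
    subtotal = qty * unit_price
    discount = best_discount(qty, unit_price)
    total = (subtotal - discount) * (1 + PPN_RATE)  # tax after discount
    return {
        "subtotal": subtotal,
        "discount": discount,
        "total": round(total, 2),
        "remaining_stock": stock - qty,  # auto stock update
    }
```

One design choice worth noting: tax is applied after the discount, which is the usual convention; applying PPN to the pre-discount subtotal would silently overcharge customers.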
flask template: https://github.com/Aditya-Attrish/flask-template