🚀 Just shipped my second backend project — a production-grade Task Management API!

🔗 Live: https://lnkd.in/g_MYFbxs
🐙 GitHub: https://lnkd.in/gppGbTyC

⚙️ What I built:
→ JWT authentication (signup + login)
→ Full CRUD on tasks
→ Role-based access control (user / admin)
→ Paginated task listing
→ Each user sees only their own data
→ Dockerized for local + production
→ Deployed on Render with Supabase PostgreSQL

🛠️ Tech Stack: FastAPI · PostgreSQL · SQLAlchemy · Pydantic v2 · JWT · Bcrypt · Docker · Render · Supabase

This project taught me how real backend systems are structured — not just "make it work" but make it secure, scalable, and deployable.

#FastAPI #Python #Backend #Docker #PostgreSQL #JWT #OpenToWork #BuildInPublic #100DaysOfCode
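The JWT piece is the part most tutorials hand-wave. A minimal stdlib sketch of what an HS256 token actually is under the hood (the project presumably uses a library such as PyJWT; the secret and claims below are made up for illustration):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    # HS256 JWT by hand: base64url(header).base64url(payload).base64url(signature)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

token = sign_jwt({"sub": "user-1", "role": "admin"}, b"secret")
print(token.count("."))  # 2 — three dot-separated segments
```

The server never stores the token; on each request it recomputes the HMAC over the first two segments and compares it to the third.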
More Relevant Posts
-
📦 Day 36 #90DaysOfDevOps

🚨 From “It works on my machine” → to a fully Dockerized 2-tier app

Today’s learning hit different. I built and containerized a Flask + MySQL application, and what looked simple at first quickly turned into a deep dive into how things actually work behind the scenes.

💥 It started with a simple goal: “Run my Flask app inside Docker.” But then…
❌ My app couldn’t connect to MySQL → Turns out, localhost inside a container ≠ my machine
❌ Build kept failing with pkg-config not found → Learned that some Python packages (like mysqlclient) need system-level dependencies
❌ Even after fixing everything, the app still crashed → MySQL wasn’t “ready” when Flask started

🔍 Here’s what I implemented to fix it:
✅ Created a custom Docker network for container communication
✅ Replaced localhost with the service name (db)
✅ Installed required system packages (gcc, libmysqlclient-dev, pkg-config)
✅ Added healthchecks using mysqladmin ping
✅ Used depends_on with service_healthy to ensure proper startup order
✅ Secured the container by using a non-root user
✅ Managed configs using environment variables

⚙️ Final setup:
Flask app running in one container
MySQL running in another
Both connected via a Docker network
Fully reproducible with Docker

📦 Docker Hub (pull & run): https://lnkd.in/gt3749CC
📁 GitHub: https://lnkd.in/gZ5g623i

💡 Biggest takeaway: Containerization is not just about “Docker build & run” — it’s about understanding networking, dependencies, startup timing, and debugging real failures.

This project felt like a real DevOps scenario rather than just a tutorial. If you’ve faced similar issues while working with Docker, would love to hear your experience 👇

#Docker #DevOps #Flask #MySQL #LearningInPublic #BuildInPublic #OpenToWork #dockerproject #TrainWithShubham
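The healthcheck + depends_on fix, expressed as a compose sketch (service names, credentials, and image tags here are illustrative, not the post's actual file):

```yaml
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}   # from .env, never hardcoded
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10

  web:
    build: .
    environment:
      DB_HOST: db            # the service name, not localhost
    depends_on:
      db:
        condition: service_healthy   # wait until MySQL answers ping
```

With `condition: service_healthy`, compose delays starting the app until the healthcheck passes, which is exactly the "MySQL wasn't ready" failure described above.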
-
Most of my backend learning while building this API monitoring tool didn’t come from Go — it came from PostgreSQL.

Another update from building in public. While working on latency tracking and incident systems, I ended up using some surprisingly powerful SQL features.

DATE_TRUNC helped me group timestamps by hour so graphs don’t turn into noise. FILTER made it easy to count only successful checks without extra queries. COALESCE handled missing data cleanly, and BOOL_OR helped detect if any incident is still ongoing. Also used make_interval to avoid string hacks, EXTRACT(EPOCH) for duration in seconds, and proper indexing to keep queries fast.

Didn’t expect SQL to carry this much weight in a backend system.

Follow along — more coming this week.

#golang #buildinpublic #backend #postgresql
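FILTER and COALESCE are standard enough that even SQLite (bundled with Python) supports them, so the idea can be sketched without a Postgres instance; DATE_TRUNC and BOOL_OR remain Postgres-side. The table and columns below are invented for illustration, not the tool's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checks (service TEXT, ok INTEGER, latency_ms REAL)")
conn.executemany("INSERT INTO checks VALUES (?, ?, ?)", [
    ("api", 1, 120.0),
    ("api", 0, None),   # failed check, no latency recorded
    ("api", 1, 80.0),
])
row = conn.execute("""
    SELECT
        COUNT(*) FILTER (WHERE ok = 1) AS successes,     -- count only successful checks
        COALESCE(AVG(latency_ms), 0)   AS avg_latency_ms -- missing data handled cleanly
    FROM checks
    WHERE service = 'api'
""").fetchone()
print(row)  # (2, 100.0) — AVG skips the NULL latency
```

In Postgres, the same shape extends naturally: `GROUP BY DATE_TRUNC('hour', checked_at)` for the graphs and `BOOL_OR(resolved_at IS NULL)` for "any incident still open".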
-
Built a production-grade backend from scratch — here's what I learned.

TaskAlloc is an employee and task allocation REST API I built with FastAPI and PostgreSQL. Not a tutorial follow-along — I designed the architecture, made the decisions, and figured out why things break.

What's under the hood:
→ 3-tier role system (Admin / Manager / Employee) with access enforced at the query layer — not just filtered in the response
→ JWT auth with refresh token rotation. Raw tokens never touch the database, only SHA-256 hashes are stored. If the DB leaks, the tokens are useless.
→ Task state machine — PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED. Invalid transitions are rejected before any database write.
→ Middleware that auto-logs every mutating request with who did it, what resource they touched, and the HTTP status code
→ 67 passing tests against SQLite in-memory. No external database needed to run the suite.

35+ endpoints. Soft delete. UUID primary keys. Docker + Docker Compose. Full Swagger docs.

The thing that surprised me most was how much I learned from just trying to do things the right way — not "make it work" but "make it work correctly." Things like why audit logs shouldn't have a foreign key to users, or why you write the activity log before the status update commits.

GitHub in the comments.

#FastAPI #Python #BackendDevelopment #PostgreSQL #SoftwareEngineering #BuildingInPublic #OpenToOpportunities #Development
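The state-machine idea fits in a few lines. This is a sketch inferred from the post's PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED chain; the real service may allow more edges (e.g. review rejection), and the names here mirror the post rather than its actual code:

```python
from enum import Enum

class Status(str, Enum):
    PENDING = "PENDING"
    IN_PROGRESS = "IN_PROGRESS"
    UNDER_REVIEW = "UNDER_REVIEW"
    COMPLETED = "COMPLETED"

# Allowed transitions, checked *before* any database write.
ALLOWED = {
    Status.PENDING: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.UNDER_REVIEW},
    Status.UNDER_REVIEW: {Status.COMPLETED},
    Status.COMPLETED: set(),
}

def transition(current: Status, target: Status) -> Status:
    if target not in ALLOWED[current]:
        raise ValueError(f"invalid transition: {current.value} -> {target.value}")
    return target

print(transition(Status.PENDING, Status.IN_PROGRESS).value)  # IN_PROGRESS
```

Centralizing the map means an endpoint can't accidentally invent a new transition; it either passes `transition()` or the request is rejected with a 4xx before the ORM is touched.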
-
For a thousand and one reasons, it's been ages since I posted here. Now everyone is saying "build in public" — well, I'm not giving any guarantees, but I will try... (I did this one in private, though; I'm only making a post about it.)

So, as a young boy who cares about developer experience, I built a CLI tool that improves the developer experience for Rust devs. It's called supabase-rust-gen, a binary crate that lets you generate type-safe Rust structs from your Supabase database schema. Like supabase-js type generation, but for the Rust ecosystem.

Manually writing Rust structs for your Supabase tables is tedious and error-prone. Column names change, types drift, and nullable fields get missed. supabase-rust-gen eliminates this by:
- Connecting directly to your Supabase project's PostgREST endpoint
- Reading the OpenAPI spec to understand your exact schema
- Generating idiomatic Rust with proper Serde derives
- Handling edge cases like JSONB, arrays, nullable fields, and PostgreSQL types

That said, it's now published on crates.io.
Link: https://lnkd.in/eWYX5pDZ
Repo: https://lnkd.in/epfp6wsD

Test it out, build projects, and give feedback...
-
Recently, while setting up a Python-based auth service using FastAPI and PostgreSQL, I ran into an issue that many of us have probably faced but don’t always talk about.

The application was failing with a database connection error, even though everything “looked” correct. The root cause turned out to be something simple but important — mixing Docker-based configuration with a local development setup. Using postgres as a hostname works perfectly inside Docker networks, but when running the app locally with uvicorn, the correct host should be localhost. Small detail, but it completely breaks the connection if overlooked.

Another issue I encountered was with the SQLAlchemy setup. My models were importing Base, but it wasn’t defined properly in the database module. This led to an import error during application startup. Fixing it required properly initializing declarative_base() and ensuring models were correctly registered.

A few key takeaways from this experience:
> Environment-specific configurations matter more than we think
> Avoid hardcoding values — always rely on environment variables
> Don’t connect to the database during module import
> Ensure ORM base and models are structured cleanly

What I appreciated most was how these small fixes significantly improved the overall architecture. Moving toward a cleaner separation of config, database, repositories, and services makes the system more scalable and production-ready.

These are the kinds of practical issues that don’t always show up in tutorials but are very real in day-to-day development. If you’re working with FastAPI, SQLAlchemy, or setting up microservices, I’d be curious to know what common pitfalls you’ve run into.

#Python #FastAPI #PostgreSQL #SQLAlchemy #BackendDevelopment #Microservices #SoftwareEngineering #Debugging #LearningJourney
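The Docker-vs-local hostname fix usually reduces to one environment variable with a local default. A hypothetical helper (names and defaults are mine, not the service's actual config):

```python
import os

def database_url() -> str:
    # The host comes from the environment, so the same code works inside
    # Docker (compose sets DB_HOST=postgres, the service name) and locally
    # under uvicorn (falls back to localhost).
    host = os.getenv("DB_HOST", "localhost")
    user = os.getenv("DB_USER", "app")
    name = os.getenv("DB_NAME", "auth")
    return f"postgresql://{user}@{host}:5432/{name}"

print(database_url())
```

The same pattern covers the "don't connect at import time" takeaway: call `database_url()` inside an engine factory invoked at startup, not at module top level.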
-
Day 2/60: Production Infrastructure That Actually Scales

What Most Developers Do: Start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I Built Today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + Backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on database layer

Technical Decisions:
Async Everything: Non-blocking I/O handles 100 concurrent users on a single thread
Connection Pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
Health Checks: pg_isready with retry logic, services wait for dependencies
Type Safety: mypy --strict passes, Mapped[T] catches bugs at compile time

Architecture Highlight: DatabaseManager singleton manages lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why It Matters: Technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's Working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 Tomorrow: Database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium Blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
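The core of a wait-for-db script is a retry loop around a health probe. A stdlib asyncio sketch (function names and timings are mine, not the repo's; in practice the probe would shell out to pg_isready or open a test connection):

```python
import asyncio

async def wait_for_db(ping, retries: int = 10, delay: float = 0.5) -> bool:
    """Retry a health check until it succeeds or the attempts run out."""
    for _ in range(retries):
        if await ping():
            return True
        await asyncio.sleep(delay)  # back off before the next probe
    return False

# Demo with a fake probe that only succeeds on the third attempt.
attempts = 0

async def fake_ping() -> bool:
    global attempts
    attempts += 1
    return attempts >= 3

print(asyncio.run(wait_for_db(fake_ping, retries=5, delay=0.01)))  # True
```

Because the loop awaits between probes, the same pattern works inside an async app's startup hook without blocking the event loop.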
-
Built a recommendation engine for a large-scale Odoo setup recently, and the hardest part wasn’t the math, it was making it survive production.

A few choices that mattered:
* Used PostgreSQL advisory locks so heavy ALS training jobs don’t collide with each other. Because nothing says “fun” like two background workers trying to do the same expensive thing at once.
* Used ALS collaborative filtering with sparse CSR matrices so the model could scale without treating memory like an unlimited resource.
* Skipped slow ORM writes for bulk upserts with "execute_values", because millions of rows and "create()" are not friends.
* Added an embeddings + cosine similarity fallback for new products, so the system can still recommend items even when sales history is basically nonexistent.

The model matters, but production-readiness mattered more here: concurrency control, fast writes, low memory usage, and a fallback for cold-start cases.

#Odoo #PostgreSQL #Python #RecommendationSystems #BackendEngineering
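The cold-start fallback is the easiest piece to show in isolation. A pure-Python sketch of cosine similarity over embeddings (the catalog, vectors, and dimensionality are toy values; a real system would use precomputed model embeddings, e.g. via pgvector):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors; 0.0 for zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Cold start: a new product has no sales history, so rank existing products
# by embedding similarity instead of collaborative-filtering scores.
catalog = {
    "desk": [0.9, 0.1, 0.0],
    "chair": [0.5, 0.5, 0.0],
    "mug": [0.0, 0.1, 0.9],
}
new_product = [0.85, 0.15, 0.05]
best = max(catalog, key=lambda name: cosine(catalog[name], new_product))
print(best)  # desk
```

Once the new product accumulates real sales, the ALS scores take over and the similarity fallback stops being consulted.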
-
I hit a wall this week.

My backend for HALT: The App kept crashing during deployment. Errors stacked up, things broke unexpectedly, and honestly, I knew something was off. Not just in the code, but in my approach.

So instead of blindly fixing bugs… I paused building and switched to learning mode. For the past few days, I went back to fundamentals, and that decision changed everything.

Here’s what I revisited (properly this time):
• How packages actually work in our system (not just npm install and forget)
• What a server really is — beyond “it runs on port 3000”
• HTTP methods — GET, POST, PATCH, DELETE — and when to use each
• API keys — different types and their roles in authentication & security
• Debugging APIs using Postman (this alone saved me hours)
• Middlewares — why they exist and how they control request flow
• Status codes — 200, 201, 404… understanding what the server is trying to say
• Params & request handling — how data actually flows through endpoints

And one big decision:
➡️ I moved back to MongoDB from Supabase to strengthen my backend fundamentals.

What I realized: I wasn’t failing because backend is “hard” — I was failing because I was skipping depth for speed. Building fast feels productive… but understanding deeply is what actually scales.

This wasn’t a setback. It was a reset. Now I’m not just writing backend code — I actually understand what’s happening under the hood. And trust me, that confidence hits different.

#BackendDevelopment #WebDevelopment #LearningInPublic #MongoDB #NodeJS #APIs #BuildInPublic
-
I spent my last project acting as an API key collector rather than a software engineer. I thought "modern" meant a different tool for every line of code. I was building a fragile, distributed web of free-tier services before I even had a single user.

Then I had an enlightenment: PostgreSQL doesn't just replace other databases; it can replace half your backend. For the small-scale projects I build, Postgres is the ultimate "Swiss Army Knife":

Replacing MongoDB? Just use JSONB.
Replacing Redis? Simple indexing is often just as fast.
Replacing Pinecone? Use pgvector for AI embeddings.
Replacing Middleware? Use Row-Level Security (RLS) and auto-generated APIs.

Complexity isn't a badge of honor; it's technical debt. I was over-engineering the "little things" and killing my momentum.

Now, I’m sticking to the "boring" stack. Because when you use Postgres to its full potential, you don't just simplify your data, you incinerate your boilerplate. It's okay to try things out, but when it comes to prod it's better to stick with what's really appropriate.

Start simple. Build faster.

#SoftwareEngineering #PostgreSQL #TechStack #Coding #WebDev #BackendDevelopment #Programming
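The "JSONB instead of MongoDB" point is worth one concrete example. Sketched here with SQLite's JSON functions (bundled with Python) since the shape is the same; in Postgres the column type would be JSONB and the operator `data->>'user'`, and the table is invented for illustration:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# A schemaless "document" column inside a relational table.
conn.execute("CREATE TABLE docs (data TEXT)")
conn.execute(
    "INSERT INTO docs VALUES (?)",
    (json.dumps({"user": "ana", "tags": ["pg", "jsonb"]}),),
)
# Query into the document without a separate document store.
name = conn.execute("SELECT json_extract(data, '$.user') FROM docs").fetchone()[0]
print(name)  # ana
```

You keep joins, transactions, and indexes (Postgres can GIN-index JSONB paths) while still storing flexible documents, which is exactly the "one boring tool" argument.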
-
🚀 From Idea to API: Building a FastAPI CRUD App with Neon (PostgreSQL)

I recently built a backend CRUD application using FastAPI connected to a cloud PostgreSQL database (Neon). This project helped me understand how real-world backend systems handle data efficiently.

🔹 What I Built
A complete Student Management API where users can:
• Create student records
• Read all / single student data
• Update student information
• Delete records

🔹 Backend Stack
• FastAPI (high-performance Python framework)
• SQLAlchemy (ORM for database handling)
• PostgreSQL (Neon cloud database)

🔹 Key Learnings
• Structuring scalable backend architecture
• Connecting FastAPI with a cloud database (Neon)
• Writing clean CRUD operations using SQLAlchemy
• API testing using Swagger Docs

🔹 What’s Next
I’m planning to extend this into a full-stack app by integrating it with Next.js and adding authentication (JWT). This is just a small step towards building production-level systems like SaaS platforms 🚀

I’d appreciate your feedback and suggestions!

#SMIT #FastAPI #Python #PostgreSQL #BackendDevelopment #WebDevelopment #CRUD #Neon #LearningJourney