Leveling up the backend journey! After mastering core REST APIs and demystifying Spring Security with JWTs in my previous projects, it was time to push the boundaries with complex data relationships and a brand-new database.

What happened: I recently designed and built "Quill", a fully-fledged blogging and article publishing platform. While I had a blast designing the frontend to bring the platform to life, my true mission was under the hood: architecting a scalable backend capable of handling posts, user interactions, and media.

What I learned: This project was a massive step up in complexity. I engineered the backend with Java and Spring Boot, but this time I leveled up my tech stack:

🔹 PostgreSQL debut: This was my first time using Postgres, and honestly? It was incredibly fun. Transitioning to it gave me a fresh perspective on database management, and I really enjoyed leveraging its robustness for this project.

🔹 Complex data relationships: I went deep into Spring Data JPA, mapping One-to-Many and Many-to-Many relationships across Users, Posts, Comments, and Tags without compromising query performance.

🔹 Multipart file handling: I stepped beyond pure text/JSON data and implemented a custom image controller to securely handle, store, and serve multipart file uploads for article cover images.

🔹 Security at scale: I carried over the custom JWT authentication architecture from my previous "LetsGo" project, applying it to a much larger surface area to protect diverse endpoints, user roles, and content ownership.

Key takeaway: Building "Quill" taught me that a well-structured database schema is the heartbeat of any good application. Moving to Postgres and handling complex table relations proved that when your backend architecture is solid, scaling the rest of the application feels natural.

GitHub link: https://lnkd.in/gxz9a7eY

What was your experience like when you first switched databases, or when you first tried PostgreSQL? Let me know in the comments! 👇

#Java #SpringBoot #PostgreSQL #BackendDevelopment #DatabaseDesign #RESTAPI #ProjectBasedLearning #SoftwareEngineering #LearningInPublic #DeveloperJourney
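At the database level, the One-to-Many and Many-to-Many mappings described above come down to foreign keys and a join table. A minimal schema-level sketch, using SQLite in Python as a stand-in for PostgreSQL (table and column names are illustrative, not taken from the Quill codebase):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT NOT NULL);
-- One-to-Many: each post belongs to exactly one author
CREATE TABLE posts (
    id        INTEGER PRIMARY KEY,
    author_id INTEGER NOT NULL REFERENCES users(id),
    title     TEXT NOT NULL
);
CREATE TABLE tags (id INTEGER PRIMARY KEY, label TEXT UNIQUE NOT NULL);
-- Many-to-Many: the join table a @ManyToMany mapping resolves to
CREATE TABLE post_tags (
    post_id INTEGER NOT NULL REFERENCES posts(id),
    tag_id  INTEGER NOT NULL REFERENCES tags(id),
    PRIMARY KEY (post_id, tag_id)
);
""")

db.execute("INSERT INTO users VALUES (1, 'author')")
db.execute("INSERT INTO posts VALUES (1, 1, 'Hello Quill')")
db.executemany("INSERT INTO tags VALUES (?, ?)", [(1, "java"), (2, "spring")])
db.executemany("INSERT INTO post_tags VALUES (?, ?)", [(1, 1), (1, 2)])

# All tags for a post resolve through the join table
rows = db.execute("""
    SELECT t.label FROM tags t
    JOIN post_tags pt ON pt.tag_id = t.id
    WHERE pt.post_id = 1 ORDER BY t.label
""").fetchall()
print([r[0] for r in rows])  # ['java', 'spring']
```

The composite primary key on `post_tags` is what keeps a duplicate tag from being attached to the same post twice.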
More Relevant Posts
-
Sunday ship: @perryts/postgres is out. A pure-TypeScript Postgres driver that speaks the wire protocol directly. No libpq. No native addons. No FFI.

Why another one?

→ Every Node Postgres driver worth using wraps libpq or ships a platform-specific .node addon. That's a dead end for AOT: Perry compiles TypeScript to a statically-linked native binary via LLVM, and there's no V8 at execution time to host a C addon.

→ The Perry-native Postgres GUI this drives (Tusk) needed things most drivers throw away for ergonomics: exact numeric (not float), full column metadata (attnum, tableOid, typmod), structured errors with every documented ErrorResponse field, and raw row bytes on demand.

So: one TypeScript source, three runtime targets.
→ Node.js 22+
→ Bun 1.3+
→ Perry AOT: 4.6 MB static binary, 1.8 MB RSS, single-digit-ms cold start

Honest performance story: V8's JIT beats Perry-native on per-query wall time in a warm, long-running process. Perry wins everywhere else: cold start, memory footprint, deploy size, and the platforms Node and Bun can't reach at all (CLIs, serverless cold paths, mobile, embedded Linux).

What's in the box: SCRAM-SHA-256 / MD5 / cleartext auth, TLS with mid-stream upgrade, simple + extended query, 20 type codecs, exact numeric via a Decimal wrapper, structured PgError, cancel protocol, LISTEN/NOTIFY, connection pool, transactions, libpq URLs, PG* env vars.

MIT. Feedback welcome. https://lnkd.in/dyeDTJG7
-
Built a production-grade backend from scratch. Here's what I learned.

TaskAlloc is an employee and task allocation REST API I built with FastAPI and PostgreSQL. Not a tutorial follow-along: I designed the architecture, made the decisions, and figured out why things break.

What's under the hood:
→ 3-tier role system (Admin / Manager / Employee) with access enforced at the query layer, not just filtered in the response
→ JWT auth with refresh token rotation. Raw tokens never touch the database; only SHA-256 hashes are stored. If the DB leaks, the tokens are useless.
→ Task state machine: PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED. Invalid transitions are rejected before any database write.
→ Middleware that auto-logs every mutating request with who did it, what resource they touched, and the HTTP status code
→ 67 passing tests against in-memory SQLite. No external database needed to run the suite.

35+ endpoints. Soft delete. UUID primary keys. Docker + Docker Compose. Full Swagger docs.

The thing that surprised me most was how much I learned from just trying to do things the right way: not "make it work" but "make it work correctly." Things like why audit logs shouldn't have a foreign key to users, or why you write the activity log before the status update commits.

GitHub in the comments.

#FastAPI #Python #BackendDevelopment #PostgreSQL #SoftwareEngineering #BuildingInPublic #OpenToOpportunities #Development
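A task state machine like the one above can be enforced with a small transition table that is checked before anything is written. A minimal sketch of the linear PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED flow (names are illustrative, not taken from the TaskAlloc codebase):

```python
# Allowed moves are encoded in one table; anything else raises
# before a database write would ever happen.
VALID_TRANSITIONS = {
    "PENDING": {"IN_PROGRESS"},
    "IN_PROGRESS": {"UNDER_REVIEW"},
    "UNDER_REVIEW": {"COMPLETED"},
    "COMPLETED": set(),  # terminal state: no moves out
}

def transition(current: str, target: str) -> str:
    """Return the new state, or raise if the move is not allowed."""
    if target not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"invalid transition: {current} -> {target}")
    return target

print(transition("PENDING", "IN_PROGRESS"))  # IN_PROGRESS
```

Keeping the rules in a plain dict means the valid moves are data rather than scattered if-statements, which makes them trivial to test exhaustively.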
-
🚨 I thought this was a simple array problem… until binary search showed up.

Day 28 of my Backend Developer Journey, and today was about combining logic + optimization.

🧠 LeetCode breakthrough: solved a problem using binary search + reverse thinking.

💡 What clicked:
→ Reverse one array to simplify comparison
→ Apply upper_bound (binary search)
→ Maximize distance efficiently

⚡ The real trick: don't solve the problem as it is; transform it into something easier.

🔍 Key insight: instead of brute force, preprocess the data (reverse the array) and use binary search to cut the complexity from O(n²) to O(n log n).

🔗 My submission: https://lnkd.in/gF3_5BrW

☕ Spring Boot learning: 🐘 PostgreSQL + DBeaver setup. Today I stepped into real backend setup:
👉 Installed PostgreSQL locally
👉 Connected to the database using DBeaver
👉 Explored tables, queries, and DB structure

⚡ Why this matters: backend isn't complete without DB understanding. Writing code is one part; managing real data is the real game.

🔥 Big win today:
✅ Successfully connected Spring Boot to PostgreSQL
✅ Understood how applications talk to databases

🧠 The shift:
👉 Optimization comes from thinking differently
👉 Tools like DB clients make dev life easier
👉 Backend = code + database + efficiency

📈 Day 28 progress:
✅ Learned a binary search application deeply
✅ Set up a real database environment
✅ Took one step closer to a production-level backend

💬 When did you first realize backend is more than just writing APIs? 👇

#100DaysOfCode #BackendDevelopment #SpringBoot #Java #PostgreSQL #LeetCode #CodingJourney
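The post doesn't name the exact problem, but the reverse-then-binary-search pattern it describes is the standard approach to "maximize j - i with nums1[i] <= nums2[j]" over two non-increasing arrays. A sketch in Python, using the bisect module in place of C++'s bound functions (the function name and the problem shape are assumptions):

```python
from bisect import bisect_left

def max_distance(nums1, nums2):
    """Both inputs are non-increasing. For each i, find the right-most j
    with nums2[j] >= nums1[i]; reversing nums2 makes it non-decreasing,
    which is the transform that unlocks binary search."""
    rev = nums2[::-1]              # non-decreasing after the reverse
    n = len(nums2)
    best = 0
    for i, v in enumerate(nums1):
        k = bisect_left(rev, v)    # first index in rev with rev[k] >= v
        if k == n:
            continue               # no element of nums2 is >= v
        j = n - 1 - k              # map back to an index in nums2
        if j >= i:
            best = max(best, j - i)
    return best

print(max_distance([55, 30, 5, 4, 2], [100, 20, 10, 10, 5]))  # 2
```

The brute-force version checks every (i, j) pair in O(n²); here each i costs one O(log n) search, giving the O(n log n) the post mentions.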
-
156 SQL migrations and no backend server. That's what the mydba.dev architecture looks like after a year of development. Zero backend application code. Every API endpoint is a PostgreSQL function.

The stack is almost absurdly simple:
• React + TypeScript frontend (Vercel)
• PostgREST auto-generates REST endpoints from the database schema
• Clerk JWTs validated via JWKS
• Row-level security handles authorization
• A Go collector writes metrics directly to PostgreSQL

Adding a new API endpoint means writing `CREATE FUNCTION` in a SQL migration file. Not a route handler. Not a controller class. Just SQL.

𝗪𝗵𝗮𝘁 𝘄𝗼𝗿𝗸𝘀 𝗿𝗲𝗮𝗹𝗹𝘆 𝘄𝗲𝗹𝗹: Deployment simplicity. There's no backend to deploy, scale, or monitor. The frontend ships via `git push`. Database changes ship via migration files. That's the entire deployment process.

Performance is excellent. PostgREST is fast, and PostgreSQL functions with proper indexes are fast. No ORM overhead, no serialization layers, no N+1 query problems. The database IS the truth.

𝗪𝗵𝗮𝘁'𝘀 𝗴𝗲𝗻𝘂𝗶𝗻𝗲𝗹𝘆 𝗵𝗮𝗿𝗱: Debugging SQL functions is painful compared to stepping through Python or Go. Stack traces are cryptic. Testing is awkward: you're essentially writing integration tests against a real database.

Schema migrations on compressed TimescaleDB hypertables are a special kind of adventure. You can't just ALTER TABLE casually when you have columnar compression enabled. I've built patterns around it (rename tables, security-barrier views, careful migration ordering), but it's complexity that a normal backend wouldn't have.

There's no middleware layer. Cross-cutting concerns like request logging, rate limiting, and input validation all need creative solutions. Some of those solutions are elegant. Some are ugly. All of them live in SQL.

Would I do it again? Absolutely. But I'd invest in better migration tooling earlier. And I'd accept from day one that some things are just harder in SQL, and that that's a worthwhile tradeoff for the simplicity you get everywhere else.

Anyone else running a PostgREST-only architecture in production? I'd love to compare notes.

#PostgreSQL #PostgREST #BuildingInPublic #Architecture #SoftwareEngineering
-
moduler-framework just got bigger. The latest update brings PostgreSQL support to the framework, so now you're not locked into MongoDB.

From day one, moduler-framework was built to scaffold production-ready Node.js + TypeScript backends in one command. But I kept getting the same question: "What if I'm using PostgreSQL?" Now you don't have to ask.

What's new in this release:
→ PostgreSQL support: pre-wired connection out of the box
→ Code improvements across the framework for cleaner, more maintainable output
→ Still one command to scaffold your entire backend:

npx moduler-framework <project-name>

Whether you're a MongoDB dev or a PostgreSQL dev, moduler-framework has you covered.

📦 npm: https://lnkd.in/dcecwNFA
💻 GitHub: https://lnkd.in/dJf7Hfqj

Open source. Contributions welcome. Let's keep building.

#NodeJS #TypeScript #PostgreSQL #MongoDB #NPM #OpenSource #BackendDevelopment #BuildInPublic
-
🚀 Built a Spring Boot + MongoDB CRUD Project for Java Developers

A while back I worked on a backend project that demonstrates how to implement simple CRUD operations in MongoDB using Spring Boot.

🔹 What this project solves
It shows how to combine:
✅ MongoRepository for quick CRUD operations
✅ MongoTemplate for custom dynamic queries
This helps developers understand when to use simple repository methods versus flexible query-based access in real-world applications.

🔹 Tech stack used
• Java
• Spring Boot
• Spring Data MongoDB
• MongoRepository
• MongoTemplate
• Maven
• Lombok

🔹 Who can benefit
✔ Java developers learning MongoDB
✔ Backend engineers building REST APIs
✔ Developers preparing for Spring Boot interviews
✔ Anyone exploring NoSQL with Java

If you're working with Spring Boot and want a practical MongoDB example, check it out and feel free to contribute. https://lnkd.in/dYV4v5TF

#Java #SpringBoot #MongoDB #BackendDevelopment #OpenSource #JavaDeveloper #SoftwareEngineering
-
🚀 Built a Complete Spring Boot REST API with CRUD Operations

I'm excited to share my latest project, a RESTful API built with Spring Boot and MySQL. It demonstrates full CRUD functionality and follows a clean layered architecture.

🔧 Tech stack:
• Spring Boot
• Spring Data JPA
• MySQL
• REST API
• Rapid API Client (testing)

📁 Architecture: Client → RestController → Service → Repository → Entity → Database

📌 Features implemented:
✅ Create student (POST)
✅ Get all students (GET)
✅ Get student by ID (GET)
✅ Update student (PUT)
✅ Delete student (DELETE)

🔗 API endpoints:
POST /students
GET /students
GET /students/{id}
PUT /students/{id}
DELETE /students/{id}

This project helped me understand:
• REST API design
• Layered architecture
• Database integration using JPA
• Testing APIs with Rapid API Client

Looking forward to feedback and suggestions!

#SpringBoot #RESTAPI #Java #MySQL #BackendDevelopment #SpringDataJPA #Learning #CRUD #Developer
-
Indexes in a DB table can save a lot of time when you execute queries over millions of rows. This little guide is very useful for choosing indexes and diagnosing slowness in your queries.
After years of building backend systems with Java, Kotlin, and PostgreSQL, I've seen one pattern repeat itself more than any other: a query works fine in dev, then becomes a nightmare in production. The table went from 10K rows to 10M and nobody thought about indexes.

I just published a deep dive into database indexes: how they actually work, when to use them, and how to manage them without taking down your production database.

Here's what's inside:
- How B-trees work under the hood (and why column order in composite indexes matters)
- When to use GIN, GiST, BRIN, partial, and hash indexes
- How to read EXPLAIN ANALYZE output and find slow queries with pg_stat_statements
- Why CREATE INDEX CONCURRENTLY is non-negotiable in production
- Common pitfalls: functions on indexed columns, implicit type casts, LIKE queries
- A full Spring Boot / Kotlin integration guide with Flyway migrations

Indexes are one of the highest-leverage tools a backend engineer has. A well-placed index can turn a 30-second query into a 3 ms one. But they're not free: they slow down writes, consume storage, and can confuse the query planner if you go overboard.

The goal of this article is to give you a mental model that makes you dangerous with indexes for the rest of your career.

Read it here: https://lnkd.in/e9dY-dE3

#PostgreSQL #BackendEngineering #Java #SpringBoot #SoftwareEngineering #DatabasePerformance
-
Day 2/60: Production infrastructure that actually scales.

What most developers do: start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I built today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on the database layer

Technical decisions:
- Async everything: non-blocking I/O handles 100 concurrent users on a single thread
- Connection pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
- Health checks: pg_isready with retry logic; services wait for their dependencies
- Type safety: mypy --strict passes, and Mapped[T] catches bugs before runtime

Architecture highlight: a DatabaseManager singleton manages the lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why it matters: technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 tomorrow: database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
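The QueuePool idea mentioned above (a bounded set of reusable connections with back-pressure when they run out) can be illustrated with a stdlib-only sketch. This is a conceptual stand-in using sqlite3, not SQLAlchemy's actual implementation, and all names are made up:

```python
import queue
import sqlite3

class TinyPool:
    """Toy bounded connection pool: connections live in a queue,
    acquire() hands one out, release() returns it. When the pool is
    exhausted, acquire() blocks (or times out) instead of opening
    more connections -- the same back-pressure QueuePool provides."""

    def __init__(self, size: int = 5, dsn: str = ":memory:"):
        self._conns = queue.Queue(maxsize=size)
        for _ in range(size):
            self._conns.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self, timeout: float = 1.0):
        return self._conns.get(timeout=timeout)  # raises queue.Empty on timeout

    def release(self, conn):
        self._conns.put(conn)

pool = TinyPool(size=2)
conn = pool.acquire()
value = conn.execute("SELECT 1 + 1").fetchone()[0]
pool.release(conn)
print(value)  # 2
```

Real pools add overflow connections, recycling of stale connections, and integration with the session lifecycle, but the core checkout/check-in mechanic is this queue.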
-
🚀 Leveling up my Spring Boot skills!

I've been diving deep into building robust back-end systems, and I'm excited to share a snippet of my latest Employee Management System built with Spring Boot and MySQL! 💻✨

In this project, I focused on creating a seamless CRUD (Create, Read, Update, Delete) workflow. From handling complex service logic in Java to ensuring data persistence with MySQL, it's been a great journey refining my understanding of the Spring ecosystem.

Key highlights:
🏗️ Back end: developed with Spring Boot, using the @Service layer for clean business logic.
📊 Database: integrated MySQL for reliable data storage and management.
📮 API testing: used Postman to rigorously test REST endpoints and make sure everything runs smoothly.
🛠️ Tools: built with Spring Tool Suite (STS) for an efficient development environment.

Check out the screen recording below to see the API in action, from updating employee details to verifying the changes directly in the MySQL command line! 🎥👇

Always looking for ways to improve and learn more about Java development and full-stack architectures. Would love to hear your thoughts or tips on optimizing Spring services!

#SpringBoot #JavaDevelopment #BackEnd #MySQL #RestAPI

Anand Kumar Buddarapu Saketh Kallepu