Just shipped: SNIP — A URL Shortener built with Spring Boot & PostgreSQL. Ever wondered how bit.ly or go/links work internally? I built one from scratch to find out.

Where it's used in the real world:
- Shortening links for social media & SMS
- Email campaigns to track click-through rates
- Internal enterprise tools like go/handbook
- QR codes on product packaging

What I built:
- Shorten any URL with custom expiry & click tracking
- PostgreSQL on Render for persistent production storage
- Dockerized deployment with a futuristic terminal UI

Lesson learned: H2 works great locally but wipes all data on every cold start in production. Migrating to PostgreSQL fixed it permanently.

GitHub: https://lnkd.in/gDbN8n4H
Live: https://lnkd.in/g_-D5-dz

#SpringBoot #Java #PostgreSQL #Docker #BackendDevelopment
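The post doesn't show how SNIP generates its short codes. A common approach for URL shorteners (a plausible sketch, not confirmed by the repo) is to base62-encode the row's auto-increment database ID, which keeps every redirect a primary-key lookup:

```java
public class Base62 {
    private static final String ALPHABET =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Encode a non-negative database ID into a compact short code.
    public static String encode(long id) {
        if (id == 0) return String.valueOf(ALPHABET.charAt(0));
        StringBuilder sb = new StringBuilder();
        while (id > 0) {
            sb.append(ALPHABET.charAt((int) (id % 62)));
            id /= 62;
        }
        return sb.reverse().toString();
    }

    // Decode a short code back to the numeric ID for the redirect lookup.
    public static long decode(String code) {
        long id = 0;
        for (char c : code.toCharArray()) {
            id = id * 62 + ALPHABET.indexOf(c);
        }
        return id;
    }

    public static void main(String[] args) {
        System.out.println(encode(125)); // prints: 21
    }
}
```

Because the code round-trips to the ID, the redirect handler never needs a secondary index on the code column.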
Building a URL Shortener with Spring Boot & PostgreSQL
-
Timeouts (The Small Setting That Saves Your System)

Built: a service calling multiple downstream APIs to fetch and aggregate data.

Problem I faced: everything worked fine… until one dependency slowed down. Then suddenly:
- Requests started hanging
- Thread pool got exhausted
- API response time shot up
- Entire service became slow

All because one service was taking too long.

How I fixed it: the issue was missing timeouts. Requests were waiting indefinitely. Fixes applied:
- Added strict timeouts for all external calls
- Used fallback responses where possible
- Combined with a circuit breaker for failing services
- Monitored slow calls with proper logging

Now:
- Slow services don't block everything
- System fails fast instead of hanging
- Overall stability improved

What I learned: a slow dependency is sometimes worse than a failed one. At least failures are quick. Slow calls quietly kill your system.

Question: do your API calls have proper timeouts… or are they waiting forever without you noticing?

#Java #SpringBoot #Programming #SoftwareDevelopment #Cloud #AI #Coding #Learning #Tech #Technology #WebDevelopment #Microservices #API #Database #SpringFramework #Hibernate #MySQL #BackendDevelopment #CareerGrowth #ProfessionalDevelopment #RDBMS #PostgreSQL #backend
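A minimal sketch of "strict timeout plus fallback" in plain Java. The names and timings are invented for the demo; in a real Spring service you would configure connect/read timeouts on the HTTP client and let a library such as Resilience4j supply the fallback, but the core mechanic is the same as `Future.get` with a deadline:

```java
import java.util.concurrent.*;

public class TimeoutDemo {
    // Stand-in for a downstream API that takes delayMs to answer.
    static String slowDependency(long delayMs) throws InterruptedException {
        Thread.sleep(delayMs);
        return "real-data";
    }

    // Strict timeout around the external call, with a fallback response.
    public static String callWithTimeout(long delayMs, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> call = pool.submit(() -> slowDependency(delayMs));
            return call.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "fallback"; // fail fast instead of hanging the request thread
        } catch (Exception e) {
            return "fallback";
        } finally {
            pool.shutdownNow(); // interrupt the stuck call; don't leak the thread
        }
    }

    public static void main(String[] args) {
        System.out.println(callWithTimeout(10, 1000));  // fast dependency: real-data
        System.out.println(callWithTimeout(1000, 50));  // slow dependency: fallback
    }
}
```

The `shutdownNow()` in `finally` is the detail that prevents the thread-pool exhaustion described above: a timed-out call gets interrupted rather than left blocking forever.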
-
Level Up Your Backend: REST APIs with Spring Boot 4.0.5 & PostgreSQL

Building a REST API is easy, but building one the right way is what separates the juniors from the pros. 🛠️

In my latest tutorial on Unshakable with Cliff, I walk through the modern "Clean Code" stack for Java developers. We aren't just writing code; we're architecting a solution.

What we're implementing in this guide:
✅ Spring Boot 4.0.5: staying on the bleeding edge of the ecosystem
✅ Lombok power: using @Data, @NoArgsConstructor, and @AllArgsConstructor to eliminate boilerplate
✅ JPA magic: leveraging built-in queries to fetch data from PostgreSQL without writing a single line of SQL
✅ Clean controllers: creating a clean endpoint to fetch users and testing it live

If you're a developer looking to move from "Junior" to "Intermediate," understanding how these pieces fit together is crucial.

📺 Watch the full tutorial here: https://lnkd.in/d_XJAP3p

I'm currently at 935 subscribers and pushing for that 1,000 subscriber milestone! If you find value in these deep dives into the "Cliff Stack," please hit that Subscribe button and join the community. 🤝🔥

#Java #SpringBoot #BackendDevelopment #PostgreSQL #Lombok #SoftwareEngineering #CodingTutorial #UnshakableWithCliff #TechCommunity
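To make the "no SQL" point concrete: a Spring Data derived query such as a hypothetical `List<User> findByEmailContaining(String fragment)` declared on a `JpaRepository<User, Long>` gets its implementation generated from the method name alone. The hand-rolled, in-memory stand-in below (plain Java, no Spring, illustrative data) mimics what that generated query returns:

```java
import java.util.*;
import java.util.stream.*;

public class DerivedQueryDemo {
    record User(long id, String name, String email) {}

    // Stand-in for the users table in PostgreSQL.
    static final List<User> TABLE = List.of(
        new User(1, "Ada", "ada@example.com"),
        new User(2, "Linus", "linus@kernel.org"));

    // Hand-rolled equivalent of the generated implementation of:
    //   List<User> findByEmailContaining(String fragment);
    // Spring would translate the method name into: WHERE email LIKE %?%
    public static List<User> findByEmailContaining(String fragment) {
        return TABLE.stream()
                    .filter(u -> u.email().contains(fragment))
                    .collect(Collectors.toList());
    }
}
```

The real win is that the filtering runs inside PostgreSQL as a `WHERE` clause, not in application memory as it does in this illustration.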
Build & Connect Your First REST API: Spring Boot 3, JPA, and PostgreSQL Guide
-
I usually just spin up MongoDB and call it a day. However, I wanted to explore how databases work under the hood, so I built a mini-database from scratch using Node.js and a plain .txt file. This project initially seemed a bit crazy, but it forced me to learn about:
- Using Node streams to prevent memory crashes
- Safely updating and deleting records in a flat file
- Why basic indexing is a lifesaver

You don't truly understand a tool until you attempt to build a basic version of it yourself. I wrote a quick breakdown of the code and what I learned.

#Nodejs #Backend #SystemDesign
https://lnkd.in/gcdZ2yQj
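The post's project is Node.js, but the flat-file mechanics translate directly. Here is a minimal append-only sketch in Java (class name and record format are my own, not the author's code): writes append a line, reads stream the file line by line so it never has to fit in memory, and the last write for a key wins.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

public class FlatFileDb {
    private final Path file;

    public FlatFileDb(Path file) { this.file = file; }

    // "Update" by appending a new line; the latest line for a key wins.
    public void put(String key, String value) throws IOException {
        Files.writeString(file, key + "\t" + value + System.lineSeparator(),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Stream through the file one line at a time (no full-file read),
    // keeping only the most recent value seen for the key.
    public Optional<String> get(String key) throws IOException {
        String result = null;
        try (BufferedReader r = Files.newBufferedReader(file)) {
            String line;
            while ((line = r.readLine()) != null) {
                String[] parts = line.split("\t", 2);
                if (parts[0].equals(key)) result = parts[1];
            }
        }
        return Optional.ofNullable(result);
    }
}
```

A basic index, as the post hints, would map each key to its byte offset in the file so `get()` could seek directly instead of scanning every line.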
-
Built a production-grade backend from scratch — here's what I learned.

TaskAlloc is an employee and task allocation REST API I built with FastAPI and PostgreSQL. Not a tutorial follow-along — I designed the architecture, made the decisions, and figured out why things break.

What's under the hood:
→ 3-tier role system (Admin / Manager / Employee) with access enforced at the query layer — not just filtered in the response
→ JWT auth with refresh token rotation. Raw tokens never touch the database; only SHA-256 hashes are stored. If the DB leaks, the tokens are useless.
→ Task state machine — PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED. Invalid transitions are rejected before any database write.
→ Middleware that auto-logs every mutating request with who did it, what resource they touched, and the HTTP status code
→ 67 passing tests against SQLite in-memory. No external database needed to run the suite.

35+ endpoints. Soft delete. UUID primary keys. Docker + Docker Compose. Full Swagger docs.

The thing that surprised me most was how much I learned from just trying to do things the right way — not "make it work" but "make it work correctly." Things like why audit logs shouldn't have a foreign key to users, or why you write the activity log before the status update commits.

GitHub in the comments.

#FastAPI #Python #BackendDevelopment #PostgreSQL #SoftwareEngineering #BuildingInPublic #OpenToOpportunities #Development
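The task state machine is the most transferable piece. A sketch in Java (the project itself is Python/FastAPI; also, the rework edge from UNDER_REVIEW back to IN_PROGRESS is my assumption and is not stated in the post):

```java
import java.util.*;

public class TaskStateMachine {
    public enum Status { PENDING, IN_PROGRESS, UNDER_REVIEW, COMPLETED }

    // Whitelist of legal transitions; everything else is rejected
    // before any database write would happen.
    private static final Map<Status, Set<Status>> ALLOWED = Map.of(
        Status.PENDING,      Set.of(Status.IN_PROGRESS),
        Status.IN_PROGRESS,  Set.of(Status.UNDER_REVIEW),
        Status.UNDER_REVIEW, Set.of(Status.COMPLETED, Status.IN_PROGRESS), // rework edge: assumed
        Status.COMPLETED,    Set.of()); // terminal state

    public static boolean canTransition(Status from, Status to) {
        return ALLOWED.get(from).contains(to);
    }
}
```

Checking `canTransition` in the service layer, before the update statement is issued, is what guarantees no invalid state ever reaches the database.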
-
PostgreSQL 19 is getting an in-core `REPACK` command with a `CONCURRENTLY` option, which puts most of what `pg_repack` has been doing as an extension into the backend proper. (Worth confirming the exact scope against the landing commit before declaring the extension obsolete — but the direction of travel is clear.)

The obvious question is whether you still need `pg_repack`. The answer, as always, is "it depends," but the set of things it depends on is narrower than it used to be.

The locking model is the interesting part: a properly integrated in-core implementation can cooperate with the rest of the backend in ways an extension cannot, particularly around catalog swap semantics, partition handling, and TOAST relation behavior.

What `pg_repack` users have learned the hard way — the failure modes around interrupted runs, the specific gotchas with foreign keys and replication slots, the surprise behaviors on partitioned tables — do not simply evaporate because the code moved into core. Some will. Others will show up in slightly different shapes and catch the people who assume the extension's lessons no longer apply.

We've been running `pg_repack` on client production clusters for years. Moving repack in-core changes the playbook. It does not eliminate the playbook.

If your shop runs `pg_repack` on schedule and you want to know what your PG19 upgrade looks like with and without it, we are at pgexperts.com. https://lnkd.in/gTp88MRZ
-
🚀 Built a Core Feature of My Gmail Clone (Spring Boot + PostgreSQL)

Today I completed an important milestone in my backend project — implementing the Send Email API and making it fully functional.

🔹 What I worked on:
- Designed the Email data model (sender, receiver, subject, body, timestamp)
- Built a REST API to simulate sending emails
- Implemented validation for sender and receiver
- Integrated PostgreSQL for storing email data
- Structured clean service and controller layers

🔹 Challenges I faced:
- Confusion between DTO design and response structure (initially tried to treat the response like a list instead of a single object)
- Repository generic type mismatch (incorrect order in JpaRepository)
- Entity and database sync issues after adding new fields (timestamp, senderId, receiverId)
- Handling sender logic correctly (it should come from authentication, not the request)
- Debugging save errors due to null or invalid IDs
- Designing proper API structure (return type, endpoint naming)

🔹 Key learning: understanding how real systems work behind the scenes — especially how a simple action like "send email" involves proper validation, data handling, and clean architecture.

🔹 What's next:
- Inbox API (fetch received emails)
- Sent mail functionality
- Enhancing the system with better design patterns

Step by step, turning this into a real-world backend system 💪

#Java #SpringBoot #BackendDevelopment #PostgreSQL #APIDevelopment #LearningInPublic #SoftwareEngineering
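Two of those challenges have crisp fixes. The JpaRepository generic order is entity type first, ID type second: `JpaRepository<Email, Long>`. And validating before save avoids the null/invalid-ID errors. A hedged sketch of the second fix (the request shape and rules below are invented for illustration; the real DTOs aren't shown in the post):

```java
import java.util.*;

public class SendEmailValidator {
    // Hypothetical request DTO. Note the sender is deliberately absent:
    // per the lesson above, it comes from authentication, not the request.
    record SendEmailRequest(String receiverEmail, String subject, String body) {}

    // Returns the list of validation errors; empty means safe to persist,
    // so no row is ever written with null or invalid fields.
    public static List<String> validate(SendEmailRequest req) {
        List<String> errors = new ArrayList<>();
        if (req.receiverEmail() == null || !req.receiverEmail().contains("@"))
            errors.add("receiver must be a valid email address");
        if (req.subject() == null || req.subject().isBlank())
            errors.add("subject must not be blank");
        return errors;
    }
}
```

In Spring this logic usually lives in Bean Validation annotations (`@NotBlank`, `@Email`) on the DTO, but doing it by hand once makes clear what the framework checks for you.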
-
Built a REST API and named it taskion. Here's what it actually does.

What you'll see:
→ Validation rejecting bad input before it reaches the database
→ JWT authentication flow: register, log in, get a token
→ Tasks protected behind auth: try without a token, get blocked
→ Pagination metadata in the response
→ Rate limiting kicking in on the auth endpoint

Every behaviour you see was something I learned, implemented, broke, and fixed. Still learning. Still building.

🔍 Try it yourself: https://lnkd.in/d3qpTqMz

#BackendDevelopment #NodeJS #PostgreSQL #Swagger #JWT #buildinpublic #Learning #HandsOnLearning #SoftwareDevelopment
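Taskion's actual response shape isn't shown in the post, and the project is Node.js; as a language-neutral illustration, a typical pagination metadata block (field names are my assumption) can be computed like this:

```java
public class Pagination {
    // Metadata commonly returned alongside a page of results.
    public record Meta(int page, int limit, long total, int totalPages, boolean hasNext) {}

    public static Meta of(int page, int limit, long total) {
        int totalPages = (int) Math.ceil((double) total / limit);
        boolean hasNext = (long) page * limit < total; // items remain beyond this page
        return new Meta(page, limit, total, totalPages, hasNext);
    }
}
```

Returning `totalPages` and `hasNext` spares every client from re-deriving them; the server computes `total` once with a `COUNT(*)` query.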
-
🚀 Built Core Email Features in My Gmail Clone (Spring Boot + PostgreSQL)

Excited to share that I've completed the core backend features of my Gmail-like system!

🔹 What I've implemented:
- Send Email API
- Inbox (fetch all received emails)
- Sent Mail (fetch all sent emails)
- Clean DTO-based architecture
- Database integration using PostgreSQL

🔹 Key concepts I learned:
- Designing proper data models for real-world systems
- The difference between internal IDs and user-facing emails
- How to structure service, repository, and controller layers
- Efficient data fetching using query-based filtering (instead of loading everything)
- Using streams & map for clean data transformation

🔹 Challenges I faced:
- Confusion between single response vs list response
- Repository generic type mistakes
- Entity and DB sync issues after schema changes
- Understanding why the backend uses IDs instead of emails
- Designing proper API endpoints and request/response flow
- Avoiding inefficient approaches like findAll() + manual filtering

🔹 What's next:
- Implement Delete Email (starting with hard delete)
- Upgrade to a real-world flow (email-based input instead of IDs)
- Add authentication-based sender handling

Step by step, building this into a real backend system 💪

#Java #SpringBoot #BackendDevelopment #PostgreSQL #APIDesign #LearningInPublic #SoftwareEngineering
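A compact illustration of the "query-based filtering plus streams & map" point. The in-memory list stands in for the emails table, and the names and data are invented; in the real project the filter would be a repository method like `findByReceiverId(Long id)`, which Spring turns into a SQL `WHERE` clause so unrelated rows never leave the database:

```java
import java.util.*;

public class InboxDemo {
    record Email(long id, long senderId, long receiverId, String subject) {}
    record EmailSummaryDto(long id, String subject) {}

    // Stand-in for the emails table.
    static final List<Email> EMAILS = List.of(
        new Email(1, 10, 20, "Hello"),
        new Email(2, 30, 20, "Invoice"),
        new Email(3, 20, 30, "Re: Hello"));

    // Filter by receiver up front (the repository query in the real app),
    // then use streams + map to turn entities into response DTOs that
    // expose only what the client needs.
    public static List<EmailSummaryDto> inbox(long receiverId) {
        return EMAILS.stream()
                     .filter(e -> e.receiverId() == receiverId)
                     .map(e -> new EmailSummaryDto(e.id(), e.subject()))
                     .toList();
    }
}
```

The anti-pattern named in the post, `findAll()` followed by in-memory filtering, produces the same DTOs but pulls every row across the wire first; the derived query keeps that work in PostgreSQL.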
-
Sometimes your system isn't slow because of heavy logic. It's slow because it's waiting.

Waiting for:
- another service
- a database
- an external API

And while it waits, threads just sit there doing nothing.

This is where async processing helps. The idea is simple: don't block. Do the work later.

What this looks like, instead of doing everything in one request:
1. User places an order
2. System saves the order immediately
3. Email is sent later
4. Notification is processed in the background

The user doesn't wait for everything.

How it's usually done:
- Background jobs
- Message queues (Kafka, RabbitMQ)
- @Async in Spring Boot

You move non-critical work out of the main flow.

Why this matters. Without async:
- Requests take longer
- Threads stay blocked
- System struggles under load

With async:
- Faster response times
- Better scalability
- Smoother user experience

Real-world example, when you upload a file:
- You don't wait for processing
- You get a response quickly
- Processing happens in the background

Trade-offs: async adds complexity:
- Harder to debug
- Requires retry handling
- Failures are not immediate

Simple takeaway: not everything needs to happen right now.

If your system is slow, how much of that work actually needs to be done synchronously?

#Java #SpringBoot #Programming #SoftwareDevelopment #Cloud #AI #Coding #Learning #Tech #Technology #WebDevelopment #Microservices #API #Database #SpringFramework #Hibernate #MySQL #BackendDevelopment #CareerGrowth #ProfessionalDevelopment #RDBMS #PostgreSQL #backend
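The order flow above, sketched with plain `CompletableFuture` (method names are illustrative; in Spring Boot the same shape is a method annotated with @Async). The caller returns as soon as the order is saved, while the slow email work runs on a background thread:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncOrderDemo {
    // Critical path: the order must be saved before we respond.
    static void saveOrder(String orderId) { /* synchronous DB write here */ }

    // Non-critical, slow work the user should never wait for.
    static void sendConfirmationEmail(String orderId) {
        try { Thread.sleep(200); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static String placeOrder(String orderId) {
        saveOrder(orderId);
        // Offload the email to a background thread; the request thread
        // returns immediately instead of blocking for ~200 ms.
        CompletableFuture.runAsync(() -> sendConfirmationEmail(orderId));
        return "order accepted: " + orderId;
    }

    public static void main(String[] args) {
        System.out.println(placeOrder("order-42")); // returns without waiting for the email
    }
}
```

This also shows the trade-off named above: if `sendConfirmationEmail` fails, the user already got a success response, so a real system needs retry handling (usually via a message queue) rather than fire-and-forget.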
-
Just shipped a distributed job scheduler from scratch in TypeScript. It lets developers register cron jobs via a REST API, and the system handles the rest: firing jobs on schedule, retrying failures, and recovering missed jobs after downtime.

Under the hood:
- PostgreSQL as the source of truth for job definitions
- BullMQ + Redis as the execution engine
- SELECT FOR UPDATE SKIP LOCKED to prevent duplicate job execution across multiple instances
- Missed job recovery on startup with configurable policy (run immediately or skip)
- Full execution history tracked per job

The interesting part wasn't the scheduling; it was making it reliable. What happens when two servers try to pick up the same job at the same time? What happens when the server crashes during a job run? Those are the problems worth solving.

Stack: Bun, TypeScript, Hono, PostgreSQL, BullMQ, Redis, Effect-ts
https://lnkd.in/e-_wwG-s
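The SKIP LOCKED behavior is worth seeing in miniature. The sketch below is an in-memory analogue only, written in Java rather than the project's TypeScript, with `ConcurrentHashMap.putIfAbsent` standing in for the Postgres row lock: several workers race over the same job set, each job is claimed exactly once, and a worker that loses a race skips ahead instead of blocking.

```java
import java.util.*;
import java.util.concurrent.*;

public class SkipLockedDemo {
    private final ConcurrentHashMap<String, String> claims = new ConcurrentHashMap<>();

    // Atomic claim: returns true only for the single worker that wins.
    // In SQL this is the row returned by SELECT ... FOR UPDATE SKIP LOCKED.
    public boolean tryClaim(String jobId, String workerId) {
        return claims.putIfAbsent(jobId, workerId) == null;
    }

    // Four workers race over three jobs; count executions per job.
    public static Map<String, Integer> raceDemo() throws Exception {
        SkipLockedDemo db = new SkipLockedDemo();
        List<String> jobs = List.of("job-1", "job-2", "job-3");
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Map<String, Integer> executions = new ConcurrentHashMap<>();
        List<Future<?>> fs = new ArrayList<>();
        for (int w = 0; w < 4; w++) {
            String worker = "worker-" + w;
            fs.add(pool.submit(() -> {
                for (String job : jobs) {
                    if (db.tryClaim(job, worker)) {   // won the "row lock"
                        executions.merge(job, 1, Integer::sum);
                    }                                  // else: skip the locked row
                }
            }));
        }
        for (Future<?> f : fs) f.get();
        pool.shutdown();
        return executions; // each job executed exactly once
    }
}
```

The database version has one property this sketch cannot mirror: if a server crashes mid-run, its transaction rolls back and the row lock is released, which is exactly why Postgres, not application memory, has to own the claim.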