Building a URL Shortener sounds simple until you have to handle database collisions and clean API redirects. 🚀

Hey LinkedIn family! 👋 Saif here. I recently wrapped up a new Backend project: a Production-Ready URL Shortener API. My goal wasn't just to make it work, but to understand how to build scalable, containerized backend systems.

The Features (What it does)
• Short-Code Generation: Custom logic to create unique, collision-resistant URLs.
• Smart Redirects: Handling 302 redirects with real-time click tracking.
• Analytics: Dedicated endpoints to monitor URL performance.
• URL Management: Ability to deactivate links on the fly.

The "Under the Hood" (The Deep Tech)
This is where the real learning happened. I didn't just write Python; I built a mini-infrastructure:
• FastAPI & Pydantic: For strict data validation and lightning-fast performance.
• PostgreSQL & SQLAlchemy: Managing relational data with clean ORM patterns.
• Alembic: Handling database migrations (version control for my DB schema).
• Dockerized Environment: I used Docker to isolate the PostgreSQL environment, managing complex port mappings to avoid host-system conflicts.

The Tech Stack
🛠 Backend: FastAPI, Python 3.12
🗄 Database: PostgreSQL, SQLAlchemy (ORM)
🔄 Migrations: Alembic
🐳 Infrastructure: Docker & Docker Compose

What's Next?
Currently, it's running perfectly in my local Docker environment. The next step? I'm moving it to AWS (EC2/RDS) to learn cloud deployment and security groups. Stay tuned: I'll be making the API live in a few days!

I'd love to hear your thoughts on the architecture.

#Python #FastAPI #BackendDevelopment #Docker #PostgreSQL #SoftwareEngineering #AWS
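The post doesn't include code, but a minimal sketch of collision-resistant short-code generation plus a 302 redirect with click tracking in FastAPI could look like the following. Every name here (the Url model, get_db, the routes, the connection URL) is an illustrative assumption, not the author's actual implementation:

```
# Illustrative sketch only; model, routes, and connection URL are assumptions.
import secrets
import string

from fastapi import Depends, FastAPI, HTTPException
from fastapi.responses import RedirectResponse
from sqlalchemy import Boolean, Integer, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column, sessionmaker

DATABASE_URL = "postgresql://app:app@localhost:5432/shortener"  # assumed
engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(bind=engine)


class Base(DeclarativeBase):
    pass


class Url(Base):
    __tablename__ = "urls"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    short_code: Mapped[str] = mapped_column(String(10), unique=True, index=True)
    target_url: Mapped[str] = mapped_column(String(2048))
    clicks: Mapped[int] = mapped_column(Integer, default=0)
    is_active: Mapped[bool] = mapped_column(Boolean, default=True)


Base.metadata.create_all(engine)  # in a real project Alembic would own the schema


def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


app = FastAPI()
ALPHABET = string.ascii_letters + string.digits


def generate_code(db: Session, length: int = 7) -> str:
    """Random short code; retry on the (rare) collision with an existing row."""
    while True:
        code = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if db.query(Url).filter(Url.short_code == code).first() is None:
            return code


@app.post("/urls")
def create_url(target_url: str, db: Session = Depends(get_db)):
    url = Url(short_code=generate_code(db), target_url=target_url)
    db.add(url)
    db.commit()
    db.refresh(url)
    return {"short_code": url.short_code, "target_url": url.target_url}


@app.get("/{short_code}")
def redirect(short_code: str, db: Session = Depends(get_db)):
    url = db.query(Url).filter(Url.short_code == short_code, Url.is_active).first()
    if url is None:
        raise HTTPException(status_code=404, detail="Short code not found")
    url.clicks += 1  # naive real-time click tracking
    db.commit()
    return RedirectResponse(url.target_url, status_code=302)
```

With a 62-character alphabet and length 7 there are roughly 3.5 trillion possible codes, so the collision retry loop almost never runs more than once.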
-
A few weeks ago, I took on an end-to-end Docker project thinking it would be straightforward. I came out thinking completely differently about how software is actually built.

Here is what I built: 🐳 a multi-container data architecture, a Streamlit application talking to a PostgreSQL database, orchestrated seamlessly with Docker Compose.

The data flow looks like this:
→ A user uploads a CSV through the Streamlit UI.
→ SQLAlchemy processes and persists the data into PostgreSQL.
→ The database serves summary statistics back to the interface.
→ All of this runs in isolated containers that find each other by name, not by IP.

Three things I will never forget from this project:

1️⃣ Your Dockerfile order is a caching strategy, not a style choice. Code changes constantly, so COPY . . goes at the bottom. Dependencies change rarely, so pip install goes at the top. Flip that, and every tiny typo fix costs you a full 3-minute rebuild.

2️⃣ Volumes are what give databases a memory. Containers are ephemeral by design. Without a named volume mounted to /var/lib/postgresql/data, every docker compose down takes your data with it.

3️⃣ POSTGRES_HOST=app-db just works. Docker Compose creates an internal network and handles DNS resolution automatically. Your app never needs to know a container's dynamic IP address.

Shoutout to the Data Engineering Community for being part of this space that keeps pushing me to go deeper than just "making things work." That accountability matters.

If you are learning Docker or containerizing your first data pipeline, the repo is open. Clone it, break it, learn from it. Links to the code and my full architectural write-up are in the comments! 👇

#DataEngineering #Docker #Data #Python #Streamlit #PostgreSQL #LearningInPublic #SoftwareArchitecture #DevOps #BuildInPublic
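As a rough illustration of points 2️⃣ and 3️⃣, a stripped-down docker-compose.yml for this kind of setup might look like the sketch below. Service and volume names (app-db, db-data) and credentials are assumptions, not the author's actual configuration:

```
# Illustrative sketch only; names and credentials are assumptions.
services:
  app:
    build: .
    environment:
      POSTGRES_HOST: app-db        # resolved by Compose's internal DNS, no IPs needed
    depends_on:
      - app-db
    ports:
      - "8501:8501"                # Streamlit's default port

  app-db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume = the database's "memory"

volumes:
  db-data:
```

Point 1️⃣ is purely about Dockerfile line order: copy requirements.txt and run pip install before COPY . ., so the dependency layers stay cached while the code layer rebuilds.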
-
🚀 #PythonJourney | Day 151 — BREAKTHROUGH: API Fully Functional & First Successful Request

Today marks a major milestone: the URL Shortener API is LIVE and responding correctly! After 8 days of building and debugging, I finally got the first successful POST request working. This breakthrough moment proves that all the pieces fit together.

Key accomplishments:

✅ Fixed critical database type mismatch:
• PostgreSQL was storing user_id as VARCHAR
• SQLAlchemy was trying to query with UUID
• Solution: Dropped volumes, rebuilt schema from scratch

✅ Fixed Pydantic response validation:
• Model had clicks_total, database had total_clicks
• Docker image was caching old code
• Solution: Forced rebuild of container image

✅ First successful API call:
• POST /api/v1/urls now returns proper JSON
• Short code generated automatically
• URL stored in database correctly
• Full response validation passing

✅ Production-ready API endpoints confirmed:
• Authentication working (API key validation)
• Request validation (Pydantic models)
• Database operations (CRUD)
• Error handling (proper HTTP status codes)
• Response serialization (JSON output)

✅ Lessons learned about debugging:
• Always check the actual container logs
• Volume management is critical in Docker
• Type consistency across layers matters
• Docker caching can hide recent changes
• Patience and persistence beat quick fixes

What happened today:
→ Identified the root cause through careful log analysis
→ Understood the full request/response cycle
→ Learned when to reset vs. when to patch
→ Experienced the joy of a working API!

The API now successfully:
- Validates user authentication
- Creates shortened URLs with unique codes
- Stores data in PostgreSQL
- Returns properly formatted JSON responses
- Handles errors gracefully

This is what backend development is about: building reliable systems piece by piece, debugging methodically, and celebrating when it finally works.

Status update:
- ✅ Backend: FUNCTIONAL
- ✅ Database: WORKING
- ✅ API Endpoints: RESPONDING
- ✅ Authentication: VERIFIED
- ⏳ Full test suite: Next
- ⏳ Deployment: Next week

#Python #FastAPI #Backend #API #PostgreSQL #Docker #Debugging #SoftwareDevelopment #Victory #CodingJourney
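A small sketch of what "type consistency across layers" can look like in practice: the SQLAlchemy column uses a real PostgreSQL UUID type, and the Pydantic response model uses the same field name as the ORM attribute, so there is no clicks_total vs total_clicks drift. Names and fields here are illustrative assumptions, not the author's code:

```
# Illustrative sketch only; model and field names are assumptions.
import uuid

from pydantic import BaseModel, ConfigDict
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Url(Base):
    __tablename__ = "urls"
    id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    # Store user_id as a real UUID column so queries made with uuid.UUID values
    # match the database type (no VARCHAR vs UUID mismatch).
    user_id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), index=True)
    total_clicks: Mapped[int] = mapped_column(default=0)


class UrlResponse(BaseModel):
    # Field names mirror the ORM attributes, so response validation
    # passes when the ORM object is returned from the endpoint.
    model_config = ConfigDict(from_attributes=True)

    id: uuid.UUID
    total_clicks: int
```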
-
Recently, while setting up a Python-based auth service using FastAPI and PostgreSQL, I ran into an issue that many of us have probably faced but don't always talk about.

The application was failing with a database connection error, even though everything "looked" correct. The root cause turned out to be something simple but important — mixing Docker-based configuration with a local development setup. Using postgres as a hostname works perfectly inside Docker networks, but when running the app locally with uvicorn, the correct host should be localhost. Small detail, but it completely breaks the connection if overlooked.

Another issue I encountered was with the SQLAlchemy setup. My models were importing Base, but it wasn't defined properly in the database module. This led to an import error during application startup. Fixing it required properly initializing declarative_base() and ensuring models were correctly registered.

A couple of key takeaways from this experience:
> Environment-specific configurations matter more than we think
> Avoid hardcoding values — always rely on environment variables
> Don't connect to the database during module import
> Ensure ORM base and models are structured cleanly

What I appreciated most was how these small fixes significantly improved the overall architecture. Moving toward a cleaner separation of config, database, repositories, and services makes the system more scalable and production-ready.

These are the kinds of practical issues that don't always show up in tutorials but are very real in day-to-day development. If you're working with FastAPI, SQLAlchemy, or setting up microservices, I'd be curious to know what common pitfalls you've run into.

#Python #FastAPI #PostgreSQL #SQLAlchemy #BackendDevelopment #Microservices #SoftwareEngineering #Debugging #LearningJourney
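A minimal sketch of those takeaways (environment-driven host, one Base defined in the database module, and no connection work at import time). All names and defaults are assumptions, not the service's actual code:

```
# Illustrative sketch only; names, env vars, and defaults are assumptions.
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# Environment-specific config: "postgres" inside Docker networks, "localhost" for local uvicorn runs.
DB_HOST = os.getenv("DB_HOST", "localhost")
DB_URL = (
    f"postgresql://{os.getenv('DB_USER', 'app')}:{os.getenv('DB_PASSWORD', 'app')}"
    f"@{DB_HOST}:5432/{os.getenv('DB_NAME', 'auth')}"
)

# Base is defined once here and imported by every model module.
Base = declarative_base()

_engine = None


def get_engine():
    """Create the engine lazily so importing this module never touches the database."""
    global _engine
    if _engine is None:
        _engine = create_engine(DB_URL, pool_pre_ping=True)
    return _engine


def get_session():
    return sessionmaker(bind=get_engine())()
```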
-
I recently dedicated a couple of days to building a change-data-capture pipeline from scratch using the AWS free tier. Here's a breakdown of the process.

Pipeline Overview:
CoinMarketCap API → Python → RDS Postgres → Debezium → Kafka → S3 (JSON)

1. A Python script accesses CoinMarketCap's free-tier API and upserts the top 10 cryptocurrencies into Postgres.
2. RDS Postgres serves as the source of truth, with every INSERT/UPDATE recorded in the write-ahead log.
3. Debezium connects to the WAL via a logical replication slot, converting each row change into a CDC event and publishing it to Kafka.
4. A single-broker Kafka in KRaft mode (without Zookeeper) buffers the events.
5. The Confluent S3 Sink consumes the topic and outputs the events as JSON, creating one file per minute.

This entire setup runs on a single t3.micro instance with 1 GB RAM and 1 GB swap, utilizing one IAM role and one bucket, without any managed Kafka or paid-tier services.

Key Learnings:
- On RDS, the master user isn't a superuser and can't create a role WITH REPLICATION. Instead, grant the built-in rds_replication role. This term is crucial: the documentation covers it, but the error message may lead you astray.
- Debezium's default decimal.handling.mode is precise, which emits NUMERIC columns as base64-encoded bytes in your JSON. Change it to string to avoid prices appearing as "YmFzZTY0."
- The S3 sink task reports RUNNING before attempting a PutObject. If your IAM policy lacks s3:PutObject on arn:aws:s3:::bucket/* (note the /*), the sink appears healthy until the first rotation, when it fails. Verify PutObject permissions before trusting the task state.
- Home WiFi's public IP can rotate unexpectedly. If your EC2 security group is scoped to "my IP" and your ISP gives you a new one overnight, you're locked out until you update the SG.

What's next:
Phase 2 — add schema validation and move infrastructure to Terraform.
Phase 3 — land the S3 data in an open table format so the bucket becomes directly queryable.

Demo video is attached. Please watch and let me know your feedback. GitHub repo link is in the comments.
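As an illustration of step 1, an upsert into Postgres is typically an INSERT ... ON CONFLICT statement. The sketch below uses psycopg2 and an assumed table/column layout (top_coins with a UNIQUE constraint on symbol), not the author's actual script:

```
# Illustrative sketch only; table, columns, and connection details are assumptions.
import psycopg2

# Stand-in for rows parsed from the CoinMarketCap API response.
rows = [("BTC", 67000.12), ("ETH", 3500.45)]

conn = psycopg2.connect(host="my-rds-endpoint", dbname="crypto", user="app", password="secret")
with conn, conn.cursor() as cur:
    for symbol, price in rows:
        # Upsert: INSERT the row, or UPDATE it if the symbol already exists
        # (assumes a UNIQUE constraint on symbol). Every INSERT/UPDATE lands in
        # the WAL, which is exactly what Debezium streams from.
        cur.execute(
            """
            INSERT INTO top_coins (symbol, price_usd, updated_at)
            VALUES (%s, %s, now())
            ON CONFLICT (symbol)
            DO UPDATE SET price_usd = EXCLUDED.price_usd, updated_at = now()
            """,
            (symbol, price),
        )
```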
-
For the past few weeks, I've been diving into the FastAPI documentation, practicing core concepts, and sharing my takeaways. Now, I'm applying those learnings to a hands-on build.

I recently started building a Library API using FastAPI, async SQLAlchemy, Alembic, and PostgreSQL. My focus is on understanding the implementation and architecture behind each step, rather than rushing to a finished product.

Here is what I've been tackling so far:
• Designing async APIs
• Setting up async database layers
• Managing migrations (handling edge cases like enums)
• Structuring backend services cleanly

Moving from reading docs to building made me realize how much thought goes into database design and migrations beyond the basic examples. I'll continue building this step by step to solidify my fundamentals.

I'm also exploring opportunities where I can contribute and grow. Working on something similar, or have advice on robust backend systems? I'd love to connect.

Read the detailed breakdown in my Medium article below!
https://lnkd.in/eAvP_ahT

#FastAPI #BackendDevelopment #Python #SQLAlchemy #PostgreSQL #Alembic #LearningInPublic #OpenToWork
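A minimal sketch of what an async database layer for a project like this can look like; the connection URL and names are assumptions, not taken from the article:

```
# Illustrative sketch only; URL and names are assumptions.
from collections.abc import AsyncGenerator

from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase

DATABASE_URL = "postgresql+asyncpg://app:app@localhost:5432/library"  # assumed

engine = create_async_engine(DATABASE_URL, echo=False)
SessionFactory = async_sessionmaker(engine, expire_on_commit=False)


class Base(DeclarativeBase):
    """Single declarative base that every model module imports."""


async def get_session() -> AsyncGenerator[AsyncSession, None]:
    """FastAPI dependency: one AsyncSession per request, closed when the request ends."""
    async with SessionFactory() as session:
        yield session
```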
-
Day 2/60: Production Infrastructure That Actually Scales

What Most Developers Do:
Start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I Built Today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + Backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on database layer

Technical Decisions:
- Async Everything: Non-blocking I/O handles 100 concurrent users on a single thread
- Connection Pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
- Health Checks: pg_isready with retry logic; services wait for dependencies
- Type Safety: mypy --strict passes; Mapped[T] catches bugs at compile time

Architecture Highlight:
A DatabaseManager singleton manages the lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why It Matters:
Technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's Working:
```
docker-compose up -d  → All services healthy
pytest                → 31/31 tests passing
Database connection   → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 Tomorrow: Database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium Blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
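A rough sketch of the pooling and session-scope pattern described above, assuming async SQLAlchemy 2.0; the URL and function names are illustrative, not the repository's actual code:

```
# Illustrative sketch only; URL and names are assumptions.
from contextlib import asynccontextmanager

from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
from sqlalchemy.pool import NullPool


def make_engine(url: str):
    if url.startswith("sqlite"):
        # SQLite (tests / local dev): no pooling needed.
        return create_async_engine(url, poolclass=NullPool)
    # PostgreSQL: 5 persistent connections plus up to 10 overflow ("QueuePool 5+10").
    return create_async_engine(url, pool_size=5, max_overflow=10, pool_pre_ping=True)


engine = make_engine("postgresql+asyncpg://app:app@localhost:5432/bufferiq")  # assumed URL
SessionFactory = async_sessionmaker(engine, expire_on_commit=False)


@asynccontextmanager
async def session_scope():
    """Session context manager: commit on success, roll back on any error."""
    async with SessionFactory() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise
```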
-
You don't need a Vector DB. You just need more Postgres. 🐘

Most developers building AI features in Ruby on Rails think their first step is signing up for Pinecone or Weaviate. Stop. Look at your schema.rb first.

If you're already running Postgres, you likely don't need the overhead of a separate vector database. By using pgvector, you can keep your data "close to the metal" and your architecture lean.

Why keeping vectors in Postgres is a win for Rails devs:
✅ ACID Compliance: Your embeddings live alongside your relational data. No more worrying about a vector existing for a record that was just deleted in your primary DB.
✅ Standard ActiveRecord Flow: With the pgvector-ruby gem, querying for nearest neighbors feels just like a standard scope. No complex third-party APIs or new DSLs to learn.
✅ One Infrastructure to Manage: No new security VPCs, no new billing accounts, and no "syncing" lag between your app data and your search index.

The "Close to the Metal" Advantage:
In Rails, we love the "Majestic Monolith." Adding a separate vector DB adds latency and complexity. When I'm building RAG (Retrieval-Augmented Generation) for my current projects, I want my context retrieval to be as fast as a primary key lookup. Keeping embeddings in a vector column type inside my existing tables makes the logic seamless.

The Reality Check:
Yes, if you are scaling to billions of vectors with hyper-complex filtering, a dedicated tool might make sense. But for 90% of AI-native apps being built today? Postgres is more than enough.

Keep it simple. Keep it in the Monolith. 💎

#RubyOnRails #Postgres #GenerativeAI #WebDevelopment #Backend #AWS #AIEngineering #BuildingInPublic
-
🚀 Day 4 of "Trying to Become a Backend Developer Without Breaking My Laptop"

Today's episode: The Day I Finally Understood Where My Data Lives 🧠💀

Before today:
👉 "Made the API, bro 😎"

After today:
👉 "But... where is the data actually stored?? Who's managing it?? Why is it not showing???" 😵💫

So I officially entered the chaos arena of: Flask + SQLAlchemy + PostgreSQL + DBeaver

And honestly... it started with "this seems easy" and quickly became "why does my database hate me 😭"

Here's what went down 👇
🔹 Met SQLAlchemy (ORM) — basically a translator between Python & SQL (but sometimes even the translator gets confused 🤡)
🔹 Connected Flask to PostgreSQL (wrote the DB URI multiple times... still double-checking like it's an exam 👀)
🔹 Used DBeaver because I needed visual proof that my tables actually exist 😤
🔹 Created tables → suddenly felt like I'm building something real 🏗️
🔹 Built APIs + performed CRUD operations
→ Create ✔️
→ Read ✔️
→ Update ✔️
→ Delete ✔️
→ Debug... still in progress 🐛😭

💡 Biggest realization today:
Backend development is not just coding... it's literally convincing your API, your database, and your brain to agree at the same time 🤯

Also, that moment when your API finally hits the database and returns correct data?
🎮 Boss level cleared ✨ Instant happiness unlocked 📈

Slowly upgrading from "I made a Flask app" to "I actually understand how data is stored & managed."

Day 4 done ✅
Confidence +1 📈
Errors +10 🐛😂

Let's see what Day 5 brings... hopefully fewer bugs, but let's be honest 😅

#LearningInPublic #BackendJourney #Flask #SQLAlchemy #PostgreSQL #DBeaver #100DaysOfCode #DeveloperLife
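For anyone at the same stage, this is roughly what the Flask + SQLAlchemy + PostgreSQL wiring can look like. This sketch uses the Flask-SQLAlchemy extension, and the Book model, routes, and DB URI are invented purely for illustration:

```
# Illustrative sketch only; model, routes, and DB URI are assumptions.
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# The DB URI that gets typed (and re-typed): user:password@host:port/database
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://app:app@localhost:5432/day4_db"

db = SQLAlchemy(app)


class Book(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200), nullable=False)


@app.post("/books")
def create_book():
    book = Book(title=request.json["title"])
    db.session.add(book)
    db.session.commit()  # this is the moment the data actually lands in PostgreSQL
    return jsonify(id=book.id, title=book.title), 201


@app.get("/books/<int:book_id>")
def read_book(book_id):
    book = Book.query.get_or_404(book_id)
    return jsonify(id=book.id, title=book.title)


if __name__ == "__main__":
    with app.app_context():
        db.create_all()  # the "my tables suddenly exist in DBeaver" moment
    app.run(debug=True)
```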
-
🔧 #PythonJourney | Day 150 — Debugging Production Issues & Learning Persistence

Today was about persistence in the face of challenges. Sometimes the best learning comes from solving problems that don't have obvious solutions.

Key accomplishments:

✅ Built complete backend architecture:
• 7 fully functional API endpoints
• SQLAlchemy ORM with 5 production-grade models
• PostgreSQL integration with proper relationships
• Redis caching layer configured
• Celery async task queue set up
• Docker multi-container orchestration

✅ Implemented critical features:
• User authentication via API key
• URL creation with custom slugs
• Click tracking with event logging
• Analytics aggregation ready
• Soft delete pattern for data preservation
• Password-protected URLs with bcrypt hashing
• Audit logging for compliance

✅ Database design mastery:
• UUID primary keys with proper type casting
• Foreign key relationships with cascade deletes
• PostgreSQL-specific types (JSONB, INET, UUID)
• Index optimization for query performance
• Relationship configurations with back_populates

✅ Docker expertise:
• Multi-service orchestration (PostgreSQL, Redis, FastAPI, Celery, Celery Beat)
• Health checks for service dependencies
• Environment-based configuration
• Volume management for data persistence

What I learned today:
→ Debugging is a critical skill - sometimes it takes multiple attempts
→ Small details matter (endpoint ordering, type compatibility)
→ Persistence pays off - keep trying different approaches
→ Understanding error messages is half the solution
→ Building production systems is incremental and iterative

Progress summary (Days 143-150):
- ✅ Project architecture designed
- ✅ SQLAlchemy models created
- ✅ FastAPI endpoints implemented
- ✅ Docker environment configured
- ✅ Database connectivity verified
- ✅ Authentication implemented
- ✅ Test user creation working
- ⏳ Endpoint testing (WIP)

The foundation is solid. The API endpoints are ready for comprehensive testing with pytest and then deployment to GCP.

This journey taught me that backend development is about building reliable, scalable systems piece by piece. Each layer matters - from database design to API routing to Docker orchestration.

#Python #FastAPI #Backend #Docker #PostgreSQL #SoftwareDevelopment #CodingJourney #Persistence #Learning
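A compact sketch of a few of the model patterns listed above (UUID primary keys, PostgreSQL-specific JSONB/INET columns, back_populates relationships, and a soft-delete timestamp). Table and column names are invented for illustration, not the author's schema:

```
# Illustrative sketch only; tables and columns are assumptions.
import uuid
from datetime import datetime

from sqlalchemy import DateTime, ForeignKey, String
from sqlalchemy.dialects.postgresql import INET, JSONB, UUID
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class Url(Base):
    __tablename__ = "urls"
    id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    slug: Mapped[str] = mapped_column(String(32), unique=True, index=True)
    deleted_at: Mapped[datetime | None] = mapped_column(DateTime(timezone=True))  # soft delete marker
    clicks: Mapped[list["ClickEvent"]] = relationship(
        back_populates="url", cascade="all, delete-orphan"
    )


class ClickEvent(Base):
    __tablename__ = "click_events"
    id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    url_id: Mapped[uuid.UUID] = mapped_column(ForeignKey("urls.id", ondelete="CASCADE"), index=True)
    ip_address: Mapped[str | None] = mapped_column(INET)       # PostgreSQL-specific type
    payload: Mapped[dict | None] = mapped_column(JSONB)        # flexible event details
    url: Mapped[Url] = relationship(back_populates="clicks")
```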
-
I spent my last project acting as an API key collector rather than a software engineer. I thought "modern" meant a different tool for every line of code. I was building a fragile, distributed web of free-tier services before I even had a single user.

Then I had a realization: PostgreSQL doesn't just replace other databases; it can replace half your backend.

For the small-scale projects I build, Postgres is the ultimate "Swiss Army Knife":
- Replacing MongoDB? Just use JSONB.
- Replacing Redis? Simple indexing is often just as fast.
- Replacing Pinecone? Use pgvector for AI embeddings.
- Replacing middleware? Use Row-Level Security (RLS) and auto-generated APIs.

Complexity isn't a badge of honor; it's technical debt. I was over-engineering the "little things" and killing my momentum.

Now, I'm sticking to the "boring" stack. Because when you use Postgres to its full potential, you don't just simplify your data, you incinerate your boilerplate.

It's okay to try things out, but when it comes to prod, it's better to stick with what's really appropriate. Start simple. Build faster.

#SoftwareEngineering #PostgreSQL #TechStack #Coding #WebDev #BackendDevelopment #Programming
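As a tiny example of the "JSONB instead of a document store" point, here is what storing and querying schemaless payloads in plain Postgres can look like with SQLAlchemy. The table, columns, and connection URL are assumptions for illustration only:

```
# Illustrative sketch only; table, columns, and URL are assumptions.
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, insert, select
from sqlalchemy.dialects.postgresql import JSONB

engine = create_engine("postgresql://app:app@localhost:5432/app")  # assumed URL
metadata = MetaData()

# One relational table with a schemaless JSONB payload:
# document-style flexibility, but with the same ACID guarantees as the rest of your data.
events = Table(
    "events",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("payload", JSONB, nullable=False),
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(
        insert(events).values(payload={"type": "signup", "plan": "free", "utm": {"src": "x"}})
    )
    # Query inside the JSON document, e.g. all signup events. No separate document DB needed.
    rows = conn.execute(
        select(events.c.payload).where(events.c.payload["type"].astext == "signup")
    ).all()
```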