🚀 #PythonJourney | Day 151 — BREAKTHROUGH: API Fully Functional & First Successful Request

Today marks a major milestone: **the URL Shortener API is LIVE and responding correctly!** After 8 days of building and debugging, I finally got the first successful POST request working. This breakthrough moment proves that all the pieces fit together.

Key accomplishments:

✅ Fixed critical database type mismatch:
• PostgreSQL was storing user_id as VARCHAR
• SQLAlchemy was trying to query with UUID
• Solution: Dropped volumes, rebuilt schema from scratch

✅ Fixed Pydantic response validation:
• Model had clicks_total, database had total_clicks
• Docker image was caching old code
• Solution: Forced rebuild of container image

✅ First successful API call:
• POST /api/v1/urls now returns proper JSON
• Short code generated automatically
• URL stored in database correctly
• Full response validation passing

✅ Production-ready API endpoints confirmed:
• Authentication working (API key validation)
• Request validation (Pydantic models)
• Database operations (CRUD)
• Error handling (proper HTTP status codes)
• Response serialization (JSON output)

✅ Lessons learned about debugging:
• Always check the actual container logs
• Volume management is critical in Docker
• Type consistency across layers matters
• Docker caching can hide recent changes
• Patience and persistence beat quick fixes

What happened today:
→ Identified the root cause through careful log analysis
→ Understood the full request/response cycle
→ Learned when to reset vs. when to patch
→ Experienced the joy of a working API!

The API now successfully:
- Validates user authentication
- Creates shortened URLs with unique codes
- Stores data in PostgreSQL
- Returns properly formatted JSON responses
- Handles errors gracefully

This is what backend development is about: building reliable systems piece by piece, debugging methodically, and celebrating when it finally works.

Status update:
- ✅ Backend: FUNCTIONAL
- ✅ Database: WORKING
- ✅ API Endpoints: RESPONDING
- ✅ Authentication: VERIFIED
- ⏳ Full test suite: Next
- ⏳ Deployment: Next week

#Python #FastAPI #Backend #API #PostgreSQL #Docker #Debugging #SoftwareDevelopment #Victory #CodingJourney
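The post doesn't include code, so here is a minimal sketch of what the two fixes could look like in SQLAlchemy 2.0 and Pydantic v2. The Url model, total_clicks field, and UrlResponse schema mirror the post's description; everything else is illustrative, not the project's actual code:

```python
# Hedged sketch of the two mismatches described above (illustrative names).
import uuid

from pydantic import BaseModel, ConfigDict
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Url(Base):
    __tablename__ = "urls"

    id: Mapped[uuid.UUID] = mapped_column(
        UUID(as_uuid=True), primary_key=True, default=uuid.uuid4
    )
    # Fix 1: user_id must be a real UUID column, not VARCHAR, so that
    # SQLAlchemy can compare it against uuid.UUID values in queries.
    user_id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), index=True)
    short_code: Mapped[str] = mapped_column(unique=True)
    # Fix 2: the attribute name must match the database column AND the
    # response schema field (total_clicks, not clicks_total).
    total_clicks: Mapped[int] = mapped_column(default=0)


class UrlResponse(BaseModel):
    # from_attributes lets Pydantic read the ORM object directly; if the
    # field name here diverges from the model attribute, response
    # validation fails exactly as described in the post.
    model_config = ConfigDict(from_attributes=True)

    id: uuid.UUID
    short_code: str
    total_clicks: int
```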
Marcos Vinicius Thibes Kemer’s Post
More Relevant Posts
Built a personal project called 𝗥𝗲𝗲𝗹𝗩𝗮𝘂𝗹𝘁 over the past few weeks and wanted to share what went into it.

𝗧𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗜 𝘄𝗮𝘀 𝘀𝗼𝗹𝘃𝗶𝗻𝗴: I watch a lot of content on Instagram and YouTube about AI tools, open source models, and dev resources. I kept losing track of things I wanted to revisit. 𝗖𝗼𝗺𝗺𝗲𝗻𝘁𝘀 𝗮𝗻𝗱 𝗯𝗼𝗼𝗸𝗺𝗮𝗿𝗸𝘀 𝗱𝗼 𝗻𝗼𝘁 𝗰𝘂𝘁 𝗶𝘁.

So I built a full-stack application where I can save any link, reel, or note and 𝘀𝗲𝗮𝗿𝗰𝗵 𝗶𝘁 𝗹𝗮𝘁𝗲𝗿 𝘂𝘀𝗶𝗻𝗴 𝗻𝗮𝘁𝘂𝗿𝗮𝗹 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲. Not keyword search. Meaning-based search.

Tech used:
Backend — 𝗝𝗮𝘃𝗮 𝟮𝟭 with Spring Boot 3.2, Spring Data JPA, REST APIs
Database — 𝗣𝗼𝘀𝘁𝗴𝗿𝗲𝗦𝗤𝗟 with 𝗽𝗴𝘃𝗲𝗰𝘁𝗼𝗿 extension on Supabase
Embeddings — 𝗛𝘂𝗴𝗴𝗶𝗻𝗴 𝗙𝗮𝗰𝗲 Inference API using sentence-transformers/all-MiniLM-L6-v2 to convert 𝘁𝗲𝘅𝘁 𝗶𝗻𝘁𝗼 𝟯𝟴𝟰-𝗱𝗶𝗺𝗲𝗻𝘀𝗶𝗼𝗻𝗮𝗹 𝘃𝗲𝗰𝘁𝗼𝗿𝘀
Search — Cosine similarity search using 𝗽𝗴𝘃𝗲𝗰𝘁𝗼𝗿'𝘀 𝗶𝘃𝗳𝗳𝗹𝗮𝘁 𝗶𝗻𝗱𝗲𝘅
𝗧𝗲𝗹𝗲𝗴𝗿𝗮𝗺 𝗕𝗼𝘁 — built into the Spring Boot service; lets me send a URL and get it saved automatically, with metadata extracted via Jsoup
Frontend — Vanilla HTML, CSS, JS 𝗱𝗲𝗽𝗹𝗼𝘆𝗲𝗱 𝗼𝗻 𝗩𝗲𝗿𝗰𝗲𝗹
Deployment — 𝗗𝗼𝗰𝗸𝗲𝗿𝗶𝘇𝗲𝗱 𝗦𝗽𝗿𝗶𝗻𝗴 𝗕𝗼𝗼𝘁 𝗮𝗽𝗽 𝗼𝗻 𝗥𝗲𝗻𝗱𝗲𝗿

What I learned from actually shipping it:
• The Hugging Face free tier uses a different endpoint than documented; I had to debug a 404 in production.
• Render is IPv4-only, so Supabase's Direct Connection does not work. The fix: the Transaction Pooler with stringtype=unspecified in the JDBC URL.
• pgvector's ivfflat index only becomes useful once data exists, since the index lists are built from the stored vectors.

This project gave me hands-on experience with vector embeddings, semantic search, RAG-adjacent architecture, and end-to-end deployment on free-tier infrastructure.

𝗚𝗶𝘁𝗛𝘂𝗯 𝗹𝗶𝗻𝗸 𝗶𝗻 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀.

#Java #SpringBoot #SemanticSearch #VectorDatabase #pgvector #HuggingFace #BackendDevelopment #FullStackDevelopment #RAG #GenerativeAI #AIEngineering #PostgreSQL #Docker #SoftwareEngineering #OpenSource
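ReelVault itself is Java/Spring, but the pgvector mechanics described above are language-agnostic. A hedged sketch using Python and psycopg for brevity, with a hypothetical items table; the 384-dim vector size and cosine/ivfflat choices follow the post:

```python
# Sketch of the pgvector setup above (cosine similarity + ivfflat index),
# shown in Python/psycopg for brevity; the real project uses Java/JPA,
# and the table/column names here are hypothetical.
import psycopg

conn = psycopg.connect("postgresql://user:pass@localhost/reelvault", autocommit=True)

with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS items (
            id bigserial PRIMARY KEY,
            content text,
            embedding vector(384)  -- all-MiniLM-L6-v2 output size
        )
    """)
    # ivfflat partitions vectors into lists; as the post notes, it only
    # helps once real data exists, because the lists are built from rows.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS items_embedding_idx
        ON items USING ivfflat (embedding vector_cosine_ops)
        WITH (lists = 100)
    """)
    # <=> is pgvector's cosine-distance operator: smaller = more similar.
    qvec = "[" + ",".join(["0"] * 384) + "]"  # replace with a real query embedding
    cur.execute(
        "SELECT id, content FROM items ORDER BY embedding <=> %s::vector LIMIT 5",
        (qvec,),
    )
    print(cur.fetchall())
```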
🚀 𝗤𝘂𝗶𝘇 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗕𝗮𝗰𝗸𝗲𝗻𝗱 𝗔𝗣𝗜 – 𝗕𝘂𝗶𝗹𝘁 𝘄𝗶𝘁𝗵 𝗙𝗮𝘀𝘁𝗔𝗣𝗜

I recently built a backend system for a Quiz Application using modern Python backend technologies.

🔧 𝗧𝗲𝗰𝗵 𝗦𝘁𝗮𝗰𝗸:
• FastAPI (high-performance API framework)
• SQLAlchemy (ORM for database management)
• PostgreSQL (relational database)
• Pydantic (data validation & schema handling)

📌 𝗞𝗲𝘆 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀:
• RESTful API endpoints for questions and choices
• One-to-many relationship between Questions and Choices
• Secure database session handling with dependency injection
• Proper request validation using Pydantic models
• Clean and scalable backend architecture

🔗 𝗔𝗣𝗜 𝗘𝗻𝗱𝗽𝗼𝗶𝗻𝘁𝘀:
• GET /questions/{question_id} → fetch a specific question
• GET /choices/{question_id} → fetch all choices for a question
• POST /questions → create a question with multiple choices

🧠 𝗪𝗵𝗮𝘁 𝗜 𝗟𝗲𝗮𝗿𝗻𝗲𝗱:
• How FastAPI handles async backend development efficiently
• Working with the SQLAlchemy ORM for relational data modeling
• Designing clean backend architecture with separation of concerns
• Implementing database relationships and migration logic

💻 𝗚𝗶𝘁𝗛𝘂𝗯 𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆:
👉 https://lnkd.in/dHJczetV

This project helped me strengthen my understanding of backend development, API design, and database integration.

#FastAPI #Python #BackendDevelopment #APIs #SQLAlchemy #PostgreSQL #SoftwareEngineering #LearningByBuilding
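A minimal sketch of the pattern the post describes: a one-to-many Question/Choice relationship served through FastAPI with an injected database session. Names are illustrative, not the repo's exact code:

```python
# One-to-many Question/Choice + per-request session via dependency injection.
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy import ForeignKey, create_engine
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, mapped_column,
                            relationship, sessionmaker)

engine = create_engine("postgresql://user:pass@localhost/quizdb")
SessionLocal = sessionmaker(bind=engine)


class Base(DeclarativeBase):
    pass


class Question(Base):
    __tablename__ = "questions"
    id: Mapped[int] = mapped_column(primary_key=True)
    text: Mapped[str]
    choices: Mapped[list["Choice"]] = relationship(back_populates="question")


class Choice(Base):
    __tablename__ = "choices"
    id: Mapped[int] = mapped_column(primary_key=True)
    text: Mapped[str]
    question_id: Mapped[int] = mapped_column(ForeignKey("questions.id"))
    question: Mapped[Question] = relationship(back_populates="choices")


app = FastAPI()


def get_db():
    # Dependency: one session per request, always closed afterwards.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@app.get("/questions/{question_id}")
def read_question(question_id: int, db: Session = Depends(get_db)):
    question = db.get(Question, question_id)
    if question is None:
        raise HTTPException(status_code=404, detail="Question not found")
    return {"id": question.id, "text": question.text}
```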
A few weeks ago, I took on an end-to-end Docker project thinking it would be straightforward. I came out thinking completely differently about how software is actually built.

Here is what I built: 🐳 a multi-container data architecture — a Streamlit application talking to a PostgreSQL database, orchestrated seamlessly with Docker Compose.

The data flow looks like this:
→ A user uploads a CSV through the Streamlit UI.
→ SQLAlchemy processes and persists the data into PostgreSQL.
→ The database serves summary statistics back to the interface.
→ All of this runs in isolated containers that find each other by name, not by IP.

Three things I will never forget from this project:

1️⃣ Your Dockerfile order is a caching strategy, not a style choice. Code changes constantly, so COPY . . goes at the bottom. Dependencies change rarely, so pip install goes at the top. Flip that, and every tiny typo fix costs you a full 3-minute rebuild.

2️⃣ Volumes are what give databases a memory. Containers are ephemeral by design. Without a named volume mounted to /var/lib/postgresql/data, every docker compose down takes your data with it.

3️⃣ POSTGRES_HOST=app-db just works. Docker Compose creates an internal network and handles DNS resolution automatically. Your app never needs to know a container's dynamic IP address.

Shoutout to the Data Engineering Community for being part of this space that keeps pushing me to go deeper than just "making things work." That accountability matters.

If you are learning Docker or containerizing your first data pipeline, the repo is open. Clone it, break it, learn from it. Links to the code and my full architectural write-up are in the comments! 👇

#DataEngineering #Docker #Data #Python #Streamlit #PostgreSQL #LearningInPublic #SoftwareArchitecture #DevOps #BuildInPublic
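A small sketch of point 3 above: the application builds its database URL from environment variables and reaches PostgreSQL through the Compose service name (app-db, per the post) rather than an IP. Credentials and defaults here are placeholders:

```python
# Assumes this runs inside the Compose network, where the service name
# "app-db" resolves via Docker's built-in DNS. Env var names follow the
# post; the credentials are placeholders.
import os

from sqlalchemy import create_engine, text

host = os.environ.get("POSTGRES_HOST", "app-db")  # Compose service name, not an IP
url = (
    f"postgresql://{os.environ.get('POSTGRES_USER', 'postgres')}:"
    f"{os.environ.get('POSTGRES_PASSWORD', 'postgres')}@{host}:5432/"
    f"{os.environ.get('POSTGRES_DB', 'appdb')}"
)

engine = create_engine(url)
with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())
```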
Day 2/60: Production Infrastructure That Actually Scales

What Most Developers Do: Start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I Built Today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + Backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on the database layer

Technical Decisions:
- Async Everything: Non-blocking I/O handles 100 concurrent users on a single thread
- Connection Pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
- Health Checks: pg_isready with retry logic; services wait for their dependencies
- Type Safety: mypy --strict passes; Mapped[T] catches bugs at type-check time

Architecture Highlight: A DatabaseManager singleton manages the engine lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why It Matters: Technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's Working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 Tomorrow: Database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium Blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
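A minimal sketch of what the pooling and session setup described above could look like in async SQLAlchemy 2.0; the DSN and helper names are illustrative, and the pool numbers (5+10) follow the post:

```python
# Async engine with QueuePool-style settings plus a transaction-per-unit-
# of-work session helper: commit on success, rollback on error, and the
# connection always returns to the pool.
from contextlib import asynccontextmanager

from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/appdb",
    pool_size=5,         # persistent connections kept open
    max_overflow=10,     # extra connections allowed under burst load
    pool_pre_ping=True,  # cheap liveness check before reusing a connection
)
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)


@asynccontextmanager
async def session_scope():
    async with SessionLocal() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise
```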
𝗜 𝗯𝘂𝗶𝗹𝘁 𝗮 𝗥𝗔𝗚-𝗯𝗮𝘀𝗲𝗱 𝗱𝗼𝗰𝘂𝗺𝗲𝗻𝘁 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 𝗽𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗳𝗿𝗼𝗺 𝘀𝗰𝗿𝗮𝘁𝗰𝗵 🚀

Teams waste hours digging through internal documents for answers. 𝗗𝗼𝗰𝗦𝗲𝗻𝘀𝗲 fixes that — upload documents, ask questions, and get instant, cited answers grounded in your data.

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀:
1. Upload a document → triggers an async Step Functions pipeline that chunks the PDF, generates embeddings (all-MiniLM-L6-v2 via DJL/PyTorch), and indexes vectors into OpenSearch k-NN.
2. A query is embedded into a vector → OpenSearch performs semantic search to retrieve the most relevant chunks.
3. Retrieved context is sent to Bedrock → generates a grounded answer with inline citations.

𝗞𝗲𝘆 𝗵𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀:
- Fully serverless ingestion with S3-staged batching to handle large documents without payload limits
- k-NN vector search with cosine similarity for semantic retrieval (no keyword matching)
- Bedrock Converse API with tool-use to enforce structured, cited outputs

𝗧𝗲𝗰𝗵: Java 24, Spring Boot 3.5, AWS (Bedrock, OpenSearch, Step Functions, Lambda, S3), DJL/PyTorch, PostgreSQL

Sharing the HLD and GitHub link below 👇
GitHub: https://lnkd.in/gsb9ZsV6

𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝘄𝗲𝗹𝗰𝗼𝗺𝗲 — what would you do differently?

#RAG #Java #SpringBoot #AWS #OpenSearch #Bedrock #VectorSearch #SoftwareEngineering #SystemDesign
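DocSense itself is Java/Spring, but the three-step query path above compresses into a few calls. A hedged Python sketch for illustration, using boto3, opensearch-py, and sentence-transformers; the index name (docs), field names (embedding, text), endpoint, and the Bedrock model id are all assumptions, not the project's actual configuration:

```python
# Query path sketch: embed -> k-NN retrieve -> grounded generation.
import boto3
from opensearchpy import OpenSearch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
search = OpenSearch(hosts=["https://localhost:9200"])  # assumed endpoint
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def answer(question: str) -> str:
    # 1. Embed the query into the same 384-dim space as the chunks.
    vector = model.encode(question).tolist()
    # 2. k-NN retrieval of the closest chunks (index/field names assumed).
    hits = search.search(index="docs", body={
        "size": 5,
        "query": {"knn": {"embedding": {"vector": vector, "k": 5}}},
    })["hits"]["hits"]
    context = "\n\n".join(h["_source"]["text"] for h in hits)
    # 3. Grounded generation via the Bedrock Converse API (model id assumed).
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [
            {"text": f"Answer using only this context:\n{context}\n\nQ: {question}"}
        ]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```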
Part 1: Architecture & Real-World System Design

Modern backend systems don't break because of scale alone — they break due to complexity. In a recent redesign, the focus was on simplifying the handling of large, dynamic form data while improving performance, maintainability, and the developer experience.

📊 The shift:
🔹 From rigid column-based schema → flexible JSONB-based storage
🔹 From heavy raw SQL → clean ORM-driven queries
🔹 From scattered APIs → structured, minimal endpoints

⚙️ Architecture Improvements
✔️ Modular design using separate Django applications
✔️ Class-based views for reusable and maintainable logic
✔️ API structuring using the Django Ninja Router
✔️ Reduced the number of APIs by consolidating responses
✔️ Strong alignment with the frontend on payload and contract design

📦 Data Handling Strategy
Instead of creating hundreds of columns for dynamic forms:
→ Stored complete form responses as JSON objects
→ Handled 300–500+ fields without schema changes
→ Simplified debugging with structured payloads
→ Enabled faster iteration without production risks

🔄 Processing Flow
User Input → API Validation → Store JSON (status = 0) → Async Processing (Celery + Redis) → Update status = 1 → Dashboard reflects real-time updates

🚀 Outcome
✔️ Reduced schema complexity
✔️ Improved API performance
✔️ Avoided production issues caused by raw queries
✔️ Built a scalable and flexible backend system
✔️ Delivered smoother frontend-backend integration

Security is handled via JWT-based authentication with a proper token flow. Still evolving, with further improvements planned in performance, validation, and system design.

#BackendEngineering #Django #Python #SystemDesign #PostgreSQL #APIs #Celery #Redis #JWT
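A minimal sketch of the processing flow above, assuming Django plus Celery with illustrative names (FormSubmission, process_submission); the status values 0/1 follow the post:

```python
# models.py — on PostgreSQL, JSONField is stored as JSONB: 300-500+
# dynamic fields live in one column instead of hundreds of schema columns.
from django.db import models


class FormSubmission(models.Model):
    PENDING, PROCESSED = 0, 1

    payload = models.JSONField()               # the full form response
    status = models.IntegerField(default=PENDING)
    created_at = models.DateTimeField(auto_now_add=True)


# tasks.py — the async step (Celery + Redis) that flips status 0 -> 1.
from celery import shared_task


@shared_task
def process_submission(submission_id: int) -> None:
    submission = FormSubmission.objects.get(pk=submission_id)
    # ... validation / enrichment of submission.payload goes here ...
    submission.status = FormSubmission.PROCESSED
    submission.save(update_fields=["status"])
```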
"That's one small step for a man, one giant leap for my backend." 🚀 Today I migrated my Habitual API from SQLite to PostgreSQL. It worked locally - and immediately broke in CI. The technical blocker 🛠️ The heatmap endpoint generates a date range for the last 30 days using a recursive CTE. In SQLite this worked: func.date(func.now(), "-29 days") PostgreSQL doesn't support this modifier syntax. One line, two hours of debugging. I ended up moving the date generation to Python instead of SQL. Cleaner and more portable. The CI struggle 🏗️ Seven commits. Six failures. All named "Little update for CI" - at that point naming stopped mattering (see screenshot 👇). The root cause: my local environment had pinned versions that weren't reflected in requirements.txt. CI pulled newer packages - and everything fell apart. After a few iterations: ✅ 271 tests passed ✅ PostgreSQL running in CI ✅ migrations applied on every push ✅ clean pipeline Next step: Docker and CD. #python #fastapi #postgresql #githubactions #backend
🔧 #PythonJourney | Day 150 — Debugging Production Issues & Learning Persistence

Today was about persistence in the face of challenges. Sometimes the best learning comes from solving problems that don't have obvious solutions.

Key accomplishments:

✅ Built complete backend architecture:
• 7 fully functional API endpoints
• SQLAlchemy ORM with 5 production-grade models
• PostgreSQL integration with proper relationships
• Redis caching layer configured
• Celery async task queue set up
• Docker multi-container orchestration

✅ Implemented critical features:
• User authentication via API key
• URL creation with custom slugs
• Click tracking with event logging
• Analytics aggregation ready
• Soft delete pattern for data preservation
• Password-protected URLs with bcrypt hashing
• Audit logging for compliance

✅ Database design mastery:
• UUID primary keys with proper type casting
• Foreign key relationships with cascade deletes
• PostgreSQL-specific types (JSONB, INET, UUID)
• Index optimization for query performance
• Relationship configurations with back_populates

✅ Docker expertise:
• Multi-service orchestration (PostgreSQL, Redis, FastAPI, Celery, Celery Beat)
• Health checks for service dependencies
• Environment-based configuration
• Volume management for data persistence

What I learned today:
→ Debugging is a critical skill - sometimes it takes multiple attempts
→ Small details matter (endpoint ordering, type compatibility)
→ Persistence pays off - keep trying different approaches
→ Understanding error messages is half the solution
→ Building production systems is incremental and iterative

Progress summary (Days 143-150):
- ✅ Project architecture designed
- ✅ SQLAlchemy models created
- ✅ FastAPI endpoints implemented
- ✅ Docker environment configured
- ✅ Database connectivity verified
- ✅ Authentication implemented
- ✅ Test user creation working
- ⏳ Endpoint testing (WIP)

The foundation is solid. API endpoints are ready for comprehensive testing with pytest and then deployment to GCP.

This journey taught me that backend development is about building reliable, scalable systems piece by piece. Each layer matters - from database design to API routing to Docker orchestration.

#Python #FastAPI #Backend #Docker #PostgreSQL #SoftwareDevelopment #CodingJourney #Persistence #Learning
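A condensed sketch of the database-design patterns listed above (UUID primary keys, PostgreSQL-specific column types, cascade deletes, back_populates, soft delete); all model and column names are illustrative, not the project's actual schema:

```python
# Illustrative SQLAlchemy 2.0 models combining the patterns from the post.
import uuid
from datetime import datetime

from sqlalchemy import ForeignKey
from sqlalchemy.dialects.postgresql import INET, JSONB, UUID
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"
    id: Mapped[uuid.UUID] = mapped_column(
        UUID(as_uuid=True), primary_key=True, default=uuid.uuid4
    )
    urls: Mapped[list["Url"]] = relationship(
        back_populates="user", cascade="all, delete-orphan"
    )


class Url(Base):
    __tablename__ = "urls"
    id: Mapped[uuid.UUID] = mapped_column(
        UUID(as_uuid=True), primary_key=True, default=uuid.uuid4
    )
    user_id: Mapped[uuid.UUID] = mapped_column(
        ForeignKey("users.id", ondelete="CASCADE"), index=True
    )
    short_code: Mapped[str] = mapped_column(unique=True, index=True)
    # Soft delete: rows are flagged, never removed, preserving history.
    deleted_at: Mapped[datetime | None] = mapped_column(default=None)
    user: Mapped[User] = relationship(back_populates="urls")


class ClickEvent(Base):
    __tablename__ = "click_events"
    id: Mapped[uuid.UUID] = mapped_column(
        UUID(as_uuid=True), primary_key=True, default=uuid.uuid4
    )
    url_id: Mapped[uuid.UUID] = mapped_column(
        ForeignKey("urls.id", ondelete="CASCADE"), index=True
    )
    ip_address: Mapped[str] = mapped_column(INET)       # PostgreSQL INET type
    metadata_json: Mapped[dict] = mapped_column(JSONB)  # flexible event payload
```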
Building a URL Shortener sounds simple until you have to handle database collisions and clean API redirects. 🚀

Hey LinkedIn family! 👋 Saif here. I recently wrapped up a new Backend project: a Production-Ready URL Shortener API. My goal wasn't just to make it work, but to understand how to build scalable, containerized backend systems.

The Features (What it does)
• Short-Code Generation: Custom logic to create unique, collision-resistant URLs.
• Smart Redirects: Handling 302 redirects with real-time click tracking.
• Analytics: Dedicated endpoints to monitor URL performance.
• URL Management: Ability to deactivate links on the fly.

The "Under the Hood" (The Deep Tech)
This is where the real learning happened. I didn't just write Python; I built a mini-infrastructure:
• FastAPI & Pydantic: For strict data validation and lightning-fast performance.
• PostgreSQL & SQLAlchemy: Managing relational data with clean ORM patterns.
• Alembic: Handling database migrations (version control for my DB schema).
• Dockerized Environment: I used Docker to isolate the PostgreSQL environment, managing complex port mappings to avoid host-system conflicts.

The Tech Stack
🛠 Backend: FastAPI, Python 3.12
🗄 Database: PostgreSQL, SQLAlchemy (ORM)
🔄 Migrations: Alembic
🐳 Infrastructure: Docker & Docker Compose

What's Next?
Currently, it's running perfectly in my local Docker environment. The next step? I'm moving it to AWS (EC2/RDS) to learn cloud deployment and security groups. Stay tuned—I'll be making the API live in a few days!

I'd love to hear your thoughts on the architecture.

#Python #FastAPI #BackendDevelopment #Docker #PostgreSQL #SoftwareEngineering #AWS
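The post doesn't show its generation logic, so here is one common collision-resistant approach as a hedged sketch: random base62 codes plus a retry loop against the database. The exists callable stands in for whatever uniqueness check the real project uses:

```python
# Random base62 short codes with a bounded retry loop on collision.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # base62


def generate_code(length: int = 7) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


def unique_code(exists, length: int = 7, max_attempts: int = 5) -> str:
    """`exists` is any callable that checks the DB, e.g. a SQLAlchemy query."""
    for _ in range(max_attempts):
        code = generate_code(length)
        if not exists(code):
            return code
    raise RuntimeError("No free code found; consider increasing the length")


# 62**7 is roughly 3.5 trillion combinations, so collisions stay rare
# at small scale; the retry loop covers the unlucky cases.
print(unique_code(lambda code: False))
```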
🧪 #PythonJourney | Day 149 — Testing API Endpoints & Validating Backend

Today was about validating that everything works end-to-end. After days of building, it was time to test the actual API with real requests.

Key accomplishments:

✅ All 7 API endpoints are functional:
• POST /api/v1/urls (create shortened URL)
• GET /api/v1/urls (list user's URLs)
• GET /api/v1/urls/{id} (get URL details)
• GET /api/v1/urls/{id}/analytics (get analytics)
• DELETE /api/v1/urls/{id} (soft delete)
• GET /{short_code} (redirect & track)
• GET /health (health check)

✅ Database integration fully operational:
• User authentication via API key works
• URL creation with validation
• Click tracking with proper foreign keys
• Analytics aggregation ready

✅ Docker environment stable:
• PostgreSQL 15 storing data correctly
• Redis 7 ready for caching
• FastAPI container running smoothly
• All services healthy

✅ Tested with curl:
• Health check endpoint responds
• API authentication working
• Request/response validation functioning
• Error handling in place

✅ Code committed to GitHub:
• Clean commits with meaningful messages
• Full project history tracked
• Ready for collaboration

What I learned today:
→ End-to-end testing reveals integration issues early
→ API key authentication is simple but effective
→ Docker Compose makes local development seamless
→ curl is a powerful tool for API testing
→ Validating one endpoint at a time saves debugging time

The backend is now production-ready in terms of basic functionality. Next: comprehensive testing with pytest, and then deployment.

Current status:
- Backend: ✅ Functional
- Database: ✅ Operational
- API Endpoints: ✅ All working
- Docker: ✅ Stable
- Tests: ⏳ Next step
- Deployment: ⏳ After tests

#Python #FastAPI #API #Testing #Backend #Docker #PostgreSQL #SoftwareDevelopment #DevOps
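The post tests with curl; the same smoke test could look like this in Python with requests. The port, the X-API-Key header name, and the request/response field names are assumptions, not the project's confirmed API contract:

```python
# Smoke test against the endpoints listed above (local dev assumptions).
import requests

BASE = "http://localhost:8000"            # assumed local port
HEADERS = {"X-API-Key": "test-key"}       # header name is an assumption

# Health check
assert requests.get(f"{BASE}/health", timeout=5).status_code == 200

# Create a shortened URL (field names assumed), then follow the redirect
created = requests.post(
    f"{BASE}/api/v1/urls",
    json={"original_url": "https://example.com"},
    headers=HEADERS,
    timeout=5,
).json()

redirect = requests.get(
    f"{BASE}/{created['short_code']}", allow_redirects=False, timeout=5
)
assert redirect.status_code in (301, 302, 307)
print("smoke test passed:", created["short_code"])
```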