🔧 #PythonJourney | Day 150 — Debugging Production Issues & Learning Persistence

Today was about persistence in the face of challenges. Sometimes the best learning comes from solving problems that don't have obvious solutions.

Key accomplishments:

✅ Built complete backend architecture:
• 7 fully functional API endpoints
• SQLAlchemy ORM with 5 production-grade models
• PostgreSQL integration with proper relationships
• Redis caching layer configured
• Celery async task queue set up
• Docker multi-container orchestration

✅ Implemented critical features:
• User authentication via API key
• URL creation with custom slugs
• Click tracking with event logging
• Analytics aggregation ready
• Soft delete pattern for data preservation
• Password-protected URLs with bcrypt hashing
• Audit logging for compliance

✅ Database design mastery:
• UUID primary keys with proper type casting
• Foreign key relationships with cascade deletes
• PostgreSQL-specific types (JSONB, INET, UUID)
• Index optimization for query performance
• Relationship configurations with back_populates

✅ Docker expertise:
• Multi-service orchestration (PostgreSQL, Redis, FastAPI, Celery, Celery Beat)
• Health checks for service dependencies
• Environment-based configuration
• Volume management for data persistence

What I learned today:
→ Debugging is a critical skill - sometimes it takes multiple attempts
→ Small details matter (endpoint ordering, type compatibility)
→ Persistence pays off - keep trying different approaches
→ Understanding error messages is half the solution
→ Building production systems is incremental and iterative

Progress summary (Days 143-150):
- ✅ Project architecture designed
- ✅ SQLAlchemy models created
- ✅ FastAPI endpoints implemented
- ✅ Docker environment configured
- ✅ Database connectivity verified
- ✅ Authentication implemented
- ✅ Test user creation working
- ⏳ Endpoint testing (WIP)

The foundation is solid. API endpoints are ready for comprehensive testing with pytest and then deployment to GCP. This journey taught me that backend development is about building reliable, scalable systems piece by piece. Each layer matters - from database design to API routing to Docker orchestration.

#Python #FastAPI #Backend #Docker #PostgreSQL #SoftwareDevelopment #CodingJourney #Persistence #Learning
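Editor's note: the post above mentions SQLAlchemy models with UUID primary keys, PostgreSQL-specific types, soft deletes, cascade deletes, and back_populates relationships. A minimal sketch of what such a model pair could look like follows; every name here (URL, User, short_code, and so on) is an illustrative assumption, not the project's actual code.

```python
# Hypothetical sketch -- model and column names are illustrative, not the real project code.
import uuid
from datetime import datetime

from sqlalchemy import Column, DateTime, ForeignKey, String
from sqlalchemy.dialects.postgresql import JSONB, UUID
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class URL(Base):
    __tablename__ = "urls"

    # UUID primary key using the native PostgreSQL UUID type
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    user_id = Column(UUID(as_uuid=True), ForeignKey("users.id", ondelete="CASCADE"), index=True)
    short_code = Column(String(16), unique=True, index=True, nullable=False)
    target_url = Column(String(2048), nullable=False)
    extra = Column(JSONB, default=dict)           # PostgreSQL-specific JSONB column
    deleted_at = Column(DateTime, nullable=True)  # soft delete: mark the row instead of removing it
    created_at = Column(DateTime, default=datetime.utcnow)

    user = relationship("User", back_populates="urls")


class User(Base):
    __tablename__ = "users"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    api_key = Column(String(64), unique=True, index=True, nullable=False)

    # ORM-side cascade mirrors the foreign key's ON DELETE CASCADE
    urls = relationship("URL", back_populates="user", cascade="all, delete-orphan")
```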
Marcos Vinicius Thibes Kemer’s Post
More Relevant Posts
🚀 #PythonJourney | Day 151 — BREAKTHROUGH: API Fully Functional & First Successful Request

Today marks a major milestone: **the URL Shortener API is LIVE and responding correctly!**

After 8 days of building and debugging, I finally got the first successful POST request working. This breakthrough moment proves that all the pieces fit together.

Key accomplishments:

✅ Fixed critical database type mismatch:
• PostgreSQL was storing user_id as VARCHAR
• SQLAlchemy was trying to query with UUID
• Solution: Dropped volumes, rebuilt schema from scratch

✅ Fixed Pydantic response validation:
• Model had clicks_total, database had total_clicks
• Docker image was caching old code
• Solution: Forced rebuild of container image

✅ First successful API call:
• POST /api/v1/urls now returns proper JSON
• Short code generated automatically
• URL stored in database correctly
• Full response validation passing

✅ Production-ready API endpoints confirmed:
• Authentication working (API key validation)
• Request validation (Pydantic models)
• Database operations (CRUD)
• Error handling (proper HTTP status codes)
• Response serialization (JSON output)

✅ Lessons learned about debugging:
• Always check the actual container logs
• Volume management is critical in Docker
• Type consistency across layers matters
• Docker caching can hide recent changes
• Patience and persistence beat quick fixes

What happened today:
→ Identified the root cause through careful log analysis
→ Understood the full request/response cycle
→ Learned when to reset vs. when to patch
→ Experienced the joy of a working API!

The API now successfully:
- Validates user authentication
- Creates shortened URLs with unique codes
- Stores data in PostgreSQL
- Returns properly formatted JSON responses
- Handles errors gracefully

This is what backend development is about: building reliable systems piece by piece, debugging methodically, and celebrating when it finally works.

Status update:
- ✅ Backend: FUNCTIONAL
- ✅ Database: WORKING
- ✅ API Endpoints: RESPONDING
- ✅ Authentication: VERIFIED
- ⏳ Full test suite: Next
- ⏳ Deployment: Next week

#Python #FastAPI #Backend #API #PostgreSQL #Docker #Debugging #SoftwareDevelopment #Victory #CodingJourney
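Editor's note: the two fixes described above (a VARCHAR column being compared against UUID values, and a response schema whose field name didn't match the database column) can be shown in a small sketch. This assumes SQLAlchemy plus Pydantic v2; the names ShortURL, ShortURLResponse, and total_clicks mirror the post but are otherwise illustrative.

```python
# Illustrative sketch of the two fixes described above -- names are assumptions, not the real code.
import uuid

from pydantic import BaseModel, ConfigDict
from sqlalchemy import Column, Integer, String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class ShortURL(Base):
    __tablename__ = "urls"

    # Fix 1: declare the column as a real PostgreSQL UUID, not VARCHAR,
    # so SQLAlchemy can compare it against uuid.UUID values in queries.
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    user_id = Column(UUID(as_uuid=True), index=True)
    short_code = Column(String(16), unique=True)
    total_clicks = Column(Integer, default=0)


class ShortURLResponse(BaseModel):
    # Fix 2: the response schema's field names must match the ORM attributes
    # (total_clicks here, not clicks_total), or FastAPI's response validation fails.
    model_config = ConfigDict(from_attributes=True)

    id: uuid.UUID
    short_code: str
    total_clicks: int
```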
A few weeks ago, I took on an end-to-end Docker project thinking it would be straightforward. I came out thinking completely differently about how software is actually built.

Here is what I built: 🐳 A multi-container data architecture, a Streamlit application talking to a PostgreSQL database, orchestrated seamlessly with Docker Compose.

The data flow looks like this:
→ A user uploads a CSV through the Streamlit UI.
→ SQLAlchemy processes and persists the data into PostgreSQL.
→ The database serves summary statistics back to the interface.
→ All of this runs in isolated containers that find each other by name, not by IP.

Three things I will never forget from this project:

1️⃣ Your Dockerfile order is a caching strategy, not a style choice. Code changes constantly - so COPY . . goes at the bottom. Dependencies change rarely, so pip install goes at the top. Flip that, and every tiny typo fix costs you a full 3-minute rebuild.

2️⃣ Volumes are what give databases a memory. Containers are ephemeral by design. Without a named volume mounted to /var/lib/postgresql/data, every docker compose down takes your data with it.

3️⃣ POSTGRES_HOST=app-db just works. Docker Compose creates an internal network and handles DNS resolution automatically. Your app never needs to know a container's dynamic IP address.

Shoutout to the Data Engineering Community for being part of this space that keeps pushing me to go deeper than just "making things work." That accountability matters.

If you are learning Docker or containerizing your first data pipeline, the repo is open. Clone it, break it, learn from it. Links to the code and my full architectural write-up are in the comments! 👇

#DataEngineering #Docker #Data #Python #Streamlit #PostgreSQL #LearningInPublic #SoftwareArchitecture #DevOps #BuildInPublic
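Editor's note: the upload → persist → summarize flow above can be sketched in a few lines of Python. This assumes the standard Streamlit, pandas, and SQLAlchemy APIs and a compose service named app-db (as in the post); the table name and credentials are illustrative placeholders, not the project's actual code.

```python
# Minimal sketch of the CSV upload -> PostgreSQL -> summary-stats flow described above.
import os

import pandas as pd
import streamlit as st
from sqlalchemy import create_engine, text

# Inside the compose network, the hostname is the service name, not an IP address.
DB_HOST = os.getenv("POSTGRES_HOST", "app-db")
DB_URL = f"postgresql+psycopg2://postgres:{os.getenv('POSTGRES_PASSWORD', '')}@{DB_HOST}:5432/postgres"

engine = create_engine(DB_URL)

uploaded = st.file_uploader("Upload a CSV", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    # Persist the upload into PostgreSQL via SQLAlchemy.
    df.to_sql("uploads", engine, if_exists="append", index=False)

    # Serve summary statistics back to the interface from the database.
    with engine.connect() as conn:
        row_count = conn.execute(text("SELECT COUNT(*) FROM uploads")).scalar()
    st.metric("Rows stored", row_count)
    st.dataframe(df.describe())
```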
Building a URL Shortener sounds simple until you have to handle database collisions and clean API redirects. 🚀

Hey LinkedIn family! 👋 Saif here.

I recently wrapped up a new Backend project: a Production-Ready URL Shortener API. My goal wasn't just to make it work, but to understand how to build scalable, containerized backend systems.

The Features (What it does)
Short-Code Generation: Custom logic to create unique, collision-resistant URLs.
Smart Redirects: Handling 302 redirects with real-time click tracking.
Analytics: Dedicated endpoints to monitor URL performance.
URL Management: Ability to deactivate links on the fly.

The "Under the Hood" (The Deep Tech)
This is where the real learning happened. I didn't just write Python; I built a mini-infrastructure:
FastAPI & Pydantic: For strict data validation and lightning-fast performance.
PostgreSQL & SQLAlchemy: Managing relational data with clean ORM patterns.
Alembic: Handling database migrations (version control for my DB schema).
Dockerized Environment: I used Docker to isolate the PostgreSQL environment, managing complex port mappings to avoid host-system conflicts.

The Tech Stack
🛠 Backend: FastAPI, Python 3.12
🗄 Database: PostgreSQL, SQLAlchemy (ORM)
🔄 Migrations: Alembic
🐳 Infrastructure: Docker & Docker Compose

What's Next?
Currently, it's running perfectly in my local Docker environment. The next step? I'm moving it to AWS (EC2/RDS) to learn cloud deployment and security groups. Stay tuned—I'll be making the API live in a few days!

I'd love to hear your thoughts on the architecture.

#Python #FastAPI #BackendDevelopment #Docker #PostgreSQL #SoftwareEngineering #AWS
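Editor's note: the post calls out collision-resistant short-code generation as the interesting part. One common way to implement it is random codes with a retry on collision; the sketch below shows that approach under stated assumptions (function name, alphabet, and retry count are illustrative, and `exists` stands in for whatever database lookup the project uses).

```python
# A sketch of collision-resistant short-code generation -- one common approach,
# not necessarily the one used in the project described above.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits


def generate_short_code(exists, length: int = 7, max_attempts: int = 5) -> str:
    """Generate a random code, retrying if `exists(code)` reports a collision.

    `exists` is any callable that checks the database, e.g. a SELECT against a
    unique-indexed short_code column.
    """
    for _ in range(max_attempts):
        code = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if not exists(code):
            return code
    # With 62**7 possible codes, repeated collisions signal something else is wrong.
    raise RuntimeError("could not find a free short code after several attempts")
```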
I built an Apache Airflow 3 orchestration setup to run batch jobs from CSV files.

Runs are API-triggered, not cron. Something upstream drops a file in S3, sends a small JSON conf (date folder, run UUID, job type), and Airflow takes it from there.

𝗛𝗼𝘄 𝗼𝗻𝗲 𝗿𝘂𝗻 𝗳𝗹𝗼𝘄𝘀:
→ Download file from S3 (predictable path: bucket + prefix + date + UUID)
→ Split into CSV chunks for parallel processing
→ Each chunk runs as its own task (controlled via max parallelism, pools, and chunk size)
→ Adaptive chunk sizing: picks chunk size based on row count to avoid over-fragmenting small runs or creating monster chunks on large ones
→ Chunk workers route to the right pipeline module - same DAG handles staging and production via env flag
→ Outputs merged into final CSVs, uploaded back to S3 alongside the run's inputs
→ On failure: downstream systems get notified so Slack/UI can mark the run failed, not just "red in Airflow"
→ Every batch can be paused and resumed; on cancellation, resources are released cleanly.

𝗪𝗵𝗮𝘁 𝗜 𝗯𝘂𝗶𝗹𝘁:
• Standardized DAG pattern across all partner workflows (download → split → process → merge → upload + failure hooks)
• Config via Airflow Variables (JSON per DAG) - paths, S3, chunk size, parallelism, pools, adaptive chunking toggles - ops can tune without touching DAG code
• Global and per-run cancel controls to stop bad batches cleanly
• GitLab CI pipeline: syntax checks all Python under dags/ and scripts/ before merge to main
• EC2 deploy: git pull + systemctl restart across all Airflow 3 services (API, scheduler, DAG processor, triggerer); the DAG processor is its own process in v3, so if it's unhealthy, DAGs won't load reliably

The business logic came from older scripts we migrated over. My focus was orchestration, config surfaces, deploy discipline, and making runs predictable and operable in production.

#Airflow #Orchestration #Python #AWS #BatchProcessing
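Editor's note: "adaptive chunk sizing" as described above can be captured in a small pure-Python helper, which keeps the Airflow-specific plumbing out of the picture. The thresholds, target chunk count, and function name below are illustrative assumptions, not the production values.

```python
# Hedged sketch of adaptive chunk sizing: pick a chunk size from the row count so small
# runs aren't over-fragmented and large runs don't produce monster chunks.
def adaptive_chunk_size(total_rows: int,
                        min_chunk: int = 1_000,
                        max_chunk: int = 50_000,
                        target_chunks: int = 20) -> int:
    """Return a chunk size yielding roughly `target_chunks` chunks, clamped to sane bounds."""
    if total_rows <= min_chunk:
        return total_rows or 1           # tiny run: a single chunk
    raw = total_rows // target_chunks    # aim for ~target_chunks parallel tasks
    return max(min_chunk, min(raw, max_chunk))


# Example: a 4M-row batch gets 50k-row chunks (capped), a 30k-row batch gets 1.5k-row chunks.
print(adaptive_chunk_size(4_000_000))  # -> 50000
print(adaptive_chunk_size(30_000))     # -> 1500
```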
🧪 #PythonJourney | Day 149 — Testing API Endpoints & Validating Backend

Today was about validating that everything works end-to-end. After days of building, it was time to test the actual API with real requests.

Key accomplishments:

✅ All 8 API endpoints are functional:
• POST /api/v1/urls (create shortened URL)
• GET /api/v1/urls (list user's URLs)
• GET /api/v1/urls/{id} (get URL details)
• GET /api/v1/urls/{id}/analytics (get analytics)
• DELETE /api/v1/urls/{id} (soft delete)
• GET /{short_code} (redirect & track)
• GET /health (health check)

✅ Database integration fully operational:
• User authentication via API key works
• URL creation with validation
• Click tracking with proper foreign keys
• Analytics aggregation ready

✅ Docker environment stable:
• PostgreSQL 15 storing data correctly
• Redis 7 ready for caching
• FastAPI container running smoothly
• All services healthy

✅ Tested with curl:
• Health check endpoint responds
• API authentication working
• Request/response validation functioning
• Error handling in place

✅ Code committed to GitHub:
• Clean commits with meaningful messages
• Full project history tracked
• Ready for collaboration

What I learned today:
→ End-to-end testing reveals integration issues early
→ API key authentication is simple but effective
→ Docker composition makes local development seamless
→ Curl is a powerful tool for API testing
→ Validating one endpoint at a time saves debugging time

The backend is now production-ready in terms of basic functionality. Next: comprehensive testing with pytest and then deployment.

Current status:
- Backend: ✅ Functional
- Database: ✅ Operational
- API Endpoints: ✅ All working
- Docker: ✅ Stable
- Tests: ⏳ Next step
- Deployment: ⏳ After tests

#Python #FastAPI #API #Testing #Backend #Docker #PostgreSQL #SoftwareDevelopment #DevOps
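Editor's note: the post names pytest as the next step after manual curl checks. A minimal sketch of that step using FastAPI's TestClient is below; the import path (app.main), the X-API-Key header name, and the request payload are assumptions about the project, not its actual code.

```python
# Sketch of the pytest step mentioned above, using FastAPI's TestClient.
from fastapi.testclient import TestClient

from app.main import app  # hypothetical module layout

client = TestClient(app)


def test_health_check():
    response = client.get("/health")
    assert response.status_code == 200


def test_create_url_requires_api_key():
    # Without the API key header, the endpoint should reject the request.
    response = client.post("/api/v1/urls", json={"target_url": "https://example.com"})
    assert response.status_code in (401, 403)


def test_create_url():
    response = client.post(
        "/api/v1/urls",
        json={"target_url": "https://example.com"},
        headers={"X-API-Key": "test-key"},  # assumed header name
    )
    assert response.status_code in (200, 201)
    assert "short_code" in response.json()
```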
NOBOT - Status Update: Docker, ORM, and DB Design 🚀

Building in public means showing the process, even when it's not "pixel-perfect". Here's what's happening with NOBOT:

1. Why PostgreSQL? 🐘
After a long brainstorming session (SQLite vs. one central hub), I decided to bet on PostgreSQL. With all the relations I have planned, Postgres just felt like the right, solid choice for the backbone of the system.

2. The Docker Dopamine Hit 🐳
There's nothing like the satisfaction of seeing docker-compose up working for the first time. I just pushed the Dockerfile and docker-compose.yml to GitHub. Seeing the DB and data services running in containers gave me a huge smile – a small win for the infrastructure!

3. From Pydantic Contracts to ORM 🧹
I'm using Contract-Driven Development, so I started with Pydantic models to define how data should flow. Now I'm bridging that with an ORM. It lets me map my contracts directly to the database, keeps the code clean, and saves me from writing manual SQL modules.

4. Thinking Out Loud 🧠
The attached diagram is my current "brain dump." It's changing every hour as I review my early contracts and refine the logic. Visualizing the schema helps me find edge cases before they turn into annoying bugs.

I'm curious — what are your favorite tools for database design? I'm using dbdiagram.io right now, but sometimes I still think about the chaotic energy of Microsoft Paint! 😂

#BuildInPublic #Python #PostgreSQL #Docker #AI #RPA #SoftwareEngineering #NOBOT #IDP
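Editor's note: the "Pydantic contract first, ORM second" bridge described in point 3 is often done by defining the contract, mirroring it in an ORM model, and validating rows straight back into the contract. The sketch below shows that pattern under stated assumptions; NOBOT's real contracts and field names will differ.

```python
# Sketch of a contract-driven Pydantic -> ORM bridge; names and fields are purely illustrative.
import uuid

from pydantic import BaseModel, ConfigDict
from sqlalchemy import Column, String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class DocumentContract(BaseModel):
    """Pydantic contract: defines how document data flows through the system."""
    model_config = ConfigDict(from_attributes=True)

    id: uuid.UUID
    source: str
    status: str


class DocumentRecord(Base):
    """ORM model mirroring the contract: same fields, persisted in PostgreSQL."""
    __tablename__ = "documents"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    source = Column(String(255), nullable=False)
    status = Column(String(32), default="received")


# Reading back: an ORM row validates directly into the contract, no hand-written SQL mapping:
# contract = DocumentContract.model_validate(session.get(DocumentRecord, some_id))
```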
FastAPI + Supabase + Docker: The Modern Developer's Power Trio?

I recently built a Task Management API, and here's why I'm never going back to old-school setups:

✅ FastAPI: Because life is too short for slow APIs and manual documentation. (Swagger UI is a lifesaver! 📖)
✅ Supabase: All the power of PostgreSQL without the headache of managing a local DB.
✅ Docker: Wrapping it all up in a container so I never have to hear "But it was working 5 minutes ago..." 🐳

The Result? A professional-grade CRUD application that's ready for the cloud.

But honestly? The biggest takeaway wasn't the code. A few months ago, I genuinely thought Backend Engineering wasn't for me. I was underestimating my own capabilities, fearing the "unknown" parts of architecture, and honestly, I was scared of failing at it.

This project showed me the true essence of what Backend Engineering is. It's not just about writing main.py; it's about the discipline of managing .dockerignore, securing .env files, and understanding how different systems talk to each other.

Completing this didn't just give me an app; it helped me face my fear of failure. I'm proud of where I am today compared to where I was 3 months ago.

P.S. I also finally remembered my LinkedIn password after being inactive for months. I guess consistency starts with logging in first! 😂

📁 Check out the journey on GitHub: https://lnkd.in/d3eNgEGt

#Python #DevOps #FastAPI #Supabase #Containerization #SoftwareEngineering #GrowthMindset #BuildingInPublic #BackendDeveloper
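Editor's note: the combination described above (FastAPI CRUD over a Supabase-hosted PostgreSQL database, with credentials kept in .env rather than in code) can be sketched roughly as follows. The DATABASE_URL variable, the tasks table, and the endpoint shapes are assumptions about the project, not its actual implementation, and Pydantic v2 plus python-dotenv are assumed.

```python
# Minimal sketch: FastAPI CRUD backed by a Supabase PostgreSQL connection string from .env.
import os

from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import create_engine, text

load_dotenv()  # reads DATABASE_URL from a .env file kept out of version control
engine = create_engine(os.environ["DATABASE_URL"])

app = FastAPI(title="Task Management API")


class TaskIn(BaseModel):
    title: str
    done: bool = False


@app.post("/tasks", status_code=201)
def create_task(task: TaskIn):
    with engine.begin() as conn:
        row = conn.execute(
            text("INSERT INTO tasks (title, done) VALUES (:title, :done) RETURNING id"),
            {"title": task.title, "done": task.done},
        ).one()
    return {"id": row.id, **task.model_dump()}


@app.get("/tasks/{task_id}")
def get_task(task_id: int):
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT id, title, done FROM tasks WHERE id = :id"), {"id": task_id}
        ).one_or_none()
    if row is None:
        raise HTTPException(status_code=404, detail="task not found")
    return dict(row._mapping)
```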
New learning and self-improvement

Modern backend systems often don't fail because of scale alone — they struggle due to complexity. In a recent architecture redesign, the focus was on simplifying how dynamic, large-scale form data is handled while improving performance, maintainability, and developer experience.

The shift (as shown in the diagram):
🔹 Moved from rigid column-based schema → flexible JSONB-based storage
🔹 Replaced heavy raw SQL usage with clean Python ORM-driven data access
🔹 Introduced structured payload handling with clear state management (status-driven flow)

⚙️ Backend Architecture Improvements
✔️ Adopted a modular design using Django applications for better separation of concerns
✔️ Implemented class-based views for cleaner, reusable API logic
✔️ Structured API routing using Django Ninja Router for better organization and scalability
✔️ Reduced the number of APIs by consolidating responses into optimized, meaningful endpoints
✔️ Designed APIs in collaboration with frontend to ensure smooth data flow and minimal overhead

📦 Data Handling Strategy
Instead of creating hundreds of columns for dynamic forms:
→ Stored complete form responses as JSON objects
→ Managed 300–500+ fields without schema changes
→ Simplified debugging with structured payload visibility
→ Enabled faster iteration without impacting production stability

Processing Flow
User Input → API Validation → Store JSON (status = 0) → Async Processing (Celery + Redis) → Update status = 1 → Dashboard Reflection

🚀 Outcome
✔️ Reduced system complexity significantly
✔️ Improved API performance and response clarity
✔️ Eliminated production risks caused by excessive raw queries
✔️ Created a scalable foundation for handling dynamic data
✔️ Delivered a smoother integration experience for frontend systems

Security is handled using JWT-based authentication with proper token flow. The system continues to evolve with ongoing improvements in validation, background processing, and performance tuning.

#BackendEngineering #Django #Python #SystemDesign #PostgreSQL #APIs #Celery #Redis #JWT #SoftwareArchitecture
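Editor's note: the JSONB-plus-status flow described above (store the whole form response as one JSON object with status = 0, process asynchronously with Celery, then flip status to 1) can be sketched with a Django model and a Celery task. Model, field, and task names are illustrative assumptions; on PostgreSQL, Django's JSONField is backed by a JSONB column.

```python
# Sketch of the JSONB + status-driven flow; requires a configured Django app and Celery worker.
from celery import shared_task
from django.db import models


class FormSubmission(models.Model):
    STATUS_PENDING = 0
    STATUS_PROCESSED = 1

    # Whole form response stored as one JSON object: 300-500+ dynamic fields,
    # no schema migration needed when the form changes.
    payload = models.JSONField()
    status = models.IntegerField(default=STATUS_PENDING)
    created_at = models.DateTimeField(auto_now_add=True)


@shared_task
def process_submission(submission_id: int) -> None:
    """Async step: validate/enrich the stored payload, then flip status 0 -> 1."""
    submission = FormSubmission.objects.get(pk=submission_id)
    # ... domain processing on submission.payload goes here ...
    submission.status = FormSubmission.STATUS_PROCESSED
    submission.save(update_fields=["status"])
```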
I used to think version control was for "proper" software engineers. Not for analysts writing SQL in a UI. Not for me.

Then I broke a scheduled query that had been running for two years. Nobody knew who wrote it, nobody knew what it was feeding, and nobody had any way of knowing what it looked like before I touched it.

That was the moment I stopped thinking of Dataform as a DevOps thing and started thinking of it as a self-preservation thing.

The shift isn't really about Git. It's about being able to answer the question "what changed and when?" without it being a full investigation. In BigQuery's UI, that question has no good answer. In Dataform, it's just a commit history.

If you're still running scheduled queries directly in BigQuery, I'm not here to tell you you're doing it wrong. But I will say: the day something breaks in a pipeline with no version history is a very specific kind of stressful.

Have you made the move to Dataform yet? And if so, what finally pushed you to do it?

#BigQuery #Dataform #DataEngineering