"That's one small step for a man, one giant leap for my backend." 🚀 Today I migrated my Habitual API from SQLite to PostgreSQL. It worked locally - and immediately broke in CI. The technical blocker 🛠️ The heatmap endpoint generates a date range for the last 30 days using a recursive CTE. In SQLite this worked: func.date(func.now(), "-29 days") PostgreSQL doesn't support this modifier syntax. One line, two hours of debugging. I ended up moving the date generation to Python instead of SQL. Cleaner and more portable. The CI struggle 🏗️ Seven commits. Six failures. All named "Little update for CI" - at that point naming stopped mattering (see screenshot 👇). The root cause: my local environment had pinned versions that weren't reflected in requirements.txt. CI pulled newer packages - and everything fell apart. After a few iterations: ✅ 271 tests passed ✅ PostgreSQL running in CI ✅ migrations applied on every push ✅ clean pipeline Next step: Docker and CD. #python #fastapi #postgresql #githubactions #backend
Migrating Habitual API from SQLite to PostgreSQL
More Relevant Posts
A solid example of how a backend system’s core architecture comes together in practice: authentication, security, and data handling. A great foundation for further scaling.
I just built a Task Management REST API using FastAPI and PostgreSQL.

Over the past week I focused on building a backend system where users can securely manage their own tasks.

Key features I implemented:
✅ User signup and login with JWT authentication
✅ Password hashing using bcrypt
✅ Protected task routes
✅ Full CRUD operations
✅ PostgreSQL database integration using SQLAlchemy

This project helped me understand how real backend systems manage authentication and enforce task ownership per user.

Next step: deploying this API online and containerizing it with Docker.

GitHub Repo: https://lnkd.in/dUFGBFMx

#python #backend #fastapi #restapi #softwareengineering #learninginpublic
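As a minimal sketch of the JWT-protection and per-user ownership idea this post describes (assuming PyJWT; the names and secret handling are illustrative, not the repo's actual code):

```python
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer
import jwt  # PyJWT

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="login")
SECRET_KEY = "change-me"  # load from the environment in a real app

def get_current_user_id(token: str = Depends(oauth2_scheme)) -> int:
    # Decode and verify the bearer token; reject anything invalid with 401.
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        return int(payload["sub"])
    except (jwt.PyJWTError, KeyError, ValueError):
        raise HTTPException(status_code=401, detail="Invalid token")

@app.get("/tasks")
def list_tasks(user_id: int = Depends(get_current_user_id)):
    # Task ownership is enforced by scoping every query to the caller's id.
    return {"owner": user_id, "tasks": []}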
Built a production-style backend system from scratch using FastAPI and PostgreSQL.

Core components:
- REST API with FastAPI
- PostgreSQL database with SQLAlchemy ORM
- JWT-based authentication (access + refresh tokens)
- User management (signup, login, update, delete)
- File upload and download system with integrity checks (SHA-256)

System design:
- Modular architecture (routers, models, schemas, utils)
- Separation of concerns across layers
- Database schema with relationships and constraints
- Secure password handling with bcrypt

What this phase focused on:
- structuring a backend like a real system, not a script
- handling data flow from request → database → response
- enforcing validation and consistency
- building endpoints that are actually usable and testable

GitHub: https://lnkd.in/dEEsGg3G
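For the SHA-256 integrity check on uploads, a minimal sketch (hypothetical endpoint, not this repo's code) of streaming a file through a hash and rejecting mismatches:

```python
import hashlib

from fastapi import FastAPI, HTTPException, UploadFile

app = FastAPI()

@app.post("/files")
async def upload_file(file: UploadFile, expected_sha256: str):
    # Stream the upload through SHA-256 in 1 MiB chunks so large files
    # never have to sit fully in memory.
    digest = hashlib.sha256()
    while chunk := await file.read(1024 * 1024):
        digest.update(chunk)
    if digest.hexdigest() != expected_sha256.lower():
        raise HTTPException(status_code=422, detail="Checksum mismatch")
    return {"filename": file.filename, "sha256": digest.hexdigest()}
```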
Day 2/60: Production Infrastructure That Actually Scales

What Most Developers Do: Start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I Built Today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + Backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on database layer

Technical Decisions:
- Async Everything: Non-blocking I/O handles 100 concurrent users on a single thread
- Connection Pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
- Health Checks: pg_isready with retry logic, services wait for dependencies
- Type Safety: mypy --strict passes, Mapped[T] catches bugs before runtime

Architecture Highlight: A DatabaseManager singleton manages the lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why It Matters: Technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's Working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 Tomorrow: Database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium Blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
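A minimal sketch of the pooling decision above - bounded QueuePool-style settings for PostgreSQL, NullPool for SQLite (the connection URL and session setup are assumptions, not the project's code):

```python
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
from sqlalchemy.pool import NullPool

def make_engine(url: str):
    if url.startswith("sqlite"):
        # SQLite is a local file: no server-side connections worth pooling.
        return create_async_engine(url, poolclass=NullPool)
    # PostgreSQL: 5 persistent connections plus up to 10 overflow ("5+10").
    return create_async_engine(url, pool_size=5, max_overflow=10)

engine = make_engine("postgresql+asyncpg://user:pass@localhost/app")
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)
```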
🚀 #PythonJourney | Day 151 — BREAKTHROUGH: API Fully Functional & First Successful Request

Today marks a major milestone: **the URL Shortener API is LIVE and responding correctly!**

After 8 days of building and debugging, I finally got the first successful POST request working. This breakthrough moment proves that all the pieces fit together.

Key accomplishments:

✅ Fixed critical database type mismatch:
• PostgreSQL was storing user_id as VARCHAR
• SQLAlchemy was trying to query with UUID
• Solution: Dropped volumes, rebuilt schema from scratch

✅ Fixed Pydantic response validation:
• Model had clicks_total, database had total_clicks
• Docker image was caching old code
• Solution: Forced rebuild of container image

✅ First successful API call:
• POST /api/v1/urls now returns proper JSON
• Short code generated automatically
• URL stored in database correctly
• Full response validation passing

✅ Production-ready API endpoints confirmed:
• Authentication working (API key validation)
• Request validation (Pydantic models)
• Database operations (CRUD)
• Error handling (proper HTTP status codes)
• Response serialization (JSON output)

✅ Lessons learned about debugging:
• Always check the actual container logs
• Volume management is critical in Docker
• Type consistency across layers matters
• Docker caching can hide recent changes
• Patience and persistence beat quick fixes

What happened today:
→ Identified the root cause through careful log analysis
→ Understood the full request/response cycle
→ Learned when to reset vs. when to patch
→ Experienced the joy of a working API!

The API now successfully:
- Validates user authentication
- Creates shortened URLs with unique codes
- Stores data in PostgreSQL
- Returns properly formatted JSON responses
- Handles errors gracefully

This is what backend development is about: building reliable systems piece by piece, debugging methodically, and celebrating when it finally works.

Status update:
- ✅ Backend: FUNCTIONAL
- ✅ Database: WORKING
- ✅ API Endpoints: RESPONDING
- ✅ Authentication: VERIFIED
- ⏳ Full test suite: Next
- ⏳ Deployment: Next week

#Python #FastAPI #Backend #API #PostgreSQL #Docker #Debugging #SoftwareDevelopment #Victory #CodingJourney
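A minimal sketch of the type-consistency fix described above (a hypothetical model, not the actual project code): the database column, the SQLAlchemy annotation, and the Pydantic schema all agree on UUID and on field names.

```python
import uuid

from pydantic import BaseModel
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class ShortUrl(Base):
    __tablename__ = "urls"
    # Stored as a native PostgreSQL UUID and queried as uuid.UUID:
    # one type across every layer, no VARCHAR/UUID mismatch.
    id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    user_id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), index=True)
    total_clicks: Mapped[int] = mapped_column(default=0)

class ShortUrlOut(BaseModel):
    # Field names mirror the columns exactly ("total_clicks", not "clicks_total").
    id: uuid.UUID
    total_clicks: int

    model_config = {"from_attributes": True}
```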
I rebuilt a FastAPI CLI (FApier) to scaffold full projects in <10 seconds. Then I tried to make it production-ready.

It worked… until I considered real setups: async DBs, env configs, and multiple integrations in one command. That's where things got interesting.

- CLI engine 👉 scaffolds full projects in seconds (no manual setup)
- Async SQLAlchemy + PostgreSQL 👉 ready for high-concurrency APIs
- MongoDB (Motor + Beanie) 👉 alternative data layer in 1 command
- Auto run command 👉 detects env + entry point (0 config)
- Built-in modules 👉 JWT auth, Redis, WebSockets, background jobs

The real challenge was designing a system flexible enough to handle multiple architectures without becoming messy.

- <10s project generation
- 0 manual configuration to run
- multiple stack combinations (SQL + NoSQL + async)

And you can test it by running:

pip install fapier
fapi --help

That's it!

I broke down the full architecture (and trade-offs) here 👇
https://lnkd.in/e5R4H_dn

If you were building this, would you keep everything in one CLI or split generators by use case?

#FastAPI #Python #Backend #OpenSource #CLI #DeveloperTools
🗓️ Release Notes — April 27, 2026

🔎 Span attribute filtering across the stack
Pass attribute filters from the Python client, TypeScript client, REST API, or CLI. Type-aware (int/float/bool/str match their stored types). Filters are ANDed together.
https://lnkd.in/ecJ-PQK9

🔐 Secrets settings page
Admins can add, replace, delete, and search encrypted LLM provider credentials (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) directly in the UI—no REST calls.
https://lnkd.in/eynXBPT2

🧪 Claude Opus 4.7 in the Playground
https://lnkd.in/gwuzYJTk

🧬 `trace_id` in experiment evaluators
Add a `trace_id` kwarg to any evaluator and Phoenix passes the originating trace ID for each run. Works sync/async, function- or class-based. Useful for trajectory evals.
https://lnkd.in/e6_resgS

☁️ Azure Managed Identity for PostgreSQL
Connect Phoenix to Azure Database for PostgreSQL with Entra managed identity—no static DB password required.
https://lnkd.in/euepx9-9

📝 CLI span notes
Add notes via `px span add-note <span-id> --text "..."`, and include notes with `--include-notes` on `px span list` / `px trace get`.
https://lnkd.in/euzV22dy

📌 Full release notes
https://lnkd.in/eFvGJ_Cy
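To make the `trace_id` item concrete, a hedged illustration (the exact evaluator registration API is in the linked docs; this only shows the kwarg opt-in):

```python
# Illustrative only: an evaluator opts in to the originating trace ID simply
# by declaring a `trace_id` parameter; Phoenix supplies it for each run.
def answer_has_citation(output: str, trace_id: str) -> bool:
    print(f"evaluating output from trace {trace_id}")  # e.g. for trajectory evals
    return "[" in output and "]" in output
```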
For a thousand and one reasons, it's been ages since I posted here. And now everyone is saying "build in public"... well, I'm not giving any guarantees, but I will try. (I did this one in private, though; I'm only making a post about it.)

So, as a young boy who cares about developer experience, I built a CLI tool that improves the developer experience for Rust devs. It is called supabase-rust-gen, a binary crate that generates type-safe Rust structs from your Supabase database schema. Like supabase-js type generation, but for the Rust ecosystem.

Manually writing Rust structs for your Supabase tables is tedious and error-prone. Column names change, types drift, and nullable fields get missed. supabase-rust-gen eliminates this by:
- Connecting directly to your Supabase project's PostgREST endpoint
- Reading the OpenAPI spec to understand your exact schema
- Generating idiomatic Rust with proper Serde derives
- Handling edge cases like JSONB, arrays, nullable fields, and PostgreSQL types

That being said, it's now published on crates.io.

Link here: https://lnkd.in/eWYX5pDZ
Repo: https://lnkd.in/epfp6wsD

Test it out, build projects, and give feedback!
⚡ Connection Pooling in FastAPI with PostgreSQL (Why it matters)

When I started building APIs with FastAPI + PostgreSQL, I made a common mistake 👇

👉 Opening a new database connection for every request

It worked… until traffic increased 😅

❌ Problem:
- Too many open connections
- Slower response times
- Database overload

💡 Solution: Connection Pooling
Instead of creating new connections every time, we reuse a pool of existing connections.

✅ Benefits:
✔ Faster API responses
✔ Better resource management
✔ Handles high traffic efficiently

🔧 Example (SQLAlchemy):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@localhost/db",
    pool_size=10,
    max_overflow=20,
    pool_timeout=30,
)

💡 What I learned: If you're building production APIs with FastAPI, connection pooling is not optional — it's essential.

🚀 Next step: Combining this with async DB handling for even better performance

#FastAPI #PostgreSQL #Backend #Python #APIs #WebDevelopment
🗄️ You can’t build RAG without storing vectors somewhere. Choose badly now… pay for it later.

Day 8 of building Brio — I evaluated vector database options and chose PGVector via Neon.

What I considered:
→ Pinecone — polished managed experience, but another service + extra cost
→ Weaviate / Qdrant — powerful, but more infra to manage
→ PGVector on Postgres — familiar, flexible, fewer moving parts

I chose Neon + PGVector. Why:
→ It’s Postgres (I already know how to operate it)
→ Vector similarity search built in
→ Serverless scaling with Neon
→ Works cleanly with Spring AI VectorStore
→ SQL + vectors together = easier metadata filtering

Big lesson: The best engineering choice is rarely the most exciting one. It’s the one you can ship, debug, and maintain without 2am drama.

For an early-stage RAG product: Would you choose a dedicated vector DB… or Postgres + PGVector?

#RAG #SpringAI #Java #AIEngineering #VectorDB #Pgvector #postgres
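A hedged sketch of the "SQL + vectors together" point (Python for brevity; Brio itself uses Spring AI, and the table and column names here are made up): one query mixes an ordinary metadata filter with pgvector's cosine-distance operator.

```python
import psycopg  # psycopg 3

query_embedding = [0.1] * 384  # stand-in for a real embedding
# pgvector accepts a bracketed text literal like "[0.1,0.1,...]".
vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

with psycopg.connect("postgresql://user:pass@localhost/brio") as conn:
    rows = conn.execute(
        """
        SELECT id, content
        FROM documents
        WHERE tenant_id = %s               -- plain SQL metadata filter
        ORDER BY embedding <=> %s::vector  -- cosine distance (pgvector)
        LIMIT 5
        """,
        ("acme", vector_literal),
    ).fetchall()
```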
𝗜 𝗯𝘂𝗶𝗹𝘁 𝗮 𝗥𝗔𝗚-𝗯𝗮𝘀𝗲𝗱 𝗱𝗼𝗰𝘂𝗺𝗲𝗻𝘁 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲 𝗽𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗳𝗿𝗼𝗺 𝘀𝗰𝗿𝗮𝘁𝗰𝗵 🚀

Teams waste hours digging through internal documents for answers. 𝗗𝗼𝗰𝗦𝗲𝗻𝘀𝗲 fixes that — upload documents, ask questions, and get instant, cited answers grounded in your data.

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀:
1. Upload a document → triggers an async Step Functions pipeline that chunks the PDF, generates embeddings (all-MiniLM-L6-v2 via DJL/PyTorch), and indexes vectors into OpenSearch k-NN.
2. A query is embedded into a vector → OpenSearch performs semantic search to retrieve the most relevant chunks.
3. Retrieved context is sent to Bedrock → generates a grounded answer with inline citations.

𝗞𝗲𝘆 𝗵𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀:
- Fully serverless ingestion with S3-staged batching to handle large documents without payload limits
- k-NN vector search with cosine similarity for semantic retrieval (no keyword matching)
- Bedrock Converse API with tool-use to enforce structured, cited outputs

𝗧𝗲𝗰𝗵: Java 24, Spring Boot 3.5, AWS (Bedrock, OpenSearch, Step Functions, Lambda, S3), DJL/PyTorch, PostgreSQL

Sharing the HLD and GitHub link below 👇
GitHub: https://lnkd.in/gsb9ZsV6

𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝘄𝗲𝗹𝗰𝗼𝗺𝗲 — what would you do differently?

#RAG #Java #SpringBoot #AWS #OpenSearch #Bedrock #VectorSearch #SoftwareEngineering #SystemDesign
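A stack-agnostic sketch of the retrieval step in point 2 (Python for brevity; DocSense itself does this in Java against OpenSearch k-NN): embed the query, then rank chunks by cosine similarity.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of the magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec: list[float], chunks: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    # chunks: (text, embedding) pairs produced at ingest time.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```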