Sunday ship: @perryts/postgres is out. A pure-TypeScript Postgres driver that speaks the wire protocol directly. No libpq. No native addons. No FFI.

Why another one?
→ Every Node Postgres driver worth using wraps libpq or ships a platform-specific .node addon. That's a dead end for AOT: Perry compiles TypeScript to a statically linked native binary via LLVM, and there's no V8 at execution time to host a C addon.
→ The Perry-native Postgres GUI this drives (Tusk) needed things most drivers throw away for ergonomics: exact numeric (not float), full column metadata (attnum, tableOid, typmod), structured errors with every documented ErrorResponse field, and raw row bytes on demand.

So: one TypeScript source, three runtime targets.
→ Node.js 22+
→ Bun 1.3+
→ Perry AOT: 4.6 MB static binary, 1.8 MB RSS, single-digit-ms cold start

Honest performance story: V8's JIT beats Perry-native on per-query wall time in a warm, long-running process. Perry wins everywhere else — cold start, memory footprint, deploy size, and the platforms Node and Bun can't reach at all (CLIs, serverless cold paths, mobile, embedded Linux).

What's in the box: SCRAM-SHA-256 / MD5 / cleartext auth, TLS with mid-stream upgrade, simple + extended query protocols, 20 type codecs, exact numeric via a Decimal wrapper, structured PgError, the cancel protocol, LISTEN/NOTIFY, a connection pool, transactions, libpq-style URLs, PG* env vars.

MIT. Feedback welcome. https://lnkd.in/dyeDTJG7
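The "exact numeric (not float)" point deserves a concrete illustration. A quick stdlib sketch, using Python's decimal module rather than the driver's own Decimal wrapper, of why decoding a Postgres numeric column to a binary float silently corrupts values:

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly,
# so a driver that decodes numeric columns to float loses precision.
assert 0.1 + 0.2 != 0.3                                    # float arithmetic drifts
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")   # exact decimal arithmetic

# Round-tripping money through float is how cents go missing.
price = Decimal("19.99") * 3
assert price == Decimal("59.97")
```

The same reasoning is why a driver aimed at a database GUI keeps raw row bytes available: once a lossy decode happens, the original value is unrecoverable.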
Ralph Kuepper’s Post
More Relevant Posts
-
Leveling up the backend journey! After mastering core REST APIs and demystifying Spring Security with JWTs in my previous projects, it was time to push the boundaries with complex data relationships and a brand-new database.

What happened: I recently designed and built "Quill", a fully-fledged blogging and article publishing platform. While I had a blast designing the frontend to bring the platform to life, my true mission was under the hood: architecting a scalable backend capable of handling posts, user interactions, and media.

What I learned: This project was a massive step up in complexity. I engineered the backend with Java and Spring Boot, but this time I leveled up my tech stack:
🔹 PostgreSQL debut: This was my first time using Postgres, and honestly? It was incredibly fun! Transitioning to it gave me a fresh perspective on database management, and I really enjoyed leveraging its robustness for this project.
🔹 Complex data relationships: I went deep into Spring Data JPA, mapping complex One-to-Many and Many-to-Many relationships across Users, Posts, Comments, and Tags without compromising query performance.
🔹 Multipart file handling: I stepped beyond pure text/JSON data and implemented a custom Image Controller to securely handle, store, and serve multipart file uploads for article cover images.
🔹 Security at scale: I carried over the custom JWT authentication architecture from my previous "LetsGo" project, applying it to a much larger surface area to protect diverse endpoints, user roles, and content ownership.

Key takeaway: Building "Quill" taught me that a well-structured database schema is the heartbeat of any good application. Moving to Postgres and handling complex table relations proved that when your backend architecture is solid, scaling the rest of the application feels natural.

GitHub link: https://lnkd.in/gxz9a7eY

What was your experience like when you first switched databases, or when you first tried PostgreSQL?
Let me know in the comments! 👇 #Java #SpringBoot #PostgreSQL #BackendDevelopment #DatabaseDesign #RESTAPI #ProjectBasedLearning #SoftwareEngineering #LearningInPublic #DeveloperJourney
-
Built a production-grade backend from scratch — here's what I learned.

TaskAlloc is an employee and task allocation REST API I built with FastAPI and PostgreSQL. Not a tutorial follow-along: I designed the architecture, made the decisions, and figured out why things break.

What's under the hood:
→ 3-tier role system (Admin / Manager / Employee) with access enforced at the query layer, not just filtered in the response
→ JWT auth with refresh token rotation. Raw tokens never touch the database; only SHA-256 hashes are stored. If the DB leaks, the tokens are useless.
→ Task state machine: PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED. Invalid transitions are rejected before any database write.
→ Middleware that auto-logs every mutating request: who did it, what resource they touched, and the HTTP status code
→ 67 passing tests against in-memory SQLite. No external database needed to run the suite.

35+ endpoints. Soft delete. UUID primary keys. Docker + Docker Compose. Full Swagger docs.

The thing that surprised me most was how much I learned from trying to do things the right way — not "make it work" but "make it work correctly." Things like why audit logs shouldn't have a foreign key to users, or why you write the activity log before the status update commits.

GitHub in the comments.

#FastAPI #Python #BackendDevelopment #PostgreSQL #SoftwareEngineering #BuildingInPublic #OpenToOpportunities #Development
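A task state machine like the one described above is usually just a transition table checked before anything touches the database. A minimal sketch (the `Status` enum and `ALLOWED` table are illustrative, not TaskAlloc's actual code; in particular, whether UNDER_REVIEW can bounce back to IN_PROGRESS is an assumption):

```python
from enum import Enum

class Status(str, Enum):
    PENDING = "PENDING"
    IN_PROGRESS = "IN_PROGRESS"
    UNDER_REVIEW = "UNDER_REVIEW"
    COMPLETED = "COMPLETED"

# Each state maps to the set of states it may legally move to.
ALLOWED = {
    Status.PENDING: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.UNDER_REVIEW},
    Status.UNDER_REVIEW: {Status.COMPLETED, Status.IN_PROGRESS},  # assumed: review can bounce back
    Status.COMPLETED: set(),
}

def transition(current: Status, new: Status) -> Status:
    """Validate a status change; reject it before any database write."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.value} -> {new.value}")
    return new
```

Because the check runs before the write, an invalid request never produces a partial update to roll back.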
-
Day 2/60: Production Infrastructure That Actually Scales

What most developers do: start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I built today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on the database layer

Technical decisions:
- Async everything: non-blocking I/O handles 100 concurrent users on a single thread
- Connection pooling: QueuePool (5 + 10 overflow) for PostgreSQL, NullPool for SQLite
- Health checks: pg_isready with retry logic; services wait for their dependencies
- Type safety: mypy --strict passes; Mapped[T] catches bugs at type-check time

Architecture highlight: a DatabaseManager singleton manages the lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why it matters: technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 tomorrow: database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
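The "async everything" decision rests on one property: while one coroutine awaits the database, the event loop runs the others on the same thread. A self-contained asyncio sketch (with `asyncio.sleep` standing in for query latency; no real database involved):

```python
import asyncio
import time

async def fake_query(i: int) -> int:
    await asyncio.sleep(0.1)   # stand-in for non-blocking DB I/O
    return i

async def main() -> float:
    start = time.perf_counter()
    # 100 "queries" overlap on one thread instead of running serially.
    results = await asyncio.gather(*(fake_query(i) for i in range(100)))
    assert results == list(range(100))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# Run serially these would take ~10s; overlapped, roughly one sleep's worth.
assert elapsed < 1.0
```

This is exactly the shape that lets a single worker absorb many concurrent users, provided every call in the request path really is non-blocking.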
-
Part 1: Architecture & Real-World System Design

Modern backend systems don't break because of scale alone — they break under complexity. In a recent redesign, the focus was on simplifying how large, dynamic form data is handled while improving performance, maintainability, and the developer experience.

📊 The shift:
🔹 From a rigid column-based schema → flexible JSONB-based storage
🔹 From heavy raw SQL → clean ORM-driven queries
🔹 From scattered APIs → structured, minimal endpoints

⚙️ Architecture improvements
✔️ Modular design using separate Django applications
✔️ Class-based views for reusable, maintainable logic
✔️ API structuring using Django Ninja routers
✔️ Fewer APIs by consolidating responses
✔️ Close alignment with the frontend on payload and contract design

📦 Data handling strategy
Instead of creating hundreds of columns for dynamic forms:
→ Stored complete form responses as JSON objects
→ Handled 300–500+ fields without schema changes
→ Simplified debugging with structured payloads
→ Enabled faster iteration without production risk

🔄 Processing flow
User input → API validation → store JSON (status = 0) → async processing (Celery + Redis) → update status = 1 → dashboard reflects updates in real time

🚀 Outcome
✔️ Reduced schema complexity
✔️ Improved API performance
✔️ Avoided production issues caused by raw queries
✔️ Built a scalable, flexible backend system
✔️ Delivered smoother frontend–backend integration

Security is handled via JWT-based authentication with a proper token flow. Still evolving, with ongoing improvements to performance, validation, and system design.

#BackendEngineering #Django #Python #SystemDesign #PostgreSQL #APIs #Celery #Redis #JWT
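The store-then-process flow above can be sketched end to end with nothing but the stdlib: sqlite3's TEXT column standing in for Postgres JSONB, and a plain function standing in for the Celery worker (the table and column names here are made up for illustration):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE form_response (
        id INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,              -- full form as JSON; JSONB in Postgres
        status INTEGER NOT NULL DEFAULT 0   -- 0 = received, 1 = processed
    )
""")

def ingest(form: dict) -> int:
    """API side: validate lightly, store the whole payload, status = 0."""
    cur = conn.execute(
        "INSERT INTO form_response (payload, status) VALUES (?, 0)",
        (json.dumps(form),),
    )
    return cur.lastrowid

def process_pending() -> int:
    """Worker side (Celery + Redis in the real system): mark rows processed."""
    rows = conn.execute(
        "SELECT id, payload FROM form_response WHERE status = 0"
    ).fetchall()
    for row_id, payload in rows:
        _form = json.loads(payload)  # 300-500+ fields, no schema change needed
        conn.execute("UPDATE form_response SET status = 1 WHERE id = ?", (row_id,))
    return len(rows)

rid = ingest({"field_001": "yes", "field_002": 42})
assert process_pending() == 1
(status,) = conn.execute(
    "SELECT status FROM form_response WHERE id = ?", (rid,)
).fetchone()
assert status == 1
```

The status column is what makes the flow debuggable: the dashboard only ever needs to distinguish "received" from "processed", regardless of what the form contains.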
-
156 SQL migrations and no backend server. That's what the mydba.dev architecture looks like after a year of development. Zero backend application code. Every API endpoint is a PostgreSQL function.

The stack is almost absurdly simple:
• React + TypeScript frontend (Vercel)
• PostgREST auto-generates REST endpoints from the database schema
• Clerk JWTs validated via JWKS
• Row-level security handles authorization
• A Go collector writes metrics directly to PostgreSQL

Adding a new API endpoint means writing `CREATE FUNCTION` in a SQL migration file. Not a route handler. Not a controller class. Just SQL.

𝗪𝗵𝗮𝘁 𝘄𝗼𝗿𝗸𝘀 𝗿𝗲𝗮𝗹𝗹𝘆 𝘄𝗲𝗹𝗹: deployment simplicity. There's no backend to deploy, scale, or monitor. The frontend ships via `git push`; database changes ship via migration files. That's the entire deployment process. Performance is excellent, too: PostgREST is fast, and PostgreSQL functions with proper indexes are fast. No ORM overhead, no serialization layers, no N+1 query problems. The database IS the truth.

𝗪𝗵𝗮𝘁'𝘀 𝗴𝗲𝗻𝘂𝗶𝗻𝗲𝗹𝘆 𝗵𝗮𝗿𝗱: debugging SQL functions is painful compared to stepping through Python or Go. Stack traces are cryptic. Testing is awkward: you're essentially writing integration tests against a real database.

Schema migrations on compressed TimescaleDB hypertables are a special kind of adventure. You can't just ALTER TABLE casually when columnar compression is enabled. I've built patterns around it (rename tables, security-barrier views, careful migration ordering), but it's complexity a normal backend wouldn't have.

There's no middleware layer, either. Cross-cutting concerns like request logging, rate limiting, and input validation all need creative solutions. Some of those solutions are elegant. Some are ugly. All of them live in SQL.

Would I do it again? Absolutely. But I'd invest in better migration tooling earlier. And I'd accept from day one that some things are just harder in SQL, and that it's a worthwhile tradeoff for the simplicity you get everywhere else.

Anyone else running a PostgREST-only architecture in production? I'd love to compare notes.

#PostgreSQL #PostgREST #BuildingInPublic #Architecture #SoftwareEngineering
-
I spent 10 days building an AI agent from scratch. It just triaged a real GitHub issue in 3.88 seconds. Label prediction F1: 0.967. Here's how 10 days changed everything I thought I knew.

It started with a problem I kept seeing. Open-source maintainers get buried in issues. Half are duplicates. Most are mislabeled. Nobody knows which file to look at. And the person who could fix it fastest doesn't even know the issue exists. I thought: what if a bot could read your entire codebase and triage every issue the moment it lands?

Day 1: I didn't even understand how GitHub talks to external apps. Spent hours learning what webhooks are, what HMAC signatures do, why smee.io exists. Got my first real payload at midnight. Stared at the terminal like I'd won something.

Day 3: I learned that splitting code by character count is useless. A function sliced in half means nothing to a retrieval system. Built three separate chunkers: tree-sitter for code, heading-aware for docs, thread-aware for issue conversations.

Day 5: the graph layer clicked. Issues link to PRs. PRs link to files. Files link to other issues. Standard RAG ignores all of that. I wrote a 1-hop expander that follows those edges after retrieval. Two SQL queries. Max 20 neighbor chunks. No N+1.

Day 6: the bot posted its first real GitHub comment. Watched the Celery logs. Saw the webhook arrive. Saw retrieval run. Saw the comment appear. I didn't touch anything. It just worked.

Day 8: production broke me. Supabase needed three separate fixes to connect: statement cache size, the transaction-pooler URL, IPv6. "Just use Postgres" is the biggest lie in deployment.

Then came issue #28. I opened a real issue on my own repo, a question about missing authentication docs. 60 seconds later the bot replied. It cited the exact file: backend/app/api/search.py. It pulled the real env variable names from my codebase: GITHUB_APP_ID, VOYAGE_API_KEY. Not hallucinated. Retrieved from the actual code. Read it. Reasoned from it. Cited it. That's the moment it stopped being a project and became something that actually knows your code.

The final numbers:
→ Label F1: 0.967
→ Precision: 0.950 / Recall: 1.000
→ Latency: 3.88 s avg, 4.24 s p95
→ 150 tests, 0 failing, 102 commits
→ Deployed: Render + Supabase + Upstash + Qdrant Cloud + Vercel

10 days ago I was mass-applying to internships, hoping someone would notice my resume. Today I have a deployed AI agent, real eval metrics, and a build-in-public series that taught me more than any course ever did. The resume got better. But that's not the point. The point is I can build things now.

If you want the full architecture breakdown:
→ Comment "SHIPPED"
→ Connect with me
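The Day 5 idea is easy to sketch: after vector retrieval returns seed chunks, follow the graph edges (issue→PR, PR→file, file→issue) once and append the neighbors, capped. An in-memory illustration with a plain adjacency dict standing in for the two SQL queries (all node names hypothetical):

```python
from collections import defaultdict

# Undirected edges between artifacts: issues, PRs, files.
EDGES: dict[str, set[str]] = defaultdict(set)

def link(a: str, b: str) -> None:
    EDGES[a].add(b)
    EDGES[b].add(a)

link("issue:12", "pr:34")
link("pr:34", "file:auth.py")
link("file:auth.py", "issue:7")

def expand_one_hop(seeds: list[str], cap: int = 20) -> list[str]:
    """Append up to `cap` 1-hop neighbors of the retrieved seeds.
    In the real system this is two SQL queries over an edge table,
    not a dict walk; the cap prevents graph fan-out from drowning
    the original retrieval results."""
    seen = set(seeds)
    neighbors: list[str] = []
    for node in seeds:
        for nb in sorted(EDGES[node]):
            if nb not in seen and len(neighbors) < cap:
                seen.add(nb)
                neighbors.append(nb)
    return seeds + neighbors

assert expand_one_hop(["issue:12"]) == ["issue:12", "pr:34"]
```

The point of stopping at one hop with a hard cap is predictable latency: the expansion cost is bounded no matter how densely linked the repository is.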
-
Why I stopped relying on .filter(tenant=user.tenant) for multi-tenancy.

When you're starting out, queryset filtering in Django feels like the standard way to handle multi-tenancy. You add a tenant_id to your models, and you remember to filter every query. It works until it doesn't. One forgotten filter in a complex join, or one developer rushing a hotfix, and suddenly Tenant A is seeing Tenant B's sensitive data. In the world of enterprise contracts and security audits, "oops" isn't a valid defense.

For my recent B2B builds, I've moved the security perimeter from the application code into the database engine using PostgreSQL Row-Level Security (RLS). Here's why:
✅ The fail-closed standard: isolation is enforced by the Postgres engine itself. Even if your Django middleware fails to set the tenant context, the database defaults to returning zero rows. You are protected by database constraints, not just a line of Python.
✅ Scaling to hundreds of tenants: the schema-per-tenant approach makes Django migrations a nightmare once you hit 500+ clients. RLS keeps your database catalog lean: enterprise-grade isolation with the simplicity of a single shared schema.
✅ Tenant-blind development: it removes the cognitive load of repetitive filtering. Your Django views and serializers stay clean and tenant-agnostic, and your team can focus on shipping features without the constant anxiety of cross-tenant data leaks.

This is Part 1. In Part 2, I'll break down the technical proof of work: the exact bridge between Django middleware and Postgres session variables.

How are you handling isolation? Still trusting every developer to remember that .filter() call?
-
Recently, while setting up a Python-based auth service using FastAPI and PostgreSQL, I ran into an issue that many of us have probably faced but don't always talk about. The application was failing with a database connection error, even though everything "looked" correct.

The root cause turned out to be something simple but important: mixing Docker-based configuration with a local development setup. Using postgres as a hostname works perfectly inside Docker networks, but when running the app locally with uvicorn, the correct host is localhost. A small detail, but it completely breaks the connection if overlooked.

Another issue I encountered was with the SQLAlchemy setup. My models were importing Base, but it wasn't defined properly in the database module, which led to an import error during application startup. Fixing it required properly initializing declarative_base() and ensuring the models were correctly registered.

A few key takeaways from this experience:
> Environment-specific configuration matters more than we think
> Avoid hardcoding values: always rely on environment variables
> Don't connect to the database during module import
> Keep the ORM base and models cleanly structured

What I appreciated most was how these small fixes significantly improved the overall architecture. Moving toward a cleaner separation of config, database, repositories, and services makes the system more scalable and production-ready. These are the kinds of practical issues that don't always show up in tutorials but are very real in day-to-day development.

If you're working with FastAPI, SQLAlchemy, or microservices, I'd be curious what common pitfalls you've run into.

#Python #FastAPI #PostgreSQL #SQLAlchemy #BackendDevelopment #Microservices #SoftwareEngineering #Debugging #LearningJourney
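The Docker-vs-local host mismatch disappears once the host comes from the environment with a sensible local default. A minimal stdlib sketch (variable names like DB_HOST are common conventions, not FastAPI requirements):

```python
import os

def database_url() -> str:
    """Build the DSN from env vars, defaulting to local dev values.
    In Docker Compose you'd set DB_HOST=postgres (the service name);
    running bare uvicorn locally, the localhost default applies."""
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    user = os.environ.get("DB_USER", "app")
    password = os.environ.get("DB_PASSWORD", "app")
    name = os.environ.get("DB_NAME", "app")
    return f"postgresql+asyncpg://{user}:{password}@{host}:{port}/{name}"

# Local run: no DB_HOST set, so the localhost default applies.
os.environ.pop("DB_HOST", None)
assert "@localhost:5432" in database_url()

# Docker run: compose injects the service name instead.
os.environ["DB_HOST"] = "postgres"
assert "@postgres:5432" in database_url()
```

Calling a function like this at request or startup time, rather than building the engine at import, also covers the "don't connect during module import" takeaway.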
-
Started this as a weekend thing. I wanted a rate limiter that actually fit how I build Node APIs: TypeScript, the frameworks I use day to day, and real stores once you're past localhost. I wasn't trying to publish anything. Just wanted to learn by building.

What changed the shape of the project was how I used AI. Not as a code vending machine, but as someone to think out loud with. We'd sketch the API, I'd argue about edge cases, we'd rewrite the parts that felt off, add tests, and keep going. The boring scaffolding got quick. The harder stuff got the attention it actually needed: distributed limits, what happens when Redis goes down, getting metrics out without baking in a specific vendor.

That weekend thing is now ratelimit-flex. It plugs into Express, Fastify, NestJS, and Hono. Back it with Redis, Postgres, Mongo, or DynamoDB, or keep it in-memory for dev. Sliding window, token bucket, fixed window. Hooks for metrics, and a few resilience patterns I kept reaching for at work.

I don't think "AI wrote it" is the interesting part. The interesting part is that a clear idea, the patience to sit with the hard problems, and AI handling the grunt work can take a weekend curiosity and land it somewhere you'd actually trust next to production traffic.

If you're writing APIs in TypeScript, I'd love eyes on it. And honestly, tell me what breaks:
🔗 https://lnkd.in/gFRjDUWq
🔗 https://lnkd.in/g4guDyEz

#OpenSource #TypeScript #NodeJS #API #Ratelimiter
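Of the three algorithms mentioned, the token bucket is the easiest to sketch. An illustration of the idea in stdlib Python (ratelimit-flex itself is TypeScript; this is the general algorithm, not the library's implementation): capacity bounds the burst, rate governs the steady state.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`; refill `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill lazily, based on time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False   # over the limit; the caller would return HTTP 429

bucket = TokenBucket(capacity=3, rate=1.0)  # burst of 3, then ~1 req/s
assert [bucket.allow() for _ in range(4)] == [True, True, True, False]
```

The lazy refill is the part worth copying: no background timer, just arithmetic on each request, which is also what makes the algorithm easy to express atomically in a store like Redis.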
-
📦 Day 36 #90DaysOfDevOps
🚨 From "it works on my machine" to a fully Dockerized 2-tier app

Today's learning hit different. I built and containerized a Flask + MySQL application, and what looked simple at first quickly turned into a deep dive into how things actually work behind the scenes.

💥 It started with a simple goal: "Run my Flask app inside Docker." But then…
❌ My app couldn't connect to MySQL → turns out, localhost inside a container ≠ my machine
❌ The build kept failing with "pkg-config not found" → learned that some Python packages (like mysqlclient) need system-level dependencies
❌ Even after fixing everything, the app still crashed → MySQL wasn't "ready" when Flask started

🔍 Here's what I implemented to fix it:
✅ Created a custom Docker network for container communication
✅ Replaced localhost with the service name (db)
✅ Installed the required system packages (gcc, libmysqlclient-dev, pkg-config)
✅ Added healthchecks using mysqladmin ping
✅ Used depends_on with service_healthy to ensure proper startup order
✅ Secured the container by running as a non-root user
✅ Managed configs using environment variables

⚙️ Final setup:
- Flask app running in one container
- MySQL running in another
- Both connected via a Docker network
- Fully reproducible with Docker

📦 Docker Hub (pull & run): https://lnkd.in/gt3749CC
📁 GitHub: https://lnkd.in/gZ5g623i

💡 Biggest takeaway: containerization is not just about "docker build & run". It's about understanding networking, dependencies, startup timing, and debugging real failures. This project felt like a real DevOps scenario rather than just a tutorial.

If you've faced similar issues while working with Docker, I'd love to hear your experience 👇

#Docker #DevOps #Flask #MySQL #LearningInPublic #BuildInPublic #OpenToWork #dockerproject #TrainWithShubham
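The "MySQL wasn't ready" crash is the classic startup race. depends_on with service_healthy solves it at the Compose level; the same idea expressed in application code is a retry-until-healthy loop. A generic stdlib sketch (the `check` callable is a stand-in for a real connection attempt, e.g. wrapping mysqladmin ping or a socket connect):

```python
import time
from typing import Callable

def wait_for(check: Callable[[], bool], attempts: int = 10, delay: float = 0.01) -> bool:
    """Retry `check` until it returns True or attempts run out.
    In a real app, `check` would try to open a database connection."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)   # back off before the next probe
    return False

# Simulate a database that only becomes healthy on the third probe.
state = {"calls": 0}
def fake_db_ready() -> bool:
    state["calls"] += 1
    return state["calls"] >= 3

assert wait_for(fake_db_ready) is True
assert state["calls"] == 3
```

Having both layers, a Compose healthcheck and an in-app retry, covers restarts too, since the database can also go away after startup, not just before it.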
-