Phillip Merrick made a point in The New Stack (https://hubs.la/Q04dPhph0) worth pausing on: AI agents only stop hallucinating when you give them the actual enterprise data... and that data already lives in Postgres. 🐘 If you're wiring an LLM into a Postgres database, the pgEdge MCP Server is open source and built for that exact job. Postgres 14+, read-only by default, and handles real workflows, like analysts running ad-hoc questions in plain English, developers debugging schemas without leaving their editor, and DBAs pulling index recommendations from inside Claude Code or Cursor. It's coming from the team that maintains pgAdmin, so Postgres knowledge is baked into the server itself - not bolted on after the fact. Token usage is genuinely tuned with TSV output, auto-pagination, and context compaction. 🛠️ ⭐ Star the repo, clone it, point it at any new or existing PostgreSQL database (including Supabase, RDS, & more): https://hubs.la/Q04dPdmC0 #devops #aiengineering #programming #sideprojects #postgres #mcp #opensource #supabase #aws #amazon #rds #cloudsql #heroku #postgresql
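To make "index recommendations from inside Claude Code or Cursor" concrete, here's a minimal sketch of the kind of read-only catalog query an agent could run against stock Postgres statistics views. The thresholds are made up for illustration, and this is not the pgEdge MCP Server's own logic, just the shape of the workflow:

```sql
-- Illustrative only: find tables that are scanned sequentially far more often
-- than they are hit via an index, which is a common hint that an index is missing.
-- The cutoffs below are arbitrary examples, not pgEdge MCP Server internals.
SELECT relname,
       seq_scan,
       seq_tup_read,
       idx_scan,
       n_live_tup
FROM   pg_stat_user_tables
WHERE  seq_scan > COALESCE(idx_scan, 0)  -- more seq scans than index scans
  AND  n_live_tup > 10000                -- only tables big enough to matter
ORDER  BY seq_tup_read DESC
LIMIT  10;
```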
Postgres just got a glow-up. A single Postgres server can execute 43,000 durable workflows per second. That’s 3.7 billion workflows per day. No sharding. No external orchestrator. Just plain Postgres + DBOS. Peter Kraft (DBOS, Inc. CTO) just dropped the benchmark everyone’s been asking for: → Raw writes: 144K/sec (12 billion/day) → Direct durable workflows: 43K/sec → Queued workflows (after partitioning): 30.6K/sec The bottleneck? WAL flushing. Everything else (CPU, IOPS) had headroom. Translation: Adding rock-solid durability to your app basically costs you nothing until you’re at ridiculous scale. If you’ve ever been told “Postgres won’t scale for workflows,” this post just proved them wrong. Full benchmark + open-source code here 👇 https://lnkd.in/dMg6NeZa #Postgres #DBOS #Backend #Scaling #Durability #Databases
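Why WAL flushing ends up as the ceiling: a durable workflow boils down to a few small transactional writes that checkpoint each step. Here's a toy sketch of that write pattern; the table is hypothetical and is not DBOS's actual schema, it only illustrates why durability is write-bound rather than CPU- or IOPS-bound:

```sql
-- Hypothetical checkpoint table, purely to illustrate the write pattern.
CREATE TABLE workflow_steps (
    workflow_id  uuid        NOT NULL,
    step_no      int         NOT NULL,
    output       jsonb,
    completed_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (workflow_id, step_no)
);

-- Each durable step is roughly one small committed insert like this, so
-- throughput is bounded by how fast commits can be flushed to the WAL.
INSERT INTO workflow_steps (workflow_id, step_no, output)
VALUES (gen_random_uuid(), 1, '{"status": "ok"}');
```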
Marketing data lives in Postgres. Real insights from it usually live in someone else's ticket queue. The pgEdge MCP Server for Postgres connects an LLM directly to your database, so questions like the one in the screenshot - "who are our top purchasing customers in the last 6 months and how much have they ordered?" - get a ranked, filterable answer back in seconds. No ticket. No SQL. Just the data you already own. 📊 What makes it different from the dozens of "AI + database" demos floating around: this one was built by the people who built Postgres itself. Dave Page sits on the Postgres Core Team and created pgAdmin (25+ years contributing upstream), with a group of long-time PostgreSQL contributors designing the MCP server from the database side outward. Read-only transactions by default, TLS, user and token auth, multi-database support, custom tools you can write in SQL or Python, and works with Claude, GPT, or local Ollama models if you need to keep things on-prem. 🐘 Open source under the PostgreSQL License. Runs with any new or existing Postgres 14+ database, including RDS, Supabase, self-hosted, or whatever other flavor of PostgreSQL your team is running. 📤 Send this to your developer team and see what they think: https://hubs.la/Q04f7qYV0 #martech #datadriven #postgres #ai #mcp #aiengineering #technicalmarketer #marketing
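For reference, the question in the screenshot maps to a very ordinary aggregate. Something along these lines is what the model ends up generating and running in a read-only transaction; the table and column names here are assumptions, your schema will differ:

```sql
-- Assumed schema: orders(customer_id, total_amount, ordered_at), customers(id, name)
SELECT c.name,
       COUNT(*)            AS orders_placed,
       SUM(o.total_amount) AS total_spent
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  o.ordered_at >= now() - interval '6 months'
GROUP  BY c.name
ORDER  BY total_spent DESC
LIMIT  20;
```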
🐘 PgBouncer is great — but it’s not the whole story. If you run PostgreSQL, chances are you’re using PgBouncer for connection pooling. It’s simple, efficient, and does one thing very well. But at some point, you start hitting limitations: - no query routing - no read/write split - no visibility into traffic - limited control beyond pooling That’s exactly why we wrote this post: 👉 moving from PgBouncer to ProxySQL (for PostgreSQL) ProxySQL is not just a pooler. It’s a SQL-aware proxy that can: - route queries based on rules - split reads/writes - multiplex connections - integrate with HA setups - provide observability So the real question becomes: 👉 when is PgBouncer enough, and when do you need more? This post from Rahim Kanji is the first in a series exploring that transition. 📖 https://lnkd.in/g9H3uVuh Curious to hear from PostgreSQL users: are you hitting limits with PgBouncer? or is it still “good enough” for your use case? #PostgreSQL #PgBouncer #ProxySQL #DevOps #SRE #Database #OpenSource
After 15 years building systems at scale, I've seen too many teams pick databases based on hype rather than data. So I built this decision matrix using real performance numbers from 10M+ record tests. PostgreSQL absolutely dominates when you need complex queries, transactions, and data integrity. The query optimizer is phenomenal. MongoDB shines for document storage and rapid iteration. Schema flexibility is huge when requirements keep changing. Cassandra wins hands down for high-write scenarios and multi-datacenter deployments. Linear scalability is real. The key insight? There's no universal winner. Your use case determines everything. I've seen companies waste months migrating because they chose based on trends, not requirements. Use this framework before your next database decision. Your future self will thank you. #viral #trending #trend #database #postgresql #mongodb #cassandra #backend #scalability #performance #systemdesign #engineering #distributed #tech #data #architecture #developer #programming
Multi-master replication in Postgres sounds great until you hit the operational complexity. This is a solid attempt at abstracting that away and making it usable in real systems. Definitely worth checking out if you care about scaling beyond a single writer.
PostgreSQL has natively supported logical replication since version 10. But while those foundational primitives have been there for years, actually configuring a true, active-active multi-master setup has remained painfully manual. Once you choose to move beyond the single-writer bottleneck, you immediately encounter the challenge of full-mesh topology. You have to manually configure, sync, and handle conflict resolution across an increasingly fragile web of database nodes. Reading through the OpenAI blog released in January 2026 reinforced my thoughts on the complexity of multi-master systems and why many teams avoid them. Before that blog release, I had been tinkering with a simple multi-writer system on Docker Desktop from my personal computer. The result of that experimentation led me to build pgconverge. Pgconverge is an open-source CLI tool designed to automate multi-master logical replication in Postgres. It abstracts away the heavy lifting of node synchronisation and the dreaded n(n-1) complexity so you can focus on scaling your infrastructure, not writing custom replication scripts. I have documented what I learned while trying to build Pgconverge into a 7-part series. I have released the first two articles and will be rolling out the remaining five over the coming days. GitHub: https://lnkd.in/e74jx7hu Why Multi-Master? The Problem with Single-Writer Databases: https://lnkd.in/esavkuhu Inside Pgconverge: Navigating the N×(N-1) Complexity of Full Mesh Replication: https://lnkd.in/eSDNMfrp You can also read OpenAI’s blog on how they scaled a single-writer PostgreSQL database to power ChatGPT at massive scale : https://lnkd.in/ew2U58Ct I would love for fellow infrastructure and backend engineers to break it, test it, and share feedback. #PostgreSQL #DistributedSystems #DatabaseArchitecture #BackendEngineering #SystemDesign
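To make the n(n-1) point concrete: with the native primitives, every directed link in the mesh is its own publication/subscription pair, roughly like the sketch below for one pair of nodes. Node names and the connection string are placeholders, and this shows the stock Postgres commands rather than what pgconverge generates:

```sql
-- On node_a: publish local changes
CREATE PUBLICATION pub_node_a FOR ALL TABLES;

-- On node_b: subscribe to node_a
CREATE SUBSCRIPTION sub_from_node_a
    CONNECTION 'host=node_a dbname=app user=replicator'
    PUBLICATION pub_node_a
    WITH (origin = none);  -- PG 16+: skip rows that originated elsewhere, to avoid replication loops

-- ...then the mirror image (a publication on node_b, a subscription on node_a),
-- and repeat for every other pair: n nodes means n(n-1) subscriptions to create,
-- monitor, and keep consistent.
```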
I built PulseOps in 4 days using Aiven's free tier. Here's the full demo. What it does: Every push, deploy, PR, and incident streams through one pipeline — Aiven Kafka → PostgreSQL + Valkey + OpenSearch → live dashboard in under 1 second. DORA metrics. Real-time event stream. Fuzzy search. Incident↔deploy correlation. Team leaderboard. All for $0/month. What I learned from each service: 📨 Kafka — I expected this to be the hard part. It wasn't. Download 3 SSL certs from the Aiven console, set 3 env vars, and events were flowing in 10 minutes. What you can't replace it with: if your database goes down, Kafka still holds every event. Replayable. Zero loss. 🐘 PostgreSQL — DORA metrics as SQL views. One view, five repos, four KPIs, zero application code doing the math. The elegance of relational databases for structured analytics is hard to beat. ⚡ Valkey — The live dashboard is just PUBLISH and SUBSCRIBE. Event hits Kafka → consumer publishes to Valkey channel → API subscribes → SSE to browser. Sub-millisecond. No polling, ever. 🔍 OpenSearch — Type "deplymnt" and it finds "deployment" in 12ms across 2,841 events. Fuzzy matching, highlighted results, repo aggregations. This is what a dedicated search engine gives you that SQL LIKE never will. The honest conclusion: I'm one person. I cannot run Kafka, a replicated Postgres cluster, a Redis-compatible cache, and OpenSearch on my own. Aiven made all four available in under an hour — that's what made this project possible. 🔗 Open source: https://lnkd.in/dxkEH3_6 Aiven Hugh Evans Reza Lesmana Francesco Tisiot Oskari Saarenmaa Stanislav Dmitriev David Kunz Andrei Trepet Cassio Sampaio #AivenFreeTier #DevOps #Kafka #PostgreSQL #OpenSearch #Valkey #DORA #BuildInPublic #SRE #PlatformEngineering
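"DORA metrics as SQL views" is, in practice, little more than a GROUP BY over the events table. A rough sketch of a deployment-frequency view is below; the table and columns are guesses for illustration, not PulseOps' actual schema:

```sql
-- Hypothetical events table: events(event_type, repo, created_at, ...)
CREATE VIEW deployment_frequency AS
SELECT repo,
       date_trunc('week', created_at) AS week,
       COUNT(*)                       AS deploys
FROM   events
WHERE  event_type = 'deploy'
GROUP  BY repo, date_trunc('week', created_at);
```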
Pop quiz: what happens when Postgres runs out of transaction IDs? A) Queries slow down B) You get a warning in the logs C) The entire database goes read-only and your application stops working The answer is C. And it's not a theoretical edge case. It's a countdown that has ticked on every Postgres database that isn't vacuuming properly. Welcome to Week 2 of April Data Drops and it's our very own birthday gal Chandini kurada talking about VACUUM. Here's the short version: Postgres uses a multi-version concurrency model. When you update or delete a row, the old version sticks around. These dead tuple versions pile up. VACUUM is the process that cleans them out. Autovacuum does this automatically. In theory. In practice, autovacuum's defaults are tuned for politeness, not performance. On big, busy tables, it falls behind. Dead tuples stack up. Tables bloat. Performance degrades gradually — so gradually you don't notice until it's a crisis. And if VACUUM falls far enough behind, Postgres starts running out of usable transaction IDs. When the counter gets close to wrapping around, Postgres does the only safe thing it can: It shuts down all writes. Completely. We saw a SaaS company 48 hours away from this exact scenario. A 200GB table. Autovacuum falling behind for weeks. Nobody noticed until the warning showed up in the logs. 48 hours from total write shutdown on a production database. Today's video: → How VACUUM and autovacuum actually work under the hood → Queries to check if your tables are falling behind right now → Which autovacuum settings to tune (and what to set them to) → The wraparound doomsday clock and how to keep it far from midnight VACUUM is not optional. It's oxygen. #AprilDataDrops #PostgreSQL #DataDrop8 #VACUUM #Database #DevOps #OpenSourceDB
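If you want to check right now rather than wait for the video, the standard queries are stock Postgres, nothing vendor-specific. The table name in the ALTER TABLE is a placeholder, and the exact numbers depend on your workload:

```sql
-- Dead tuples piling up per table (autovacuum falling behind shows up here)
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM   pg_stat_user_tables
ORDER  BY n_dead_tup DESC
LIMIT  10;

-- How close each database is to transaction ID wraparound
-- (autovacuum forces an aggressive vacuum around autovacuum_freeze_max_age,
--  200 million by default; the hard stop is at roughly 2 billion)
SELECT datname, age(datfrozenxid) AS xid_age
FROM   pg_database
ORDER  BY xid_age DESC;

-- Example per-table tuning for a big, busy table: vacuum after ~1% dead rows
-- instead of the default 20%, and let the worker spend less time sleeping
ALTER TABLE big_busy_table
    SET (autovacuum_vacuum_scale_factor = 0.01,
         autovacuum_vacuum_cost_delay   = 1);
```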
Shoutout to Chandini kurada from OpenSource DB for breaking down a critical topic that often gets ignored until it’s too late. Pop quiz: what happens when Postgres runs out of transaction IDs? A) Queries slow down B) You get a warning in the logs C) The entire database goes read-only and your application stops working The answer is C. And it’s not theoretical — it’s a ticking clock in every system where VACUUM isn’t keeping up. Most teams rely on autovacuum and assume it will handle everything. But in real-world workloads, it often falls behind. Dead tuples build up. Tables bloat. Performance drops — slowly, then suddenly. And if it falls too far behind? Postgres runs out of usable transaction IDs → and shuts down all writes. Completely. In this Data Drop, Chandini covers: → How VACUUM and autovacuum work under the hood → How to check if your tables are falling behind → What to tune in autovacuum → How to avoid transaction ID wraparound VACUUM is not optional. It’s oxygen. We are happy to share this #AprilDataDrops initiative of our supporting partner OpenSource DB — one PostgreSQL video, every day this April. Check all Data Drop videos here: https://lnkd.in/gqidmmQp Aarti NR | Kalyani M | Keerthi Seetha | Praveena Sivasankar #PostgresWomenIndia #AprilDataDrops #PostgreSQL #DataDrop8 #VACUUM #autovacuum #performance #WomenInTech
I tried running MySQL with a Deployment. It worked. Until the pod restarted. New name. New IP. Database had no idea who it was. 😅 That's why StatefulSets exist. 🧵 Deployment = Actors → any actor plays the role StatefulSet = Named Employees → mysql-0, mysql-1 ALWAYS 💥 Key difference: → Deployment pods get random names on restart → StatefulSet pods keep fixed names FOREVER ━━━━━━━━━━━━━━━━ 💥 When to use StatefulSet: 💥 MySQL/PostgreSQL cluster → needs stable pod identity 💥 Kafka brokers → need fixed broker IDs to work 💥 Elasticsearch → needs stable names to form a cluster 💥 MongoDB replica set → each node must know its peers ━━━━━━━━━━━━━━━━ Post #12 of 30 — K8s Mastery 🚀 ✅ Post #10 → Ingress ✅ Post #11 → PersistentVolumes ✅ Post #12 → StatefulSets (You are here) 🔜 Post #13 → DaemonSets! Are you running any databases on Kubernetes? Drop which DB below 👇 🔖 Save · ♻️ Repost · 👆 Follow for Post #13! #Kubernetes #DevOps #K8s #CloudNative #Docker #SRE #PlatformEngineering #DevOpsEngineer #CloudComputing #AWS #EKS #LearnDevOps #TechCareer #100DaysOfCode #KubernetesBasics #OpenSource #MySQL #Kafka #StatefulSets #Database #BackendDeveloper #SoftwareEngineering #Infrastructure
🚀 I just built a MongoDB CRUD + Aggregation Pipeline Interface using Streamlit! I’m excited to share my latest project – a no‑code, visual tool that makes interacting with MongoDB Atlas super easy. Whether you're a developer, data analyst, or just curious about MongoDB, this app lets you: ✅ Manage databases & collections – create, list, drop ✅ Perform full CRUD – insert, fetch, update, delete documents ✅ Handle indexes & schema validation – create/drop indexes, modify JSON Schema ✅ Run 12+ predefined aggregation pipelines – group, join, project, add fields, budget categorisation, and more ✅ Execute your own custom pipelines – on any database and collection you choose All with a clean Streamlit UI and beautiful JSON output powered by the Rich library. 🔗 Live app: https://lnkd.in/g5HV2jJQ 📦 Tech stack: Python, Streamlit, PyMongo, MongoDB Atlas, Rich 📌 Note: Please do not alter, modify, or delete any databases, collections, or data from this app. Use it for fun and learning only – the sample data is there to explore. Thank you! 🙏 I’d love to hear your feedback and ideas for improvements. Feel free to try it out and let me know what you think! #MongoDB #Streamlit #Python #AggregationPipelines #CRUD #DatabaseTools #OpenSource