🎩 Hat Store - Full-Stack Inventory Management App
Built a complete CRUD application for managing a hat store inventory using Node.js, Express, PostgreSQL, and EJS.
🔗 Live Demo: https://lnkd.in/duP45keG
✨ Features:
- Full CRUD operations for items and categories
- PostgreSQL database with relational data modeling
- Deployed on Render with live database
🔧 Tech Stack: Node.js | Express | PostgreSQL | EJS | Render
This project helped me strengthen my understanding of backend development, database design, and deployment workflows.
💻 GitHub: https://lnkd.in/dR2_kKGF
Note: First load may take ~30s due to free tier cold start.
#WebDevelopment #FullStack #NodeJS #PostgreSQL #CodingJourney
Every night at midnight, a server died. Not dramatically. Not with a warning. Just… gone. OOM-killed by the operating system. The database team would get paged, restart Postgres, go back to sleep, and do it all again tomorrow.
For weeks. The app was a Django backend. Standard setup. Growing user base. Traffic peaked around 11 PM. And every night at peak, the server ran out of memory and dropped dead.
The team blamed the queries. Optimized the slowest ones. Still crashed.
Blamed the server. Upgraded to more RAM. Bought them a few days. Still crashed.
Blamed Django. Considered rewriting in Go. (Please don't.)
Nobody looked at the connection count. 300 direct connections to Postgres. Every single web request opening its own connection and holding it for the entire request lifecycle. Most of them idle. All of them eating memory.
300 connections × ~8MB each = 2.4GB of RAM doing absolutely nothing useful.
The fix took an afternoon: PgBouncer. A lightweight connection pooler that sits between the app and Postgres. 300 app connections funneled into 20 database connections.
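For anyone wanting to try the same fix, a minimal PgBouncer config for this shape of problem might look like the following. This is an illustrative sketch, not the team's actual setup — the database name, host, auth file path, and pool sizes are all assumptions:

```ini
[databases]
; route "appdb" through the pooler to the real Postgres instance
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is only borrowed for the
; duration of each transaction, so hundreds of mostly-idle client
; connections can share a handful of real ones
pool_mode = transaction
default_pool_size = 20
max_client_conn = 300
```

The app then connects to port 6432 instead of 5432 and nothing else changes from its point of view.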
Midnight crashes? Gone. Permanently.
That's Data Drop #5.
If your Postgres config has max_connections set to anything above 300 and you're NOT running a connection pooler, this video is for you.
We cover:
→ Why more connections = worse performance (not better)
→ PgBouncer setup, transaction vs session mode, and the gotchas
→ How to right-size your actual connection needs
#AprilDataDrops #PostgreSQL #DataDrop5 #PgBouncer #ConnectionPooling #DevOps #OpenSourceDB
A slow app is often blamed on the frontend.
But sometimes…
The real problem is the database.
Here’s what I’ve seen cause issues 👇
❌ Missing indexes
❌ Overusing joins
❌ Poor data relationships
❌ Fetching more data than needed
Everything works…
Until traffic grows.
Then suddenly:
APIs slow down
Pages lag
Small queries become expensive
💡 What I’ve learned:
Good database design is not just about storing data.
It’s about:
👉 How data is accessed
👉 How queries scale
👉 How the system behaves under load
Because performance problems often start long before users notice them.
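The "missing indexes" point is the easiest one to feel in miniature. Here's a toy in-memory analogy (not a real database, just the access-pattern difference): the same lookup done as a sequential scan over an array vs. a keyed lookup in a Map.

```typescript
// Toy analogy for what an index buys you.
interface User { id: number; name: string; }

const rows: User[] = Array.from({ length: 100_000 }, (_, i) => ({ id: i, name: `user-${i}` }));

// "Missing index": scan every row until the id matches — O(n) per query.
const viaScan = rows.find(r => r.id === 99_999);

// "Indexed": one keyed lookup — O(1) per query.
const byId = new Map(rows.map(r => [r.id, r]));
const viaIndex = byId.get(99_999);

// Both return the same row; only the cost differs — invisible at 100 rows,
// dominant at 100k rows under real traffic.
```
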
Have you ever traced a “frontend issue” back to the database?
#PostgreSQL #Backend #Database #SoftwareEngineering #Developers
A developer just benchmarked file storage vs. SQLite and the results should make you question every default you've ever set.
For 1M records, a Rust in-memory map hit ~169k requests per second. Go hit ~98k. Bun hit ~105k. SQLite? ~25k. A 6x read performance gap. The benchmark is up on HN with 272 points and 287 comments and the thread is ablaze.
Here's the argument that matters. Every database is just files and a process in front of those files. SQLite is a single file with a process on top. PostgreSQL is a directory of files with a process in front. Your code reads and writes files just like databases do. The question is whether you need the process layer or whether flat files with an in-memory index would do the job.
The benchmark does not lie. For reads, flat files with an in-memory map crush SQLite. If you're building an early-stage app and your primary operation is reading data, you might be paying for infrastructure you don't need.
But here is the catch that the hype misses. The simplicity only holds if you run single-process. The moment you add concurrent writes from multiple workers, which is how most real apps work, flat files create architectural complexity that kills the simplicity argument. Multiple processes reading and writing the same files without a process layer managing consistency? That's a problem you will solve with bugs and race conditions.
For agency operators and solo developers: the answer is probably SQLite plus in-memory cache. You get database reliability and consistency guarantees, with read performance that rivals any custom file-based solution. You don't have to choose between simplicity and correctness.
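That "SQLite plus in-memory cache" pattern is simple to sketch. Here the store is stubbed with a counter (a real app would call a SQLite driver there); the point is only the read path — repeated reads cost a Map lookup, not a database round trip:

```typescript
// Read-through cache sketch: reads check a Map first and only fall back
// to the store (stubbed here) on a miss.
type Row = { id: number; title: string };

let storeReads = 0;
function readFromStore(id: number): Row {   // stand-in for a SQLite query
  storeReads++;
  return { id, title: `record-${id}` };
}

const cache = new Map<number, Row>();
function read(id: number): Row {
  const hit = cache.get(id);
  if (hit) return hit;                      // served from memory
  const row = readFromStore(id);            // miss: hit the database once
  cache.set(id, row);
  return row;
}

read(7); read(7); read(7);                  // only the first read touches the store
```
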
The practical takeaway: before you default to PostgreSQL for your next side project, ask what you actually need. Most apps are smaller than developers assume. The default database is not a law of physics. It is a convention. And conventions are meant to be questioned when the evidence says otherwise.
What are you defaulting to that you might not need?
#SQLite #Database #PostgreSQL #Backend #DeveloperTools #StartupLife #AgencyLife #SmallBusiness #SoftwareEngineering #TechStrategy #BuildInPublic #WebDev #Programming #Coding #Architecture #AppDevelopment #MVP #EarlyStage #Engineering #Performance
𝗧𝗵𝗶𝘀 𝗜𝘀 𝗔 𝗠𝗼𝘃𝗶𝗲 𝗪𝗮𝘁𝗰𝗵𝗹𝗶𝘀𝘁 𝗔𝗽𝗽
You can build a movie watchlist app with Node.js, TypeScript, and MongoDB.
- Use MongoDB for storing data
- Use Node.js and TypeScript for the backend
- Create a REST API for client interactions
Here's how you can get started:
- Set up a MongoDB Atlas instance
- Install Node.js and TypeScript
- Create a new project with Express Framework and MongoDB Node.js Driver
- Define your database schema and create a watchlist collection
- Implement CRUD operations for your watchlist
- Use MongoDB Search for full-text search functionality
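One possible shape for those watchlist documents, with the CRUD and soft-delete operations sketched against an in-memory Map — a real implementation would go through the MongoDB Node.js driver, and the field names here are assumptions, not the tutorial's schema:

```typescript
interface WatchlistItem {
  movieId: string;
  title: string;
  rating?: number;    // optional until the user rates the movie
  deletedAt?: Date;   // soft delete: flag the document instead of removing it
}

const watchlist = new Map<string, WatchlistItem>();

function addMovie(item: WatchlistItem): void { watchlist.set(item.movieId, item); }

function rateMovie(movieId: string, rating: number): void {
  const item = watchlist.get(movieId);
  if (item) item.rating = rating;
}

function softDelete(movieId: string): void {
  const item = watchlist.get(movieId);
  if (item) item.deletedAt = new Date();
}

function activeItems(): WatchlistItem[] {
  return [...watchlist.values()].filter(i => !i.deletedAt);
}

addMovie({ movieId: "tt0111161", title: "The Shawshank Redemption" });
rateMovie("tt0111161", 5);
```
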
You can test your API endpoints with curl commands.
- Test the health check endpoint
- Search for movies
- Add movies to your watchlist
- Rate movies in your watchlist
- Remove ratings and soft-delete watchlist items
This project is a good starting point, but you'll need to add more features and error handling for production.
- Add data validation and error handling
- Implement CORS policy
- Consider adding a frontend or more watchlist features
- Try MongoDB Vector Search for movie recommendations
Source: https://lnkd.in/gua5SsyN
⚡ Your Rails app doesn't need 50 B-tree indexes. It needs the right one.
📊 I just published a deep-dive on PostgreSQL's 9 index types, EXPLAIN ANALYZE & Rails migration patterns you'll actually use.
𝗗𝗶𝗱 𝗬𝗼𝘂 𝗞𝗻𝗼𝘄?
PostgreSQL has a default connection limit of 100. Sounds like a lot, right? It's not.
Every connection in Postgres spawns a separate backend process. Each one takes up memory — roughly 5-10 MB. So 100 connections = up to 1 GB of RAM just for connections, before you even run a query.
Now imagine you've got an app with 20 pods, each opening 10 connections. That's 200 connections. Postgres says no, and your app starts throwing "too many connections" errors.
The fix isn't increasing max_connections. That just eats more memory and makes things slower.
Instead use a 𝗰𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻 𝗽𝗼𝗼𝗹𝗲𝗿 like 𝗣𝗴𝗕𝗼𝘂𝗻𝗰𝗲𝗿.
𝗣𝗴𝗕𝗼𝘂𝗻𝗰𝗲𝗿 sits between your app and Postgres. Your app opens 200 connections to PgBouncer, but PgBouncer only keeps maybe 20 actual connections to Postgres and reuses them.
Your app thinks it has a dedicated connection. Postgres only sees 20. Everyone's happy.
Quick check for your current connections:
SELECT count(*) FROM pg_stat_activity;
If that number is close to your max_connections, it's time for a pooler.
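The funneling PgBouncer does can be sketched as a toy discrete-time model — nothing here is PgBouncer's actual code, and the numbers are illustrative, but it shows why the database side stays flat no matter how many clients pile up:

```typescript
// Each "request" holds a backend connection for `hold` ticks; the pool
// admits at most `poolSize` requests at once and queues the rest.
interface Request { arrive: number; hold: number; }

function simulate(requests: Request[], poolSize: number): number {
  // Returns the peak number of backend connections in use.
  const releases: number[] = [];   // tick at which each busy slot frees up
  const waiting: Request[] = [...requests].sort((a, b) => a.arrive - b.arrive);
  let peak = 0;
  for (let tick = 0; waiting.length || releases.length; tick++) {
    // free finished slots
    for (let i = releases.length - 1; i >= 0; i--)
      if (releases[i] <= tick) releases.splice(i, 1);
    // admit queued requests while slots are free
    while (waiting.length && waiting[0].arrive <= tick && releases.length < poolSize) {
      const r = waiting.shift()!;
      releases.push(tick + r.hold);
    }
    peak = Math.max(peak, releases.length);
  }
  return peak;
}

// 30 requests arriving at once, funneled through a pool of 5:
// the "database" never sees more than 5 concurrent connections.
const peak = simulate(Array.from({ length: 30 }, () => ({ arrive: 0, hold: 2 })), 5);
```
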
#PostgreSQL #Database #PgBouncer #DBA #DevOps #LearningInPublic
I wrote a small deep dive on how BookMySeat handles one of the most common backend problems: multiple users trying to book the same seat at the same time.
The post explains:
- why unsafe check-then-act logic can double-book a seat
- why a transaction alone is not always enough
- how PostgreSQL row locks / SELECT ... FOR UPDATE help
- where database constraints still matter as a final guardrail
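The unsafe interleaving from the post can be replayed deterministically in a few lines — this is an in-process illustration, not the BookMySeat code, and `bookAtomic` is only an analogy for what `SELECT ... FOR UPDATE` gives you across transactions:

```typescript
// Both requests pass the availability check before either one writes.
const seats = new Map<string, string>();

const aliceSeesFree = !seats.has("A1");      // request 1: check
const bobSeesFree = !seats.has("A1");        // request 2: check (request 1 hasn't written yet)
if (aliceSeesFree) seats.set("A1", "alice"); // request 1: act
if (bobSeesFree) seats.set("A1", "bob");     // request 2: act — silently overwrites

// Both requests were told "seat is free"; bob's write wins: a double booking.

// The safe path makes check-and-act a single atomic step — the in-process
// analogue of a row lock forcing the second request to wait and re-check.
const lockedSeats = new Map<string, string>();
function bookAtomic(seat: string, user: string): boolean {
  if (lockedSeats.has(seat)) return false;   // check and act with no gap between them
  lockedSeats.set(seat, user);
  return true;
}
```
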
Live: https://lnkd.in/gu9NA7_H
Blog: https://lnkd.in/guHvhBhG
Source code: https://lnkd.in/gE85spvN
This was a fun project because the race condition is not hidden in theory. You can actually trigger it, see the anomaly, then compare it with the safe path.
#BackendEngineering #PostgreSQL #Concurrency #NodeJS #SystemDesign #WebDevelopment
Almost every modern web application will need a REST API for a client to talk to, and in almost every scenario, that client is going to expect JSON. The best developer experience is a stack where you can stay in JSON-shaped data end to end, without awkward transformations in the middle.
In this tutorial, we'll see how to build a small recipe collection API using TypeScript and MongoDB. We'll explore a few different schema design opportunities and make use of MongoDB Search for full-text search across ingredients and instructions.
Read it here 👉 https://lnkd.in/ewNWuqpa
Day 1 of building a Finance Dashboard Backend...
Started working on a backend assignment today and honestly it felt great to see things actually working.
Here's what I built on Day 1:
✅ Set up Node.js + Express server
✅ Connected MongoDB Atlas database
✅ Built User model with role-based access (admin, analyst, viewer)
✅ Built Authentication system (register + login with JWT tokens)
✅ Built Transaction model and full CRUD APIs
✅ Added middleware for auth protection and role checking
Also learned that order matters in Express... if you put your middleware in the wrong order, req.body is undefined and you spend 20 minutes debugging.
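The undefined-req.body trap can be reproduced without Express at all. This is a tiny dispatch loop in the same shape as Express's middleware chain (a sketch, not Express internals): register the route before the body parser and the handler runs with req.body still undefined.

```typescript
type Req = { rawBody: string; body?: unknown };
type Middleware = (req: Req, next: () => void) => void;

// Minimal middleware dispatcher: each middleware decides whether to call next().
function run(chain: Middleware[], req: Req): void {
  let i = 0;
  const next = (): void => { const mw = chain[i++]; if (mw) mw(req, next); };
  next();
}

// Stand-in for express.json(): parses the raw body, then passes control on.
const jsonParser: Middleware = (req, next) => { req.body = JSON.parse(req.rawBody); next(); };

let seenWrong: unknown, seenRight: unknown;
const routeWrong: Middleware = (req) => { seenWrong = req.body; };  // ends the chain
const routeRight: Middleware = (req) => { seenRight = req.body; };

// Route registered before the parser: handler sees req.body === undefined.
run([routeWrong, jsonParser], { rawBody: '{"amount":42}' });
// Parser first: req.body is populated before the handler runs.
run([jsonParser, routeRight], { rawBody: '{"amount":42}' });
```
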
GitHub: https://lnkd.in/giNQvvKX
#nodejs #expressjs #mongodb #backend #buildinpublic #100daysofcode #javascript
Almost every modern web application will need a REST API for a client to talk to, and in almost every scenario, that client is going to expect JSON. The best developer experience is a stack where you can stay in JSON-shaped data end to end, without awkward transformations in the middle.
In this tutorial, we'll see how to build a small recipe collection API using TypeScript and MongoDB. We'll explore a few different schema design opportunities and make use of MongoDB Search for full-text search across ingredients and instructions.
Read it here 👉 https://lnkd.in/eNPyy3iv