I just published my first npm package — dotenv-audit 🎉

Ever had your app crash in production because someone forgot to set an environment variable? I built a tool that solves this. It scans your actual codebase, finds every process.env usage, and tells you exactly what's missing — with file paths and line numbers.

No schema to write. No config needed. Just run:

npx dotenv-audit --ask

What it does:
• Scans .js, .ts, .jsx, .tsx, .vue, and .svelte files automatically
• Detects all access patterns — dot access, bracket access, destructuring, Vite env
• Generates .env files with smart placeholder values
• Auto-detects your database (MongoDB, PostgreSQL, MySQL) from package.json
• Supports monorepos — creates a separate .env per service
• Filters out framework built-ins (Vite's DEV, MODE, etc.)

Zero dependencies. 15 KB package size.

Interactive mode asks you step by step:
• Want an ENV_SETUP.md with all missing variables? ✓
• Want a .env file generated with smart defaults? ✓

Check it out: https://lnkd.in/gqRAPjGN

Would love to hear your feedback!

#nodejs #npm #javascript #typescript #opensource #webdevelopment #developer #dotenv
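The core check such a tool performs can be sketched in a few lines. This is illustrative only (`findMissingEnv` is a hypothetical name, not dotenv-audit's actual internals): given variable names found in your source, report which are unset in the current environment.

```javascript
// Sketch: report which referenced env variables are missing.
// (Hypothetical helper, not dotenv-audit's real code.)
function findMissingEnv(names, env = process.env) {
  return names.filter((name) => env[name] === undefined);
}

// The usage patterns a scanner has to recognize in source files:
//   process.env.DATABASE_URL          (dot access)
//   process.env['API_KEY']            (bracket access)
//   const { PORT } = process.env      (destructuring)
//   import.meta.env.VITE_API_URL      (Vite)

console.log(findMissingEnv(['DATABASE_URL', 'API_KEY']));
```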
🌐 Master Server-State: TanStack React Query vs. Redux

🔍 What is TanStack React Query?
TanStack Query (formerly React Query) is a server-state library. It's designed specifically to manage asynchronous data — fetching, caching, synchronizing, and updating state that lives on a server. 📡✨

It automates the "boring" parts of development:
• Automatic caching: no more manual loading spinners on every click. 🏎️
• Background refetching: keeps your data fresh while the user stays on the page. 🔄
• Error handling: built-in retry logic and error states. 🛠️

⚖️ How is it different from Redux?
• Redux is for client-state: it manages data that lives only in your app (like a sidebar being open, a dark-mode toggle, or a multi-step form). It is highly predictable but requires lots of boilerplate (actions, reducers, thunks). 🧠
• TanStack Query is for server-state: it manages data that comes from an API. It replaces 50 lines of Redux boilerplate with a single, powerful hook. ⚡

🏥 Real-Life Example: The "Library vs. Personal Notepad" 📚
Imagine you are researching a topic:
• Redux (Personal Notepad): you write down every single fact yourself. If a fact changes at the source, you have to manually cross it out and rewrite it. If you lose your notepad, you have nothing. 📝
• TanStack Query (The Librarian): you ask the librarian for a book. They give it to you immediately if it's on the shelf (caching). If it's old, they go get a new version while you keep reading (background update). If the book is missing, they try again automatically (retries). 👩🏫✅

#ReactJS #TanStackQuery #Redux #WebDevelopment #FrontendArchitecture #JavaScript #StateManagement
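The librarian analogy can be made concrete with a toy cache-plus-retry helper. This is a conceptual sketch only, not TanStack Query's API (which is `useQuery({ queryKey, queryFn })` with far more features); it just shows the two behaviours described above: serve from the shelf, and retry automatically.

```javascript
// Toy sketch of server-state management: cache on first success,
// serve from cache on repeat calls, retry failed fetches automatically.
const cache = new Map();

async function queryWithCache(key, fetcher, { retries = 2 } = {}) {
  if (cache.has(key)) return cache.get(key); // the book is on the shelf
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const data = await fetcher(); // go fetch the book
      cache.set(key, data); // shelve it for next time
      return data;
    } catch (err) {
      lastError = err; // the librarian tries again automatically
    }
  }
  throw lastError; // all retries exhausted
}
```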
I just launched my first open-source CLI tool. 🙌

create-samrose-app lets you scaffold a full Next.js stack interactively with a single command. The idea came from a frustration I kept running into: every new project starts with the same setup marathon. ORM config, database setup, auth wiring, CI/CD... before you've written a single feature.

So I built the tool I always wanted.

$ npx create-samrose-app

You pick:
• ORM (Prisma, Drizzle, TypeORM, Mongoose)
• Database (PostgreSQL, MySQL, SQLite, MongoDB)
• Auth (NextAuth, Clerk, JWT)
• State (Zustand, Redux, Recoil)
• API (tRPC, oRPC, GraphQL, REST)
• Testing + extras like Docker & GitHub Actions

Everything wired together. Everything production-ready.

Building this taught me a ton about CLI design, cross-stack compatibility, and open-source documentation. If you try it, let me know what you think — feedback and contributions are very welcome. And if it saves you even 30 minutes, a ⭐ on GitHub would make my day!

🌐 Docs: https://lnkd.in/gqx_EJgE
📦 npm: https://lnkd.in/g93iKfMg
💻 GitHub: https://lnkd.in/gBiB9yMh

#OpenSource #NextJS #CLI #BuildInPublic #DevTools #Package
Ever wondered how databases NEVER leave you with half-written data after a crash? 🤯

Atomicity and durability aren't magic — they're engineered through smart recovery systems.

Shadow Copy ≠ Log-Based Recovery
• Shadow Copy → make a full copy of the database before applying changes, then atomically switch a pointer to the new copy on commit
• Log-Based Recovery → record every change in a log first, then apply (redo) or roll back (undo) changes using that log

When building real systems, you don't just use shadow copying — you rely on log-based recovery to handle performance, scaling, and concurrent transactions. Shadow copying sounds simple but breaks at scale. Log-based techniques (redo/undo logging, write-ahead logging) are what power real-world databases like MySQL and PostgreSQL.

This small distinction changes how you design systems. Building systems > memorizing concepts.

What's one concept developers often misunderstand?

#fullstackdeveloper #softwareengineering #webdevelopment #javascript #reactjs #backend #buildinpublic #nodejs #nextjs #typescript
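The core idea of log-based recovery fits in a few lines. This is a deliberately tiny sketch, not how MySQL or PostgreSQL actually implement it (real WAL has LSNs, checkpoints, fuzzy pages, and explicit undo): redo only the writes whose transaction reached a commit record, so uncommitted work from a crash is never applied.

```javascript
// Toy log-based recovery: replay a write-ahead log after a "crash".
// A write is durable only if its transaction logged a commit record.
function recover(log) {
  const db = {};
  const committed = new Set(
    log.filter((r) => r.type === 'commit').map((r) => r.txn)
  );
  for (const r of log) {
    // Redo committed writes in log order; uncommitted writes are
    // simply never applied (implicit undo in this simplified model).
    if (r.type === 'write' && committed.has(r.txn)) db[r.key] = r.value;
  }
  return db;
}
```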
What if I told you that there's a way to significantly improve the performance of your NestJS applications by leveraging a technique called "caching"?

Essentially, caching involves storing frequently accessed data in a temporary storage location, so that when the same data is requested again, it can be retrieved quickly from the cache instead of being re-computed or re-fetched from a database.

For example, let's say you have an API endpoint that retrieves a list of users from a database:

```javascript
// users.service.ts
import { Injectable } from '@nestjs/common';

@Injectable()
export class UsersService {
  async getUsers(): Promise<any[]> {
    // simulate a database query
    return [
      { id: 1, name: 'John Doe' },
      { id: 2, name: 'Jane Doe' },
    ];
  }
}
```

By caching the result of this endpoint, you can avoid hitting the database on subsequent requests and improve the overall response time of your application.

What caching strategies are you using in your applications to improve performance?

💬 Have questions or working on something similar? DM me — happy to help.

#NestJS #NodeJS #Caching #PerformanceOptimization #BackendDevelopment #APIPerformance #SoftwareEngineering #CodingBestPractequals
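In Nest itself you would typically reach for the framework's cache support (CacheModule and CacheInterceptor from the caching integration) rather than rolling your own. But the underlying idea is a TTL cache in front of an expensive call, which can be sketched framework-free (`withTtlCache` is an illustrative helper, not a Nest API):

```javascript
// Wrap an async function so repeat calls with the same key within
// `ttlMs` are served from an in-memory cache instead of re-running it.
function withTtlCache(fn, ttlMs) {
  const cache = new Map(); // key -> { value, expires }
  return async function (key, ...args) {
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // cache hit
    const value = await fn(key, ...args); // miss: do the expensive work
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Usage sketch: cache the user list for 60 seconds.
const getUsersCached = withTtlCache(async () => {
  return [{ id: 1, name: 'John Doe' }, { id: 2, name: 'Jane Doe' }]; // pretend DB query
}, 60_000);
```

Note the trade-off: an in-process Map is simplest, but it isn't shared across instances and vanishes on restart, which is why Redis is the usual next step.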
5 Node.js mistakes that slow your API (I made all of these in my first 2 years)

Most developers blame their server when their API is slow. It's rarely the server. Here are 5 mistakes I see killing Node.js API performance:

1. Blocking the event loop
Running heavy synchronous operations in the main thread freezes everything. Move CPU-heavy tasks to worker threads or a background queue.

2. No database query limits
Fetching all records "just in case" will destroy your response time. Always paginate. Always limit. Always project only the fields you need.

3. Skipping compression
Not using gzip or Brotli on your responses is free performance left on the table. One middleware line. Huge difference.

4. Creating new DB connections on every request
If you're not using a connection pool, you're rebuilding the tunnel every time. Use Mongoose's built-in pooling or pg-pool for PostgreSQL.

5. No caching layer
Hitting your database for the same data 1,000 times a day adds up. Redis can serve repeat queries in under 1 ms.

Slow APIs lose users before they even see your product. Which of these have you run into?

#nodejs #mernstack #javascript #webdevelopment #backenddevelopment
🧠 The Problem: The "Magic" of Active Record

Laravel Models don't have protected/public properties for your database columns. They use the Active Record pattern, meaning fields like $model->one are handled dynamically via the magic __get method. If you try to mock a Model the "standard" way, your mock won't know how to handle those dynamic properties, and your tests will fail.

🛠️ The Solution: Mocking the Magic

To test a class that performs logic on Model properties without hitting the database, you need to mock the __get method and provide a callback that mimics the Model's behaviour.

🚀 Why do this?
• Speed: these tests run in milliseconds because they don't load the Laravel kernel or touch a database.
• Isolation: you are testing only your logic, not the framework.
• Purity: it forces you to write code that doesn't rely on "Eloquent magic" hidden in the background.

Pro tip: if your logic gets too complex, consider using a Data Transfer Object (DTO) instead of passing the Model directly. It makes your code cleaner and your tests even easier to write!

How do you handle Model testing? Do you prefer pure unit tests, or do you stick to feature tests with an in-memory database? Let's discuss! 👇

#Laravel #PHP #UnitTesting #CleanCode #SoftwareArchitecture #PHPUnit #BackendDevelopment
Node.js vs. Go HTTP Server 🌐

Leaving Node.js behind: building a raw HTTP server in Go 🚀💻

After spending the week locking down my PostgreSQL database, it is finally time to build the API server. But moving from Node.js to Go requires a complete mental reset.

In Node, you reach for Express.js immediately. You write app.get("/", (req, res) => {}) and you're good to go.

In Go? There is no Express. You build it raw using the standard library (net/http). And Go completely flips the script on how you handle data. In Node, it's always (req, res) — request first, response second. In Go, the handler looks like this:

func handler(w http.ResponseWriter, r *http.Request)

Response first, request second. Why? Because Go treats the ResponseWriter as a literal tool you are handed to execute your job. The server says: "Here is your pen (w). Now look at the paperwork (r) and write your response back immediately."

I'm officially writing my first route to start the auth sequence (signup/login). It's raw, it's fast, and there's no framework magic hiding the fundamentals from me. We move! 💪🏾

To my devs who made the switch from JavaScript/Node.js to Go: what was the hardest habit you had to break? Let's gist in the comments 👇🏾

#Golang #NodeJS #BackendEngineering #API #SoftwareDevelopment #TechBro #TechInNigeria #WeMove
One missing index was slowing my app by 100x.

The query was clean. The logic was correct. Still… slow. MySQL was scanning 80,000 rows just to find one user by email.

One line fixed it:

CREATE INDEX idx_email ON users(email);

⚡ 4 seconds → 40 milliseconds

Indexing is one of those things developers skip early on, because everything works fine in development. It always does… when you only have 500 rows.

A few things that actually matter:
• Use EXPLAIN before touching any slow query
• Index columns used in WHERE, JOIN, and ORDER BY
• Don't over-index — it slows down writes
• Order matters in composite indexes: (country, city) ≠ (city, country)

I've started noticing this pattern a lot while working with real datasets. Most performance issues aren't bad code. They're systems that were never built to scale.

What's a small change that made a huge difference in something you built? 👇

#MySQL #BackendDevelopment #PHP #SystemDesign #WebDevelopment
#ChangeEvent #IfStatements #TryCatchFinally #FormData #AsyncAwait #CustomHooks

I've been heads-down in a full-stack project lately, built with #React, #TypeScript, #NodeJS, #ExpressJS, and #PostgreSQL. While building the upload component, I wanted to strip away the friction and make the experience feel automatic.

For this task, I implemented a handleChange function that triggers the entire upload process the moment a file is selected. No secondary "Submit" button needed.

I learned that managing file uploads requires a specific object called FormData. Since we can't just send a raw file object as plain text, FormData allows the browser to package the image in a way that the server can actually digest.

The benefit is a much tighter feedback loop. By combining the file selection and the API call into one step, the user sees "Uploading..." immediately. It makes the app feel responsive and modern.

The challenges I faced were around conditional logic. I had to make sure the code checked that a file exists and that its type is valid before proceeding. If a user tries to upload something that isn't a PNG or JPEG, the if statement blocks the process before a single byte is sent to the server.

I overcame them with async/await and a try/catch/finally block. By using await with my axios POST, I can pause the function until the server confirms success. Then I use a custom hook (refreshGallery from useGallery) to update the UI. Regardless of whether it succeeds or fails, the finally block ensures the loading state is reset so the user can try again.
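The flow described above can be sketched like this. All names here (buildUploadForm, handleChange's options, the upload URL) are illustrative, not the author's actual code, and it uses plain fetch plus the FormData global available in browsers and Node 18+ rather than axios:

```javascript
const ALLOWED_TYPES = ['image/png', 'image/jpeg'];

// Guard + packaging step: reject anything that isn't a PNG or JPEG
// before a single byte leaves the client.
function buildUploadForm(file) {
  if (!file || !ALLOWED_TYPES.includes(file.type)) return null;
  const form = new FormData();
  form.append('image', file); // the server reads this multipart field
  return form;
}

// Fires the moment a file is selected — no separate Submit button.
async function handleChange(event, { uploadUrl, refreshGallery, setLoading }) {
  const form = buildUploadForm(event.target.files?.[0]);
  if (!form) return; // missing or invalid file: stop here
  setLoading(true); // user sees "Uploading..." immediately
  try {
    await fetch(uploadUrl, { method: 'POST', body: form }); // wait for the server
    refreshGallery(); // e.g. a custom hook's refetch function
  } catch (err) {
    console.error('Upload failed', err);
  } finally {
    setLoading(false); // always reset, success or failure
  }
}
```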
Spent 3 hours last week debugging a Node.js API that was timing out under load. It was doing 7 things wrong at once. And I've seen the same 7 things in almost every backend I've audited. Here they are in case yours is quietly suffering too.

1. Blocking the event loop
Someone wrote a synchronous file parser inside a route handler. Every request waits for it. All of them. Simultaneously. Node is single-threaded by design, which means CPU-heavy work on the main thread freezes everything else. worker_threads exist for this. Use them.

2. New DB connection per request
Fine when 3 people use your API. When 300 hit it simultaneously, your database runs out of connections and starts refusing them. pg-pool or mysql2's built-in pool. 10 minutes to set up. Completely worth it.

3. The N+1 problem
You fetch 50 users, then loop through and fetch each profile individually. That's 51 queries where 1 would do. Under light traffic nobody notices. Under real load, your DB is on fire. Eager loading or a JOIN. Pick one.

4. No caching
I've seen APIs hit the database for dropdown data that changes once a month on every single request. Redis with a sensible TTL eliminates 60–70% of those calls instantly. Some endpoints don't need it. But some of yours do, and you probably haven't checked.

5. No gzip compression
app.use(require('compression')())
One line. Responses shrink 60–70%. I genuinely don't know why this isn't on by default.

6. Unhandled promise rejections
No error. No log. Just a process that crashes at 2am and nobody knows why until a customer complains. Add a global unhandledRejection handler. Not optional in production.

7. Running on one CPU core
Your server has 8 cores. Node uses 1 by default. PM2 cluster mode. One command. Took me longer to type this than to actually set it up.

None of these need a rewrite or a new framework. They just slip through when nobody's specifically looking for them. Check these before assuming you need bigger servers.

Which one are you guilty of?
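The global safety net for unhandled promise rejections takes only a few lines. A minimal sketch (wire in your real logger or error tracker where the comments indicate):

```javascript
// Last-resort handlers so failures are logged instead of vanishing.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled promise rejection:', reason);
  // Report to your error tracker here, then decide whether to exit.
});

process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  process.exit(1); // state may be corrupt; let PM2/systemd restart us
});
```

These handlers are for visibility, not recovery: the fix is still to catch rejections where they happen, but at least the 2am crash now leaves a log line behind.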
#NodeJS #JavaScript #BackendDevelopment #API #WebDevelopment #Programming #SoftwareEngineering #Angular