I just published my first npm package: Arc, a web framework for Node.js built from the ground up in TypeScript. Instead of reaching for Express, I wanted to see if I could build the whole engine myself using only the native http module and zero external dependencies.

I spent most of my time on the core logic: a router that uses regex to handle dynamic paths like /user/:id, and a recursive next function so middleware runs in the right order. Since I wanted it to be a complete tool, I also built my own parsers for JSON, URL-encoded data, and cookies, plus built-in logic for JWT auth, CORS, rate limiting, and a static file server.

Building this taught me far more about how a backend actually handles data flow than using a pre-made library ever did.

I'm still adding to it. Next up: request validation, dependency injection, and cluster support to make it even faster. If you're into low-level Node.js or want to help build out the engine, you're more than welcome to contribute. If you want to see how the guts of a framework actually work, check out the code or try it out:

NPM: https://lnkd.in/dPkTEjYZ
GitHub: https://lnkd.in/dfmwM_Cb

#NodeJS #TypeScript #OpenSource #Backend #Coding
Introducing Arc: A Node.js Web Framework Built from Scratch with TypeScript
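The two core pieces the post describes (a regex router for dynamic paths like /user/:id, and a recursive next function for middleware ordering) can be sketched in a few lines. This is an illustrative sketch under my own assumptions, not Arc's actual source; all names here are hypothetical:

```javascript
// Compile a pattern like "/user/:id" into a regex, remembering param names.
function compile(pattern) {
  const names = [];
  const source = pattern.replace(/:([^/]+)/g, (_, name) => {
    names.push(name);
    return "([^/]+)"; // each :param becomes one capture group
  });
  return { regex: new RegExp(`^${source}$`), names };
}

// Match a concrete path against a pattern; return extracted params or null.
function match(pattern, path) {
  const { regex, names } = compile(pattern);
  const m = regex.exec(path);
  if (!m) return null;
  const params = {};
  names.forEach((name, i) => { params[name] = m[i + 1]; });
  return params;
}

// Recursive next(): each middleware decides when to hand off to the next one.
function run(middlewares, ctx) {
  function next(i) {
    if (i < middlewares.length) middlewares[i](ctx, () => next(i + 1));
  }
  next(0);
}

const ctx = { log: [] };
run([
  (c, next) => { c.log.push("auth"); next(); },     // calls next -> continues
  (c, next) => { c.log.push("handler"); },          // never calls next -> chain stops
], ctx);
```

The recursion is what makes ordering reliable: middleware i only triggers middleware i+1 when it explicitly calls next(), so a handler that skips next() short-circuits the chain.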
Most developers still think of Bun as “just a faster Node.js.” That massively undersells it. Bun is quietly becoming one of the most practical tools in the JavaScript ecosystem because it doesn’t just improve speed; it collapses your stack. One tool can handle:

- Runtime
- Package manager
- Bundler
- Test runner

And the real value starts when you go beyond “bun install”. What stands out to me:

Full-stack server setup is ridiculously simple. You can serve APIs, HTML, images, and dynamic content without stitching together half the ecosystem.

It removes a lot of unnecessary tooling. TypeScript transpiling, bundling, hot reload, WebSockets, file parsing, and even SQLite support feel much more native.

Backend workflows are actually pleasant. Built-in support for Redis, SQL, S3-compatible storage, cookies, UUIDs, and fetch means you can move from idea to working backend very quickly.

It’s surprisingly low-level when needed. TCP, UDP, DNS, FFI, and even runtime C compilation give Bun a range that most JavaScript tools don’t even try to offer.

Shipping is simpler. Fast builds, standalone binaries, and a built-in test runner make it feel more complete than most “modern JS” setups.

Big takeaway: Bun is not interesting just because it’s faster. It’s interesting because it reduces the number of decisions you need to make to build and ship something real. That’s a much bigger advantage than benchmarks.

#Bun #JavaScript #TypeScript #WebDevelopment #BackendDevelopment
Why my API was slow (and what actually fixed it)

I recently noticed one of my APIs was taking way too long to respond — sometimes 3–4 seconds per request. At first, I thought it was just my code being “messy,” but digging deeper taught me a lot. Here’s what I found:

- Too many unnecessary DB calls – I was fetching the same data multiple times instead of reusing it.
- Unoptimized queries – Some queries were scanning entire collections instead of using indexes.
- Synchronous loops – I was waiting for each call to finish one by one, instead of running them in parallel.

After making a few changes:

- Added proper indexes
- Used Promise.all for parallel DB calls
- Cached repeated data where possible

Response time went from 3–4 seconds → under 300ms.

The biggest takeaway? Sometimes it’s not your code logic, it’s how your code talks to the database and handles tasks. Small adjustments can make a huge difference.

#FullStackDeveloper #WebDevelopment #APIDevelopment #BackendDevelopment #NestJS #NextJS #JavaScript #PerformanceOptimization #SoftwareDevelopment
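The sequential-vs-parallel point above is the one that usually buys the biggest win. A minimal sketch (the three lookups are hypothetical stubs standing in for real DB calls):

```javascript
// Hypothetical slow lookups, stubbed with timers for illustration.
const delay = (ms, value) => new Promise(res => setTimeout(() => res(value), ms));
const getUser    = id => delay(100, { id, name: "Ada" });
const getOrders  = id => delay(100, ["o1", "o2"]);
const getBalance = id => delay(100, 42);

// Sequential: roughly 300ms total, because each await blocks the next call.
async function sequential(id) {
  const user = await getUser(id);
  const orders = await getOrders(id);
  const balance = await getBalance(id);
  return { user, orders, balance };
}

// Parallel: roughly 100ms total, because all three calls start immediately
// and Promise.all just waits for the slowest one.
async function parallel(id) {
  const [user, orders, balance] = await Promise.all([
    getUser(id),
    getOrders(id),
    getBalance(id),
  ]);
  return { user, orders, balance };
}
```

The caveat: only go parallel when the calls are independent. If the second query needs the first query's result, the sequential awaits are not a bug, they are the data dependency.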
#Day22

Yesterday, my code learned how to talk to my computer. Today, it learned how to flow.

Working with streams in Node.js changed how I think about handling data. Before this, reading a file meant loading everything into memory at once. Simple… until the file isn’t small anymore. Then I discovered streams.

=> ReadStream: Instead of swallowing the whole file, it reads in chunks. Like sipping, not gulping.
=> WriteStream: Outputs data piece by piece, perfect for logs or large files.
=> Pipe: This one clicked instantly. Connect a read stream to a write stream, and data just flows automatically. No manual handling, no stress.

It feels less like executing code… and more like building a system where data moves.

The biggest shift? I’m no longer thinking in terms of “files”, I’m thinking in terms of flow and efficiency. Small change in concept. Massive difference in scalability. Same language. Smarter systems.

#NodeJS #JavaScript #BackendDevelopment #LearningToCode #M4ACELearningChallenge
I removed Express from my Node.js project. Then removed the http module too. Built everything from raw TCP and finally understood what was actually happening.

Three things that clicked:

→ request.body is a stream, not a property. Node reads your request in chunks. That’s why even very large uploads don’t crash your server.
→ GET, POST, and PUT are not interchangeable. Send the same POST twice, and two records get created. Send the same PUT twice, and nothing changes. That difference has a name: idempotency.
→ Postman is just a GUI. Every button maps to three things: method, headers, and body.

Wrote a 4-part breakdown. Read the full article here: https://lnkd.in/dWQRp7Ta

#NodeJS #JavaScript #BackendDevelopment #medium
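The idempotency point is easy to see with a toy in-memory store (purely illustrative; the functions mimic what POST and PUT handlers typically do against a database):

```javascript
// A toy in-memory store to illustrate idempotency.
const records = new Map();
let nextId = 1;

// POST semantics: always create a new record. Repeating it changes state
// again every time — not idempotent.
function post(data) {
  const id = String(nextId++);
  records.set(id, data);
  return id;
}

// PUT semantics: upsert at a client-known id. Repeating it leaves the store
// in exactly the same state — idempotent.
function put(id, data) {
  records.set(id, data);
}

post({ name: "Ada" });
post({ name: "Ada" });       // second POST: a second, duplicate record
put("x", { name: "Ada" });
put("x", { name: "Ada" });   // second PUT: still one record at "x"
```

This is why retrying a failed PUT is safe, while retrying a failed POST risks duplicates unless you add your own idempotency keys.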
🎥 **New Video: HTTP Parser in Node.js — Technical Deep-Dive**

I just published a deep dive I did with Paolo Insogna (Platformatic) and Artem Zakharchenko (creator of Mock Service Worker/MSW) on how HTTP parsing actually works in Node.js. Artem was researching socket-level interception for MSW v3 and ended up going down a rabbit hole that led to this conversation. Fair warning: the answers don't get simpler the deeper you go.

**What we covered:**

🔹 The original `http-parser` C library — "totally unmaintainable" according to Paolo. Any change meant a recompile. Rigid, static, impossible to configure.
🔹 The migration to llhttp — a state-machine-based, generated parser that's configurable and 2x faster (1.5M → 3M req/s).
🔹 The llhttp build pipeline: TypeScript → llparse (code generator) → C code → compiled binary. Tests in Markdown.
🔹 Parser pools — Node.js pre-allocates 1000 parsers and keeps them in a free list instead of GC-ing them. Why? Minimizing C++ ↔ JavaScript boundary crossing. Is it still needed with modern GC? *"I have no real hard data on that."*
🔹 Why Undici migrated from Node.js bindings to llhttp as WebAssembly — WASM boundary crossing is faster, and it's portable across runtimes.
🔹 Why you should NEVER use `process.binding('http_parser')` — no TypeScript types, no public API, changes without notice. Just use Undici.
🔹 And yes, we explain why internal symbols have a `k` prefix (Hungarian notation, from the Windows API of the 90s; nobody knows why it stuck, it just did).

Watch the full video 👇
👉 https://lnkd.in/gk7AMNYU

I loved making this video; it's exactly the kind of deep dive I enjoy most. Hopefully it makes you appreciate just how much work goes into making `fetch()` feel effortless.

What do you want us to cover next?
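The free-list idea behind those parser pools is a general pattern worth knowing. A generic sketch of it (this illustrates the pattern only; it is not Node's internal implementation, and all names are mine):

```javascript
// Generic object pool with a free list: reuse instances instead of letting
// the garbage collector reclaim and reallocate them on every request.
class Pool {
  constructor(factory, size) {
    this.factory = factory;
    // Pre-allocate up front, like Node's block of parsers.
    this.free = Array.from({ length: size }, factory);
  }
  acquire() {
    // Pop from the free list; only allocate fresh when the list is empty.
    return this.free.pop() ?? this.factory();
  }
  release(obj) {
    obj.reset?.();        // let the object clear its per-use state
    this.free.push(obj);  // back onto the free list, not into the GC
  }
}

const pool = new Pool(
  () => ({ state: null, reset() { this.state = null; } }),
  3
);

const p1 = pool.acquire();
p1.state = "parsing request 1";
pool.release(p1);
const p2 = pool.acquire(); // same object, recycled with clean state
```

The payoff is exactly what the video describes: the expensive step (allocation plus crossing into native code to set an object up) happens once per pooled object, not once per request.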
I just shipped a CLI tool that saves developers hours of setup time. It's called better-ts-stack.

Run one command. Answer a few prompts. Get a production-ready TypeScript project — fully wired.

Here's what it scaffolds for you:

→ Backend API with Express + TypeScript
→ Full-stack app with Next.js 16 + React 19
→ Database: PostgreSQL or MongoDB
→ ORM: Prisma, Drizzle, or Mongoose
→ Auth: JWT (backend) or Better Auth (full-stack)
→ Docker setup, ESLint, Prettier, .env — all included

No more copy-pasting boilerplate from old projects. No more spending the first 2 hours of a new project just "setting things up."

Just run:

npx better-ts-stack

And you're building within minutes. NestJS support is coming next.

Would you use a CLI like this in your workflow? 👇

Live link: https://lnkd.in/ddfFKZ_R
Github: https://lnkd.in/d4c9ZetQ

#BuildInPublic #TypeScript #NextJS #FullStackDeveloper #OpenSource
Following a discussion yesterday on the Frankenstein pattern, this is how I’ve been approaching it in practice.

Develop your features outside your system as a self-contained environment: easy to work with, easy to reason about. Then embed them back in, with risks contained and testing far less complex.

This approach was initially built for PHP-based systems, where adding a feature often means navigating templates, plugins, observers, and side effects. But something interesting happens when the feature is truly isolated: the same feature can be compiled differently.

- As an embeddable bundle for PHP platforms
- As a native component for modern environments like Next.js

Same logic. Same behaviour. Different packaging. You are no longer building for a system. You are building around it.

A concrete setup can look like this:

- React / Vite / TypeScript for isolated frontend capabilities
- Cloudflare Workers to cache and stabilise the GraphQL layer
- API backends in PHP, Node, Python, or Go, depending on context

If the feature fails, the system continues.

For me, this is where the Strangler pattern becomes tangible. Not a rewrite. Not a migration plan. A series of isolated capabilities that gradually take responsibility.

Curious how others are addressing this in practice, especially in larger organisations where the pressure to add “just one more feature” to the core is constant.
🧩 Demystifying the Node.js Event Loop: It's Not Just One Thread!

Ever wondered what actually happens when you call setTimeout(() => {}, 1000)? Most people say "Node is single-threaded," but that’s only half the story. Here is a breakdown of how Node.js orchestrates asynchronous magic using libuv:

1. The Handoff (Main Thread)
When you set a timer, the main JS thread (V8) doesn't wait. It registers the callback and duration with libuv and moves on to the next line of code.

2. The Engine Room (libuv)
This is where the heavy lifting happens. libuv maintains a min-heap — a highly efficient data structure that sorts timers by their expiration time. It puts the thread to "sleep" using OS-level polling (like epoll or kqueue) until the nearest timer is ready.

3. The Queue & The Tick
Once the time arrives, libuv moves your callback into the callback queue. But it doesn't run yet! The event loop must cycle back to the "timers phase" to pick it up.

⚠️ The "Golden Rule" of Node.js: don't block the loop. If you run a heavy synchronous operation (like a massive while loop), the event loop gets stuck. Even if your timer has expired in the background, the main thread is too busy to check the queue. This is why a setTimeout(cb, 100) might actually take 5 seconds to fire if your main thread is congested.

Key takeaway: Node.js is fast because it offloads waiting to the OS via libuv, keeping the main thread free for execution. Keep your synchronous tasks light, and let the loop do its job! 🌀

#NodeJS #WebDevelopment #SoftwareEngineering #Backend #Javascript #ProgrammingTips
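The min-heap in step 2 is the key data structure: it keeps the *nearest* deadline readable in O(1) and insertion in O(log n), which is why libuv can sleep until exactly the right moment. A minimal sketch of the idea (illustrative only, not libuv's actual C code):

```javascript
// Minimal binary min-heap keyed by expiry time, the same idea libuv uses
// to always know which timer fires next.
class TimerHeap {
  constructor() { this.items = []; }

  push(expiry, cb) {
    // Append, then "sift up" until the parent's expiry is earlier.
    this.items.push({ expiry, cb });
    let i = this.items.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.items[parent].expiry <= this.items[i].expiry) break;
      [this.items[parent], this.items[i]] = [this.items[i], this.items[parent]];
      i = parent;
    }
  }

  // The root is always the nearest deadline; reading it costs O(1).
  peek() { return this.items[0]; }
}

const heap = new TimerHeap();
heap.push(5000, () => {});
heap.push(100, () => {});
heap.push(1000, () => {});
// heap.peek().expiry is 100: the loop knows it can sleep for at most 100ms.
```

This is also why the "golden rule" matters: the heap can tell the loop a timer is due, but nothing can pop and run it until the main thread returns to the timers phase.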
🚀 Day 11 of 21 Days of Explaining Tech

Ever wondered how websites and apps actually communicate behind the scenes? It all comes down to one powerful, lightweight format: JSON (JavaScript Object Notation). From logging in 🔐 to scrolling through posts 📱, JSON is constantly working in the background, enabling smooth data exchange between clients and servers.

💡 Why JSON matters:
• Simple and human-readable
• Lightweight and fast
• Industry standard for APIs and web communication

Think of JSON as a neatly organized data box 📦 where information is stored in key-value pairs — clean, structured, and efficient.

🎥 Check out the quick 60-second breakdown here: https://lnkd.in/dvQrhfPQ

If you're starting your journey in web development, understanding JSON is not optional — it’s foundational.

#WebDevelopment #JSON #JavaScript #APIs #Coding #TechExplained #Developers #LearnToCode #FullStack #21DaysOfCode #nikhil
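The "data box" round trip described above is two standard-library calls. A tiny example (the user object is made up for illustration):

```javascript
// A "data box" of key-value pairs, like a client or server would exchange.
const user = { name: "Ada", loggedIn: true, posts: [1, 2, 3] };

// Serialize: object -> string, the form that travels over the network.
const wire = JSON.stringify(user);

// Parse: string -> object again on the receiving side.
const parsed = JSON.parse(wire);
```

Everything in between (HTTP headers, transport, encoding) varies, but nearly every API call a web app makes reduces to this stringify-on-one-side, parse-on-the-other handshake.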
Subject - Common mistake while using fetch in JavaScript

Many beginners (including me) try to do this:

const data = await fetch('https://lnkd.in/gNBBq58S');
const json = await JSON.stringify(data);

🚫 This is wrong because fetch() returns a Response object, not the actual JSON data; JSON.stringify(data) just serializes the Response wrapper rather than reading its body.

✅ Correct approach:

const data = await fetch('https://lnkd.in/gNBBq58S');
const json = await data.json();

✔️ Lesson: Always use .json() to extract data from the response. Small mistake, but important for real-world projects.

#javascript #webdevelopment #frontend #coding #learninpublic