Building a browser-based strategy game is essentially a masterclass in frontend state management. Today's focus is on the "Siege of Eger" engine: creating a seamless, type-safe data pipeline from a Supabase backend to an Angular 21 frontend. 🏰

Here is a breakdown of today's architecture evolution:

🔹 Strict Full-Stack Type Safety (Zod)
When bridging PostgreSQL and TypeScript, data types like timestamptz can cause silent bugs if not handled correctly. By using Zod to parse the backend DTOs, the raw DB timestamp string is safely transformed into a JavaScript Date object before it ever touches the game logic. If the schema check fails, the app catches it immediately.

🔹 Reactive Fetching with httpResource
I migrated the data layer away from raw fetch Promises to Angular's native httpResource (available since v19). 💡 Why it's great: it automatically exposes .value(), .isLoading(), and .error() as Signals. This eliminates manual loading-state boilerplate, handles cleanup automatically, and makes polished UI transitions trivial to build.

🔹 The Client-Side Game Loop (NgRx SignalStore & RxJS)
To make resources "generate" in real time, you can't ping the database every second. 💡 The solution: the server remains the source of truth (saving a timestamp for offline progress), while the local NgRx SignalStore runs an RxJS interval to optimistically calculate the delta time and update the UI tick by tick.

Moving a codebase from a "working prototype" to a "scalable, reactive architecture" is where the real fun begins.

What is your go-to pattern for managing high-frequency, real-time state updates in modern frontend frameworks? Let me know below! 👇

#Angular #TypeScript #WebDevelopment #SoftwareArchitecture #RxJS #NgRx #Frontend #Fullstack #Coding #Programming
Viktor Berczeli’s Post
More Relevant Posts
🛑 𝗦𝘁𝗼𝗽 𝗰𝗮𝗹𝗰𝘂𝗹𝗮𝘁𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝗰𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝘀𝘁𝗮𝘁𝗲 𝗼𝗻 𝘁𝗵𝗲 𝗰𝗹𝗶𝗲𝗻𝘁 𝘀𝗶𝗱𝗲!

Today on my full-stack project, Siege of Eger (Angular 21 + NestJS + PostgreSQL), I started building the core "Daily Progression" game loop. With 31 in-game days to prepare for a siege, managing worker placements and resources is everything. Instead of jumping straight into writing API endpoints or UI components, I spent my time entirely in the 𝗦𝗵𝗮𝗿𝗲𝗱 𝗠𝗼𝗻𝗼𝗿𝗲𝗽𝗼 𝗪𝗼𝗿𝗸𝘀𝗽𝗮𝗰𝗲.

🧠 𝗛𝗲𝗿𝗲 𝗶𝘀 𝘁𝗵𝗲 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗮𝗹 𝗽𝗹𝗮𝘆: By defining my Game Phases and Task Enums with Zod in a shared library, I established a strict Single Source of Truth for the entire stack. What does this mean in practice?

1️⃣ 𝗙𝗮𝗶𝗹-𝗙𝗮𝘀𝘁 𝗕𝗮𝗰𝗸𝗲𝗻𝗱: My NestJS controllers use Zod validation pipes. If a bad payload comes in, it is rejected before it ever touches my business logic.

2️⃣ 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗦𝗲𝗻𝘀𝗲 𝗘𝘃𝗲𝗿𝘆𝘄𝗵𝗲𝗿𝗲: My Angular Signals know the exact shape of the API. No typos, no guessing TaskType.FORAGE vs TaskType.FORAGING.

3️⃣ 𝗦𝗲𝗿𝘃𝗲𝗿 𝗔𝘂𝘁𝗵𝗼𝗿𝗶𝘁𝘆: The client only sends actions. The NestJS server acts as the rule engine, calculating resource yields and preventing client-side state manipulation.

Building a vertical slice like this makes refactoring a breeze: if the database schema shifts, the shared types catch the errors at compile time across the whole repo. 🛡️

👇 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗺𝗼𝗿𝗲 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲𝗱: When building heavily stateful applications, where do you draw the line between optimistic client-side UI updates and strict server-side authority? Do you let the UI guess the result to feel snappy, or wait for the server's absolute truth? Let's debate below!

#SoftwareArchitecture #Angular #NestJS #TypeScript #Zod #WebDevelopment #GameDev #CleanCode #frontend #backend
🚀 NestJS Request Lifecycle — What Really Happens to Every Incoming Request?

If you're building APIs with NestJS, understanding the request lifecycle is critical for writing clean authentication, validation, logging, and error-handling logic.

📥 Incoming Request Flow

=> 1. Middleware
The first layer that executes. Used for logging, modifying request objects, parsing tokens, etc. Runs before guards.

=> 2. Guards
Determine whether the request should proceed. The best place for authentication and authorization logic. If a guard returns false, the request stops here.

=> 3. Interceptors (Before Handler)
Interceptors wrap around the route handler. They execute logic before the handler runs (e.g., logging, caching, performance tracking).

=> 4. Pipes
Pipes handle validation and transformation. This is where DTO validation (class-validator) and transformation (class-transformer) happen. If validation fails, an exception is thrown.

=> 5. Controller → Route Handler
Your actual business logic executes here: services are called, database operations run, data is processed.

📤 Outgoing Response Flow

=> 6. Interceptors (After Handler)
Interceptors can transform or format the response before it is sent back to the client. Example: wrapping responses in a standard API format.

=> 7. Exception Filters (If an Error Occurs)
If any error is thrown in the lifecycle, exception filters catch it and shape the final error response.

💡 Important Detail Developers Miss: interceptor logic executes in two phases, before the handler (request phase) and after the handler (response phase). This makes them extremely powerful for logging, caching, and response mapping.

🔥 Real-World Example: Request → Middleware logs request → Guard validates JWT → Pipe validates DTO → Controller processes logic → Interceptor formats response → Response sent.

Understanding this flow makes debugging easier, improves architecture decisions, and prevents mixing responsibilities. If you're serious about scalable backend systems, mastering the request lifecycle is non-negotiable.

Official docs: https://lnkd.in/gxfvSqyC

Are you using global guards and interceptors in your NestJS apps?

#nestjs #nodejs #backenddevelopment #javascript #softwareengineering #api #webdevelopment
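The two-phase interceptor point can be made concrete with a tiny framework-free sketch. NestJS actually implements this with RxJS Observables; this plain-function version (all names invented) just shows the same wrapping shape:

```typescript
// A handler stands in for the controller's route handler.
type Handler = () => unknown;

const log: string[] = [];

// The interceptor is invoked once, but its logic runs in two phases:
// before the handler (request phase) and after it (response phase).
function loggingInterceptor(next: Handler): unknown {
  log.push('before handler');   // request phase: logging, cache lookup...
  const result = next();        // the route handler executes here
  log.push('after handler');    // response phase: mapping, timing...
  return { data: result };      // e.g. wrap in a standard API envelope
}

// Hypothetical controller logic.
const getSiegeStatus: Handler = () => {
  log.push('handler');
  return 42;
};

const response = loggingInterceptor(getSiegeStatus);
```

Running this pushes 'before handler', 'handler', 'after handler' in that order, which is exactly why one interceptor can do both request logging and response formatting.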
Why your Database and your Frontend shouldn't always speak the same language. 🛠️

I spent today "fighting" with types, and it was a battle worth having. In my current project, I'm moving away from a "flat" data structure to a more sophisticated, nested architecture.

Here's the conflict:
🔹 The Database (Supabase/Postgres): loves being flat. It's efficient, easy to query, and standard.
🔹 The Frontend (Angular + NgRx SignalStore): loves being nested. Grouping data into objects like status, resources, and military makes the business logic much cleaner and the state easier to manage.

The Solution? The "Zod Bridge." 🌉
Instead of doing a massive migration on the database or cluttering my Angular components with mapping logic, I'm using Zod's .transform() capability in my shared library.

The Education Part (why this is a "Tidy Home" move for code):
1. Decoupling: my frontend is now "infrastructure agnostic." If I change my DB column name from gold_count to currency_total, I only change it in one Zod schema. My components never even notice.
2. Type Safety at the Edge: by using Zod to transform the data the moment it leaves the API, I ensure that my SignalStore is always dealing with a nested type that is 100% validated. No more undefined errors mid-game.
3. Clean Developer Experience: writing state.resources.gold is far more intuitive as the game grows than juggling 50 flat variables at the top level.

Engineering isn't just about making things work; it's about making things maintainable. Sometimes that means doing a bit of "extra" work today to save a hundred hours of refactoring tomorrow.

Back to the types! ⌨️🛡️

#Angular #Zod #WebDevelopment #SoftwareArchitecture #NestJS #StateManagement #CodingJourney #Javascript #Frontend
𝐌𝐕𝐂 𝐢𝐧 𝐄𝐱𝐩𝐫𝐞𝐬𝐬.𝐣𝐬

Today I finally got clarity on something that confused me for a long time: MVC architecture in backend development. Let me break it down in a way that actually makes sense 👇

𝑾𝒉𝒂𝒕 𝒊𝒔 𝑴𝑽𝑪?
MVC stands for:
- Model: handles data (database, structure)
- View: what the user sees (EJS, HTML)
- Controller: the brain 🧠 (logic + connection)

🔥 𝑹𝒆𝒂𝒍 𝑬𝒙𝒂𝒎𝒑𝒍𝒆 (𝑬𝒙𝒑𝒓𝒆𝒔𝒔.𝒋𝒔)
When a user opens a page:
1️⃣ The route receives the request
2️⃣ The route calls the Controller
3️⃣ The Controller processes data (Model)
4️⃣ The Controller sends data to the View (EJS)
5️⃣ The View displays it to the user

𝑾𝒉𝒚 𝒅𝒐𝒆𝒔 𝑴𝑽𝑪 𝒎𝒂𝒕𝒕𝒆𝒓?
✔ Better code structure
✔ Easy debugging
✔ Scalable projects
✔ Industry standard

🔥 𝑲𝒆𝒚 𝑳𝒆𝒂𝒓𝒏𝒊𝒏𝒈 (𝑮𝒂𝒎𝒆 𝒄𝒉𝒂𝒏𝒈𝒆𝒓 𝒇𝒐𝒓 𝒎𝒆)
Before:
❌ I was writing everything inside routes
❌ Code was messy and hard to manage
Now:
✅ Routes → only handle URLs
✅ Controllers → handle logic
✅ Views → handle UI
👉 Everything is clean, scalable, and easy to debug

𝑴𝒚 𝒕𝒂𝒌𝒆𝒂𝒘𝒂𝒚
Don't mix everything in one place. Separate responsibilities; your future self will thank you.

#MVC #ExpressJS #NodeJS #BackendDevelopment #WebDevelopment
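The five-step flow above can be sketched framework-free in TypeScript; in a real app the Express route would simply call the controller (all names here are illustrative):

```typescript
// Model: owns the data (in a real app, this wraps the database).
const todos = ['learn MVC', 'refactor routes'];
const TodoModel = {
  findAll: (): string[] => [...todos],
};

// View: turns data into what the user sees (EJS would do this in Express).
const renderTodoList = (items: string[]): string =>
  `<ul>${items.map((t) => `<li>${t}</li>`).join('')}</ul>`;

// Controller: the "brain" that connects Model to View.
function listTodosController(): string {
  const items = TodoModel.findAll(); // 3️⃣ controller asks the model
  return renderTodoList(items);      // 4️⃣ controller hands data to the view
}

// The route layer would only do:
//   app.get('/todos', (req, res) => res.send(listTodosController()));
const html = listTodosController();
```

Notice the route never touches the data or the markup; that separation is the whole point.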
🛑 𝗦𝘁𝗼𝗽 𝘂𝘀𝗶𝗻𝗴 𝗜𝗻𝘁𝗲𝗿𝗳𝗮𝗰𝗲𝘀 𝗳𝗼𝗿 𝘆𝗼𝘂𝗿 𝗡𝗲𝘀𝘁𝗝𝗦 𝗗𝗧𝗢𝘀!

Tonight, while building the backend for my full-stack project, Siege of Eger ⚔️, I hit a classic TypeScript architectural crossroads: defining my Data Transfer Objects (DTOs) for the daily progression game loop.

If you come from pure frontend TypeScript, your first instinct is usually to reach for an interface. It's lightweight and clean, right? Here is why that completely breaks your NestJS API boundary:

👻 𝗜𝗻𝘁𝗲𝗿𝗳𝗮𝗰𝗲𝘀 𝗮𝗿𝗲 𝗴𝗵𝗼𝘀𝘁𝘀. When TypeScript compiles down to JavaScript, interfaces are completely stripped away. They simply do not exist at runtime.

🧱 𝗖𝗹𝗮𝘀𝘀𝗲𝘀 𝗮𝗿𝗲 𝗯𝗿𝗶𝗰𝗸𝘀. Classes survive the TS → JS compilation process. They remain tangible objects in the compiled code.

𝗪𝗵𝘆 𝗱𝗼𝗲𝘀 𝗡𝗲𝘀𝘁𝗝𝗦 𝗰𝗮𝗿𝗲? 🧠 NestJS relies heavily on runtime 𝗥𝗲𝗳𝗹𝗲𝗰𝘁𝗶𝗼𝗻 and 𝗠𝗲𝘁𝗮𝗱𝗮𝘁𝗮. When a payload hits your endpoint, NestJS uses your DTO class to figure out exactly what shape the incoming data should have before passing it to your validation pipes. If you use an interface, NestJS looks for the metadata at runtime, finds absolutely nothing, and your validation is silently bypassed!

In Siege of Eger, my architectural play is to define a class that implements my shared Zod schemas. This gives me strict compile-time checks in my shared monorepo AND bulletproof runtime validation in my NestJS controllers. 🛡️

👇 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗡𝗲𝘀𝘁𝗝𝗦 𝘃𝗲𝘁𝗲𝗿𝗮𝗻𝘀: Did you learn this the hard way when you first picked up the framework? How many hours did you spend debugging silent validation failures before realizing your interface was a ghost? Let's swap war stories in the comments!

#NestJS #TypeScript #WebDevelopment #SoftwareEngineering #BackendArchitecture #BuildInPublic #Frontend #Backend
Understanding Redux Architecture: How State Management Works in React

Redux is a powerful state management library widely used in modern frontend applications, especially with React. It provides a predictable state container for JavaScript applications.

Here's a simple breakdown of the Redux architecture flow:

1. React Components (UI Layer)
- Components display data and interact with users.
- They can dispatch actions to request state changes and read state from the Redux store.

2. Actions
- Actions are plain JavaScript objects that describe what happened in the application (e.g., ADD_TODO, DELETE_TODO, UPDATE_TODO).
- Each action contains a type (describing the action) and an optional payload (data).

3. Dispatch
- The dispatch() function sends actions to the Redux store. It is the trigger for state updates.

4. Middleware
- Middleware sits between dispatch and reducers and handles tasks like API calls, logging, and async operations.
- Common middleware: Redux Thunk, Redux Saga, Logger.

5. Reducers
- Reducers are pure functions that determine how the state should change.
- They take the current state plus an action, return a new updated state, and never mutate the existing state.

6. Redux Store
- The store is the single source of truth. It holds the entire application state and provides methods like getState(), dispatch(), and subscribe().

7. State Update & UI Re-render
- When reducers return a new state, the Redux store updates, React components receive the updated state, and the UI re-renders automatically.

8. Redux DevTools
- A powerful debugging tool that lets developers inspect state, track actions, and time-travel through the state history.

#Redux #ReactJS #FrontendDevelopment #JavaScript #StateManagement #WebDevelopment #SoftwareEngineering #ReduxToolkit #FrontendArchitecture #Coding
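The dispatch → reducer → store → re-render loop can be demonstrated with a miniature, dependency-free store. This is a sketch of the same shape, not the real Redux implementation:

```typescript
type Todo = { id: number; text: string };
type State = { todos: Todo[] };
type Action =
  | { type: 'ADD_TODO'; payload: Todo }
  | { type: 'DELETE_TODO'; payload: number };

// Reducer: pure function, never mutates, always returns new state.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'ADD_TODO':
      return { todos: [...state.todos, action.payload] };
    case 'DELETE_TODO':
      return { todos: state.todos.filter((t) => t.id !== action.payload) };
    default:
      return state;
  }
}

// Store: single source of truth with getState / dispatch / subscribe.
function createStore(initial: State) {
  let state = initial;
  const listeners: Array<() => void> = [];
  return {
    getState: () => state,
    dispatch: (action: Action) => {
      state = reducer(state, action);
      listeners.forEach((l) => l()); // the UI layer re-renders here
    },
    subscribe: (l: () => void) => listeners.push(l),
  };
}

const store = createStore({ todos: [] });
store.dispatch({ type: 'ADD_TODO', payload: { id: 1, text: 'learn Redux' } });
```

Real Redux adds middleware between `dispatch` and `reducer`, but the core loop is exactly this.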
After weeks of building, I'm excited to share CodeReview: a self-hosted, real-time collaborative code review platform built from scratch.

What it does: developers can submit code, get instant automated analysis, leave line-by-line comments, and receive live notifications. No polling, no page refreshes.

The architecture I'm most proud of:
- The backend runs on 5 independent Go microservices (User, Review, Analysis, Notification, and an API Gateway), all communicating asynchronously via RabbitMQ.
- When a review is submitted, an event cascades through the pipeline: triggering static analysis (10+ rules, including hardcoded-secret detection), persisting results, and broadcasting a live update over WebSocket.
- Service-to-service communication happens over gRPC, with the API Gateway as the single HTTP/WebSocket entry point.
- The frontend is a Vue.js 3 + TypeScript SPA with Pinia, Tailwind CSS v4, and auto-reconnecting WebSocket support.

A few deliberate constraints I imposed on myself:
- No ORM: raw SQL throughout
- No generated gRPC stubs: manual Protobuf definitions
- Stateless JWT auth across all services
- Full Docker Compose setup for MySQL + RabbitMQ

This was a genuine deep-dive into distributed systems design, real-time communication, and Go microservices patterns. Building something end-to-end, from auth to event-driven pipelines to a live UI, taught me more than any tutorial could.

Check it out: https://lnkd.in/g3SHmfQB

#GoLang #Microservices #VueJS #RabbitMQ #gRPC #WebSocket #DistributedSystems #BackendDevelopment
⚡ Optimizing React Performance with useMemo

In React, every component re-render re-executes all the calculations inside the component. This works fine for simple logic, but it becomes inefficient when dealing with expensive computations, large datasets, or complex filtering operations.

The useMemo hook solves this by memoizing the result of a computation, ensuring the calculation only runs when its dependencies change. Without useMemo, expensive operations execute on every render, leading to unnecessary CPU work and slower UI updates. With useMemo, React caches the computed value and reuses it until the dependent data changes.

This optimization is especially useful for:
• Filtering or sorting large datasets
• Heavy data transformations
• Performance-sensitive components

Used correctly, useMemo can significantly reduce redundant calculations and improve rendering performance, helping you build more efficient and scalable React applications.

📚 Reference: concepts inspired by tutorials from 👉 Sheryians Coding School

#React #ReactJS #JavaScript #FrontendDevelopment #WebPerformance
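The caching contract useMemo provides can be mimicked in plain TypeScript to show exactly when recomputation happens. This illustrates the idea (React compares dependencies with Object.is across renders); it is not React's actual implementation:

```typescript
let computeCount = 0;

// The "expensive" work whose executions we want to count.
const expensiveFilter = (items: number[], min: number): number[] => {
  computeCount++;
  return items.filter((n) => n >= min);
};

// A single memo slot: re-run the factory only when a dependency differs.
function createMemo<T>() {
  let lastDeps: unknown[] | undefined;
  let lastValue!: T;
  return (factory: () => T, deps: unknown[]): T => {
    const depsChanged =
      lastDeps === undefined ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps![i]));
    if (depsChanged) {
      lastValue = factory();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const memo = createMemo<number[]>();
const data = [1, 5, 10];

memo(() => expensiveFilter(data, 5), [data, 5]); // "render" 1: computes
memo(() => expensiveFilter(data, 5), [data, 5]); // "render" 2: same deps, cached
const filtered = memo(() => expensiveFilter(data, 8), [data, 8]); // dep changed: recomputes
```

Across three simulated renders the filter runs only twice; in a component, `useMemo(() => expensiveFilter(data, min), [data, min])` gives you the same behavior.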
𝐀𝐫𝐞 𝐲𝐨𝐮 𝐬𝐭𝐢𝐥𝐥 𝐥𝐞𝐭𝐭𝐢𝐧𝐠 𝐓𝐲𝐩𝐞𝐒𝐜𝐫𝐢𝐩𝐭 𝐝𝐞𝐟𝐚𝐮𝐥𝐭 𝐭𝐨 `any` 𝐰𝐡𝐞𝐧 𝐚𝐜𝐜𝐞𝐬𝐬𝐢𝐧𝐠 𝐨𝐛𝐣𝐞𝐜𝐭 𝐩𝐫𝐨𝐩𝐞𝐫𝐭𝐢𝐞𝐬 𝐝𝐲𝐧𝐚𝐦𝐢𝐜𝐚𝐥𝐥𝐲? 𝐓𝐡𝐞𝐫𝐞'𝐬 𝐚 𝐛𝐞𝐭𝐭𝐞𝐫 𝐰𝐚𝐲 𝐭𝐨 𝐬𝐭𝐚𝐲 𝐭𝐲𝐩𝐞-𝐬𝐚𝐟𝐞.

One common challenge in TS is creating generic functions that can access properties of an object without sacrificing compile-time type safety. Many resort to `any` or complex overloads, losing the benefits of TypeScript. The trick is combining Generics, `keyof`, and `extends` to tell the compiler exactly what to expect.

Here's a simple pattern for a type-safe `getProperty` function:

```typescript
function getProperty<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

interface User {
  id: number;
  name: string;
  email: string;
}

const user: User = { id: 1, name: 'Alice', email: 'alice@example.com' };

const userName = getProperty(user, 'name'); // Type of userName is 'string' - inferred correctly!
const userId = getProperty(user, 'id');     // Type of userId is 'number' - perfect!

// getProperty(user, 'address'); // Compiler error! 'address' is not assignable
// to 'keyof User' - exactly what we want.
```

This pattern ensures that `key` is always a valid property of `T`, and the return type is correctly inferred as `T[K]`. No more runtime surprises or `any` casts! It's clean, reusable, and powerfully type-safe.

What's your go-to TypeScript trick for maintaining type safety with dynamic data? Share in the comments!

#TypeScript #FrontendDevelopment #SoftwareEngineering #React #WebDev
This TypeScript pattern eliminates an entire class of runtime errors.

One TypeScript pattern that changed how I write every API response and async state: discriminated unions.

Most developers write this:

❌ Common mistake

```typescript
type ApiResponse = {
  status: string
  data?: unknown
  error?: string
  loading?: boolean
}
```

This allows impossible states. A response with both data AND error? TypeScript allows it. A response whose status is the typo 'sucess'? TypeScript allows it. Runtime surprise: guaranteed.

✅ Do this instead

```typescript
type ApiResponse<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: string }
```

Now, when you write:

```typescript
if (response.status === 'success') {
  // TypeScript KNOWS data exists
  console.log(response.data)
}
```

The compiler catches every unhandled case. No runtime surprises. No optional chaining everywhere.

I use this pattern for:
→ Every API response shape (Node.js backend to React frontend)
→ Every LLM output type
→ Every async state in React components
→ Every form state machine

It works end-to-end across your full TypeScript stack: the same type shape from your Node.js service to your React component.

Save this post. You will use it this week.

Which TypeScript pattern do you find most underused? Best answer gets covered next Sunday.

#TypeScript #WebDevelopment #React #NodeJS #CleanCode #FullStackDevelopment #SoftwareEngineering #JavaScript #ProgrammingTips #DeveloperTips
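A natural companion to discriminated unions is an exhaustive switch over the discriminant: the `never` check below becomes a compile-time error the moment a new status is added to the union but not handled (function and status names here are illustrative):

```typescript
type ApiResponse<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: string };

function describeResponse<T>(res: ApiResponse<T>): string {
  switch (res.status) {
    case 'idle':
      return 'waiting';
    case 'loading':
      return 'spinner';
    case 'success':
      return `got ${JSON.stringify(res.data)}`; // data is known to exist here
    case 'error':
      return `failed: ${res.error}`;            // error is known to exist here
    default: {
      // Exhaustiveness guard: if a case is ever missing, res is not
      // narrowed to never and this assignment fails to compile.
      const unreachable: never = res;
      return unreachable;
    }
  }
}

const msg = describeResponse<number[]>({ status: 'success', data: [1, 2, 3] });
```

This turns "I forgot to handle a state" from a runtime bug into a build failure.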