𝐏𝐚𝐭𝐭𝐞𝐫𝐧𝐬 𝐨𝐟 𝐅𝐚𝐬𝐭 𝐂𝐨𝐝𝐞: 𝐁𝐞𝐲𝐨𝐧𝐝 𝐉𝐮𝐬𝐭 𝐖𝐫𝐢𝐭𝐢𝐧𝐠 𝐂𝐨𝐝𝐞 𝐓𝐡𝐚𝐭 𝐖𝐨𝐫𝐤𝐬

As an architect, I have learned that anyone can write code that runs, but not everyone writes code that runs fast. Over time, I have collected a few simple patterns that make all the difference across any tech stack. Here are some of the patterns of fast code.

𝐅𝐚𝐬𝐭 𝐜𝐨𝐝𝐞 𝐥𝐨𝐯𝐞𝐬 𝐦𝐞𝐦𝐨𝐫𝐲 𝐩𝐫𝐨𝐱𝐢𝐦𝐢𝐭𝐲
Design your data structures so the CPU can read them in a single cache line.
𝙀𝙭𝙖𝙢𝙥𝙡𝙚:
🟡 Prefer arrays or spans over scattered linked lists.

𝐊𝐞𝐞𝐩 𝐋𝐨𝐨𝐩𝐬 𝐋𝐢𝐠𝐡𝐭𝐰𝐞𝐢𝐠𝐡𝐭
A tight loop should do only essential work. Move checks, conversions, allocations, and anything loop-invariant outside the loop.
🟡 Even tiny operations inside a loop add up; removing them often gives the biggest speed boost.

𝐂𝐨𝐦𝐩𝐮𝐭𝐞 𝐎𝐧𝐜𝐞, 𝐔𝐬𝐞 𝐌𝐚𝐧𝐲 𝐓𝐢𝐦𝐞𝐬
Cache expensive results for inputs that do not change.
🟡 Precompile regular expressions instead of creating a new Regex every time.
🟡 Use a HashSet or Dictionary for repeated lookups instead of scanning a list.

𝐂𝐡𝐨𝐨𝐬𝐞 𝐭𝐡𝐞 𝐫𝐢𝐠𝐡𝐭 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡 𝐚𝐭 𝐭𝐡𝐞 𝐬𝐭𝐚𝐫𝐭; 𝐝𝐨𝐧’𝐭 𝐨𝐯𝐞𝐫𝐜𝐨𝐦𝐩𝐥𝐢𝐜𝐚𝐭𝐞 𝐥𝐚𝐭𝐞𝐫
The performance of your code depends more on choosing the right algorithm and data structure upfront than on tiny optimizations later.
🟡 A poor choice early (like a List where a HashSet would fit) forces extra work later; no compiler trick can fully fix it.

𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐞 𝐁𝐚𝐬𝐞𝐝 𝐨𝐧 𝐅𝐚𝐜𝐭𝐬
🟡 Use profilers, Stopwatch, or BenchmarkDotNet to measure hotspots.
🟡 Don’t guess what’s slow; measure it, then optimize.
🟡 Every change should be backed by data showing it improves performance.

𝐂𝐨𝐧𝐜𝐥𝐮𝐬𝐢𝐨𝐧
Fast code isn’t about typing quickly; it is about planning carefully before you start. 𝐈 𝐛𝐞𝐥𝐢𝐞𝐯𝐞 𝐲𝐨𝐮’𝐯𝐞 𝐠𝐨𝐭 𝐭𝐡𝐞 𝐞𝐬𝐬𝐞𝐧𝐜𝐞 𝐨𝐟 𝐢𝐭.

#CleanCode #CodeOptimization #PerformanceEngineering #SoftwarePerformance #HighPerformanceComputing #EfficientCode #FastCode
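A small sketch of the "Keep Loops Lightweight" and "Compute Once, Use Many Times" patterns. The post's examples are .NET-flavored (Regex, HashSet); `re.compile` and `set` are the Python analogues, and the function names here are my own illustrations:

```python
import re

# Compile once at module load, not inside the hot path.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def count_valid_emails(emails):
    # The loop body stays minimal: the regex is precompiled,
    # so nothing loop-invariant is recomputed per iteration.
    return sum(1 for e in emails if EMAIL_RE.match(e))

def filter_known(items, allowed):
    # O(1) membership checks via a set instead of O(n) list scans.
    allowed_set = set(allowed)   # built once, outside the loop
    return [x for x in items if x in allowed_set]
```

The same hoisting idea applies in any stack: build lookup structures and compiled patterns before the loop, and keep the loop body to essential work only.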
More Relevant Posts
This article walks through 7 vectorization techniques that eliminate loops from numerical code. Each one addresses a specific pattern where developers typically reach for iteration, showing you how to reformulate the problem in array operations instead. The result is code that runs much (much) faster and often reads more clearly than the loop-based version. https://lnkd.in/dz7Mc-BQ
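As one illustrative example of the loop-to-array reformulation (my own sketch, not taken from the linked article, and assuming NumPy is available):

```python
import numpy as np

# Loop version: conditional transform, element by element.
def relu_loop(xs):
    out = []
    for x in xs:
        out.append(x if x > 0 else 0.0)
    return out

# Vectorized version: the same logic as a single array expression,
# letting NumPy apply the condition across the whole array at once.
def relu_vec(xs):
    a = np.asarray(xs, dtype=float)
    return np.where(a > 0, a, 0.0)
```

Both produce the same result; the vectorized form avoids the Python-level loop entirely and typically wins by a large factor on big inputs.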
🚀 𝗟𝗮𝗻𝗴𝗖𝗵𝗮𝗶𝗻 & 𝗟𝗮𝗻𝗴𝗚𝗿𝗮𝗽𝗵 𝘃𝟭.𝟬: 𝗖𝗼𝗿𝗲 𝗟𝗼𝗼𝗽 𝗥𝗲𝗳𝗮𝗰𝘁𝗼𝗿𝗲𝗱 𝗳𝗼𝗿 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻-𝗚𝗿𝗮𝗱𝗲 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜

Developers, the wait is over: LangChain v1.0 streamlines agent orchestration with 𝐜𝐫𝐞𝐚𝐭𝐞_𝐚𝐠𝐞𝐧𝐭(), a provider-agnostic primitive that spins up ReAct-style loops (LLM → tools → structured output) in under 10 lines.

Key upgrades:
* 𝐌𝐢𝐝𝐝𝐥𝐞𝐰𝐚𝐫𝐞 𝐇𝐨𝐨𝐤𝐬: Intercept and customize every step: inject HITL pauses, auto-summarize traces to dodge context bloat, or redact PII before a tool call. Extensible via Runnable interfaces for your custom logic.
* 𝐍𝐚𝐭𝐢𝐯𝐞 𝐒𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞𝐝 𝐎𝐮𝐭𝐩𝐮𝐭𝐬: Bakes in tool calling plus provider fallbacks (e.g., OpenAI functions, Anthropic tools), cutting extra LLM invocations by 50%+ and slashing latency.
* Unified Content Blocks: Messages now expose .content_blocks for consistent handling of traces, citations, and server-side execution across 100+ integrations.

LangGraph v1.0 elevates this to graph-native execution: durable Checkpoints for fault-tolerant state (resume post-crash, no custom DB glue), composable subgraphs for multi-agent orchestration, and first-class async persistence via SQLite/Postgres backends.

Fueling this? Our $125M Series B at a $1.25B valuation, doubling down on LangSmith for end-to-end observability and agent debugging.

Production agents just got bulletproof. Prototype a multi-tool RAG chain today. Details: [https://lnkd.in/giW_KDm4]

What's your first v1.0 experiment: persistent multi-agent sims or middleware-extended evals?

#LangChain #LangGraph #AgenticAI #LLMs #Python
🧪 Pytest Architecture That Actually Scales

Refactored 500+ lines of tests this week. Here’s what separates messy from maintainable:

1) One database session fixture
Stop creating sessions everywhere. Put a single fixture in your root conftest.py and reuse it. Per-file sessions = debugging nightmare.

2) ORMs cache everything
Your ORM remembers old data even after APIs update it. Force a refresh after external changes or you’ll chase phantom bugs for hours.

3) External services ≠ your database
That “test_user” you created? It still exists in Stripe/Auth0/etc. even after your DB resets. Use random IDs or your second run will fail.

4) Build a fixture hierarchy
- Root: infrastructure (database, HTTP client)
- Feature: domain setup (users, products)
- Test: just assertions
No global variables. No module state. Clean layers.

5) Extract IDs before refresh
If you’ll expire/refresh the session, grab any IDs first. Accessing an expired object = crash.

6) Small fixtures > god fixtures
Don’t make a “setup_everything” fixture. Make small, composable fixtures. Your future teammates will actually understand what’s happening.

The pattern: treat test architecture like production code. Clear dependencies. Single responsibility. No shared state.

Most test pain isn’t from pytest. It’s from not having an architecture.

#python #testing #pytest #cleancode #tdd
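Point 3 (external services outlive your DB resets) often comes down to one small helper. This is a hypothetical utility of my own, not from the post:

```python
import uuid

def unique_test_id(prefix="test_user"):
    # A fresh random suffix per run keeps test data from colliding with
    # leftovers in external services (Stripe, Auth0, ...) that a local
    # database reset never touches.
    return f"{prefix}_{uuid.uuid4().hex[:8]}"
```

A fixture can then create `unique_test_id()` users freely: even if cleanup fails, the next run uses different identifiers and never trips over stale records.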
🧱 Day #4: Stack — The Power of “Last In, First Out” (LIFO)

It seemed like a simple list. But stacks are everywhere in real-world systems! ⚙️

🔍 What is a Stack?
A Stack is a linear data structure that follows the LIFO (Last In, First Out) principle: the last element inserted is the first one removed.
🧩 Think of it like a stack of plates: you add new plates on top and remove from the top only.

⚙️ Basic Operations
- push(): add an element to the top
- pop(): remove the top element
- peek() / top(): view the top element without removing it
- isEmpty(): check if the stack is empty

💡 Real-Time Use Cases of Stack
Stacks are not just for coding exercises — they’re everywhere in real life and tech systems 👇
🔹 Undo / Redo Functionality: in text editors or Photoshop, every action is pushed to a stack. Undo means popping the last action.
🔹 Browser History: when you visit a new page, it’s pushed to the stack. Clicking “back” pops the last page.
🔹 Parentheses / Expression Validation: used in compilers and syntax checkers to ensure parentheses, brackets, or tags are balanced.
🔹 Function Calls & Recursion: the call stack keeps track of which function is currently executing.
🔹 Backtracking Algorithms: used in solving mazes, Sudoku, or navigating decision trees — push paths, backtrack when needed.

⚡ Why Stack Matters
- Simplifies recursion and expression evaluation
- Foundation for parsing, backtracking, and memory management
- Real-world systems like browsers and compilers depend on it

💬 Have you ever built a feature that secretly used a stack under the hood? Drop your example below 👇

#DSA #Stack #ProblemSolving #Algorithms #Coding #LearningJourney
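A minimal Python sketch of the parentheses-validation use case, built on plain list push/pop:

```python
def is_balanced(s):
    # Classic stack use case: push openers, pop on closers (LIFO).
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)                           # push()
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:  # pop() + match check
                return False
    return not stack                                   # isEmpty() at the end
```

For example, `is_balanced("({[]})")` is True while `is_balanced("(]")` is False: the closer must always match the most recent unclosed opener, which is exactly what LIFO gives you.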
Day 55 of the #90DaysWithDSA challenge is complete! Today was about converting a binary tree into a storable format and reconstructing it perfectly, a fundamental skill for distributed systems and data persistence.

Today's Problem: Serialize and Deserialize Binary Tree (LeetCode 297 - Hard)

The challenge: design an algorithm to serialize and deserialize a binary tree. Serialization converts a tree to a string; deserialization reconstructs the exact same tree from that string. This problem uses pre-order traversal with a clever approach to handle null nodes.

The Approach:
Serialization: use pre-order traversal (root, left, right), representing null nodes with a special marker (like "X"). This creates a unique string representation.
Deserialization: split the string and use the same pre-order logic to rebuild the tree. Consume tokens from the list, creating nodes for values and handling null markers appropriately.

This approach runs in O(n) time for both operations and handles any binary tree structure.

Key Takeaway: tree serialization is crucial for storing tree structures in databases, sending them over networks, or caching. Pre-order traversal with null markers preserves the tree structure unambiguously. This pattern is used in real-world applications like storing parse trees, game states, and configuration trees.

55 days of consistent coding. Understanding how data structures translate to persistent storage is incredibly valuable! 💡

Let's keep building practical CS skills together! 👉 Want to master data structure serialization and real-world applications? JOIN THE JOURNEY! Comment "Serializing with you!" below and share what practical CS problem you're working on. 👉 Repost this ♻️ to help other developers discover this challenge and learn about data persistence techniques. Where have you encountered serialization in your projects or studies?
#Day55 #BinaryTree #Serialization #Deserialization #DataPersistence #CodingInterview #Programming #SoftwareEngineering #Tech #LearnInPublic #Developer #LeetCode #ProblemSolving
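The pre-order-with-null-markers approach described above can be sketched in Python (class and function names are my own):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def serialize(root):
    # Pre-order traversal; "X" marks a null child, so the exact
    # tree shape is recoverable from the token stream alone.
    parts = []
    def walk(node):
        if node is None:
            parts.append("X")
            return
        parts.append(str(node.val))
        walk(node.left)
        walk(node.right)
    walk(root)
    return ",".join(parts)

def deserialize(data):
    # Consume tokens in the same pre-order: value, left subtree, right subtree.
    it = iter(data.split(","))
    def build():
        tok = next(it)
        if tok == "X":
            return None
        node = Node(int(tok))
        node.left = build()
        node.right = build()
        return node
    return build()
```

Round-tripping is the natural sanity check: `serialize(deserialize(s)) == s` for any string produced by `serialize`.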
3478. Choose K Elements With Maximum Sum (Day 120)

📌 Problem Statement:
You are given two integer arrays nums1 and nums2 (both of length n) and a positive integer k. For each index i (0 ≤ i < n):
1️⃣ Find all indices j where nums1[j] < nums1[i].
2️⃣ Choose at most k values of nums2[j] at those indices to maximize the total sum.
Return an array answer of size n, where answer[i] is the result for index i.

🛠 Approach:
1️⃣ Combine values & indices: vec[i] = {nums1[i], i, nums2[i]}
2️⃣ Sort by nums1[i] → ensures we can efficiently track smaller elements.
3️⃣ Min-heap (priority_queue) → keeps track of the top k largest nums2 values so far.
4️⃣ Iterate & update results: if the previous element has the same nums1 value, copy its result; otherwise, use the sum of the top k elements in the heap.
5️⃣ Maintain heap size ≤ k: pop the smallest element if the size exceeds k.

⏱ Time Complexity: sorting → O(n log n); iteration & heap ops → O(n log k); ✅ overall O(n log n + n log k)
💾 Space Complexity: heap → O(k); result array → O(n)

📌 Takeaways:
✅ Priority queues are perfect for top-k selection problems.
✅ Sorting simplifies the dependency constraint nums1[j] < nums1[i].
✅ Copying results for duplicate nums1[i] values avoids redundant computation.

#LeetCode #CPP #DataStructures #Algorithms #PriorityQueue #Heap #Coding #ProblemSolving #CompetitiveProgramming #DSA
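A Python sketch of the described approach (the post uses C++'s priority_queue; `heapq` is the analogue here, and the function name is my own). A running total of the heap's contents is kept alongside so each answer is read in O(1):

```python
import heapq

def find_max_sum(nums1, nums2, k):
    n = len(nums1)
    # Process indices in increasing order of nums1, so everything already
    # pushed into the heap has a strictly smaller nums1 value.
    order = sorted(range(n), key=lambda i: nums1[i])
    ans = [0] * n
    heap = []   # min-heap holding the k largest nums2 values seen so far
    total = 0   # sum of the heap's contents
    i = 0
    while i < n:
        j = i
        # All indices sharing the same nums1 value get the same answer,
        # computed BEFORE their own nums2 values enter the heap.
        while j < n and nums1[order[j]] == nums1[order[i]]:
            ans[order[j]] = total
            j += 1
        # Now admit this group's nums2 values, keeping only the top k.
        for t in range(i, j):
            v = nums2[order[t]]
            heapq.heappush(heap, v)
            total += v
            if len(heap) > k:
                total -= heapq.heappop(heap)
        i = j
    return ans
```

On the sample input nums1 = [4,2,1,5,3], nums2 = [10,20,30,40,50], k = 2, this yields [80, 30, 0, 80, 50].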
I’ve been consistently practicing Data Structures & Algorithms, focusing on understanding the underlying logic and core patterns behind each problem rather than just solving them. Here’s a summary of some recent problems I’ve tackled, along with the key concepts learned:

📌 225. Implement Stack using Queues
Key Concept: queue-based simulation of stack operations (using two queues or a single optimized queue)
https://lnkd.in/gabQrA_R

📌 232. Implement Queue using Stacks
Key Concept: stack-based implementation using two stacks, one for enqueue and one for dequeue operations
https://lnkd.in/gwq44Akm

📌 102. Binary Tree Level Order Traversal
Key Concept: Breadth-First Search (BFS) using a queue for level-wise traversal of binary trees
https://lnkd.in/gn4ejwNK

📌 239. Sliding Window Maximum
Key Concept: deque-based sliding window to efficiently track the maximum element in each window
https://lnkd.in/gwDAFZkP

📌 435. Non-overlapping Intervals
Key Concept: sorting by end time + greedy interval selection to minimize overlaps
https://lnkd.in/gVwP-2pf

📌 1710. Maximum Units on a Truck
Key Concept: greedy approach inspired by the knapsack problem: maximize total units by sorting on value
https://lnkd.in/gqxdifBu

📌 646. Maximum Length of Pair Chain
Key Concept: similar to non-overlapping intervals: greedy selection based on the smallest end time
https://lnkd.in/gik6pJ6K

These problems helped me strengthen concepts like queue–stack interconversion, BFS traversal, sliding window optimization, greedy algorithms, and interval scheduling techniques. I’d highly recommend trying these problems out; they’re great for building pattern recognition and problem-solving intuition.

Here’s my LeetCode profile for reference: https://lnkd.in/gp38YMN7

#DSA #LeetCode #Java #Algorithms #ProblemSolving #CodingInterview #SoftwareDevelopment #SDE #TechJourney #DailyCoding
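To make one of these patterns concrete, here is problem 232's two-stack queue as my own Python sketch (the post's solutions are in Java; the class shape follows the usual LeetCode interface):

```python
class MyQueue:
    # Two stacks: an inbox for enqueue, an outbox for dequeue.
    # Elements move over only when the outbox runs dry, so each element
    # is pushed and popped at most twice: amortized O(1) per operation.
    def __init__(self):
        self._in, self._out = [], []

    def push(self, x):
        self._in.append(x)

    def _shift(self):
        # Reverse inbox into outbox so the oldest element ends on top.
        if not self._out:
            while self._in:
                self._out.append(self._in.pop())

    def pop(self):
        self._shift()
        return self._out.pop()

    def peek(self):
        self._shift()
        return self._out[-1]

    def empty(self):
        return not self._in and not self._out
```

Reversing the stack once per batch is what turns two LIFO structures into a single FIFO one.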
Ask HN: What are Your Strategies for Managing MCP Servers with Multiple AI Agents? https://lnkd.in/g-7iy3ZU

Navigating the Challenges of AI Agent Development

I've been actively building AI agents using LangChain, n8n, and custom Python scripts. In the process, I've encountered several challenges that many of you may relate to:
- MCP Server Configuration: Do you set up separate MCP servers for each agent, or do you share them across projects?
- Credentials and Config Management: What's your strategy for managing credentials and configurations across different frameworks?
- Dependency Conflicts: Have you faced issues where different agents require incompatible Python versions?
- Observability Needs: How do you keep track of which agent is utilizing a specific tool?

To streamline this, I’ve started building a private registry to centralize configurations. However, I wonder if I’m overengineering or if these pain points resonate with you.

Let’s connect! How are you addressing these challenges? Please share your insights below!

Source link: https://lnkd.in/g-7iy3ZU
🚀 Two Pointer Technique — The Hidden Gem of Optimized Problem Solving

The Two Pointer technique is one of those elegant patterns that transforms nested loops into clean O(n) solutions. It’s all about using two indices that move through your data, sometimes from opposite ends, sometimes together, to efficiently compare, partition, or traverse arrays and strings.

Whether it’s finding pairs with a target sum, removing duplicates in place, or checking for palindromes, the principle stays the same: 👉 use movement, not brute force. Instead of restarting the search every time, the Two Pointer method lets you reuse previously processed data, dramatically reducing unnecessary computation.

From array problems to linked lists, and even complex challenges like “Container With Most Water” or “3Sum,” mastering this pattern unlocks a new level of clarity and performance in your problem-solving approach.

Follow Codekerdos for more algorithm deep-dives, clean code patterns, and practical insights that sharpen your developer mindset 🔥

#Algorithms #TwoPointer #ProgrammingTips #DeveloperMindset #Codekerdos #CleanCode #SoftwareEngineering #100DaysOfCode #google
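A short sketch of the classic opposite-ends variant, pair-with-target-sum in a sorted array (my own illustration of the pattern):

```python
def pair_with_sum(sorted_nums, target):
    # Pointers start at both ends and move toward each other:
    # sum too small -> advance the left pointer, too big -> retreat
    # the right one. One linear pass, no nested loop.
    lo, hi = 0, len(sorted_nums) - 1
    while lo < hi:
        s = sorted_nums[lo] + sorted_nums[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1
        else:
            hi -= 1
    return None
```

The key insight is that sortedness lets each comparison safely discard one element forever, which is exactly the "reuse previously processed data" idea above.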
💡 The Trick That Turns O(n × m) into O(n + m)

You know that feeling when your code works… but not fast enough? 😅 You optimize loops, rename variables, even pray to the compiler gods, yet it still runs in O(n × m). Here’s the simple shift that changes everything 👇

⚙️ The Hidden Power of “Marking Boundaries”
Instead of updating every element in a range, just mark where the change starts and where it ends. That’s the essence of the Difference Array + Prefix Sum technique. It flips the problem from “update everything” → “record changes only.”

🧠 How It Works
Say you want to add +v to the range [L, R]:
1️⃣ Mark the boundaries: arr[L] += v and arr[R + 1] -= v
2️⃣ Propagate once: take the prefix sum, which spreads all changes automatically.
✅ Each update = O(1)
✅ Final array reconstruction = O(n)
✅ Total = O(m + n)
From looping m times through n elements to one elegant linear pass. ✨

💥 Where It Shines (Real LeetCode Power-Ups)
🔥 LeetCode 1109 - Corporate Flight Bookings: each booking adds seats in a flight range → mark start and end only; prefix sums handle everything else. Naive: O(bookings × n). Optimized: O(bookings + n).
🎯 LeetCode 1893 - Check if All Integers in a Range Are Covered: mark each coverage start & end, then prefix-scan to check coverage status. No nested loops needed. https://lnkd.in/g2tti-Ar
🚀 LeetCode 2381 - Shifting Letters II: each shift affects a substring range → use a difference array on ASCII deltas; apply the prefix once instead of shifting repeatedly. https://lnkd.in/g26YsuqT
💪 LeetCode 798 - Smallest Rotation with Highest Score: rotation scoring uses the same idea, marking where gains begin and losses stop. https://lnkd.in/ghX_Epjy

This pattern repeats across top-tier DSA problems; once you master it, you’ll start seeing boundaries instead of loops.

💡 The Big Lesson
Optimization isn’t just about reducing lines of code. It’s about rethinking how we propagate change. Once you switch from value updates to boundary logic, you unlock a different level of algorithmic clarity. ⚙️

💬 Have you ever used this trick in a project or interview problem? Drop your favorite “O(n²) → O(n)” story below 👇

LeetCode Microsoft Oracle
#DSA #Algorithms #SystemDesign #TimeComplexity #LeetCode #CodingInterview #ProblemSolving #Optimization
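The boundary-marking steps above, as a runnable Python sketch (function name and 0-indexed `(L, R, v)` update format are my own):

```python
def apply_range_updates(n, updates):
    # updates: list of (L, R, v) meaning "add v to every index in [L, R]".
    diff = [0] * (n + 1)
    for L, R, v in updates:
        diff[L] += v       # change starts here...
        diff[R + 1] -= v   # ...and stops after R
    out, running = [], 0
    for i in range(n):     # one prefix-sum pass spreads all updates
        running += diff[i]
        out.append(running)
    return out
```

Applied to LeetCode 1109's sample bookings [[1,2,10],[2,3,20],[2,5,25]] with n = 5 (shifted to 0-indexed ranges), this produces [10, 55, 45, 25, 25]: m O(1) marks, then one O(n) pass.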