Logic + Building: Day 13/180! 🚀

Today was about more than just writing code: it was about how the computer executes it. I started with Functions and the fascinating Function Call Stack.

The Deep Dive: Understanding Stack Frames
Every function call creates a "Stack Frame" in memory, a dedicated workspace containing:
1️⃣ Local Variables: the workspace's own tools (variables declared inside the function).
2️⃣ Function Parameters: the data passed into the workspace.
3️⃣ Return Address: the "GPS" that tells the CPU exactly where to resume once the function finishes its job.

Why does this matter? Understanding the Call Stack is the secret to mastering recursion and debugging complex logic. If you know how the stack grows and shrinks, you control the program!

Status:
✅ Function Basics: Done.
✅ Stack Memory Visualization: Clear.
🚀 Ready to build modular and optimized code.

(Resource: #CodeHelp, by Love Babbar Bhaiya)

#180DaysChallenge #100DaysOfCode #CProgramming #Functions #CallStack #SoftwareEngineering #LogicBuilding #DSA #BuildInPublic #CodeHelp #LoveBabbar #ComputerScience #MemoryManagement
Mastering Functions and Call Stack in Code
Day 17. Sorting Logic & System Internals.

Today’s progress:
✅ DSA: Implemented Selection Sort and Insertion Sort. Building functions that return sorted arrays from scratch is the best way to understand how different algorithms handle data movement and time complexity. Code: https://lnkd.in/dTMCJdkq
✅ Operating Systems: Deep dive into Processes vs. Threads. Understanding execution units and memory sharing is a game-changer for writing efficient code.
✅ System Clean-up: Learned about Zombie and Orphan processes. It’s fascinating (and vital) to see how the OS manages "lost" processes and keeps the system from leaking resources.

17 days down. The fundamentals are stacking up. See you at Day 18. 🚀

#DSA #100DaysOfCode #BuildInPublic #OperatingSystems #ComputerScience #WebDev #DevJourney #SoftwareEngineering
🚀 Cracking Binary Tree Problems with Optimal Efficiency!

I’m excited to share that I solved the “Binary Tree Right Side View” problem on LeetCode with an optimized approach, achieving 0 ms runtime (beating 100% of C++ submissions). ⚡

🔍 Problem Statement
Given a binary tree, return the values of the nodes visible when the tree is viewed from the right side, listed from top to bottom.

💡 My Approach
Instead of the commonly preferred BFS (level-order traversal), I implemented a recursive DFS with one key optimization:
✔ Right-first traversal (Root → Right → Left) ensures the rightmost nodes are visited first.
✔ Level tracking captures only the first node encountered at each depth: if a level is being visited for the first time, that node is added to the result vector.

⚙️ Complexity Analysis
Time Complexity: O(N), where N is the number of nodes.
Space Complexity: O(H), where H is the height of the tree (recursion stack).

📊 Performance Results
⏱ Runtime: 0 ms (beats 100% of C++ submissions)
💾 Memory: 14.86 MB (beats 85.22% of submissions)

🧠 Key Takeaway
Sometimes a simple shift in traversal order, combined with smart level tracking, can replace complex logic and still deliver optimal performance. Consistency in solving DSA problems keeps sharpening my problem-solving intuition and coding efficiency.

#LeetCode #DSA #Cplusplus #BinaryTree #Algorithms #ProblemSolving #CodingJourney #TechCommunity #SoftwareEngineering
🚀 Day 26 of 100 Days LeetCode Challenge

Problem: Equal Sum Grid Partition II

Today’s problem is an advanced version of Day 25 with an extra twist 🔥
👉 Now we can remove (discount) at most one cell to balance the partition.

💡 Key Insight: We need a horizontal OR vertical cut producing two non-empty parts whose sums are equal, or can be made equal by removing one cell.

🔍 Core Approach:
1️⃣ Total Sum Check: let total = sum of all elements.
2️⃣ Try All Possible Cuts: for each horizontal and vertical cut, compute the left/top sum and the right/bottom sum.
3️⃣ Check Two Cases:
✅ Case 1: If both sums are already equal → return true.
✅ Case 2: Let diff = |sum1 - sum2|. Check whether the larger partition contains a cell with value equal to diff; removing that one cell balances both sides.

⚠️ Important Constraint: after removing a cell, the section must still be connected, so avoid removing a “critical” cell that breaks connectivity.

🔥 What I Learned Today:
Small constraint changes → big increase in complexity.
Problem-solving requires checking multiple scenarios.
Combining prefix sums + validation logic is powerful.

📈 Challenge Progress: Day 26/100 ✅ Beyond basics now!

#100DaysOfCode #LeetCode #DSA #CodingChallenge #PrefixSum #Matrix #ProblemSolving #TechJourney #ProgrammerLife #SoftwareDeveloper #CodingLife #LearnToCode #Developers #Consistency #GrowthMindset #InterviewPrep
Day 183/365 – DSA Challenge 🔺

Solved Triangle (Minimum Path Sum) on LeetCode today.

🔹 Problem: Given a triangle array, find the minimum path sum from top to bottom. At each step, you may move to adjacent numbers in the row below.

🔹 Approach Used: Bottom-Up Dynamic Programming
💡 Key Idea: Start from the last row and move upward, storing the minimum path sum to reach the bottom from each cell.

Steps:
1️⃣ Initialize DP with the last row values.
2️⃣ Move upward row by row.
3️⃣ For each element: 👉 dp[i][j] = triangle[i][j] + min(dp[i+1][j], dp[i+1][j+1])
4️⃣ Final answer → dp[0][0]

🔹 Why does this work? Instead of exploring all paths (exponential), we reuse computed results, which makes it efficient.

🔹 Time Complexity: O(n²)
🔹 Space Complexity: O(n) (can be optimized to a 1D DP array)

🔹 Example: Input: [[2],[3,4],[6,5,7],[4,1,8,3]] → Output: 11

🔹 Concepts Used: Dynamic Programming, Bottom-Up Approach, Optimization
🔥 Pattern Recognized: classic DP grid/triangle problem, similar to minimum path sum in a matrix.
💻 Language: C++

DP problems getting sharper day by day 🚀

#Day183 #365DaysOfCode #DSA #LeetCode #DynamicProgramming #Cpp #CodingJourney
💻 Understanding Memory Allocation in C

Today, I explored a fundamental concept in C programming: memory allocation, and how it directly impacts performance and efficiency.

What is Memory Allocation?
Memory allocation is the process of reserving space in RAM for storing data during program execution.

In C, there are two primary types:

1. Static Memory Allocation
Allocated at compile time. The size is fixed and cannot be changed later.
Example: int a = 10;

2. Dynamic Memory Allocation
Allocated at runtime, allowing flexible memory usage based on program needs.
To use dynamic allocation, we include: #include <stdlib.h>

Key Functions:
• malloc() – allocates a single block of memory: int *ptr = (int*) malloc(5 * sizeof(int));
• calloc() – allocates an array of blocks and initializes them to zero: int *ptr = (int*) calloc(5, sizeof(int));
• realloc() – resizes previously allocated memory: ptr = (int*) realloc(ptr, 10 * sizeof(int));
• free() – releases allocated memory back to the system: free(ptr);

Best Practices:
✔ Always free dynamically allocated memory to avoid memory leaks.
✔ Use pointers carefully when working with dynamic memory.
✔ Choose dynamic allocation when flexibility is required.

Static allocation is simple but limited, while dynamic allocation offers flexibility and efficient memory usage when handled correctly.

#Programming #CProgramming #MemoryManagement #LearningJourney
🚀 Happy to share part-5 of the Runpod Serverless inference series! In this part, I walk through how to connect your client application and WebUI to a deployed model using RunPod. 🔹 What’s covered: • Client-side inference using Python • Secure API integration • Handling cold starts & optimizing latency • Connecting a Gradio WebUI for live interaction • Testing the complete inference pipeline 📝 Read the full blog here: https://lnkd.in/gGMp8Dyd 🎥 Watch the video here: https://lnkd.in/gWiv9mB2 #AI #MachineLearning #MLOps #Serverless #RunPod #GenerativeAI #Python #DevOps
Client integration and optimized inference - part 5
I set out to master key array patterns today, focusing on Two Sum, finding min/max, and reversing an array. While Two Sum and min/max went smoothly, I hit a significant hurdle with the logic for reversing an array efficiently. I found myself stuck in a "logic loop," struggling to optimize the swap mechanism and index handling.

Action: Instead of giving up, I spent a significant amount of time dry-running my own logic. When I reached a plateau, I leveraged an LLM, not to get the answer, but to identify the specific flaw in my approach. I asked: "Where is my logic failing, and what is the standard approach?" This helped me bridge the gap between my current thinking and the optimal two-pointer technique.

By identifying my mistake, I successfully implemented the solution and, more importantly, solidified my understanding of in-place algorithms and time complexity. I finished the session with three solid problems solved and a sharper debugging mindset.

Don't be afraid to struggle with logic. It’s not about how fast you solve it, but how well you understand why the solution works. 💻✨

#JavaDeveloper #DataStructures #LearningInPublic #ProblemSolving #CodingJourney #SoftwareEngineering #JavaFullStack
Day 71 - Decode Stage (RISC-V) With the fetch stage in place and the control unit already defined, this step is where things actually start getting interpreted. The decode stage takes the 32-bit instruction from memory and breaks it into meaningful fields: opcode, rd, rs1, rs2, funct3, funct7, and, at the same time, prepares the immediate values required for execution. In the implementation, this is not treated as a separate abstract block. The instruction bits are directly sliced and registered on the clock, so the datapath gets stable signals for the next stage. Alongside this, the control unit is instantiated here, which means decoding and control signal generation are happening together in a synchronized manner. Another important part is how immediates are formed. Instead of handling them later, the decode stage already constructs them (like I-type and B-type), so the execution stage doesn’t need to worry about instruction formats anymore; it simply consumes ready-to-use values. If you look at the schematic, it reflects the same idea: instruction comes in, gets split, registered, and then fed into both control logic and datapath signals. This is where raw instruction bits start turning into actionable hardware signals. Next, we’ll move further into how these decoded values actually drive execution through the remaining stages and modules.
Logic + Building: Day 14/180! 🚀

Nothing beats the combination of a #hotcup of tea and some deep logic building. Today was all about mastering how data flows within a program.

The Showdown: Pass by Value vs. Pass by Reference
I spent the day understanding why a variable's value sometimes changes globally and sometimes doesn't.
🔹 Pass by Value: creates a copy. The original stays safe.
🔹 Pass by Reference: shares the actual address. Changes reflect everywhere.

Practice Session: Beyond theory, I solved 4-5 problems involving finding the max of three numbers and swapping values using both methods. Watching the memory addresses (&a) change (or stay the same) helped me visualize the Call Stack perfectly.

Status:
✅ Pass by Value/Reference: Concept Solid.
✅ Logic Practice: 5 Problems Solved.
🚀 Every day, the code feels a bit more natural.

(Resource: #CodeHelp, by Love Babbar Bhaiya)

#180DaysChallenge #100DaysOfCode #CProgramming #MemoryManagement #LogicBuilding #BuildInPublic #SoftwareEngineering #CodeAndCoffee
Big numbers on specification sheets don't always mean better tools for your specific problem or use case. Sometimes the smallest model solves your task fastest and cheapest, without unnecessary complexity. Recent head-to-head tests show that for common coding tasks, mid-size open models match larger ones while using half the compute, saving both time and money for developers building real applications.

Tell me where I am wrong.