Hello, world!! During my recent project (the php-json library), I frequently had to choose between loops and recursion to resolve each property of a class along with any embedded object properties.

Recursion is the natural fit here: it handles arbitrarily nested properties, while a plain loop only covers a single layer. So I took that route and used PHP's Reflection API to fetch the properties and resolve them recursively, much the way a DI container would.

In other scenarios, loops are the better choice: they use less memory and carry no risk of stack overflow, which makes them perfect for very large data sets.

In a nutshell: recursion (especially with a guard against infinite recursion) gives you logical, elegant code but isn't suited to large data, while loops shine on large but simple one-layered data sets. Loops can handle multiple layers too, but the deeply nested for-loops that result eventually lead to code smells and bugs.

Thank you for your time and have a productive week!! #Programming #CleanCode #SoftwareEngineering #DataStructures #Recursion
PHP Recursion vs Loops for Nested Properties
More Relevant Posts
Most RAG tutorials oversimplify the problem. They make it look like: LLM + Vector DB = Done. ❌

In reality, the real engineering challenge is retrieval architecture. When I tried scaling a RAG system to 10,000+ documents, I ran into problems I didn’t expect:
▶ Slow retrieval
▶ Large context windows
▶ High token costs
▶ Context retention issues

So I built a pipeline using LlamaIndex + FAISS and optimized the retrieval layer. Result:
✅ <100ms vector search
✅ 10K+ documents indexed
✅ Lower token usage
✅ More grounded responses

I documented the full architecture and workflow in this PDF.

Curious to hear from engineers building production RAG systems: what’s been the hardest bottleneck in your pipeline?
1. Retrieval latency
2. Chunking strategy
3. Vector database scaling
...or something else?

#GenAI #RAG #LlamaIndex #LangChain #BackendEngineering #SystemDesign #MachineLearning #OpenSource #VectorDatabase #Python
🚀 Segment Tree Explained in Detail

Segment Tree is one of the most powerful range-query data structures in competitive programming. If you want to master range-query problems, understanding its internal working is very important.

In this video we cover:
• What kind of problems it solves
• How it reduces complexity to O(log n)
• Basic build and query operations

Next, we’ll solve some good rated Codeforces problems using this concept.

Full video link: https://lnkd.in/g9diemFn

📌 Save for revision & follow for more DSA / CP content.

#datastructures #algorithms #competitiveprogramming #codeforces #leetcode #coding
🚀 Have you ever noticed this while coding? 🤔

Sometimes a HashMap feels faster for small inputs, but as the data grows, suddenly arrays start performing better. This used to confuse me a lot. Here’s the simple way I understood it.

Imagine searching for a book... 📚

A small bookshelf (10–20 books): you almost remember where each book is. Whether you scan quickly or ask someone to look it up, it feels instant.

🏬 A huge warehouse (1 million books): now imagine books scattered randomly across different rooms. Every lookup means jumping from one room to another, and that jumping takes time. But if all books are arranged neatly in a straight line (like an array), you just walk straight down the aisle. No random jumps. Much faster.

🚀 Arrays store data next to each other in memory (using the CPU cache effectively).
🚀 HashMaps store data scattered across memory (flexible, but more movement).

What I learned from this:
🚀 Small data → convenience matters → HashMap feels fast
🚀 Large data → memory locality matters → arrays often win

Big-O complexity is only part of the story. Real performance = algorithm + memory layout + hardware behavior. Now whenever something is slow, I don’t just think about the algorithm; I also think about how the data lives in memory. 🚀

#DataStructures #Java #Performance #SoftwareEngineering #LearningInPublic
LeetCode 3129: Find All Possible Stable Binary Arrays I

🧠 Problem
We are given three integers:
zero → number of 0s
one → number of 1s
limit → maximum allowed consecutive identical elements

A binary array is stable if:
• It contains exactly zero 0s and one 1s.
• No subarray longer than limit contains only the same digit.

In other words: ❗ no more than limit consecutive 0s or 1s. We need to count all such valid arrays.

💡 Approach
This problem is solved using dynamic programming with memoization.

State definition: dp[z][o][last], where:
z = zeros remaining
o = ones remaining
last = last placed element (0 or 1)

Key idea: when extending sequences, subtract the invalid cases where limit + 1 consecutive identical elements would appear. This avoids explicitly tracking the streak length.

📊 Complexity
Time: O(zero × one)
Space: O(zero × one)
Works efficiently within the constraint ≤ 200.

🎯 Result
✔ Accepted
⏱ Runtime: 20 ms
🏆 Beats 87.8% of submissions

#LeetCode #DynamicProgramming #Recursion #Memoization #Algorithms #LearningInPublic #CPlusPlus
I built a logistic regression classifier from scratch in C++. No libraries, just math and SIMD instructions.

Over the past few weeks I went deep into how machine learning actually works at the lowest level. No scikit-learn, no PyTorch, just raw C++ and CPU intrinsics.

Here's what the project covers:
- Logistic regression from zero: sigmoid, binary cross-entropy, gradient descent, all implemented by hand
- SIMD optimization: wrote AVX2, SSE4.1, and scalar kernels for dot products and sigmoid
- Runtime CPU dispatch: the code detects your CPU at startup and picks the fastest available kernel
- Memory alignment: all data is 32-byte aligned so SIMD loads never fault
- Python bindings: wrapped everything with pybind11 so you can use it directly with NumPy
- Cross-platform: works on Linux, macOS, and Windows

The biggest takeaway: understanding what happens between model.fit() and the actual CPU instructions changed how I think about performance and ML entirely.

GitHub: https://lnkd.in/efQSgbWu

If you're curious about what's really going on under the hood of ML libraries, I'd highly recommend trying something like this.

#MachineLearning #CPP #SIMD #Python #LowLevel #FromScratch #SoftwareEngineering
🚀 Deep Diving into C++ STL Containers & Algorithms

As part of my ongoing DSA preparation, I recently completed a focused study of C++ STL containers and commonly used algorithms, understanding not just how they work, but when and why to use them.

📦 STL Containers Explored

✅ Sequential containers
• vector → O(1) amortized push_back
• list → O(1) insert/delete (given an iterator)
• deque → O(1) insertion at both ends

✅ Container adaptors
• stack, queue → O(1) operations
• priority_queue → O(log n) push/pop

✅ Associative containers (balanced BST, O(log n))
• map, multimap, set, multiset

✅ Unordered containers (hash-based, average O(1))
• unordered_map, unordered_set

Also worked with pair and learned the practical differences between sequential vs associative vs unordered containers based on access patterns and complexity.

⚙️ STL Algorithms Practiced
• sort() → O(n log n)
• reverse() → O(n)
• next_permutation() → O(n)
• binary_search() → O(log n)
• swap(), min(), max() → O(1)
• Counting set bits

This helped me write cleaner, optimized solutions while choosing the right data structure for each problem. Consistently strengthening fundamentals, one concept at a time. 💪

#DSA #CPP #STL #Algorithms #ProblemSolving #LearningInPublic #Consistency
Understanding Hash Tables Made Simple!

Ever wondered why hash tables are so fast compared to arrays or linked lists?

When searching in:
Array / linked list → we check elements one by one (O(n))
Hash table → we jump directly to the location using a hash function (O(1) average)

Example: "Bob" → convert the characters to Unicode code points → add them (66 + 111 + 98 = 275) → apply modulo: 275 % 10 = 5 → store "Bob" directly at index 5.

No scanning. No shifting. Just direct access. That’s the power of hashing.
Fast insertion. Fast lookup. Fast deletion.

Hash tables are widely used in:
• Databases
• Caching systems
• Compilers
• Authentication systems
• Indexing

Mastering data structures like hash tables builds strong problem-solving foundations for backend development and system design.

#DataStructures #HashTable #Algorithms #Programming #ComputerScience #BackendDevelopment #CodingJourney
🚀 Build a Hybrid Data Bridge for MetaTrader 5 with TraderMade

In algorithmic trading, the quality of your strategy depends heavily on the data pipeline behind it. Missing ticks, incomplete historical data, or fragmented charts can easily distort backtests and affect live trading decisions.

Our latest tutorial shows how to build a hybrid MT5 data plugin that combines historical and real-time market data using TraderMade APIs.

🔧 The solution integrates:
• TraderMade REST API for historical market data
• WebSocket streaming for sub-second live ticks
• A Python bridge server that manages data flow
• A lightweight MQL5 Expert Advisor to receive and display the data in MT5

This architecture enables traders and developers to backfill historical charts and seamlessly transition to live market data, ensuring accurate visualization and reliable strategy testing.

💡 Whether you're building trading algorithms, testing strategies, or improving chart accuracy, this hybrid approach creates a powerful low-latency data pipeline for MetaTrader 5.

📖 Full tutorial: https://lnkd.in/dWpP8c6u

#AlgorithmicTrading #MetaTrader5 #ForexData #QuantTrading #Python #TradingTechnology #TraderMade
Nothing teaches backend engineering like production failures.

You can read all the system design books you want... but the real learning happens at 2 AM, when your system is down.

I still remember one incident. Everything was working fine in staging. But in production?
• APIs started timing out
• CPU usage spiked
• Logs were flooded

The issue? A small change: a missing index in the database. One query slowed down → which slowed down everything → which crashed the system.

Fix took 10 minutes. Debugging took 2 hours.

Lesson: in backend systems, small mistakes don’t stay small. They amplify. That’s why:
• Monitoring is not optional
• Logs are your best friend
• Database design matters more than code

Because in production:
👉 You don’t rise to your knowledge
👉 You fall to your system design

What’s your worst production incident?

#BackendEngineering #SystemDesign #Production #SoftwareEngineering #Python