Day 11 of my Python learning journey

Today I tried solving the Two Sum problem using the Two Pointer technique. Before this, I was only solving Two Sum with a dictionary, but today I learned another approach that works when the array is sorted.

Problem: Two Sum (Two Pointer Approach)
Given a sorted array and a target value, find two numbers whose sum equals the target.

Example:
nums = [1, 2, 4, 6, 10]
target = 8
Output: [2, 6], because 2 + 6 = 8.

What is the Two Pointer technique?
Two pointers means using two positions in the array:
• one pointer starts from the left side
• one pointer starts from the right side
Then we move them based on the sum.

How it works:
• If the sum is too small, move the left pointer forward
• If the sum is too large, move the right pointer backward
• If the sum equals the target, we found the answer

Code:
nums = [1, 2, 4, 6, 10]
target = 8
left = 0
right = len(nums) - 1
while left < right:
    s = nums[left] + nums[right]
    if s == target:
        print(nums[left], nums[right])
        break
    elif s < target:
        left += 1
    else:
        right -= 1

Problems I faced while coding this:
• At first I did not understand why the array needs to be sorted.
• I was confused about when to move the left pointer and when to move the right pointer.
• I also tried using this method on an unsorted array, and it did not work.

What I finally understood:
The Two Pointer technique works well when the array is sorted, because sorting lets us adjust the sum in the right direction without checking every pair. This brings the time complexity down from O(n^2) with nested loops to O(n), using only O(1) extra space.

Question: Would this approach still work if the array was not sorted?

#Python #DSA #Coding #Programming #LearningInPublic #100DaysOfCode #PythonProgramming
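A follow-up sketch of the same approach wrapped into a reusable function (the helper name `two_sum_sorted` and the `None` fallback are my own choices, not part of the original snippet):

```python
def two_sum_sorted(nums, target):
    """Two-pointer search on a sorted list: O(n) time, O(1) extra space."""
    left, right = 0, len(nums) - 1
    while left < right:
        s = nums[left] + nums[right]
        if s == target:
            return [nums[left], nums[right]]
        elif s < target:
            left += 1   # sum too small: move left pointer forward
        else:
            right -= 1  # sum too large: move right pointer backward
    return None  # no pair adds up to the target

print(two_sum_sorted([1, 2, 4, 6, 10], 8))          # [2, 6]
# An unsorted array has to be sorted first (an extra O(n log n) step):
print(two_sum_sorted(sorted([10, 4, 1, 6, 2]), 8))  # [2, 6]
```

Sorting first makes the technique usable on unsorted input, at the cost of the O(n log n) sort dominating the O(n) scan.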
:::writing{variant="email" id="90214" subject="On the Next Stage of Your BraiNCA Work – Dynamics, Control, and Opportunity"}
Dear Mohamed,

I have taken the time to study your work carefully, and I want to begin by saying that the direction you are pursuing is both original and technically sound. Moving Neural Cellular Automata from fixed local grids toward attention-driven, flexible topologies is not just an incremental improvement; it is a structural shift in how these systems represent and propagate information. The fact that you are explicitly addressing connectivity and information flow places this work closer to real adaptive systems than most current approaches. That is a strong signal of both insight and engineering intuition.

From a systems perspective, what you have constructed can be understood as a distributed dynamical system of the form

x(t+1) = f( x(t), A(x; theta) * x(t) ),

where A defines a learned interaction topology. This is powerful, but it introduces a set of coupled behaviours that need to be actively managed. The primary strength is expressivity: long-range connections and attention allow rapid propagation of relevant signals, reducing diffusion limits and improving convergence. However, this same mechanism introduces potential failure modes. The first is correlation amplification: as connectivity expands, correlated signals can accumulate, reducing the effective diversity of information in the system. Alongside it sit dynamical instability and uncontrolled topology growth.
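To make the abstraction concrete, here is a minimal numerical sketch of such a system. The dimensions, the softmax form of A, and the tanh mixing inside f are all illustrative assumptions on my part, not details of your implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4                      # number of cells, state dimension (assumed)
x = rng.standard_normal((n, d))  # cell states x(t)
W_q = rng.standard_normal((d, d)) * 0.1  # stand-ins for learned parameters theta
W_k = rng.standard_normal((d, d)) * 0.1

def attention_topology(x):
    """A(x; theta): a state-dependent interaction matrix, here plain softmax attention."""
    q, k = x @ W_q, x @ W_k
    logits = q @ k.T / np.sqrt(d)
    a = np.exp(logits - logits.max(axis=1, keepdims=True))
    return a / a.sum(axis=1, keepdims=True)  # row-stochastic: each row sums to 1

def step(x):
    """One update x(t+1) = f(x(t), A(x) @ x(t)), with f chosen as a tanh-mixed sum."""
    A = attention_topology(x)
    return np.tanh(x + A @ x)

for _ in range(5):
    x = step(x)
print(x.shape)  # states stay (n, d); tanh keeps them bounded in (-1, 1)
```

Even in this toy form, the key property is visible: the topology A is recomputed from the state at every step, so structure and dynamics are coupled.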
Hey Kuldeep Yadav, please also include the time complexity of the problem.
These are not flaws in your approach; they are the natural consequences of moving into a far more powerful regime. Importantly, each of these risks is tractable. Correlation can be managed through explicit penalties or diversity constraints. Stability can be enforced through spectral bounds on the interaction matrix (for example, ensuring spectral_radius(A) < 1). Topology can be regularised for sparsity and controlled diameter. Time-scale separation between fast state updates and slower structural adaptation can significantly improve robustness. In effect, the next stage of this work is not about adding complexity, but about introducing control over the dynamics you have unlocked.

What stands out most is that your system is already very close to something deeper: a self-organising, topology-learning dynamical system. With a small number of carefully chosen constraints and objective refinements, this could evolve into a framework where information flow, stability, and efficiency are all explicitly governed. With added attention to stability, correlation, and topology control, it could become both robust and widely applicable.

Yours sincerely,
Kris Wekford
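P.S. A minimal sketch of the spectral bound mentioned above, assuming a plain numpy interaction matrix; the rescaling rule is illustrative, not a claim about your implementation:

```python
import numpy as np

def enforce_spectral_bound(A, rho_max=0.95):
    """Rescale A so its spectral radius stays below 1 (here, at most rho_max)."""
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    if rho > rho_max:
        A = A * (rho_max / rho)  # scaling A scales every eigenvalue by the same factor
    return A

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))  # an unconstrained interaction matrix
A = enforce_spectral_bound(A)
print(np.max(np.abs(np.linalg.eigvals(A))))  # ~0.95, so the linearised map x -> A x contracts
```

A hard rescale like this is the bluntest option; a soft spectral-norm penalty in the training objective would achieve the same bound more gracefully.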