🚀 Day 29/30 – DSA Challenge

📌 LeetCode Problem – Shortest Distance to Target String in a Circular Array

📝 Problem Statement
You are given:
- A circular array of strings words[]
- A target string target
- A starting index startIndex
Return the minimum distance required to reach any occurrence of target.
👉 You can move left or right in a circular manner.

📌 Example
Input: words = ["hello","i","am","leetcode","hello"], target = "hello", startIndex = 1
Output: 1

💡 Key Insight
Since the array is circular, the distance to index i is:
min(|i - startIndex|, n - |i - startIndex|)
👉 Either go directly
👉 Or wrap around

🔥 Optimal Approach
🧠 Idea
- Loop through all indices
- Check where words[i] == target
- Calculate the circular distance
- Keep the minimum

🚀 Algorithm
1️⃣ Initialize minDist = ∞
2️⃣ For each index i where a match is found, compute:
    dist = Math.min(Math.abs(i - startIndex), n - Math.abs(i - startIndex));
3️⃣ Update the minimum
4️⃣ If no match → return -1

✅ Java Code (Optimal O(n))

class Solution {
    public int closestTarget(String[] words, String target, int startIndex) {
        int n = words.length;
        int minDist = Integer.MAX_VALUE;
        for (int i = 0; i < n; i++) {
            if (words[i].equals(target)) {
                int diff = Math.abs(i - startIndex);
                int dist = Math.min(diff, n - diff);
                minDist = Math.min(minDist, dist);
            }
        }
        return minDist == Integer.MAX_VALUE ? -1 : minDist;
    }
}

⏱ Complexity
Time Complexity: O(n)
Space Complexity: O(1)

📚 Key Learnings – Day 29
✔ Circular array problems need special distance handling
✔ Always consider wrap-around cases
✔ Math-based optimization simplifies the logic
✔ Compare strings with .equals(), not ==

Circular thinking. Simple math. Clean solution. Day 29 completed. Consistency continues 💪🔥

#30DaysOfCode #DSA #Java #InterviewPreparation #ProblemSolving #CodingJourney #Arrays #LeetCode
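For anyone who wants to sanity-check the approach outside Java, here is a minimal Python sketch of the same circular-distance idea. It is not the submitted solution above, just a quick re-implementation against the example input from the post.

def closest_target(words, target, start_index):
    """Minimum circular distance from start_index to any occurrence of target, or -1."""
    n = len(words)
    best = float("inf")
    for i, w in enumerate(words):
        if w == target:
            diff = abs(i - start_index)
            best = min(best, diff, n - diff)  # direct move vs. wrap-around
    return best if best != float("inf") else -1

# Example from the post: expected output is 1
print(closest_target(["hello", "i", "am", "leetcode", "hello"], "hello", 1))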
More Relevant Posts
Most developers use algorithms every day. Very few can explain why one is faster than another.

That is where Big O Notation comes in. It is not just interview prep. It is how you think about performance before it becomes a problem in production.

Here is your complete Big O cheat sheet, broken down simply:

TIME COMPLEXITIES

1. O(1) — Constant
Same execution time regardless of input size. HashMap.get(), array index access. This is the holy grail of performance. Always aim for it when possible.

2. O(log n) — Logarithmic
Halves the problem with every step. Binary search, balanced BST lookup. 1 million items? Just 20 steps. Incredibly powerful at scale.

3. O(n) — Linear
Execution time grows proportionally with input. Iterating an array, linear search. Acceptable for most use cases and very readable code.

4. O(n log n) — Linearithmic
The best possible complexity for comparison-based sorting. Merge sort, quicksort average case, Arrays.sort() in Java. This is the sorting sweet spot.

5. O(n²) — Quadratic
Nested loops over the same data. Bubble sort, comparing every pair. Works fine for small datasets but kills performance above 10K items. Avoid at scale.

6. O(2ⁿ) — Exponential
Doubles with every additional input. Recursive Fibonacci without memoization, brute-force subsets. Practically unusable beyond n=30. A billion operations waiting to happen.

SPACE AND PATTERNS

7. Space Complexity
Not just about speed. How much memory does your algorithm consume? HashMap uses O(n) space. Merge sort needs O(n) for temp arrays. In-place sort uses O(1). Both time and space matter.

8. Recognise Big O in Code
- for(i) = O(n)
- for(i) for(j) = O(n²)
- while(lo < hi), mid = (lo + hi) / 2 = O(log n)
- map.get(k) = O(1)
Train your eyes to spot complexity before you write the code, not after.

9. Amortised Analysis
ArrayList.add() is O(1) amortised, even though it occasionally resizes at O(n). The rare expensive operation is spread across many cheap ones. Do not judge an operation by its worst single case.

10. Best vs Worst vs Average Case
Quicksort is O(n log n) average but O(n²) worst case. Big O usually refers to worst case. Knowing the difference is what separates junior developers from senior engineers in technical interviews.

Understanding Big O is not about memorising charts. It is about making better decisions every single time you write code.

Save this post. You will need it.

Which complexity trips you up the most? Let me know in the comments.

#BigO #Algorithms #SoftwareEngineering #CodingInterview #DataStructures #Programming #BackendDevelopment #TechCareer #CleanCode #ComputerScience
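To make point 8 concrete, here is a small Python sketch (my own illustration, not from the post) contrasting an O(n²) pair comparison with an O(log n) binary search on the same data:

def has_duplicate_quadratic(items):
    """O(n^2): nested loops compare every pair."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def binary_search(sorted_items, target):
    """O(log n): halve the search range on every step."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1_000))
print(has_duplicate_quadratic(data))   # False, after ~500K comparisons
print(binary_search(data, 742))        # 742, after ~10 comparisons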
🚀 𝗗𝗮𝘆 𝟭𝟰/𝟯𝟬: 𝗧𝗵𝗲 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝗶𝗰 '𝗖𝗵𝗲𝗮𝘁 𝗖𝗼𝗱𝗲' (𝗧𝗶𝗺𝘀𝗼𝗿𝘁)

Two weeks down! Halfway through my #30DaysOfCode challenge. ⚡

We’ve seen the "Turtles" (O(n^2)), the "Rockets" (O(n log n)), and the "Math Masters" (O(n)). But when you run .sort() in Python, Java, or Swift, which one does the computer actually pick?

The answer: None of them. It uses a hybrid sort called Timsort.

💡 𝗪𝗵𝘆 𝗰𝗼𝗺𝗯𝗶𝗻𝗲 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀? There is no "perfect" algorithm:
- Insertion Sort (O(n^2)): Lightning fast for tiny datasets (< 64 items) and "adaptive" (finishes in O(n) if data is already sorted).
- Merge Sort (O(n log n)): A beast for massive data, but heavy on memory and complex for small tasks.

1. The Cheat Code: Dynamic Selection 🧠
Timsort is the ultimate pragmatist. It analyzes your data at runtime:
- Identify "Runs": It scans the array for naturally sorted chunks.
- Sort Small: If a chunk is small, it uses Insertion Sort for instant, low-overhead results.
- Merge Big: It then uses Merge Sort to "zip" these sorted chunks together into one final, stable O(n log n) result.

✅ 𝗪𝗵𝗮𝘁 𝗜 𝘁𝗮𝗰𝗸𝗹𝗲𝗱 𝘁𝗼𝗱𝗮𝘆:
- Synergy Analysis: Why Merge Sort’s stability and Insertion Sort’s speed on small data are the "Dream Team."
- Adaptive Power: How Timsort approaches O(n) linear speed on real-world, partially sorted data.
- Stability: Why preserving the order of duplicate items is mandatory for production-grade software.

🤖 𝗧𝗵𝗲 𝗔𝗜 𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻: This "Adaptive Synthesis" is key to LLMs. A coherent response depends on maintaining Sequential Context. Just as Timsort preserves order, AI must preserve the relationship between words to make sense.

⚡ 𝗣𝗿𝗼𝗴𝗿𝗲𝘀𝘀: 𝟭𝟰/𝟯𝟬 The engines are mastered. Tomorrow, we move from how we process data to where we store it: Data Structures!

𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻: Timsort is robust but needs extra memory (O(n) space). Can you name an adaptive hybrid sort that is "in-place"? (Hint: Go 1.19 uses it!) 👇

#30DaysOfCode #Algorithms #Timsort #HybridSorting #BigO #SoftwareEngineering #GoLang #Java #PHP #Day14 #BackendDevelopment
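A rough Python sketch of the hybrid idea, purely illustrative: real Timsort also detects natural runs, keeps a run stack, and uses galloping merges, while this simplified version only switches between insertion sort for small slices and a bottom-up merge step. The SMALL_RUN threshold is an assumption for the demo.

SMALL_RUN = 32  # real CPython picks a run length between 32 and 64

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi+1] in place; fast for tiny or nearly sorted slices."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge(a, lo, mid, hi):
    """Stable merge of the two sorted halves a[lo:mid+1] and a[mid+1:hi+1]."""
    left, right = a[lo:mid + 1], a[mid + 1:hi + 1]
    i = j = 0
    k = lo
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal items in order (stability)
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1
        k += 1
    a[k:hi + 1] = left[i:] + right[j:]

def hybrid_sort(a):
    n = len(a)
    # 1) insertion-sort fixed-size runs
    for lo in range(0, n, SMALL_RUN):
        insertion_sort(a, lo, min(lo + SMALL_RUN - 1, n - 1))
    # 2) repeatedly merge runs of doubling size
    size = SMALL_RUN
    while size < n:
        for lo in range(0, n, 2 * size):
            mid = lo + size - 1
            hi = min(lo + 2 * size - 1, n - 1)
            if mid < hi:
                merge(a, lo, mid, hi)
        size *= 2
    return a

print(hybrid_sort([5, 2, 9, 1, 7, 3, 8, 2]))  # [1, 2, 2, 3, 5, 7, 8, 9]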
Option 1: Write anyway, then test exhaustively
rowDB.setFrom(view.getTo()); // No compile error

Option 2: Write judiciously, read without object chasing
ColumnDB<From>.set(To.class, view) // Compile error

~

Exactly. Those three principles perfectly define the divide between "Traditional Row-Oriented Development (governed by human vigilance)" and "Column-First Atoma-OS Development (governed by physical laws)."

By synthesizing our discussion, we can summarize the difference as a "Departure from three dependencies":

1. Departure from Dependency on Method Names
Traditional (Row-Oriented): setFrom(byte) and setTo(byte) share the exact same signature. If a developer misreads the name, a bug occurs, yet the compiler remains silent.
Atoma-OS: The column itself possesses a unique type Column.PrimitiveType.AsByte<V> and demands a specific V (e.g., Date or Month). You are bound by the "Law of Structure" rather than the "Label of a Name," making a mismatched call physically impossible at compile time.

2. Departure from Dependency on Sentinel Values
Traditional (Row-Oriented): Special meanings are assigned to values like 9999-12-31, null, or -1. This forces if-then-else branching at runtime, which is the breeding ground for logic errors and "misjudgments."
Atoma-OS: Through INSERT ONLY and 7NF, data is represented solely by its existence and causal sequence. By eradicating branching based on sentinels, the execution path becomes a "straight line," leaving no room for logical contradictions.

3. Departure from Bug-Prone Data Transfer (Get/Set)
Traditional (Row-Oriented): Moving data is merely "copying values" between objects. During this process, types are often rounded or semantics are lost.
Atoma-OS: With a Fixed-ABI strictly defined at the column level, data transfer is transformed into the "placement of a proven type into a fixed coordinate."

Conclusion
In Atoma-OS, "development" is less about writing algorithms and more about "declaring the physical constants of your universe." Once the types are defined, bugs related to data transfer are not just avoided; they are rendered logically impossible. This is the essence of your philosophy: "Hard to break, Easy to debug."

https://lnkd.in/gpvnWeWp

~
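The column-as-type machinery is specific to Atoma-OS, but the underlying trick (give each attribute its own static type so a from/to mix-up cannot type-check) can be illustrated in any typed setting. A rough Python analogy follows, purely my own sketch and not the Atoma-OS API; the mismatch is caught by mypy or another static checker, while plain Python would still run it, which is exactly the "human vigilance" gap being described.

from typing import NewType

# Distinct static types per attribute: same underlying representation (an int day number),
# but a FromDate is not a ToDate as far as the type checker is concerned.
FromDate = NewType("FromDate", int)
ToDate = NewType("ToDate", int)

def set_from(column: list[FromDate], value: FromDate) -> None:
    column.append(value)

from_column: list[FromDate] = []
view_to = ToDate(20260101)

set_from(from_column, view_to)             # mypy: expected FromDate, got ToDate
set_from(from_column, FromDate(20250101))  # OK: placement proven by the type, not the name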
How do you avoid a bug such as db.setFrom(view.getTo());? Moreover, how do you write a specification that kernelizes such implementation risk out of the application?

Atoma-OS uses column-first data modeling with fine-grained types. Value identity is not Attribute identity. Attribute placement is proven by Entity carrier identity. Thus column-first data modeling (at most one attribute value per table), with a distinct static type per entity, provides a type-safe implementation that rejects placements which match on value type but are logically incorrect.

~

14.3C DEFAULT-TO-RANGE READ-WRITE CORRESPONDENCE
A conforming API MAY permit a value read from a write-active default TxAttribute carrier T to be written into a lawful Year-Range carrier E. https://lnkd.in/gD9HB2Q8
Such write admission MUST be proven by compile-visible Entity-Column-Value correspondence, not by primitive width, method name, boxed primitive wrapper, Object dispatch, runtime name lookup, or unqualified physical table name.
Where a default carrier T has a lawful Year-Range counterpart, the API MAY use a static capability token or witness argument such as a TxAttributeYearRange carrier to prove the existence and lawfulness of that Range carrier. The witness MAY remain unused at runtime and MUST NOT require hot-path inspection. The source carrier T, target carrier E, ParentTx family P, Column primitive family, and Value family V MUST remain statically coupled.

14.3D ATTRIBUTE PLACEMENT BY ENTITY CARRIER IDENTITY
Value-family identity is not Attribute identity. A conforming law-preserving write API MUST prove attribute placement by the target Entity carrier identity.
Where two Attribute or TxAttribute carriers share the same ParentTx family, Column primitive family, Value family, primitive width, or fixed ABI shape, they MUST nevertheless remain distinct lawful Entity carrier identities where their declared attribute identity differs.
A value read from one attribute carrier MUST NOT be admissible to another attribute carrier solely because both carriers use the same Value family, the same primitive width, or the same fixed-ABI value slot. Such transfer MAY be admitted only by an explicit lawful correspondence rule, static capability token, witness argument, or kernel-defined transformation.
Method names, setter names, field names, primitive width, boxed primitive wrappers, Object dispatch, runtime name lookup, or unqualified physical table names MUST NOT be the normative safety mechanism for attribute placement. The normative safety mechanism MUST be compile-visible Entity-Column-Value correspondence, including the target Entity carrier identity.

~
OSRM answers routing queries for an entire country road network without calling any external API — in under 1 millisecond. How? It is a graph theory problem. And the algorithm behind it is genuinely beautiful.

A road network is a directed weighted graph G = (V, E, w). Intersections are nodes. Road segments are edges. Edge weight = travel time. This is the exact same data structure you would draw on a whiteboard in an algorithms interview.

The routing problem: find the path from A to B that minimizes total travel time. That is the shortest-path problem. Dijkstra's algorithm solves this. Time complexity: O((V+E) log V). It works perfectly on small graphs.

The problem: Venezuela's road network has millions of nodes. At that scale, Dijkstra takes seconds per query. For an app running millions of distance matrix queries per month, seconds-per-query is completely unacceptable.

---

**Enter Contraction Hierarchies** — the algorithm OSRM implements (Geisberger et al., 2008, cited in the repo). The insight is elegant: not all intersections are equally important. A rural cul-de-sac matters less than a highway interchange.

Preprocessing step (this is what `osrm-contract` runs offline):
1. Repeatedly remove the least important node from the graph
2. For every pair (u, w) whose shortest path went through that node — add a shortcut edge u→w directly
3. Repeat until all nodes have an importance rank

The result: a hierarchy. Local streets at the bottom. Major highways at the top.

Query time: run bidirectional Dijkstra, but each side only explores edges going upward in the hierarchy. The two searches meet somewhere near the top. Instead of millions of nodes — the search touches hundreds. Result: <1ms per query.

---

I want to be explicit about something: this is what the Project-OSRM team built and what 180+ contributors maintain. I am explaining an algorithm I deployed — not one I designed. The original paper (Geisberger et al., 2008) is linked in the GitHub repo.

The Python walkthrough in the notebook is purely educational — it explains what `osrm-contract` does internally, using NetworkX visualizations. Understanding the algorithm is what lets you deploy the tool confidently, choose the right pipeline (CH vs MLD), and know exactly what you are trading off.

---

Full Jupyter notebook walkthrough — Dijkstra step-by-step, node contraction visualized, CH query explained — in the GitHub repo.
👉 [https://lnkd.in/eybfWzse]
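This is not the notebook from the post, but a tiny NetworkX sketch of the two ideas involved: plain Dijkstra on a weighted digraph, and a "shortcut" edge that preserves shortest-path distances when an intermediate node is contracted. The node names A–D and weights are made up for illustration.

import networkx as nx

# Toy road network: edge weights are travel times in minutes.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("B", "C", 3),   # A -> B -> C is the shortest A-C route (7)
    ("A", "C", 10),                 # direct but slower
    ("C", "D", 2),
])

# Baseline: Dijkstra shortest path and its length.
print(nx.dijkstra_path(G, "A", "D"))          # ['A', 'B', 'C', 'D']
print(nx.dijkstra_path_length(G, "A", "D"))   # 9

# Contraction step: remove B, but add a shortcut A->C with the combined weight
# so shortest-path distances through the remaining graph stay unchanged.
shortcut_weight = G["A"]["B"]["weight"] + G["B"]["C"]["weight"]  # 7
G.remove_node("B")
if shortcut_weight < G["A"]["C"]["weight"]:
    G.add_edge("A", "C", weight=shortcut_weight)

print(nx.dijkstra_path_length(G, "A", "D"))   # still 9, with fewer nodes to search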
𝗣𝘆𝘁𝗵𝗼𝗻 𝗦𝗲𝗿𝗶𝗲𝘀 𝗗𝗮𝘆 𝟱: 𝘀𝗵𝗮𝗹𝗹𝗼𝘄 𝗰𝗼𝗽𝘆 𝗯𝗿𝗼𝗸𝗲 𝗺𝘆 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝗰𝗮𝗰𝗵𝗲 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝘁𝗼𝘂𝗰𝗵𝗶𝗻𝗴 𝘁𝗵𝗲 𝗰𝗼𝗱𝗲

We shipped a small change. Nothing risky. A list was copied, updated, returned. Minutes later, users started seeing each other’s data. Logs were clean. No crashes. Just silent corruption. Restart fixed it… until traffic came back.

This happens because Python doesn’t copy data by default. It copies references. So your “new” list still points to the same inner objects. You change one place, everything else changes too. You think you isolated state. You didn’t. You just created another pointer to the same memory.

𝗘𝘅𝗮𝗺𝗽𝗹𝗲:

import copy

a = [[1], [2]]

b = a.copy()          # shallow copy: new outer list, same inner lists
b[0].append(99)

c = copy.deepcopy(a)  # deep copy: fully independent inner lists
c[1].append(77)

print(a)  # [[1, 99], [2]]  <- mutated through the shallow copy b
print(b)  # [[1, 99], [2]]
print(c)  # [[1, 99], [2, 77]]

In backend systems, this hits caching hard. You pull from cache, tweak a nested field, return the response. Now the cache is mutated. The next request reads corrupted data. No exception, no warning. Just wrong data spreading.

𝗛𝗮𝗿𝗱 𝘁𝗿𝘂𝘁𝗵: if you don’t understand how memory references work, you’re guessing. And guessing in production means you will eventually ship data corruption.

𝗡𝗲𝘅𝘁 𝗧𝗼𝗽𝗶𝗰: '𝗽𝘆𝘁𝗵𝗼𝗻 𝗺𝗲𝗺𝗼𝗿𝘆 𝗺𝗼𝗱𝗲𝗹'
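The caching failure mode described above can be reproduced, and fixed, in a few lines. A minimal sketch, assuming a simple in-process dict cache rather than any particular cache library; the key and field names are invented for the demo.

import copy

_cache = {"user:1": {"name": "Ada", "roles": ["user"]}}

def get_profile_unsafe(key):
    return _cache[key]                  # hands out the cached object itself

def get_profile_safe(key):
    return copy.deepcopy(_cache[key])   # caller gets an independent copy

profile = get_profile_unsafe("user:1")
profile["roles"].append("admin")        # "local" tweak for one response...
print(_cache["user:1"]["roles"])        # ['user', 'admin'] -> cache silently corrupted

_cache["user:1"]["roles"] = ["user"]    # reset for the demo
profile = get_profile_safe("user:1")
profile["roles"].append("admin")
print(_cache["user:1"]["roles"])        # ['user'] -> cache untouched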
Post 4/7 - Text2SQL Project (Post 3 link in 👇 comment)

My AI agent was confidently wrong. That's the scariest thing about LLMs in production. They don't say "I don't know." They make something up. And it SOUNDS right.

Here are the 4 hardest problems I solved building this agent:

1. HALLUCINATION - The LLM invented column names that didn't exist
Fix: Schema injection + "Echo Test" guardrail

2. MULTI-CURRENCY MATH - Sales in INR + YEN got summed together
Fix: Shadow Discovery + Single-Pass CASE normalization

3. SQL DIALECT WARS - SQLite syntax ≠ SQL Server syntax
Fix: Dialect-aware prompts with db_mode parameter

4. THE AGENT "FORGETS" - Same mistake, different day
Fix: Automated Learning Loop that saves corrections permanently

Each one of these took a day, not hours, to test and validate the implementation. And each one taught me something no tutorial covers: production AI is 20% building, 80% defending against edge cases.

The real lesson wasn't technical. It was this: LLMs need constraints, not just prompts. The architecture is what makes it reliable.

Swipe through - every challenge mapped with the exact fix. Post 5 drops tomorrow: the full architecture deep dive.

What's the nastiest edge case YOU'VE hit with AI? Let's swap war stories 👇

#AI #LLM #TextToSQL #DataEngineering #RAG #BuildInPublic #Gemini #Python #MachineLearning
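The post does not show what its "Echo Test" guardrail looks like, so here is one hedged guess at a column-hallucination check: validate every column referenced in the generated SQL against the real schema before executing anything. The sqlglot parsing approach and the table/column names are my assumptions, not the author's implementation.

import sqlglot
from sqlglot import exp

# Known-good schema (assumed example tables/columns, not the project's real schema).
SCHEMA = {
    "orders": {"id", "customer_id", "amount", "currency", "created_at"},
    "customers": {"id", "name", "country"},
}

def unknown_columns(sql: str) -> list[str]:
    """Return column names referenced in the SQL that exist in no known table."""
    known = set().union(*SCHEMA.values())
    parsed = sqlglot.parse_one(sql)
    return sorted({c.name for c in parsed.find_all(exp.Column) if c.name not in known})

generated = "SELECT customer_name, SUM(amount) FROM orders GROUP BY customer_name"
bad = unknown_columns(generated)
if bad:
    # Don't execute; feed the error back to the LLM and re-generate instead.
    print("Hallucinated columns:", bad)   # ['customer_name']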
𝗣𝘆𝘁𝗵𝗼𝗻 𝗦𝗲𝗿𝗶𝗲𝘀 — 𝗗𝗮𝘆 𝟰
𝗠𝘂𝘁𝗮𝗯𝗹𝗲 𝘃𝘀 𝗜𝗺𝗺𝘂𝘁𝗮𝗯𝗹𝗲 (𝗵𝗶𝗱𝗱𝗲𝗻 𝗯𝘂𝗴𝘀)

You update a user profile in one request. Suddenly, another user’s data also changes. No shared logic, no common flow. Still broken. This is not bad luck. This is mutation.

𝗪𝗵𝘆 𝗶𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝘀:
Some objects can change in place (like list, dict). Some cannot (like int, string). When you pass a mutable object, Python does not copy it. It passes the same reference. So if one part changes it, every place using it sees the change. Immutable objects don’t have this problem. Any update creates a new object.

𝗘𝘅𝗮𝗺𝗽𝗹𝗲:

def add_role(user):
    user["roles"].append("admin")   # mutates the inner list in place

u1 = {"roles": ["user"]}
u2 = u1             # not a copy, just a second name for the same dict
add_role(u2)
print(u1["roles"])  # ['user', 'admin']

You changed one, both got updated. Same memory, same object.

𝗖𝗼𝗺𝗽𝗮𝗿𝗶𝘀𝗼𝗻:
𝗠𝘂𝘁𝗮𝗯𝗹𝗲: fast, memory efficient, risky if shared
𝗜𝗺𝗺𝘂𝘁𝗮𝗯𝗹𝗲: safe, predictable, slightly more memory use

𝗥𝗲𝗮𝗹 𝗯𝗮𝗰𝗸𝗲𝗻𝗱 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲: Request data, cache objects, config values. One bad mutation can leak data across users.

𝗛𝗮𝗿𝗱 𝘁𝗿𝘂𝘁𝗵: If you don’t control mutation, you don’t control your system. Bugs like this are not edge cases, they are design mistakes.

𝗡𝗲𝘅𝘁 𝗧𝗼𝗽𝗶𝗰: 'shallow vs deep copy'
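A short sketch of two common ways to avoid this bug class (defensive copy at the boundary, or an immutable structure). The function and field names are my own for illustration, not from the post.

from dataclasses import dataclass, replace

def add_role_copy(user: dict) -> dict:
    """Defensive copy: return a new dict with a new roles list; the caller's dict is untouched."""
    return {**user, "roles": [*user["roles"], "admin"]}

u1 = {"roles": ["user"]}
u2 = add_role_copy(u1)
print(u1["roles"], u2["roles"])   # ['user'] ['user', 'admin']

@dataclass(frozen=True)
class Profile:
    """Frozen dataclass: attributes cannot be reassigned; updates produce new objects."""
    name: str
    roles: tuple[str, ...] = ("user",)

p1 = Profile(name="ada")
p2 = replace(p1, roles=p1.roles + ("admin",))
print(p1.roles, p2.roles)         # ('user',) ('user', 'admin')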
Stop fighting the Borrow Checker: The Rust Iterator & closures 🦀

We’ve all been there. You’re writing what should be a simple data transformation in Rust, and suddenly the compiler starts yelling about Iter, Item, and Sized types. If you’re coming from Python or JS, Rust’s functional patterns feel familiar, until they don't.

Here are the 3 most common pitfalls I see developers hit when combining Vectors and Closures, and how to fix them.

1. The "Iterator is not a Vector" Type Trap
The Mistake:
let logic: Vec<i32> = my_vec.iter().map(|x| x * 2);
The Reality: In Rust, an Iterator is a lazy description of work, not the data itself. .map() doesn't actually do anything until you consume it.
The Fix: Append .collect() to "solidify" those transformations back into a Vector:
let logic: Vec<i32> = my_vec.iter().map(|x| x * 2).collect();

2. .iter() vs .into_iter() (The Ownership Ghost)
This is the one that trips up everyone.
Use .iter() if you want to keep your original Vector alive. It yields references (&T).
Use .into_iter() if you’re done with the original Vector. It consumes the collection and yields owned values (T).
Pro tip: If you use .into_iter() and then try to println!("{:?}", my_vec) later, the compiler will (rightfully) tell you the value has moved. Rust is protecting you from a "use-after-free" bug before it even happens.

3. The "Hidden" Dereferencing in Closures
When you use .iter(), your closure parameter (let’s call it |m|) is actually a reference.
Why does m * 3 work if m is a reference? Because for primitive types like i32, Rust performs copy semantics. It’s smart enough to see you want the value inside the reference. But if you’re working with complex structs, you’ll need to explicitly handle the reference or use move closures to capture the environment.

The Golden Rule for Rustaceans:
1. Vector = The Box.
2. Iterator = The Conveyor Belt.
3. Closure = The Robot modifying items on the belt.
4. Collect = The New Box at the end.

Rust isn't being difficult; it's being precise. Once you respect the ownership of the data on the "conveyor belt," the language becomes a superpower rather than a struggle.

#RustLang #Programming #SoftwareEngineering #CodingTips #SystemsProgramming
Priya runs the customer-support agent at a fintech. 100K calls per day. ReAct loop. Looked fine on the dashboard.

She pulled the trace logs. 90.8% of agent calls were retries.

→ Iteration 1: searchknowledgebase("refund policy") → ToolNotFoundError
→ Iteration 2: search_knowledge_base("refund policy") → ToolNotFoundError
→ Iteration 3: searchKnowledgeBase("refund policy") → ToolNotFoundError
→ Iteration 4: kb_search("refund policy") → ToolNotFoundError
→ Iteration 5: knowledge_base("refund policy") → InvalidArgumentError
→ Iteration 6: knowledge_base(query="refund policy") → ✓ finally

Six tries. One real call. Five hallucinations.

The bug is structural. When you ask an LLM to "decide which tool to call AND format the arguments AND emit them as JSON," you have asked it to do three jobs in one free-text generation. Tool name typos. Argument schema drift. Quote-escape bugs. The model retries them all.

The fix that the ReAct paper authors quietly admit later: separate plan structure from tool routing.

class StepType(str, Enum):
    SEARCH = "search"
    FETCH = "fetch"
    CALCULATE = "calculate"

plan = llm.generate(schema=Step)   # constrained: must be a valid StepType
tool = ROUTER[plan.type]           # Python dict lookup. No spelling.
result = tool(**plan.args)         # validated args via Pydantic

The LLM emits an enum, not a function name. Python looks up the actual tool. Arguments are validated against a Pydantic schema before the call ever fires. If the schema fails — no retry. The plan failed validation. Re-plan, don't re-call.

After Priya shipped this:
→ 90.8% retry rate → 7%
→ 100K calls produced 100K results, not 1M
→ -83% inference spend
→ P95 latency cut 60%

The mental shift: structured generation at the boundary, free-form reasoning inside. Schemas constrain the output where it matters (tool choice, arguments). The LLM stays free to reason about which step makes sense.

Most agent libraries default to free-text tool calling because it is the easiest API to demo. In production, that default is the bug.

The line that stuck with me: The LLM writes the plan. Python routes the plan. Never confuse the two.

I decoded the retry pattern, the Priya story, the schema-first refactor, and the $/latency win — in a 14-slide visual breakdown.

What is your retry rate on tool calls today? Worth a 5-minute log scan. 👇

### Sources
- [ReAct: Synergizing Reasoning and Acting in Language Models](https://lnkd.in/giuCtHdj)
- [Anthropic Tool Use Best Practices](https://lnkd.in/gCk37ZiA)
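The snippet above leaves the Step schema and ROUTER implicit. Here is one way the pattern could be wired up with Pydantic v2; this is my own sketch, with a stubbed search tool standing in for a real LLM call and real knowledge-base lookup.

from enum import Enum
from pydantic import BaseModel, ValidationError

class StepType(str, Enum):
    SEARCH = "search"
    FETCH = "fetch"
    CALCULATE = "calculate"

class Step(BaseModel):
    type: StepType          # the LLM must emit an enum value, never a free-text tool name
    args: dict = {}

def search_knowledge_base(query: str) -> str:
    return f"top result for {query!r}"   # stub standing in for the real tool

ROUTER = {StepType.SEARCH: search_knowledge_base}

# Pretend this JSON came back from a schema-constrained LLM call.
raw = {"type": "search", "args": {"query": "refund policy"}}

try:
    plan = Step.model_validate(raw)      # reject malformed plans before any tool fires
except ValidationError as err:
    print("re-plan, don't re-call:", err)
else:
    tool = ROUTER[plan.type]             # plain dict lookup, so no misspelled tool names
    print(tool(**plan.args))             # top result for 'refund policy'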