Lunchtime gone, swallowed by a *conda activate* error in PowerShell that I'd been ignoring for weeks. Debugged it with Anthropic Claude: first time in nearly a decade I've gone this deep into environment variables and shell module internals. Stray quotes in PATH? --> No. A missing Out-String in conda's PowerShell wrapper? --> No. The culprit: a regex I had added in an earlier fix was silently corrupting conda's output. The classic case of one fix seeding the next error down the line. The rate of improvement of these tools is terrifyingly brilliant, and the power of incorporating them into your tasks is undeniable. #Claude #Python #Debugging
𝗧𝗵𝗲 𝗣𝘆𝘁𝗵𝗼𝗻 "𝗥𝗲𝗳𝗲𝗿𝗲𝗻𝗰𝗲" 𝗧𝗿𝗮𝗽 I noticed something interesting today: I changed a value inside a function, and it reflected outside too. I didn't return anything. I didn't re-assign the variable. ➡️ 𝗧𝗵𝗲 𝗖𝗮𝘁𝗰𝗵: Python never creates a new "𝗰𝗼𝗽𝘆" of your data when you pass it to a function. The parameter is just 𝗮 𝗿𝗲𝗳𝗲𝗿𝗲𝗻𝗰𝗲 to the original object. ▪️𝗠𝘂𝘁𝗮𝗯𝗹𝗲 𝗼𝗯𝗷𝗲𝗰𝘁𝘀 (Lists, Dicts, Sets) are modified in place. Any change inside the function affects the original data directly. ▪️𝗜𝗺𝗺𝘂𝘁𝗮𝗯𝗹𝗲 𝗼𝗯𝗷𝗲𝗰𝘁𝘀 (Integers, Strings, Tuples) are safe because they can't be changed in place. ➡️ 𝗧𝗵𝗲 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: If you're working with Lists or Dictionaries and want to keep your original data safe, you must be explicit and pass a copy: update(my_list.copy()). Small detail, but missing it can lead to hours of debugging. #Python #30DaysOfCode #SoftwareEngineering #LearningInPublic #Day19
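A minimal sketch of the trap (the function and variable names here are illustrative, not from the post above):

```python
def add_item(items):
    # Mutates the caller's list: 'items' is another reference to the same object
    items.append("new")

def increment(count):
    # Rebinds the local name only: ints are immutable, so the caller is unaffected
    count += 1

my_list = ["a", "b"]
my_count = 0

add_item(my_list)
increment(my_count)
print(my_list)   # ['a', 'b', 'new']  <- original list changed
print(my_count)  # 0                  <- original int unchanged

# To protect the original, pass a copy explicitly
safe = my_list.copy()
add_item(safe)
print(my_list)   # still ['a', 'b', 'new']
```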
🚀 Day 66 / 200 – Consistency is the real power 💡 Today’s challenge: **Implementing Pow(x, n)** using an optimized approach ⚡ Instead of the brute-force method, I used **Binary Exponentiation (Fast Power)** to reduce the time complexity from O(n) to O(log n). This approach efficiently handles both positive and negative powers, making the solution scalable and interview-ready. 🔹 Key Learnings: * Breaking problems into smaller subproblems improves efficiency * Handling edge cases (like negative exponents) is crucial * Optimization matters just as much as correctness 📊 Result: ✔️ 307 / 307 test cases passed ⚡ Runtime: 1 ms 🧠 Memory Efficiency: Top 2% Small improvements every day lead to big results over time. Staying consistent and sharpening problem-solving skills one challenge at a time 💪 #Day66 #100DaysOfCode #200DaysChallenge #LeetCode #Python #DSA #CodingJourney #ProblemSolving #Consistency
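For reference, here's a minimal sketch of iterative binary exponentiation (my own illustration, not necessarily the exact submission):

```python
def my_pow(x: float, n: int) -> float:
    # Negative exponent: invert the base and work with |n|
    if n < 0:
        x, n = 1 / x, -n
    result = 1.0
    # Square-and-multiply: the exponent halves each step -> O(log n)
    while n:
        if n & 1:          # lowest bit set: fold the current base in
            result *= x
        x *= x             # square the base
        n >>= 1            # move to the next bit of the exponent
    return result

print(my_pow(2.0, 10))   # 1024.0
print(my_pow(2.0, -2))   # 0.25
```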
Built a Python-based Directory Sync Tool to compare and synchronize files between two directories with reliability and control. Instead of relying only on file names or timestamps, the tool uses a combination of metadata and SHA-256 hashing to accurately detect new, modified, and missing files. Key highlights: • Recursive directory scanning with structured metadata (name, extension, size, hash) • Efficient change detection using size-first filtering followed by hash comparison • Memory-efficient hashing using chunk-based file reading (handles large files) • Synchronization support with metadata preservation using shutil.copy2 • Safe cleanup by optionally removing extra files from the destination While building this, I focused on moving beyond a basic script and treating it like a real tool: structuring the code into clear components, improving output readability, and adding validation and error handling to make it more reliable in real use. GitHub: https://lnkd.in/gt-Ec3rF #Python #CLI #GitHubProjects #SoftwareDevelopment #LearningByBuilding #SystemsThinking
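The chunk-based hashing and size-first filtering ideas in a minimal sketch (assumed helpers for illustration, not the repo's actual code):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 65536) -> str:
    # Read the file in fixed-size chunks so memory use stays flat
    # even for multi-GB files
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def same_file(src_path: str, dst_path: str, src_size: int, dst_size: int) -> bool:
    # Size-first filtering: files of different sizes can never match,
    # so only pay the cost of hashing when sizes agree
    if src_size != dst_size:
        return False
    return file_sha256(src_path) == file_sha256(dst_path)
```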
A 40ms API became a 4ms API. Here's the only thing that changed. We were making 3 separate DB queries to assemble a response. Each was fast in isolation. Together, they were sequential — each waited for the previous. The fix: run them concurrently. In Python (asyncio), this went from:

result_a = await get_a()
result_b = await get_b()
result_c = await get_c()

To:

result_a, result_b, result_c = await asyncio.gather(get_a(), get_b(), get_c())

That's it. No caching, no infra change, no complex refactor. The mental model that helps: always ask "are these operations actually dependent on each other?" before assuming they need to run in sequence. Most API latency problems aren't hard — they're just unexamined. #BackendDevelopment #PythonAsyncio #APIOptimization #SoftwareEngineering
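For anyone who wants to see the effect, here's a self-contained toy with sleeps standing in for the DB queries (all names are illustrative):

```python
import asyncio
import time

async def get_a():
    await asyncio.sleep(0.01)  # pretend 10ms DB query
    return "a"

async def get_b():
    await asyncio.sleep(0.01)
    return "b"

async def get_c():
    await asyncio.sleep(0.01)
    return "c"

async def sequential():
    start = time.perf_counter()
    await get_a()
    await get_b()
    await get_c()
    return time.perf_counter() - start  # ~30ms: each awaits the previous

async def concurrent():
    start = time.perf_counter()
    await asyncio.gather(get_a(), get_b(), get_c())
    return time.perf_counter() - start  # ~10ms: all three in flight at once

print(f"sequential: {asyncio.run(sequential()):.3f}s")
print(f"concurrent: {asyncio.run(concurrent()):.3f}s")
```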
🚀 Day 75 of #100DaysOfCode 🔥 LeetCode 179 – Largest Number 💡 Problem: Given a list of non-negative integers, arrange them such that they form the largest possible number. 🧠 Key Insight: Normal sorting won't work here ❌ We need a custom comparator based on string concatenation. 👉 Compare "a + b" vs "b + a": whichever gives the larger value should come first. ⚙️ Approach: 1. Convert numbers to strings 2. Sort using custom comparison logic 3. Join the result 4. Handle the edge case ([0, 0] → "0") ⚡ Complexity: - Time: O(n log n) - Space: O(n) 🎯 Result: ✅ Accepted ⚡ Runtime: 0 ms (100%) 📌 Lesson Learned: Sometimes sorting logic depends on combination, not value. #LeetCode #Python #CodingJourney #DSA #100DaysOfCode #Sorting #ProblemSolving
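The comparator idea as a minimal sketch using functools.cmp_to_key (my illustration; the accepted solution may differ):

```python
from functools import cmp_to_key

def largest_number(nums: list[int]) -> str:
    strs = [str(n) for n in nums]

    # Order by concatenation result: "9" + "34" = "934" beats "34" + "9" = "349"
    def compare(a: str, b: str) -> int:
        if a + b > b + a:
            return -1   # a should come first
        if a + b < b + a:
            return 1
        return 0

    strs.sort(key=cmp_to_key(compare))
    result = "".join(strs)
    # Edge case: [0, 0] would join to "00"; collapse to "0"
    return "0" if result[0] == "0" else result

print(largest_number([3, 30, 34, 5, 9]))  # "9534330"
print(largest_number([0, 0]))             # "0"
```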
I used to think tuples were just “lists with stricter rules”… but today showed me they have their own vibe. 🐍 Day 06 of my #30DaysOfPython journey was all about tuples, and this topic made one thing really clear: sometimes the best data structure is the one that stays put. A tuple is an ordered and unchangeable collection of different data types, created using round brackets (). Today I explored: 1. Creating tuples with tuple() 2. Accessing items using positive and negative indexing 3. Slicing tuples with positive and negative indexes 4. Checking whether an item exists using in 5. Counting items with count() 6. Finding item positions with index() 7. Joining tuples using + operator 8. Converting tuples to lists with list() 9. Deleting the whole tuple using del What stood out to me today was how tuples are built for stability. They are not meant to be edited over and over again — and that actually makes them really useful when you want data to stay consistent. One more day, one more topic, one more layer of Python making sense. Github Link - https://lnkd.in/gHwugKTU #Python #LearnPython #CodingJourney #30DaysOfPython #Programming #DeveloperJourney
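A quick tour of those operations in one snippet (values are illustrative):

```python
# Creating tuples: literal round brackets or the tuple() constructor
fruits = ("apple", "banana", "cherry", "banana")
empty = tuple()

# Positive and negative indexing
print(fruits[0])    # 'apple'
print(fruits[-1])   # 'banana'

# Slicing works just like lists
print(fruits[1:3])  # ('banana', 'cherry')

# Membership, counting, and positions
print("cherry" in fruits)      # True
print(fruits.count("banana"))  # 2
print(fruits.index("cherry"))  # 2

# Joining tuples creates a NEW tuple; the originals never change
more = fruits + ("mango",)
print(more)

# Need to edit? Convert to a list, change it, convert back
editable = list(fruits)
editable.append("kiwi")
fruits2 = tuple(editable)

# You can't delete a single item, but you can delete the whole tuple
del empty
```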
𝗧𝗵𝗲 𝗗𝗕 𝗰𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻 𝗹𝗲𝗮𝗸𝗲𝗱 𝗶𝗻 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻. 𝗧𝗵𝗲 𝗳𝗶𝗹𝗲 𝗵𝗮𝗻𝗱𝗹𝗲 𝘀𝘁𝗮𝘆𝗲𝗱 𝗼𝗽𝗲𝗻. 𝗧𝗵𝗲 𝗿𝗼𝘄 𝗹𝗼𝗰𝗸 𝗵𝘂𝗻𝗴 𝗶𝗻𝗱𝗲𝗳𝗶𝗻𝗶𝘁𝗲𝗹𝘆. These aren't edge cases—they are the inevitable result of making resource cleanup the caller's responsibility. In Python, 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗠𝗮𝗻𝗮𝗴𝗲𝗿𝘀 move that responsibility from the developer to the type itself. The resource becomes self-healing. 🔹 __exit__ is called even if an exception is raised—that is the safety guarantee. 🔹 @contextmanager lets you write the same protocol with 'yield'—no class needed. 🔹 Any resource with an acquire/release lifecycle belongs in a context manager. The 𝘸𝘪𝘵𝘩 statement isn't just syntactic sugar—it’s a contract. The caller writes business logic; the object handles the cleanup. #Python #SoftwareEngineering #BackendDevelopment #SoftwareArchitecture #CleanCode
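Both flavours in one minimal sketch (the connection type is a stand-in for illustration, not a real DB driver):

```python
from contextlib import contextmanager

# Class-based protocol: __exit__ runs even if the body raises
class ManagedConnection:
    def __enter__(self):
        print("acquire connection")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("release connection")   # the safety guarantee
        return False                  # don't swallow exceptions

    def query(self, sql):
        print(f"run: {sql}")

# Same acquire/release lifecycle with @contextmanager: no class needed
@contextmanager
def managed_file(path):
    f = open(path, "w")
    try:
        yield f            # the body of the 'with' block runs here
    finally:
        f.close()          # runs no matter what, like __exit__

with ManagedConnection() as conn:
    conn.query("SELECT 1")  # "release connection" prints even if this raises

with managed_file("out.txt") as f:
    f.write("hello")        # f.close() is guaranteed, exception or not
```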
Why I am paranoid about schema drift. When you build a pipeline like Shop Pulse, everything works perfectly as long as the data looks exactly like you expect. But in the real world, someone always changes a column name or a data type without telling you. That is why I have been focusing on schema enforcement. If a bad record hits your Spark job and you haven't handled it, the whole pipeline crashes at 3 AM. I’ve started implementing validation layers that catch these changes before they pollute the Delta Lake. It is more work upfront, but it is the only way to sleep peacefully knowing your data is actually reliable. #DataEngineering #ApacheSpark #DataQuality #Python #BackendDevelopment
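A rough sketch of what fail-fast schema enforcement can look like in PySpark (the schema and paths are invented for illustration; the real validation layer in Shop Pulse is more involved):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType
)

spark = SparkSession.builder.appName("shop-pulse-ingest").getOrCreate()

# Pin the contract: an explicit schema instead of inference
expected = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("created_at", TimestampType(), nullable=True),
])

# 1) Peek at the incoming structure with no schema imposed:
#    did someone rename or drop a column upstream?
raw = spark.read.json("/landing/orders/")
missing = set(expected.fieldNames()) - set(raw.columns)
if missing:
    raise ValueError(f"Schema drift detected, missing columns: {missing}")

# 2) Load for real with the pinned schema; FAILFAST makes malformed
#    rows raise immediately instead of silently becoming nulls that
#    pollute the Delta table downstream
df = (
    spark.read
    .schema(expected)
    .option("mode", "FAILFAST")
    .json("/landing/orders/")
)
```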
Spent 5 days chasing ghosts: DLL hell and ABI mismatches. I followed the agentic debugger down the wrong path as it hallucinated at the wrong layer, misreading WinError 1114 as a load-path issue rather than a missing export. The actual fix was two lines: I had used TORCH_LIBRARY when I needed PYBIND11_MODULE. The Architecture Gap: - Use TORCH_LIBRARY to register ops into the PyTorch C++ Dispatcher (accessed via torch.ops). It fires static C++ constructors at DLL load time but does not create a PyInit_* function, so Python can't "see" it as a module. - Use PYBIND11_MODULE to generate the standard Python C extension entry point: the PyInit_{name} function Python needs in order to import the module. The error was literal: "dynamic module does not define module export function." No PyInit_* existed because TORCH_LIBRARY isn't meant to be imported directly. {just correcting the record} #CPP #PyTorch #SystemsProgramming #MachineLearning #barebones #3D