Shallow Copy vs Deep Copy — The 2 AM Bug Trap 🛑

Most developers think they understand copying objects, until their original data mysteriously changes. That's not a bug, that's memory behavior biting you.

→ Shallow Copy
Creates a new container, but nested objects are still shared (by reference).
👉 Change nested data → both copies change.
Best for: flat, simple data.

→ Deep Copy
Creates a completely independent clone; everything is copied recursively.
👉 Change anything → the original stays untouched.
Best for: complex, nested structures.

💡 Rule of Thumb
Shallow → when you only need a surface-level copy
Deep → when you need true isolation

⚠️ The real trap: most bugs aren't syntax errors. They come from not understanding how data behaves in memory.

If you've ever spent hours debugging only to realize it was a shallow copy issue, welcome to the club 😄

#Python #Python3 #Programming #SoftwareEngineering #CleanCode #Debugging #TechTips #PythonDeveloper #BackendDevelopment
Shallow vs Deep Copy in Python: Avoid the 2 AM Bug Trap
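A minimal sketch of the trap described above, using the standard-library copy module (the dictionary contents and variable names are just illustrative):

import copy

original = {"name": "config", "tags": ["a", "b"]}

shallow = copy.copy(original)     # new outer dict, but "tags" points to the SAME list
deep = copy.deepcopy(original)    # fully independent clone, nested list copied too

shallow["tags"].append("c")       # mutate the shared nested list...
print(original["tags"])           # ['a', 'b', 'c']  <- the original changed!

deep["tags"].append("d")
print(original["tags"])           # ['a', 'b', 'c']  <- the deep copy left it alone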
More Relevant Posts
🚀 𝗖𝗿𝗮𝗰𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 “𝗕𝗲𝘀𝘁 𝗧𝗶𝗺𝗲 𝘁𝗼 𝗕𝘂𝘆 𝗮𝗻𝗱 𝗦𝗲𝗹𝗹 𝗦𝘁𝗼𝗰𝗸” 𝗣𝗿𝗼𝗯𝗹𝗲𝗺

💡 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗦𝘁𝗮𝘁𝗲𝗺𝗲𝗻𝘁
You’re given stock prices where prices[i] = price on day i.
👉 Goal: buy once and sell once (in the future) to get maximum profit.

📌 𝗘𝘅𝗮𝗺𝗽𝗹𝗲
Input: [4, 2, 3, 4, 5, 2]
Output: 3
✔ Buy at 2 → Sell at 5

🧠 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝟭: 𝗕𝗿𝘂𝘁𝗲 𝗙𝗼𝗿𝗰𝗲 (𝗢(n²))
Check every possible pair of buy & sell days.
❌ Inefficient for large data.

⚡ 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝟮: 𝗧𝘄𝗼 𝗣𝗼𝗶𝗻𝘁𝗲𝗿 / 𝗦𝗹𝗶𝗱𝗶𝗻𝗴 𝗪𝗶𝗻𝗱𝗼𝘄 (𝗢(n))
Track buy and sell pointers; update buy when a smaller price appears.
✔ Better performance with linear time (see the sketch below this post).

🔥 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝟯: 𝗢𝗽𝘁𝗶𝗺𝗮𝗹 (𝗚𝗿𝗲𝗲𝗱𝘆 - 𝗢(n))
Track the minimum price so far and calculate the profit at each step.
✔ Most efficient and clean solution.

💻 𝗢𝗽𝘁𝗶𝗺𝗮𝗹 𝗖𝗼𝗱𝗲
def optimal_stock(prices):
    min_price = float("inf")
    max_profit = 0
    for price in prices:
        min_price = min(min_price, price)
        profit = price - min_price
        max_profit = max(max_profit, profit)
    return max_profit

🔗 𝗚𝗶𝘁𝗛𝘂𝗯 𝗖𝗼𝗱𝗲: https://lnkd.in/g-iaHxs5

🎯 𝗞𝗲𝘆 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
Always track the minimum before the maximum.
A greedy approach often gives optimal results in linear time.

💬 𝗜𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 𝗧𝗶𝗽
Start with brute force → optimize step by step. This shows strong problem-solving skills. 💡

#DataStructures #Algorithms #CodingInterview #Python #LeetCode #SoftwareEngineering #ProblemSolving #GreedyAlgorithm
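For comparison, here is a rough sketch of Approach 2, the two-pointer variant mentioned above (the function name and sample input are mine, not from the linked repo):

def two_pointer_stock(prices):
    if not prices:
        return 0
    buy = 0                                   # index of the current best day to buy
    max_profit = 0
    for sell in range(1, len(prices)):
        if prices[sell] < prices[buy]:
            buy = sell                        # cheaper price found: move the buy pointer
        else:
            max_profit = max(max_profit, prices[sell] - prices[buy])
    return max_profit

print(two_pointer_stock([4, 2, 3, 4, 5, 2]))  # 3 (buy at 2, sell at 5)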
Just shipped DocuQuery v2.0 — a production-grade RAG Document Intelligence System.

Most RAG tutorials stop at: query → vector search → LLM → answer. I didn't. Here's what v2.0 actually does:

→ Parent-Child Retrieval
Instead of fixed 500-char chunks that cut through sentences, I use 150-char child chunks for precise search and 1500-char parent chunks for full context to the LLM. High precision without losing context.

→ Hybrid Search + Cross-Encoder Reranking
Pure vector search fails on exact keywords like error codes and IDs. I run BM25 (keyword) and ChromaDB (semantic) simultaneously, combine scores with a weighted alpha, retrieve 20 candidates, then rerank with a Cross-Encoder that reads query and chunk together — far more accurate than cosine similarity alone.

→ Query Rewriting + HyDE
Users type messy queries. Before any search, an LLM rewrites the query into a clean, retrieval-optimized form. I also implemented Hypothetical Document Embeddings — the LLM generates a fake ideal answer, and we embed that to search. This bridges the vocabulary gap between user questions and document text.

→ Self-Reflective RAG with LangGraph
The entire pipeline runs as a stateful graph. After retrieval, an evaluation agent checks whether the chunks are relevant — if not, it loops back and retries. After generation, a hallucination checker verifies every claim against the source text before showing the answer. Same LangGraph architecture I used in ProcureIQ.

Stack: LangGraph · LangChain · ChromaDB · BM25 · CrossEncoder · Groq Llama 3 · Streamlit
Deployed on Streamlit Cloud. Code on GitHub.

This is what separates a portfolio project from a tutorial copy.

#RAG #LangGraph #LLM #GenerativeAI #MachineLearning #Python #Streamlit #BuildInPublic
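As a rough illustration of the weighted-alpha score fusion described above, here is a plain-Python sketch with no library calls; the score dictionaries, chunk ids, and default alpha are assumptions, not the actual DocuQuery code:

def hybrid_scores(bm25_scores, vector_scores, alpha=0.5):
    """Blend keyword (BM25) and semantic (vector) scores per chunk id.

    Both score sets are min-max normalized so their scales are comparable,
    then combined as alpha * semantic + (1 - alpha) * keyword.
    """
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}

    bm25 = normalize(bm25_scores)
    vect = normalize(vector_scores)
    ids = set(bm25) | set(vect)
    return {i: alpha * vect.get(i, 0.0) + (1 - alpha) * bm25.get(i, 0.0) for i in ids}

# Toy example: "c2" scores well on both signals, so it ranks first overall
print(hybrid_scores({"c1": 2.1, "c2": 5.0, "c3": 3.4},
                    {"c1": 0.42, "c2": 0.88, "c3": 0.95}))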
🚀 Day 15/60 – Lambda Functions (Write Functions in One Line ⚡)

Yesterday you learned Dictionary Comprehension. Today, let’s make functions shorter and smarter 👇

🧠 What is a Lambda Function?
A small anonymous function written in a single line.
👉 No name
👉 No def keyword
👉 Just quick & powerful

❌ Traditional Function
def square(x):
    return x * x
print(square(5))

✅ Lambda Function
square = lambda x: x * x
print(square(5))
👉 Same result, less code ⚡

🔍 Multiple Arguments
add = lambda a, b: a + b
print(add(3, 4))

⚡ Real Use Case (with map)
numbers = [1, 2, 3, 4]
squares = list(map(lambda x: x * x, numbers))
print(squares)

🔥 With filter
numbers = [1, 2, 3, 4, 5, 6]
evens = list(filter(lambda x: x % 2 == 0, numbers))
print(evens)

❌ Common Mistake
Trying to use lambda for complex logic ❌
👉 Keep it simple and readable

🔥 Pro Tip
Use lambda when:
✅ Function is short & simple
❌ Avoid for large or complex logic

🔥 Challenge for today
👉 Create a lambda function
👉 That takes a number
👉 Returns its cube
Comment “DONE” when finished ✅

#Python #PythonProgramming #LearnPython #Coding #Programming #Developer #SoftwareEngineering
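One more everyday use worth adding (the data here is purely illustrative): lambdas make great throwaway sort keys.

people = [("Asha", 31), ("Ben", 25), ("Chen", 28)]
by_age = sorted(people, key=lambda person: person[1])   # sort tuples by their second field
print(by_age)   # [('Ben', 25), ('Chen', 28), ('Asha', 31)]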
From Analysis to Action: Building a Predictive Dashboard for A/L Performance 🚀💻

Following our research on the GCE A/L results, I wanted to make our findings interactive and accessible. I developed this Streamlit Web Application to transform static data into a dynamic exploration tool.

Features I'm most excited about:
⚖️ Equity Simulation: an interactive tab using Lorenz Curves and Gini Coefficients to visualize achievement distribution.
🔮 Z-Score Predictor: a real-time tool that estimates performance based on subject marks and background features.
🎨 User Experience: designed to make complex Exploratory Data Analysis (EDA) intuitive for any user.

Building this helped me bridge the gap between statistical modeling and software deployment. It's one thing to find an insight; it's another to build a tool that lets others find it too!

Check out the demo below! 👇

#Streamlit #Python #DataVisualization #WebDev #DataScience #AppliedStatistics #Portfolio #Coding
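For anyone curious about the Gini Coefficient piece, here is a minimal NumPy sketch of one standard way to compute it from a list of scores; this is a generic textbook formula, not the dashboard's actual code:

import numpy as np

def gini(values):
    """Gini coefficient from sorted values: 0 = perfect equality, values near 1 = highly concentrated."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    total = x.sum()
    # G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n, with i = 1..n over sorted x
    return (2.0 * np.sum(np.arange(1, n + 1) * x)) / (n * total) - (n + 1) / n

print(gini([50, 50, 50, 50]))   # 0.0  -> everyone scores the same
print(gini([0, 0, 0, 100]))     # 0.75 -> achievement concentrated in one group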
Stock analysis shouldn’t require five different tools. You can now use Quadratic to ask for real market data in plain language, generate Python automatically, and turn prices, fundamentals, financial statements, and technical indicators into charts and analysis inside one spreadsheet. No API keys. No plugins. No external setup. Just open Quadratic and start analyzing. See how it works: https://lnkd.in/eq2hMUFm
Built a local LLM benchmarking dashboard. Here's what it does and why it matters:

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀: Run any prompt → measure response time → compare answers side by side

𝗪𝗵𝘆 𝗱𝗼𝗲𝘀 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿?
In real-world scenarios, choosing the right model is one of the most important decisions you make. A model that is too slow will frustrate users. A model that is too small might give poor-quality answers. This project teaches you how to make that decision using data — not guesswork.
- Speed matters: a 30-second response time is unacceptable in most products
- Quality matters: a fast model that gives wrong answers is useless
- Cost matters: larger models use more compute and cost more to run
- The right model depends on the use case; there is no single best model

𝗠𝗼𝗱𝗲𝗹𝘀 𝘁𝗲𝘀𝘁𝗲𝗱: llama3.2 (2GB) · phi4-mini (3GB) · qwen3.5:9b (6.6GB)
All using Q4_K_M quantization

𝗞𝗲𝘆 𝗶𝗻𝘀𝗶𝗴𝗵𝘁: Model size alone does not determine speed. Architecture, quantization format, and memory loading all play a role. Always benchmark on your own hardware — results vary by machine.

𝗧𝗲𝗰𝗵 𝘀𝘁𝗮𝗰𝗸: Python · Ollama · Streamlit
Runs fully locally — zero API costs.

GitHub: https://lnkd.in/drwCqcgJ

#AIEngineering #LLM #Benchmarking #Ollama #Python #BuildingInPublic
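The core measurement loop is simple; here is a rough sketch assuming Ollama's local REST endpoint on port 11434 (the prompt, timeout, and model list are placeholders, not necessarily what the dashboard uses):

import time
import requests

def benchmark(model, prompt):
    """Send one prompt to a local Ollama model and return (seconds taken, answer text)."""
    start = time.perf_counter()
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    elapsed = time.perf_counter() - start
    return elapsed, resp.json().get("response", "")

for model in ["llama3.2", "phi4-mini"]:
    seconds, answer = benchmark(model, "Explain quantization in one sentence.")
    print(f"{model}: {seconds:.1f}s -> {answer[:80]}")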
Hello dudes and dudettes!! 🚀

Day 12/150 — Solved LeetCode 380: Insert Delete GetRandom O(1)

Today's problem felt like a real brain workout 🧠 — not because it was long, but because it demanded the right idea.

At first, it looks simple:
👉 Insert
👉 Delete
👉 Get Random
But the catch? ⚡ All operations must run in O(1) time. That's where things get interesting.

🧠 Initial Thought Process
Using a list? Insert ✅ Get random ✅ Remove ❌ (takes O(n))
Using a set? Insert ✅ Remove ✅ Get random ❌
So clearly… one data structure alone isn't enough.

💡 The Breakthrough Moment
The solution clicked when I realized: 👉 why not combine the strengths of both?
Use a list for fast random access.
Use a hash map for instant lookups.
This combination unlocks true O(1) performance for all operations.

🔥 The Most Interesting Part — The Deletion Trick
Normally, removing an element from a list is expensive because elements need to shift. But here's the smart trick:
👉 Swap the element to be removed with the last element
👉 Remove the last element
👉 Update the index in the hash map
That's it. No shifting. No extra cost. 💥 Constant-time deletion achieved.

📊 How It Works (Simple Flow)
Imagine storing values like this: a list keeps all elements, and a map stores each value's index.
Whenever you:
Insert → add to the list + store the index
Remove → swap + pop + update the index
GetRandom → pick directly from the list
Everything stays efficient and clean. (A short sketch follows below.)

😎 What I Learned
Sometimes one data structure isn't enough — combining them is the real power.
Smart tricks (like swap & pop) can completely change time complexity.
Designing systems is more about thinking than coding.

🎯 Key Takeaway
"Efficiency isn't about doing things faster… it's about avoiding unnecessary work."

🔥 Another solid step forward in the journey. On to the next challenge.

#LeetCode #Algorithms #DataStructures #ProblemSolving #CodingJourney #100DaysOfCode #Python #LearningInPublic
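A minimal sketch of the list + hash map combination and the swap-and-pop deletion described above (variable names are mine, not the original submission):

import random

class RandomizedSet:
    def __init__(self):
        self.values = []    # list: O(1) append and O(1) random access
        self.index = {}     # value -> its position in self.values

    def insert(self, val):
        if val in self.index:
            return False
        self.index[val] = len(self.values)
        self.values.append(val)
        return True

    def remove(self, val):
        if val not in self.index:
            return False
        pos, last = self.index[val], self.values[-1]
        self.values[pos] = last     # overwrite the removed slot with the last element
        self.index[last] = pos      # fix the moved element's recorded position
        self.values.pop()           # drop the now-duplicate tail entry
        del self.index[val]
        return True

    def getRandom(self):
        return random.choice(self.values)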
I wrote this piece because I was tired of seeing data scientists (myself included) waste the first two hours of a project writing the same boilerplate code.

We've all been there: df.head(), df.isnull().sum(), squinting at correlation heatmaps, and writing yet another snippet to check distributions. It's plumbing, not science.

ydata-profiling changed my workflow completely, and I wanted to share exactly how I use it and, just as importantly, when I don't use it.

If you're in the Python/data science world and haven't given this library a spin yet, I hope this gives you back some of your mental bandwidth. Let me know what your go-to EDA tool is in the comments!

#DataScience #Python #EDA #MachineLearning #ydata #DataAnalytics #OpenSource #Productivity #TechWriting #DataQuality
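If you haven't tried the library yet, the basic usage is roughly this (the CSV path and report title are placeholders; note the package now imports as ydata_profiling):

import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("your_dataset.csv")               # placeholder path
profile = ProfileReport(df, title="EDA Report")    # one call replaces most boilerplate EDA
profile.to_file("eda_report.html")                 # dtypes, missing values, correlations, distributions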
I spent a weekend building a tool I actually needed: a PDF-to-Flashcard pipeline that runs 100% locally.

The Win: no subscriptions, no data exposure, and zero latency. Just Python and local intelligence.

The Stack:
→ PyMuPDF: clean text extraction
→ Ollama running Llama 3 locally: a high-performance local LLM
→ Streamlit for the interface (and Sithara Hayavadana — the standalone local UI is genuinely great for this kind of project)
→ Pandas: instant Anki-compatible CSV exports

The Biggest Learning: data preparation beats model size every time. I found that chunking strategy mattered more than prompt engineering or model choice.

The stack is entirely free — and yes, Keming Wang, free and open source tools were enough to build this 😁

I have shared the full article and technical breakdown in the comments below! 👇

Have you experimented with Ollama for your local workflows yet?
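A stripped-down sketch of the extraction and chunking step (PyMuPDF imports as fitz; the chunk size and file name are illustrative, not the project's exact settings):

import fitz  # PyMuPDF

def extract_chunks(pdf_path, chunk_chars=1200):
    """Pull text from every page, then split it into roughly fixed-size chunks."""
    doc = fitz.open(pdf_path)
    text = "\n".join(page.get_text() for page in doc)
    doc.close()
    return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

chunks = extract_chunks("lecture_notes.pdf")   # placeholder file
print(f"{len(chunks)} chunks ready to send to the local LLM")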
I’ve just published a project based on a case I previously worked on.

Using synthetic data sources modeled on the structure of the real ones, I built an automated analysis pipeline that reproduces the workflow end to end: from data ingestion and cleaning, to analysis, to generating a report and slide deck similar to the ones I created in the original case.

What I wanted to explore was not only the analysis itself, but also how this kind of work can be made more repeatable, transparent, and easier to maintain. Instead of keeping the process as a one-off piece of analysis, I turned it into something that can be rerun and reviewed more systematically.

The project includes:
- automated data processing and KPI analysis
- generated outputs and visualizations
- a report and presentation workflow
- synthetic data only, so no real case data is exposed

It was a good exercise in turning practical analytical work into a more reproducible pipeline, while staying close to the type of deliverables used in a real project.

Repo: https://lnkd.in/es6h6SxW

#Python #DataAnalytics #Automation #Reporting #HealthcareAnalytics #PortfolioProject