🧪 Pytest Architecture That Actually Scales

Refactored 500+ lines of tests this week. Here’s what separates messy from maintainable:

1) One database session fixture
Stop creating sessions everywhere. Put a single fixture in your root conftest.py and reuse it. Per-file sessions = debugging nightmare.

2) ORMs cache everything
Your ORM remembers old data even after APIs update it. Force a refresh after external changes or you’ll chase phantom bugs for hours.

3) External services ≠ your database
That “test_user” you created? It still exists in Stripe/Auth0/etc. even after your DB resets. Use random IDs or your second run will fail.

4) Build a fixture hierarchy
- Root: infrastructure (database, HTTP client)
- Feature: domain setup (users, products)
- Test: just assertions
No global variables. No module state. Clean layers.

5) Extract IDs before refresh
If you’ll expire/refresh the session, grab any IDs first. Accessing an expired object = crash.

6) Small fixtures > god fixtures
Don’t make a “setup_everything” fixture. Make small, composable fixtures. Your future teammates will actually understand what’s happening.

The pattern: treat test architecture like production code. Clear dependencies. Single responsibility. No shared state.

Most test pain isn’t from pytest. It’s from not having an architecture.

#python #testing #pytest #cleancode #tdd
How to Refactor 500+ Lines of Tests with Pytest
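The layers above can be sketched concretely. This is a minimal illustration, not the author's actual code: `FakeSession` and its methods are hypothetical stand-ins for a real ORM session (e.g. SQLAlchemy), and in a real project the root and feature fixtures would live in separate conftest.py files.

```python
# A minimal sketch of the three-layer fixture hierarchy, with an
# in-memory stand-in for a real ORM session. All names here
# (FakeSession, add_user) are hypothetical.
import uuid
import pytest


class FakeSession:
    """In-memory stand-in for an ORM session (e.g. SQLAlchemy)."""

    def __init__(self):
        self.users = {}
        self.closed = False

    def add_user(self, name):
        user = {"id": uuid.uuid4().hex, "name": name}
        self.users[user["id"]] = user
        return user

    def refresh(self, user):
        # A real ORM would re-read the row here, dropping cached state.
        pass

    def close(self):
        self.closed = True


# --- Root conftest.py: infrastructure only ---
@pytest.fixture(scope="session")
def db_session():
    session = FakeSession()
    yield session
    session.close()


# --- Feature-level conftest.py: domain setup ---
@pytest.fixture
def user(db_session):
    # Random suffix so a second run never collides with leftover
    # state in external services (Stripe, Auth0, ...).
    return db_session.add_user(f"test_user_{uuid.uuid4().hex[:8]}")


# --- Test file: just assertions ---
def test_user_exists(db_session, user):
    assert user["id"] in db_session.users
```

Each layer depends only on the one below it, so any test can pull in exactly the fixtures it needs and nothing more.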
More Relevant Posts
🔥 Day 35/100 of #100DaysOfCode - Finding Longest Sequence!

Today's Problem: Longest Consecutive Sequence
Task: Find the length of the longest consecutive elements sequence in an unsorted array.

Solution: Used a HashSet for O(1) lookups! First added all elements to the set, then for each number, checked if it's the start of a sequence (no num-1 in the set). If yes, counted consecutive numbers ahead.

Key Insights:
- O(n) time complexity by only checking sequence starters
- HashSet eliminates duplicates and provides fast lookups
- Avoids the O(n log n) sorting approach

Smart Optimization:
- Only begins counting from sequence starting points
- Each number is processed at most twice (set insertion + sequence counting)
- Handles duplicates and empty arrays gracefully

An elegant solution that transforms an O(n²) brute force into O(n) using a smart data structure choice! 💡

#100DaysOfCode #LeetCode #Java #Algorithms #HashSet #Arrays #CodingInterview
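The post tags Java but doesn't include the code; here is a Python sketch of the described approach, using a set for the O(1) lookups:

```python
def longest_consecutive(nums):
    """O(n): only start counting from numbers that begin a sequence."""
    seen = set(nums)            # dedupes and gives O(1) membership tests
    best = 0
    for n in seen:
        if n - 1 in seen:       # n is not a sequence start; skip it
            continue
        length = 1
        while n + length in seen:
            length += 1
        best = max(best, length)
    return best
```

Because counting only ever begins at sequence starters, each element is visited a bounded number of times, which is what keeps the whole pass linear.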
#CodeTrip 2025 - The Domain Layer Contd.

3. AuditableEntity

This is inheritance in action. Every table in your database should include common audit fields such as CreatedBy, CreatedDate, ModifiedBy, ModifiedDate, etc. Instead of repeating them in every entity, define them once in AuditableEntity and inherit it everywhere — the DRY principle (Don’t Repeat Yourself) at work 👏

Now, let’s take it one step further with concurrency control 👇

[Timestamp]
public byte[] RowVersion { get; protected set; } = Array.Empty<byte>();

This property ensures Optimistic Concurrency — preventing multiple users or processes from accidentally overwriting each other’s changes.

Here’s how it works:
- Every time a record is updated, Entity Framework updates the RowVersion (a binary timestamp).
- If someone else updates the same record in between, the row versions will mismatch and EF will throw a DbUpdateConcurrencyException.
- This allows your application to detect and handle conflicts gracefully, instead of silently losing data.

In simple terms: it’s your safeguard against two engineers editing the same record at the same time — the second update will politely fail instead of overwriting the first.

These abstractions — Entity, AuditableEntity, and DomainEvent — give your entire domain layer structure, safety, and resilience. Get this foundation right, and your architecture will scale effortlessly.

Next, we’ll explore the Application Layer — where your use cases, handlers, and workflows come alive.

#CodeTrip #GoProcure #CleanArchitecture #DDD #SoftwareArchitecture #DotNet #Java #BuildInPublic #SoftwareEngineering #TechAndCodingWithOla
𝐋𝐞𝐞𝐭𝐂𝐨𝐝𝐞 𝐃𝐚𝐢𝐥𝐲 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞: 3228. 𝐌𝐚𝐱𝐢𝐦𝐮𝐦 𝐍𝐮𝐦𝐛𝐞𝐫 𝐨𝐟 𝐎𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧𝐬 𝐭𝐨 𝐌𝐨𝐯𝐞 𝐎𝐧𝐞𝐬 𝐭𝐨 𝐭𝐡𝐞 𝐄𝐧𝐝

This problem was a solid blend of string traversal, pattern observation, and counting logic. It highlights how understanding movement constraints in binary strings can simplify what initially looks like a simulation-heavy problem.

𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐮𝐦𝐦𝐚𝐫𝐲:
We are given a binary string. In one operation, we can select a '1' that is immediately followed by a '0' and push that '1' as far right as possible until it hits another '1' or the end of the string. The goal is to compute the maximum number of such operations.

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡:
Instead of simulating each movement, the key insight was to understand when a movement is allowed. Traverse the string and count how many '1's have appeared so far. Every time we encounter a '0' after some '1's, it contributes operations equal to the number of '1's on its left. This avoids unnecessary swaps and keeps the solution optimal. The entire logic comes down to: count continuous '1's, and add that count whenever a movable '0' appears.

𝐂𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐲 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬:
Time Complexity: O(n)
Space Complexity: O(1)

𝐊𝐞𝐲 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠𝐬:
- Pattern-based counting can completely eliminate simulation-heavy logic.
- Movement-based problems often reduce to understanding relative ordering rather than performing every shift.
- A single-pass solution can emerge when you carefully track what each character contributes.

This challenge strengthened my ability to break down string movement problems into pure counting logic rather than brute operations.

#LeetCode #DSA #Java #ProblemSolving #CodingPractice #Algorithms #LearningJourney #Consistency
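The counting idea described above fits in a few lines. The post tags Java; this is a Python sketch of the same single-pass logic:

```python
def max_operations(s):
    """Each '0' that directly follows a '1' lets every '1' to its left
    perform one push, so it contributes `ones` operations."""
    ones = 0    # '1's seen so far
    ops = 0
    for i, ch in enumerate(s):
        if ch == '1':
            ones += 1
        elif i > 0 and s[i - 1] == '1':
            ops += ones    # a movable '0': all 1s to its left can pass it
    return ops
```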
💡 Problem-Solving Post: Surrounded Regions in a Grid

Today I worked on an interesting problem that really tested my understanding of graph traversal and boundary conditions.

🧩 Problem Summary: Given a 2D grid containing 'O's and 'X's, the goal is to replace all 'O's that are completely surrounded by 'X's. In other words, any 'O' that is not connected to the border should be converted into 'X'.

⚙️ Key Idea: At first glance, it looks like a simple replacement problem — but it’s not! The challenge is to differentiate between 'O's that are truly surrounded and those that are connected to the border.

🧠 Approach:
1. Identify all 'O's on the borders (they can never be surrounded).
2. From each border 'O', perform a DFS/BFS traversal to mark all connected 'O's as “safe.”
3. After marking, flip the remaining 'O's (which are not safe) into 'X'.

🚀 What I Learned:
- The importance of thinking beyond the direct condition — sometimes the solution lies in what’s connected to the edge.
- How depth-first or breadth-first traversal can simplify problems that initially seem complex.
- A reminder that clean logic and clear reasoning often matter more than complex code.

👨💻 Complexity:
Time: O(n × m)
Space: O(n × m) (due to recursion or queue storage)

It’s a simple yet elegant application of graph traversal that blends problem-solving and optimization thinking. Have you solved a similar boundary-based traversal problem recently? Would love to hear your approach or variations!

#ProblemSolving #DSA #GraphAlgorithms #CodingJourney #LearningEveryday #Java
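The three steps above can be sketched in Python (the post tags Java, but the traversal is the same). This version uses an iterative DFS and a temporary 'S' marker for border-connected cells:

```python
def solve(board):
    """Flip every 'O' not connected to the border to 'X', in place."""
    if not board or not board[0]:
        return
    rows, cols = len(board), len(board[0])

    def mark_safe(r, c):
        # Iterative DFS: mark every 'O' reachable from (r, c) as 'S'.
        stack = [(r, c)]
        while stack:
            i, j = stack.pop()
            if 0 <= i < rows and 0 <= j < cols and board[i][j] == 'O':
                board[i][j] = 'S'
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

    # Step 1 + 2: start from every border cell.
    for i in range(rows):
        mark_safe(i, 0)
        mark_safe(i, cols - 1)
    for j in range(cols):
        mark_safe(0, j)
        mark_safe(rows - 1, j)

    # Step 3: flip surrounded cells, restore the safe ones.
    for i in range(rows):
        for j in range(cols):
            if board[i][j] == 'O':       # truly surrounded
                board[i][j] = 'X'
            elif board[i][j] == 'S':     # connected to the border
                board[i][j] = 'O'
```

The iterative stack sidesteps Python's recursion limit on large grids, which is one reason the space bound is O(n × m).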
Vibe coding apps/platforms can give you a dump of code, but none of the options available today dive deep enough to engineer such resilient patterns. I would suggest using LLMs to learn about these topics in your preferred language/stack:

1. Basic Async – Synchronous vs asynchronous design, background workers, retries.
2. Message Broker – Redis Streams, Kafka, PostgreSQL; consumer groups, reliability.
3. Events – Event schema design, metadata, marshalling/unmarshalling.
4. Watermill Basics – Using Watermill for message handling, graceful shutdowns, health checks.
5. Middlewares – Logging, correlation IDs, dependency/config injection.
6. Errors – Error handling strategies, retries, malformed messages.
7. CQRS – Command–Query segregation, read models, decoupled persistence.
8. Outbox Pattern – Ensuring atomic DB-event publishing.
9. Idempotency – Safe reprocessing of duplicate messages.
10. Event Sourcing – Storing system state as an event log.
11. Process Managers (Sagas) – Coordinating multi-step workflows.
12. Resilience – Timeouts, backoffs, circuit breakers.
13. Message Ordering – Handling concurrency and partitioning.
14. Schema Evolution – Backward/forward compatibility of events.
15. Testing Event-Driven Systems – Unit and integration testing with brokers.
16. Observability – Metrics, tracing, logging context propagation.
17. Operations – Deployment, scaling, dead-letter queues.
18. Performance Tuning – Throughput vs latency tradeoffs.
19. Rebuilding Read Models – Event replay and data migration.

#backend #designpatterns
🚀 Daily Problem-Solving Update

Today’s grind was all about Dynamic Programming and Bit Manipulation, tackling two challenging problems head-on 👇

🧮 GFG POTD – Number of Paths in a Matrix with K Coins

🔹 Approach 1: Top-Down DP (3D DP)
We define: dp[i][j][s] = number of ways to reach (i, j) with exactly s coins
Transition: dp[i][j][s] = dp[i-1][j][s - mat[i][j]] + dp[i][j-1][s - mat[i][j]] (with bounds and sum checks)
Base Case: dp[0][0][mat[0][0]] = 1
Answer: dp[n-1][m-1][k]
🧠 Complexity: Time → O(n * m * k), Space → O(n * m * k)

🔹 Approach 2: Space Optimization
At any cell (i, j), we only need:
- the previous row: prev[j][sum]
- the current row’s previous cell: curr[j-1][sum]
So we reduce it to two 2D arrays of size [m][k+1]: prev holds the previous row and curr the current row. After each row: prev = curr, then reset curr.
⚙️ TC: O(n * m * k), SC: O(m * k)

⚡ LeetCode Daily – Minimum One Bit Operations to Make Integer Zero (Hard)

This one’s all about Gray Code transformations 🤯
Key Insight: 👉 The number of operations follows the inverse Gray code of n.
Recall:
Gray code: g = n ^ (n >> 1)
To get binary back:
n ^= n >> 1
n ^= n >> 2
n ^= n >> 4
n ^= n >> 8
n ^= n >> 16

🔹 Recursive Formula:
f(0) = 0
If the highest set bit is k: f(n) = (1 << (k + 1)) - 1 - f(n ^ (1 << k))
A beautiful recursive pattern derived from Gray code flipping behavior 🧩

🧠 Reflection: LeetCode’s been testing patience this week — 3 hard problems in 2 consecutive days 😅 But every “Hard” problem is another step toward a sharper mindset 💪

#DynamicProgramming #LeetCode #ProblemSolving #GFG #BitManipulation #DSA
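Approach 1 (the top-down 3D DP) can be sketched in Python with memoization; the function name is mine, and the recursion mirrors the transition above read backwards (subtract the current cell's coins, then look up-left):

```python
from functools import lru_cache

def paths_with_k_coins(mat, k):
    """Count right/down paths from (0,0) to (n-1,m-1) whose coin sum
    is exactly k. Memoized version of dp[i][j][s]."""
    @lru_cache(maxsize=None)
    def dp(i, j, s):
        if i < 0 or j < 0:
            return 0
        rem = s - mat[i][j]        # coins that must come from earlier cells
        if rem < 0:                # sum check from the transition
            return 0
        if i == 0 and j == 0:
            return 1 if rem == 0 else 0    # base case dp[0][0][mat[0][0]] = 1
        return dp(i - 1, j, rem) + dp(i, j - 1, rem)

    return dp(len(mat) - 1, len(mat[0]) - 1, k)
```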
🔥 Day 121 of My DSA Challenge – Remove Duplicates from Sorted List

🔷 Problem: 83. Remove Duplicates from Sorted List
🔷 Goal: Given the head of a sorted linked list, remove all duplicate nodes so that each element appears only once, while maintaining the sorted order.

🔷 Key Insight:
This problem is a great example of linked list traversal and pointer management. Because the list is already sorted, all duplicates appear consecutively — which makes it easy to detect and skip them in one pass. The challenge is to handle links carefully so that only unique nodes remain connected.

🔷 Approach:
1️⃣ Initialize a dummy node (with a distinct value) to simplify linking.
2️⃣ Use a tail pointer to build a new list containing only unique elements.
3️⃣ Traverse the list using head: if head.val is not equal to the last added node’s value (tail.val), link it; otherwise, skip the duplicate.
4️⃣ Set tail.next = null at the end to avoid leftover links.
5️⃣ Return dummy.next as the new head.

Time Complexity: O(n)
Space Complexity: O(1)

This problem strengthens concepts of:
✅ Duplicate handling in linked lists
✅ Pointer-based traversal
✅ Clean in-place modification

Sometimes, cleaning a data structure is just about connecting only what truly matters. ⚡ Every small pattern makes a big difference in problem-solving clarity. One more concept locked in 💻

#Day121 #DSA #100DaysOfCode #LeetCode #Java #LinkedList #ProblemSolving #Algorithms #DataStructures #CodingChallenge #DeveloperJourney #EngineerMindset #GrowEveryday
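The five steps above map directly to code. The post tags Java; this is a Python sketch using negative infinity as the dummy's "distinct value":

```python
class ListNode:
    def __init__(self, val=0, nxt=None):
        self.val = val
        self.next = nxt


def delete_duplicates(head):
    """Remove consecutive duplicates from a sorted linked list."""
    dummy = ListNode(float('-inf'))   # value guaranteed to differ from any node
    tail = dummy
    while head:
        if head.val != tail.val:      # first occurrence of this value
            tail.next = head
            tail = head
        head = head.next
    tail.next = None                  # drop any leftover links
    return dummy.next
```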
A script crashes. An application recovers. This is the difference.

#ZeroToFullStackAI Day 7/135: Mastering Error Handling.

For the past two days, we’ve tackled `ValueError` — the dreaded "crash" in our calculator challenge. Our code worked flawlessly... until it didn’t. It lacked resilience against invalid input. Today, we fix that with a safety net: error handling using the `try/except` construct.

1. `try` block: the "risk zone." We place potentially failing code here (e.g., `int(input())`). The program attempts execution, ready for disruption.
2. `except` block: the "fallback." It stays dormant... until the `try` block raises an error. It then intercepts the specific exception (`ValueError`) and executes graceful recovery logic — keeping the app alive.

This embodies Defensive Programming: cleanly separating the Happy Path (smooth execution) from the Unhappy Path (failure scenarios), ensuring robust, user-friendly behavior.

We’ve now laid the three foundational pillars of software:
1. Primitives (State & Type)
2. Control Flow (Logic)
3. Error Handling (Robustness)

With this bedrock in place, we’re primed to scale. Tomorrow: our first true data structure — the List.

#Python #DataScience #SoftwareEngineering #AI #Developer #ErrorHandling
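The two blocks described above look like this in practice. `read_int` is a hypothetical helper name (the post's calculator likely wraps `input()` directly), but the try/except shape is the same:

```python
def read_int(raw):
    """Happy path: parse the number. Unhappy path: recover, don't crash."""
    try:
        return int(raw)      # risk zone: raises ValueError on bad input
    except ValueError:
        return None          # fallback: signal failure, keep the app alive
```

A caller can then loop until `read_int` returns a value, which is exactly the "graceful recovery" behavior the post describes.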
🚀 Day 62 – In-Place Array Filtering with Clean Two-Pointer Logic

🧩 Problem: 27. Remove Element
The task is to remove all occurrences of a given value from the array in place and return the new valid length. Only the first k elements matter after removal, and we must avoid using extra space, making this a good test of controlled index manipulation.

🧠 Approach
To solve this efficiently, I used an insertion-based two-pointer strategy, which ensures both clarity and optimal performance. I maintained an insertPos pointer that always marks the next position where a valid element should be placed. As I iterated through the array, every element that wasn’t equal to the given value was copied to this position, and the pointer moved forward.

This approach ensures:
👉 A single linear pass through the array
👉 Strict in-place modification
👉 Clean separation of valid and removed elements
👉 Full control over the final valid length

The resulting length is simply the total size minus the number of removed elements, giving a direct and efficient solution. This method shows structured thinking, control over memory usage, and the ability to simplify a problem into predictable pointer movements — exactly what strong algorithmic reasoning looks like.

🔗 Problem Link: https://lnkd.in/gAakjJqC
🔗 GitHub Link: https://lnkd.in/gCe-A-Ev

💡 Reflection: Working on this problem reminded me how powerful a simple pointer and a clear goal can be. Not every challenge needs complex logic; sometimes the cleanest solutions come from knowing exactly how you want the data to look and moving toward it step by step. This problem reinforced the value of in-place changes, memory efficiency, and how clean thinking leads to clean code.

#LeetCode #Arrays #TwoPointers #InPlaceAlgorithm #Java #Day62 #CodingJourney #100DaysChallenge #ProblemSolving #LearningEveryday #CleanCode
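The insertion-pointer strategy described above is a few lines of code. The post tags Java (hence `insertPos`); here is a Python sketch of the same idea:

```python
def remove_element(nums, val):
    """Overwrite `nums` in place so its first k slots hold every
    element != val; return k."""
    insert_pos = 0                 # next slot for a kept element
    for x in nums:
        if x != val:
            nums[insert_pos] = x   # copy the valid element forward
            insert_pos += 1
    return insert_pos              # k = total size - removed count
```

Elements past index k are left as-is, which is allowed by the problem statement since only the first k slots are checked.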