Get ready for a new 'Technically Speaking' episode, part 2 of the Mocking Responsibly series is live now!
Our teammate Matt explores specialized mocking using the Python responses library. The walk-through compares this approach with general-purpose mocking, showing how tool-specific mocks can reduce coupling between tests and implementation details while allowing safer refactoring. Check it out here: https://lnkd.in/gDmBEpcP
#UnitTesting #Python #SoftwareEngineering
Hi, this is Matt Morrison with Technically Speaking at Source Allies. Today we're going to be talking about mocking responsibly. This is Part 2 of a series, so if you haven't seen Part 1 yet, check out the link below to get caught up. We're going to pick up right where we left off in Part 1, and we'll be using Python again today.

Last time we talked about what I referred to as general-purpose mocking. In that example, we were changing how Python functions worked while our tests were running. Today I'm going to be talking about what I'm calling specific, specialized mocking. It's specific because it works with a specific tool or library, in our case the requests library. And it's specialized because it doesn't mock Python functions the way that general-purpose mocking would; it only takes care of mocking web requests. In our example today, we're going to use a library called responses, which is a testing tool for the requests library. Depending on your use case, you may find it advantageous to use a specific, specialized tool. It may clean up your tests and allow you more freedom in refactoring, that is, if you're all right with the trade-off of using something that's specific to the tool you've chosen. More on that trade-off later.

OK, let's jump into our code. First, a quick recap of where we left things in Part 1. We've got two tests: our one real test and our one mocked test, and in this case we are mocking the requests library. In our application, we've commented out the urllib syntax, dropped in the requests syntax, and we're using this response.text. And just to show that everything is passing: everything is passing. OK, so let's take a look at using responses to mock our application a little differently than we did in our first example.
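As a rough sketch of the Part 1 style of general-purpose mocking: the video patches the requests library, but the same idea is shown here against the standard-library urllib so the snippet has no third-party dependencies. The function name and URL are assumptions, not the video's actual code.

```python
import urllib.request
from unittest import mock

# Hypothetical application function, reconstructed from the walkthrough:
def get_description():
    with urllib.request.urlopen("https://sourceallies.com") as resp:
        return resp.read().decode()

# General-purpose mocking: swap out urlopen itself for the duration of the test.
def test_get_description_mocked():
    fake = mock.MagicMock()
    fake.__enter__.return_value.read.return_value = b"mock text"
    with mock.patch("urllib.request.urlopen", return_value=fake):
        assert "mock text" in get_description()

test_get_description_mocked()
```

Note how the mock mirrors implementation details (a context manager whose result has a `read()` method returning bytes); that coupling is the trade-off the episode revisits.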
So I've updated our tests. We're importing responses, and we're activating responses. This is essentially similar to the mock.patch decorator, in that for the duration of the test, the responses I set up will be in effect. Then down in our test, I'm saying responses.get, and I've got our sourceallies.com URL and a body of mock text. Then I'm calling our application's get_description function and asserting that the description it returns contains the mock text we set up here. So let's save that and run it. And it passes. Perfect, that was easy.

Now let's take a look at one of the trade-offs we had with the other form of mocking, the general-purpose mocking: we couldn't freely refactor. When we figured out that we could use .text instead of .content.decode(), we had to update our mocks, and that's not ideal, because I want my tests to tell me that my refactoring was successful. If every time I refactor I also have to change my tests, that doesn't give me a lot of confidence that the refactoring worked. Ideally, I can refactor as much as I want, my tests will stay green the whole time, and if they fail, that tells me I've potentially done something wrong.

So let's do that here. Let's say, for whatever reason, we want to get rid of .text; say there's some performance issue with it. Although I guess if there were performance issues, that wouldn't really be refactoring; maybe that's a topic for another discussion. Let's just say it's a personal preference, and I'm going to use .content.decode() instead. And it works. So we've got an advantage here over our previous method: we can freely refactor now and our tests will still pass. Now let's break our test once. Let's do the same thing we did in Part 1, where we inadvertently removed the "i" from the URL. And look at that, we have two failing tests.
We have our real test failing with a connection error, and we also have our mock test failing with a connection error. But if you look a little closer at this one, the error message is a little different: it says refused by responses. So let's scroll up and see a little more about what it's saying. The more verbose message says the request was GET sourceallies.com without the "i", the available match is sourceallies.com with the "i", and the URL does not match. So this is basically telling us that the responses library is expecting certain requests, and we made one that wasn't among them. That's pretty good feedback, a pretty clear message. So I'm going to fix this and put it all back. We're passing again. All right, perfect, that looks good.

Now let's look at another trade-off. So far so good; I like this approach. But let's say our hypothetical application is still a little unstable and we still need to swap between requests and urllib for whatever reason. So I'm going to swap these out and change this back to .read().decode(). Let's say I need to refactor back to this. If I try to run this with responses, I get a passing test and a failing test. The passing test is our real test, and the failing test is our mock test. If we look at the output of our failure, we've got "assert mock text in" and then basically the entire contents of the real web page. So essentially that tells me that our mock is not mocking. And that's because this is a specific, specialized tool that is specific to requests, and since we're no longer using requests, the tool no longer works. So, depending on your application and the tools and libraries you're using, switching from general-purpose mocking to specific, specialized mocking may have some advantages for you.

This has been Matt Morrison with Technically Speaking at Source Allies. Do you have a topic that you want to see covered? Comment down below. Check out sourceallies.com to learn more about us. Thanks for watching, and we'll see you next time.
This is great! Thank you for sharing, Matt! I've come across SO MANY tests in my career that were testing the implementation rather than the desired outcomes. It's good to see a simple example of doing mocks the right way 👏
Day 6 of my Build in Public journey 🚀
Here’s what I focused on today 👇
💻 Python
• Revised sets and learned what a frozenset is
• Learned how arrays work in Python (array module, NumPy arrays)
• Read how list comprehensions work
🌐 HTML
• Learned how to use images in web pages
• Explored forms and how they work
• Started with web accessibility basics
#BuildInPublic #100DaysOfCode #LearningInPublic #Consistency
📌 Problem: Merge Strings Alternately
💡 Approach:
Used a simple two-pointer / iteration technique to merge both strings character by character.
First, iterate up to the minimum length of both strings and append alternately.
Then, append the remaining characters from the longer string.
⚙️ Key Insight:
Handle unequal string lengths separately
Avoid index out-of-bounds by iterating till min(len(word1), len(word2))
⏱️ Time Complexity: O(n + m)
📦 Space Complexity: O(n + m)
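The approach above can be sketched like this (a generic implementation of the described two-pointer merge, not the poster's actual submission):

```python
def merge_alternately(word1: str, word2: str) -> str:
    """Merge two strings character by character, alternating between them."""
    merged = []
    n = min(len(word1), len(word2))
    # Alternate characters up to the length of the shorter string.
    for i in range(n):
        merged.append(word1[i])
        merged.append(word2[i])
    # Append whatever remains of the longer string.
    merged.append(word1[n:])
    merged.append(word2[n:])
    return "".join(merged)

print(merge_alternately("abc", "pqrst"))  # → apbqcrst
```

Iterating only to `min(len(word1), len(word2))` is what avoids the index-out-of-bounds issue mentioned above.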
📚 What I learned:
Efficient string manipulation
Handling edge cases when lengths differ
#LeetCode #DSA #Coding #ProblemSolving #Python #SoftwareDevelopment #CodingJourney
Have you asked your Claude Code to create a YT💩 video yet?
"Can you use whatever resources you like, and python, to generate a short ‘youtube poop’ video and render it using ffmpeg? Can you put more of a personal spin on it? It should express what it’s like to be a LLM."
Full writeup here: https://lnkd.in/eEuvqeBN
Day 19 of #100DaysOfPython
Today was about instances, state, and higher-order functions.
I built a Turtle Race Game using Python’s turtle module. I created multiple turtle instances, each with its own color and position, then made them race using random movement speeds until one wins.
This project helped me understand:
How different instances can be created from the same class
How each instance maintains its own state (position, movement, etc.)
How to use functions in a more flexible way when controlling behavior
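The turtle module needs a display, so here is a headless sketch of the same instances-and-state idea using a plain class. The `Racer` name, step sizes, and finish line are my assumptions, not the project's code.

```python
import random

class Racer:
    """Each instance keeps its own state: a name, a color, and a position."""
    def __init__(self, name, color):
        self.name = name
        self.color = color
        self.position = 0

    def step(self):
        # Every racer advances by its own random amount each turn.
        self.position += random.randint(1, 10)

def race(racers, finish_line=100):
    """Advance all racers until one crosses the finish line; return the winner."""
    while True:
        for racer in racers:
            racer.step()
            if racer.position >= finish_line:
                return racer

racers = [Racer("t1", "red"), Racer("t2", "blue"), Racer("t3", "green")]
winner = race(racers)
print(f"{winner.color} turtle wins at position {winner.position}!")
```

The key observation is that all three racers come from the same class, yet each tracks its own position independently.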
It was a fun way to see objects come to life and interact on the screen.
Starting to think more in terms of objects and behavior, not just lines of code.
#100DaysOfCode #100DaysOfPython #Python #OOP #PythonProjects #TurtleGraphics #LearningToCode #CodingJourney #BuildInPublic
Day 33/100 – #100DaysOfCode 🚀
Solved LeetCode #1480 – Running Sum of 1d Array (Python).
Today I practiced prefix sum logic to compute the running sum of an array.
Approach:
1) Initialize an empty list to store the running sum.
2) Maintain a variable sum = 0.
3) Traverse the array and keep adding each element to sum.
4) Append the updated sum to the result list.
5) Return the final running sum array.
Time Complexity: O(n)
Space Complexity: O(n)
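The five steps above can be sketched directly (a generic version of LeetCode #1480, not the poster's exact submission):

```python
def running_sum(nums):
    """Return the running (prefix) sum of nums."""
    result = []               # 1) empty list to store the running sum
    total = 0                 # 2) maintain a running total
    for x in nums:            # 3) traverse the array
        total += x
        result.append(total)  # 4) append the updated sum
    return result             # 5) final running sum array

print(running_sum([1, 2, 3, 4]))  # → [1, 3, 6, 10]
```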
Understanding prefix sums helps solve many array problems efficiently 💪
#LeetCode #Python #DSA #Arrays #PrefixSum #ProblemSolving #100DaysOfCode
Week 1 of #100DaysOfCode — done. 🎉
This week wasn’t about writing complex code.
It was about building consistency.
This week I focused on Python fundamentals:
• Control flow (if/else, loops)
• Functions and scope
• Imports and modules
• Lists and tuples
I’ve structured my learning into notes and practical examples to better understand the concepts :
https://lnkd.in/epaBymnJ
Still early in the journey, but the foundation is starting to form.
Let’s keep going. 🚀
#100DaysOfCode #Python #LearningInPublic #CodingJourney
Longest Common Prefix: Column-Wise Early Exit Beats Pairwise Comparison
Comparing strings pairwise requires multiple passes. Column-wise iteration checks all strings at each character position simultaneously — first mismatch or string exhaustion returns accumulated prefix. Early termination saves processing remaining characters.
Early Exit Advantage: Best case (short common prefix): O(n × m), where m = prefix length, since every string is checked at each of the m columns. Worst case: O(n × k), where k = min string length. Column-wise processing enables stopping the moment consensus breaks.
Time: O(n × k) worst case | Space: O(m) for the accumulated prefix
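A minimal column-wise implementation along those lines (a sketch, not taken from any particular submission):

```python
def longest_common_prefix(strs):
    """Column-wise scan: compare character position i across all strings,
    returning as soon as any string ends or mismatches (early exit)."""
    if not strs:
        return ""
    for i, ch in enumerate(strs[0]):
        for s in strs[1:]:
            if i == len(s) or s[i] != ch:
                return strs[0][:i]  # accumulated prefix so far
    return strs[0]  # the first string is itself the common prefix

print(longest_common_prefix(["flower", "flow", "flight"]))  # → fl
```

Unlike pairwise comparison, each character position is settled for all n strings before moving on, so a mismatch in column 2 means columns 3 and beyond are never touched.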
#StringAlgorithms #EarlyTermination #CommonPrefix #ColumnWiseProcessing #Python #AlgorithmOptimization #SoftwareEngineering
Starting a new routine: Daily Python problem-solving.
Day 1 was all about revisiting the fundamentals. I worked through a few classic logic exercises: a FizzBuzz variant, some conditional logic for grade classification, and a few quick warmup drills.
It’s always good to go back to the basics and build a little consistency. Excited to see how far I can push this habit over the next few weeks!
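The actual code for the day lives in the linked repo; as a generic sketch of the two exercises described (the grade thresholds here are my assumption):

```python
def fizzbuzz(n):
    """Classic FizzBuzz: name multiples of 3 and/or 5, else return the number."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def grade(score):
    """Conditional grade classification (thresholds assumed for illustration)."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

print([fizzbuzz(n) for n in (3, 5, 15, 7)])  # → ['Fizz', 'Buzz', 'FizzBuzz', '7']
```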
You can check out my code for today over on GitHub here:
https://lnkd.in/gvvC4yRk
#Python #SoftwareEngineering #DeveloperJourney #Day1
Day 6 of #100DaysOfCode 💻🔥
Today I worked on a fun problem — solving a maze using Python 🤖
At first, it looked confusing 😅
But then I learned a simple strategy:
👉 Always follow the right wall
This approach helped the robot find its way to the goal step by step.
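The right-wall strategy can be sketched on a grid maze like this (my own toy example, not the course's robot code; 0 = open cell, 1 = wall):

```python
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W (clockwise)

def solve_maze(maze, start, goal, max_steps=10_000):
    """Keep the 'right hand' on the wall: prefer turning right, then straight,
    then left, then turning around. Returns the path of visited cells."""
    rows, cols = len(maze), len(maze[0])

    def open_cell(r, c):
        return 0 <= r < rows and 0 <= c < cols and maze[r][c] == 0

    (r, c), d = start, 1  # start facing East
    path = [start]
    for _ in range(max_steps):
        if (r, c) == goal:
            return path
        for turn in (1, 0, 3, 2):  # right, straight, left, back
            nd = (d + turn) % 4
            dr, dc = DIRS[nd]
            if open_cell(r + dr, c + dc):
                d, r, c = nd, r + dr, c + dc
                path.append((r, c))
                break
    return None  # gave up within the step budget

maze = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(solve_maze(maze, (0, 0), (0, 2)))  # hugs the right wall down, across, and back up
```

This works whenever the goal lies on a wall connected to the starting wall, which is why it is such a popular first maze strategy.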
What I learned today:
• Breaking problems into simple rules makes coding easier
• Logic matters more than complexity
• Small mistakes (like missing brackets 😅) can break everything
It’s amazing how a few lines of code can solve something that looks complicated!
Slowly building problem-solving skills day by day 🚀
#Python #CodingJourney #100DaysOfCode #LearningInPublic #BeginnerDeveloper
Some days, VS Code feels like a puzzle I didn’t sign up for.
Setting up a simple thing like Python with Conda base turned into hours of confusion.
Paths, terminals, environments… nothing talks to each other at first.
But I’m learning: frustration is part of the process, not a sign to quit.
Slowly, things start to make sense.
#techjourney #womenintech #datasciencejourney #vscode #python #conda #beginnertodev
Part one can be found here: https://www.garudax.id/posts/source-allies_testing-unittesting-python-activity-7430340432447959040-FbkC?utm_source=share&utm_medium=member_ios&rcm=ACoAAAGmwjYBHwpiaJJCWdCh_R0W3uAc84984S0