Testing Code: The Art of Predicting the Unpredictable

I recently read the book “How to Make Money in the Stock Market.” The catchy title helped, but what drew me in was the high number of copies sold. While I found a few methods that resonated with me, most of the book's content didn't meet my criteria. My main issue is that the author attempts to predict the unpredictable. He uses numerous charts to support his claims of pattern detection, much like a machine learning model classifying trends as good or bad. But any ML model has a margin of error, so trying to persuade readers with so-called proven anomalies won't convince the astute reader, especially one who takes calculated risks. The approach reminds me of writing ever more tests to prove the correctness of software.

When it comes to taking risks, a software engineer typically aims to avoid them. We do this by testing our code, which means writing code that tests other code. Some paradigms even suggest writing tests before the actual code, treating it as part of the design process. The tests we create simulate the flows that the software is expected to handle (or fail to handle). Suppose an average module is 100 lines of code and half of those lines contain logic that can fail. To be thorough, our tests must cover every flow through those 50 lines, and because independent branch points multiply, the number of distinct flows can easily exceed 50 tests. While covering these flows is essential, is there a way to reduce the number of tests needed?
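To see why the number of flows grows so quickly, here is a minimal sketch (the `discount` function and its parameters are hypothetical, invented only for illustration): three independent branch points already produce 2³ = 8 distinct execution paths, each of which a thorough suite would need to exercise.

```python
from itertools import product

def discount(price: float, is_member: bool, has_coupon: bool, bulk: bool) -> float:
    """Hypothetical example: three independent branch points."""
    if is_member:
        price *= 0.9      # branch 1
    if has_coupon:
        price -= 5        # branch 2
    if bulk:
        price *= 0.95     # branch 3
    return max(price, 0.0)

# Each branch doubles the number of paths: 2 ** 3 = 8 distinct flows.
paths = list(product([False, True], repeat=3))
print(len(paths))  # 8
```

With ten such branch points in a module, exhaustive path coverage would need up to 1,024 cases, which is why simplification matters more than sheer test count.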

Most engineers have encountered test suites that don't cover every flow, including critical ones. This can happen for several reasons. Here are some examples:

  1. Hard-to-test structure: the code is organized in a way that makes comprehensive testing difficult, and refactoring it feels too risky. I prefer to refactor the code so that all business-logic flows can be covered. This scenario is one reason some engineers adopt Test-Driven Development (TDD). However, I don't find that approach foolproof: if a flow is missed or added later, the design might need to change. In my opinion, building software around tests keeps the components too volatile; I prefer to build on a solid foundation of thoughtful design and simplicity.
  2. Depth-first tested flows: the developer selects a main flow and attempts to cover all the edge cases associated with it. The tests quickly become bloated, making it easy to overlook other critical flows. This is compounded by the difficulty of remembering which flows were covered in a codebase that spans 2,000 lines.
  3. Split ownership: when two developers collaborate on a single component, each independently implements and tests their portion. For that split to serve any purpose, the boundary between their responsibilities must be clear; otherwise each developer may assume certain tests fall under the other's responsibility and leave flows uncovered. In such cases, my preferred solution is to avoid splitting tasks meant for a single person: it's more effective to assign top-notch engineers to tackle formidable challenges than to divide the work.
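One way to resist the depth-first trap described above is to keep all covered flows visible in a single table-driven test, so gaps stand out at a glance. A minimal sketch (the `classify` function and its cases are hypothetical, not from the article):

```python
def classify(age: int) -> str:
    """Hypothetical function with a small set of flows."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

# Table-driven test: every flow and its boundary sits on one visible row.
cases = [
    (0, "minor"),    # lower boundary of the valid range
    (17, "minor"),   # just below the threshold
    (18, "adult"),   # exactly at the threshold
    (99, "adult"),   # well above the threshold
]
for age, expected in cases:
    assert classify(age) == expected
print("all listed flows pass")
```

A missing row (say, the negative-age error flow) is far easier to spot in a table like this than in a scattered set of deeply nested test methods.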

In my view, simplifying code can decrease the number of tests needed. Even before reaching the testing phase, my approach involves meticulously reviewing and analyzing the code, actively seeking out bugs and potential issues. This analysis demands deep thought, surpassing the mental effort required for simply writing and completing tests.
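To make the simplification claim concrete, here is a hedged sketch (both functions and their rates are invented for illustration): replacing nested conditionals with a lookup table collapses four branches into a single code path, so the suite needs to cover fewer distinct flows.

```python
# Before: two nested branch points, four code paths to test.
def shipping_cost_verbose(weight_kg: float, express: bool) -> float:
    if express:
        if weight_kg > 10:
            return 20.0
        return 15.0
    if weight_kg > 10:
        return 10.0
    return 5.0

# After: the same rules as data; one code path, one flow to test.
RATES = {
    (True, True): 20.0,   # express, heavy
    (True, False): 15.0,  # express, light
    (False, True): 10.0,  # standard, heavy
    (False, False): 5.0,  # standard, light
}

def shipping_cost(weight_kg: float, express: bool) -> float:
    return RATES[(express, weight_kg > 10)]

# Sanity check: both versions agree on every flow.
for express in (True, False):
    for weight in (5.0, 12.0):
        assert shipping_cost_verbose(weight, express) == shipping_cost(weight, express)
```

The behavior is unchanged, but the refactored version moves the combinatorics out of control flow and into data, which is exactly the kind of simplification that shrinks the test matrix.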

During the code review, my first step is to pinpoint areas that could be improved, such as parameter names, syntax adjustments, or minor lint issues. I keep at this until a 15-minute stretch passes without my finding anything, then I move on to the second phase. There, I apply a specific set of rules to ensure the code complies with the team's conventions and styling guidelines, refining it until it achieves a polished, artful quality. After this, I proceed to the code analysis phase.

The approach to code analysis mirrors the review phase, where I initially assume that there may be hidden flaws. I begin by examining the code at a high level, focusing on classes, methods, and their inputs and outputs. Gradually, I delve deeper, scrutinizing each line of code as potentially flawed. At this granular level, applying the "5 Whys" technique can be useful, starting with questions like, "Why did I use await here?"

Whenever a flaw is identified, the process is restarted from the beginning. After completing these two phases, the code is simplified, and critical issues are revealed, making tests easier and more straightforward.
