Do you ask “Why?” when developing Machine Learning?

The Importance of Starting with "Why" and Iterative Quantifying of Value Generation in Developing Machine Learning Solutions

Last week I shared a post about my way of understanding the various stages of Machine Learning development: Squiggle to Scribble to Straight (you can read that here). In it, I discussed how Lean methodology and Design Thinking can help you achieve value efficiently through the development process. This week, I’d like to zoom in on one aspect of the Build-Measure-Learn loop: Measure. But how do we measure value when that value is still hypothetical? I’ll describe what works for me.

When developing Machine Learning solutions, it is important to start with a clear understanding of why the solution is needed and how it will generate value. This helps ensure the solution is aligned with the organization's goals and has a real chance of delivering value. In this blog post, we will explore why starting with "why" is important and how to iteratively quantify value generation when developing Machine Learning solutions.

Starting with "Why"

Starting with "why" keeps the Machine Learning solution aligned with the organization's goals, making it far more likely to deliver value and achieve the desired outcomes. When starting with "why," it is important to ask questions such as:

  • What problem are we trying to solve?
  • How will this solution help us achieve our goals?
  • How can we implement the solution?

By answering these questions upfront, organizations can ensure that they are investing in the right Machine Learning solution and that it will be successful in delivering value.

Iterative Quantifying of Value Generation

Once you have a clear understanding of why the Machine Learning solution is needed, it is important to iteratively quantify the value generation. This means that you should continuously measure and evaluate the impact of the solution on the organization's goals. By doing so, you can ensure that the solution is delivering the expected value and adjust it as needed.

To iteratively quantify value generation, you should define clear metrics that measure the impact of the solution on the organization's goals. These metrics should be tracked regularly and used to guide decision-making. Note: the project doesn’t have to be delivering REAL value yet; the value can be hypothetical, as long as the metric indicates that the value can eventually be realised. If the metrics indicate that the solution is not progressing towards the expected value, then adjustments should be made to the solution to improve its impact, or resources should be redirected elsewhere.
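To make this concrete, here is a minimal sketch of the idea: record a value metric at the end of each sprint and check whether it is trending towards the target. Everything here (the `ValueTracker` name, the thresholds, the "hours saved" metric) is a hypothetical illustration, not a prescribed tool.

```python
from dataclasses import dataclass, field

@dataclass
class ValueTracker:
    """Track one value metric (real or hypothetical) across sprints."""
    target: float                      # the expected value we aim to realise
    history: list = field(default_factory=list)

    def record(self, measured: float) -> None:
        """Record the metric at the end of a sprint."""
        self.history.append(measured)

    def status(self) -> str:
        """Crude gateway: is the metric progressing towards the target?"""
        if not self.history:
            return "no data yet"
        latest = self.history[-1]
        if latest >= self.target:
            return "target reached"
        # Progressing if the metric improved since the previous sprint.
        if len(self.history) >= 2 and latest > self.history[-2]:
            return "progressing"
        return "adjust or redirect"

# Example: a project whose hypothetical value is "hours saved per month"
tracker = ValueTracker(target=100.0)
tracker.record(20.0)
tracker.record(45.0)
print(tracker.status())  # progressing
```

The point is not the code itself but the discipline it encodes: the metric is agreed upfront, it is recorded every cycle, and a flat or declining trend forces the adjust-or-redirect conversation rather than letting the project drift.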

Value Generation is part of a project's iterative "Why"

It is important to note that starting with "why" and iteratively quantifying value generation are not mutually exclusive. In fact, they are complementary approaches that can help ensure that the Machine Learning solution is successful in delivering value. Iteratively quantifying value generation in the form of continually asking ourselves “Why” can help ensure that the solution is delivering the expected value.

A method I’ve utilised to great success is the four-weekly “Why” ceremony. In these ceremonies, our Data Science team comes together to demonstrate the value attained (either expected or realised) in the previous four weeks, and what another four weeks of investment could feasibly return. New projects (I prefer “products”, but that is a discussion for another time) can also participate in the evaluation by identifying a minimal deliverable from which we could measure expected value return. I’ve posted in the past about how a Design Sprint can make this estimate more accurate and bootstrap your build by pruning bad product ideas early, and may post an article soon about what this looks like.

Now, you can dive in and build something which aims to achieve that hypothetical value. Maybe that looks like a first pass model. Maybe it’s data cleaned and in the right location. Make it achievable, but also clearly aligned to the strategic value your product hopes to attain.

After four weeks, we reconvene with a team of our peers to assess as a group whether the value target has been reached, and whether there is a realistic path to another four weeks of value in the next sprint.

Repeat!

Effect on Squiggle-Scribble-Straight

The effect on our “Squiggle-Scribble-Straight” method of generating value out of ML development is that we are held accountable to the value we aim to achieve. It gives us clear gateways to Persevere, Pivot or Fail our product development.

WARNING: A natural initial expectation from Data Scientists is that these forums exist so that others can “police” their work. In my experience, this could not be further from the truth. What I’ve seen is that Data Scientists come to honestly appraise their own work; it’s never their peers who decide that a project is not realising value, it’s the Data Scientists themselves. And there are two great reasons Data Scientists love realising their project is not progressing to value:

  1. Data Scientists hate it when their work sits on a shelf not getting used.
  2. Data Scientists love a new challenge.

So if you are finding that your Machine Learning projects are either struggling to achieve value, or are spending far too much time spinning wheels before being failed, perhaps the “Why Sprint” is something your team could use.

Shout out to previous colleagues Olivia Sackett, Jason Leong and the rest of the brilliant Insight Labs/Innovation/Incubator team at NBNco for providing the inspiration and base to form this framework.
