How To Create Lean Product Specs
At the core of the effectiveness of the Lean Startup methodology is the Build-Measure-Learn cycle. You start with a hypothesis about what product or feature might improve business outcomes, and then build a simple version of it to test that hypothesis. You measure the results, and learn whether you were right. If you were right, great — now move on to your next hypothesis. If you didn’t move the needle, you dig into the analytics and talk to customers to try to learn why. Incidentally, you can often learn more from the failures if you take the time to study them.
This Build-Measure-Learn process requires four parts: 1) the hypothesis; 2) the building; 3) the measuring; and 4) the learning. The hypothesis is often overlooked. I’ve worked with a lot of CEOs who know, based on their vision, exactly what features they want built next. But having a specific hypothesis about the outcome of a feature is important for two reasons. First, it forces you to make deliberate estimates about how much you think this work is going to move the needle of the business, and specifically which needles. Thinking through that gives you the means to prioritize it appropriately against all the other demands on development resources. Second, it forces you to know where you are moving the needle from, meaning that you have to know the current state. If you are not measuring your current state and changes to it, then you risk careening from ignorance to ignorance. You are throwing features at the wall to see what sticks, without even having a way to measure stickiness. Measuring relevant improvements in business outcomes as a result of testing a hypothesis is what Eric Ries calls “validated learning.”
So how do you build this discipline into your product development process? One technique that has worked well for me is to ensure that the specifications for development efforts are formatted to include:
- Description of the current state (and pain): What is the relevant measurable customer behavior that we are trying to improve? What specific problem are we attempting to solve, e.g. are we fixing a bug that causes a failure to transact or increased attrition, adding a feature to close a competitive gap, or changing a flow to improve poor engagement? What is the current state of the specific metric(s)?
- Hypothesis for improvement: What benefit do we expect to see from this work? This should be an estimate of a measurable result, e.g. increase conversion from step A to step B by 20%.
- The actual feature description: Different development teams have varying preferences on how this gets structured, e.g. as user stories or as feature specifications. Use whatever you use today.
- Analytics required: This is the description of what Google Analytics tags or other analytics markers need to be included. Many product managers have a pretty structured approach to how they set up their tags, and this can be specified directly in this section. Be sure to include the specific analytics goals, i.e. what needs to be measured.
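As a sketch of how the four sections above fit together, here is one lightweight way to capture a spec in code so the hypothesis is explicit and testable. All names here (LeanSpec, baseline_metric, the example events) are illustrative, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass
class LeanSpec:
    """One development effort, captured in the four-part format described above."""
    current_state: str          # the measurable pain, e.g. "step A -> B converts at 10%"
    baseline_metric: float      # current value of the metric being targeted
    expected_lift: float        # hypothesized relative improvement, e.g. 0.20 for +20%
    feature_description: str    # user stories or feature spec, in whatever format you use
    analytics_events: list = field(default_factory=list)  # tags/markers to implement

    def target_metric(self) -> float:
        """The value the metric should reach if the hypothesis is correct."""
        return self.baseline_metric * (1 + self.expected_lift)

# Hypothetical example: improve conversion from step A to step B by 20%
spec = LeanSpec(
    current_state="Checkout step A -> B converts at 10%",
    baseline_metric=0.10,
    expected_lift=0.20,
    feature_description="One-click address autofill on step A",
    analytics_events=["checkout_step_a_viewed", "checkout_step_b_reached"],
)
print(round(spec.target_metric(), 3))  # 0.12
```

The point of the structure is simply that a spec without a baseline_metric or expected_lift fails to construct — the hypothesis can’t be skipped.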
In addition to the benefits mentioned above, providing the information on the current state and hypothesis for improvement provides critical context for the engineers. Engineers can make better implementation decisions when they understand the goals of their work rather than just the specification. Including the analytics infrastructure as part of the feature spec makes it a part of the work planning and engineering discipline rather than an afterthought.
If the agile/scrum development process is fully followed, there is a Sprint Review meeting in which the development team has the opportunity to show the newly completed work to the stakeholders. There is also a Sprint Retrospective where the development sprint is reviewed among the engineers and scrum master to see what improvements can be made to the process itself. If you are following a regular cadence on your product management planning cycle, you can introduce an analogous process. After you have gathered statistically significant data from the new implementation, make sure that data gets reviewed (along with your original hypothesis) with your product team, the engineering team, and any business stakeholders. Was your hypothesis correct? What did you learn? It may also be worth holding a product-team retrospective to reflect on the new validated learning and its implications for the product backlog and the next Build-Measure-Learn cycle.
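“Statistically significant” is doing real work in that sentence, so it’s worth being concrete about the check. A minimal sketch is a two-proportion z-test comparing the baseline conversion rate to the new one; the numbers and the 1.96 threshold (95% confidence) are illustrative:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: baseline 200/2000 visitors (10%), new flow 260/2000 (13%)
z = two_proportion_z(200, 2000, 260, 2000)
print(z > 1.96)  # True: the lift clears the conventional 95% confidence bar
```

If the z-score doesn’t clear the bar, the honest answer at the review is “we don’t know yet” — keep collecting data rather than declaring the hypothesis validated.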
In my experience, engineers appreciate understanding the context and goals of their work, and they get great satisfaction from seeing the quantified improvements to the business metrics that result from their efforts. Once they get a taste for seeing measurable improvements in outcomes, they embrace embedding solid analytics tools and tags in their code.
In a fast-moving, technology-driven organization, product managers are often overworked as they translate the needs and priorities of many stakeholders into clear product specifications full of properly structured user stories for their development teams. Using a hypothesis to clarify the intended benefit, and embracing the analytics both before and after the work is done, will help to ensure that the organization is prioritizing the right work while making steady forward progress through validated learning.
I hope you find this helpful.
Pete Baltaxe is a consultant, product leader, and startup advisor who blogs at www.FuelandWheels.com.