Photo by Austin Distel on Unsplash
Off-topic preamble
Getting back to the topic of the book…
At a minimum, weekly touchpoints with customers, by the team building the product, where they conduct small research activities, in pursuit of a desired outcome.
Start small and iterate (in ways of working)
I knew that I could impact how I did my own work. I didn't worry about what other people were doing. I didn't try to change the way these companies worked. I simply did my work my way, and I got results. [Chapter 14]
- Product team collaboration - start by finding members of your product trio who are willing to partner with you, consult them on key decisions, and iterate from there
- Talking to customers - if you are not able to talk to your own customers, talk to someone who is similar to your customer and ask them for an introduction to another person you can talk to
- Working backwards - where your team is assigned a solution to deliver (as opposed to an outcome to achieve), work backwards by asking two questions:
- “If our customers had this solution, what would it do for them?”
- “If we shipped this feature, what value would it create for our business?” (refine the answer until there is a clear metric, which is the desired outcome)
Identifying hidden assumptions
I often generate 20-30 assumptions for even a simple idea. (Identifying as many gotchas as you can increases the chance that you generate the riskiest ones.) [Chapter 9]
Running assumption tests
As we test assumptions, we want to start small and iterate our way to bigger, more reliable, more sound tests, only after each previous round provides an indicator that continuing to invest is worth our effort. We stop testing when we’ve removed enough risk and/or the effort to run the next test is so great that it makes more sense to simply build the idea. [Chapter 10]
There are two tools that should be in every product team's toolbox: unmoderated user testing and one-question surveys. [Chapter 10]
Chapter 10 of the book talks through running assumption tests once your product “trio” has identified and, more importantly, prioritised the key assumptions to test. This highlights the value of bringing an iterative mindset to discovery as well, and, following from that, of choosing tools that let you test assumptions quickly.
Prior to reading this book, one of the tools I’d read or heard about for testing assumptions about demand was the fake door test (or a similar test called the landing page test), where we show users an option that has not yet been built in order to measure interest. However, this usually still requires some form of development to be done.
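To make that “some form of development” concrete, here is a minimal sketch of what a fake door test could look like in code: a button for a feature that does not exist yet, where clicking only records interest and shows a “coming soon” message. The feature name, element id, and `/api/track` endpoint are made-up placeholders for illustration, not something from the book.

```typescript
// Minimal fake door sketch: the "Export to PDF" feature does not exist yet;
// clicking the button only records interest and shows a "coming soon" note.
// The feature name, element id, and /api/track endpoint are illustrative.

function trackFakeDoorClick(featureName: string): void {
  // Fire-and-forget analytics call so the UI is not blocked.
  void fetch("/api/track", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: "fake_door_click", feature: featureName, at: Date.now() }),
  });
}

document.getElementById("export-pdf-button")?.addEventListener("click", () => {
  trackFakeDoorClick("export-to-pdf");
  alert("Export to PDF isn't available yet - thanks for letting us know you're interested!");
});
```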
The book talks through two ways to more rapidly test assumptions to get some early signals.
- Unmoderated testing - unmoderated user testing services allow you to post a prototype and define tasks/questions for participants to complete without a moderator present. Whilst normally intended for usability testing, this can be used to test assumptions in a similar way to a fake door test, but without requiring development.
- One-question surveys - similarly, one-question surveys that pop up in product can be used to test some assumptions (however, it is important to ask about past behaviour and specific examples, not what customers might do in the future, as the latter can produce unreliable data)
Continuous interviewing
The hardest part about continuous interviews is finding people to talk to. In order to make continuous interviewing sustainable, we need to automate the recruiting process. Your goal is to wake up Monday morning with a weekly interview scheduled without you having to do anything. [Chapter 5]
Whilst there are some great user research tips in this chapter (I’ve got a book called “The Mom Test”, which delves much deeper into this topic, near the top of my reading list, and I hope to write a summary of it in the near future), perhaps the most interesting and unique part is the advice on addressing one of the main pain points of finding customers to interview, and “automating” this process so that talking to customers regularly becomes easy.
One of the tools suggested is to recruit participants whilst they are using your product, by way of a one-question survey (“Do you have 20 minutes to talk with us about your experience…”), potentially in combination with scheduling software to reduce the back and forth needed for them to pick a suitable time.
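As a rough illustration (not from the book), the in-product recruiting prompt plus scheduling link might look something like the sketch below; the copy, element ids, and scheduling URL are placeholders.

```typescript
// Hypothetical sketch of the recruiting prompt described above: a single
// question shown in product, with a link to a scheduling tool so participants
// can pick a slot without any back and forth. All ids/URLs are placeholders.

const SCHEDULING_URL = "https://calendly.com/your-team/customer-interview";

function showRecruitingPrompt(container: HTMLElement): void {
  const prompt = document.createElement("div");
  prompt.innerHTML = `
    <p>Do you have 20 minutes to talk with us about your experience?</p>
    <a href="${SCHEDULING_URL}" target="_blank" rel="noopener">Pick a time that suits you</a>
    <button type="button" data-dismiss>No thanks</button>
  `;
  // Dismiss the prompt if the customer declines.
  prompt.querySelector("[data-dismiss]")?.addEventListener("click", () => prompt.remove());
  container.appendChild(prompt);
}

// Show the prompt once the customer has actually been using the product.
const surveyRoot = document.getElementById("survey-root");
if (surveyRoot) {
  showRecruitingPrompt(surveyRoot);
}
```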
Prioritising opportunities
You might be tempted to score each opportunity based on the different factors (e.g., 2 out of 3 for sizing, 1 out of 3 for market factors, and so on) and then stack-rank your opportunities, much like you might do with features. Don’t do this. This is a messy, subjective decision, and you want to keep it that way.
Perhaps my most interesting takeaway from this chapter was the advice not to “score and stack rank” the opportunities, especially as many of the prioritisation frameworks I’ve seen so far do exactly this. However, the rationale does make sense in this instance, as scoring leads us to believe that there is one right answer (the highest score).
The recommendation instead is for the team to have a healthy debate, consider the different dimensions, and make the best decision they can at that point in time. On decision-making, it also refers to Jeff Bezos’s Type 1 vs. Type 2 decision concept, in which he argues that we should be slow and cautious when making decisions that are hard to reverse (Type 1), but move fast and not wait for perfect data when making decisions that are easy to reverse (Type 2).
Closing
* As an Amazon Associate I earn from qualifying purchases. This page contains affiliate links, meaning I get a commission if you decide to make a purchase through my links, at no cost to you.