The Truth Curve

The most powerful tool I know for determining the appropriate amount of effort to spend on an experiment

Experimentation has always been a challenge for teams. Even teams who have experience writing hypotheses and designing experiments struggle to spend the “right” amount of effort on their learning. Spend too little and you risk wasting a cycle and not learning much. Spend too much and you may miss your learning window or end up investing so much in the idea that course correction becomes a near impossibility.

One of the tools that I’ve been using and teaching for years to help teams spend an appropriate amount of effort on an experiment is The Truth Curve. Originally framed by Giff Constable in his book Talking to Humans, the Truth Curve is a simple visualization designed to force teams to reconcile two questions: “how much do we know?” and “what’s the next step?” Since Giff first published his book, The Truth Curve has taken on a life of its own as it has been used, interpreted, and reshared by several other folks in the product management and agile communities.

In this month’s newsletter I want to take a look at The Truth Curve to show how it helps teams determine when and where to spend as well as when and where to stop spending. I also want to give you two prompts to help you use it effectively at your next iteration planning meeting.

First and foremost, behold The Truth Curve:

The Truth Curve contains two axes. The Y axis represents the amount of evidence you have to justify working on your current hypothesis. The higher up you go, the more confident the team is that the hypothesis is valid. The lower you go, the less confident the team is that the hypothesis will work as originally conceived.

The X axis represents an increasing level of effort and scope dedicated to the experiment you’re currently running to test that same hypothesis. The further to the right you are on the X axis, the more time, effort, and money are being spent on the idea.

The green curved arrow (aka The Truth Curve) represents the path teams should follow as they test their hypotheses, collect evidence, and learn where the idea is valid and where a course correction is needed. If your work falls above the green line (lots of evidence, lower investment), your team is at risk of analysis paralysis. You’re testing and learning but not responding to what you’re learning aggressively enough.

If your work falls below the green line (lack of sufficient evidence, heavy investment) your team is taking unnecessary risks by building out a solution it doesn’t yet have the evidence to justify.
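If it helps to make the two failure zones concrete, here’s a minimal sketch of my own (not part of Giff’s model) that assumes you score both evidence and effort on a 0-to-1 scale and approximates the green line with a simple rising function:

```python
# Illustrative sketch only: the 0-1 scales, the curve's shape, and the
# tolerance band are assumptions, not part of the published model.

def curve(effort: float) -> float:
    """Evidence the curve suggests you should have at a given effort level.

    The square root gives a rising, flattening shape; the real Truth
    Curve is a visual aid, not a formula.
    """
    return effort ** 0.5


def classify(effort: float, evidence: float, tolerance: float = 0.1) -> str:
    """Place the team's current work relative to the Truth Curve."""
    expected = curve(effort)
    if evidence > expected + tolerance:
        return "above the line: analysis paralysis risk, act on what you've learned"
    if evidence < expected - tolerance:
        return "below the line: building more than your evidence justifies"
    return "on the curve: evidence and investment are in balance"


# Heavy investment (0.8) backed by little evidence (0.2):
print(classify(effort=0.8, evidence=0.2))
# below the line: building more than your evidence justifies
```

The exact shape of the curve doesn’t matter. What matters is the comparison: more investment should always be backed by more evidence.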

If you’re starting out on a brand new hypothesis without any significant previous evidence, you are in the pink part of the diagram, aka The Land of Wishful Thinking. You’re hopeful that your hypothesis is right, but the lack of evidence from the market places you in the bottom-left corner of the diagram. Here, your goal is to learn as cheaply and quickly as possible whether there’s a “there there” in your hypothesis. The typical experiments you run at this point will be customer interviews, surveys, or paper prototypes. As you begin to collect more positive evidence from lightweight experiments, your level of confidence goes up. With that increased confidence comes the justification for greater effort on the next experiment. It might end up being a clickable prototype or a Wizard of Oz experiment.

Each time you run an experiment and get positive feedback, the team asks the same two questions:

  1. What’s the most important thing we need to learn next?

  2. What’s the least amount of work we need to do to learn it?

At some point, with enough positive feedback from the market, the only way to learn the next most important thing is to build shippable software. However, this is also the most expensive and riskiest way to learn. We should be sufficiently high on the Y axis before committing to writing code.
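One way to picture that escalation is as a ladder of experiments gated by confidence. The ordering below comes from this article; the numeric thresholds are illustrative assumptions of mine, not published guidance:

```python
# Hypothetical ladder: the experiment types and their ordering come from
# the article; the numeric confidence gates are illustrative assumptions.
EXPERIMENT_LADDER = [
    (0.0, "customer interviews, surveys, paper prototypes"),
    (0.3, "clickable prototype"),
    (0.5, "Wizard of Oz experiment"),
    (0.8, "shippable software"),
]


def next_experiment(confidence: float) -> str:
    """Return the most ambitious experiment the current evidence justifies.

    Anything cheaper has already taught what it can; anything costlier
    would put the team below the Truth Curve.
    """
    cleared = [name for gate, name in EXPERIMENT_LADDER if confidence >= gate]
    return cleared[-1]


print(next_experiment(0.4))  # clickable prototype
```

Notice that writing shippable code sits behind the highest gate. That’s the whole point of the model.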

As you learn, the questions change

One fascinating aspect of the Truth Curve is that as you move up and to the right, the questions you’re asking about your hypothesis change. In the early (cheap/low effort/low confidence) days of your idea, you’re trying to find problem/solution fit. In other words, does the thing we’re building solve a real problem for a real customer in a meaningful way?

If the evidence comes back positive and you continue to scale up and to the right, the team starts to look for product/market fit. At this stage you’re testing whether this is a sustainable business for you to be in and whether it’s worth scaling this functionality broadly.

The question the team is working on shifts from “Should we build it?” to “Can we build a sustainable business with this idea?” (or can we grow our existing business in a sustainable way?).

If the evidence is negative, stop and correct your course

One thing to keep in mind as you start to use this model to determine your learning efforts: if, at any point, you start to get negative feedback about your hypothesis, meaning the data you’re collecting points to flaws in your assumptions or a change in your target audience’s needs or behaviors, you have to stop. You have to reassess whether this is still an idea worth pursuing and, if so, how you’ll pivot to keep it relevant in the new market conditions you’ve uncovered. In many cases, this may be a signal to kill the idea and move on to the next hypothesis in your backlog.

Don’t ignore the data you get from your experiments. Map them to the Truth Curve and only then decide if and how you’ll move forward with your next learning effort.

What I've been up to

Another month, another translation of Sense & Respond goes live. This time the book has been translated into Brazilian Portuguese and can be found for purchase at this link.

I am now on vacation for the month of August. It's taking a few days to wind down from regularly checking email and chats, but I'm getting used to it :-)

This fall I'm continuing to work exclusively on client delivery with several large clients focused on OKR training and coaching as well as Lean UX and Lean Product Management training. If you'd like to work together I'm booking keynotes for late Q4 2022 and Q1 2023 now. For teaching and workshops, we're likely looking at Q1 2023 at the earliest.

Watch, Listen, Read

Watch: Westworld -- Back for Season 4, this near-future sci-fi series on robots, artificial intelligence, and what it means to be "alive" keeps me hooked. This season gets even more mind-blowing as the line between human and robot is blurred beyond recognition. (By the way, the opening sequence and music for this show are just gorgeous. I never skip them.)

Listen: Jeff Gothelf on podcasts -- This month's listening recommendation is not music (shock! horror! I know....) Instead, I've put together a playlist on Spotify of all the podcasts I've been on in recent years. Turns out, there are a LOT of them. But, if you're looking for thoughts on everything from lean ux to agile to staying forever employable, this playlist has it all.

Read: Genghis Khan and the Making of the Modern World -- My book club is reading this book this month. It's absolutely fascinating. In US education we learn so little about Genghis Khan that most of us believe he was a brutal general from long ago and that's about it. There is truth to that, of course, but his influence was so much broader than battlefield brutality. His legacy covers meritocracy, religious tolerance, and an influence that spans the majority of the northern hemisphere. The book is based on the author's lifetime of work in Mongolia and the recently released and translated secret history texts long hidden from the outside world.

If you're interested in working together please reach out. If you're not sure what I do, here's a description.
