Triggers & Pivots
Introducing Tom Kerwin
I hope that all is well with you and yours.
By now, one would have thought that there would be hope in the
cardboard plastic mansion; the walls in the living room and staircase will be painted come Sunday, as will the respective ceilings. However, no sooner did we exhale than we realized that, before then, we would also have to clear out everything left in the living room and put in the new floor.
While the goal has long been “to get everything the way we want it”, the road there is entirely filled with compromise, delay, change of heart, unanticipated events, and things in reality turning out to be different to the images in our heads. A suitable analogy for strategy, clearly.
As promised last week, we are today going to dig into a very practical aspect of innovation, namely how one might identify points in time where key decisions have to be made; moments where projects are either scaled or abandoned.
There are few better to answer the question than Tom Kerwin, the creator of Pivot Triggers. So here are his thoughts on the matter, plus a few on some others, in an extensive interview with plenty of practical guidance.
For those who do not (yet) know you, could you give us a short summary of who Tom Kerwin might be?
First of all, thanks for inviting me to talk with you. Yours is the one newsletter I put everything else down to read each week, and it's always a delight to chat with you.
As for me, I'm a design and product leader who loves working with complexity. For over two decades, I've been preoccupied with figuring out how teams can do a better job of evolving their products and services to meet customers' needs. This started when I fell in love with design, user research, and marketing strategy while studying engineering at Cambridge. Since then, I co-founded a web agency that spun out two of the UK's biggest lead generation startups, worked with A/B testers like Conversion Rate Experts, then with startups and product innovators including Streetlife, Zopa, Qubit, Apploi, and more recently Relive.
Your work is what I would call complexity-coherent, that is, you have a deep understanding of complex systems and it shows. What is the most common way that complexity manifests in a broader corporate setting, from your experience?
That's very kind of you to say, as complexity-coherence has been my aim over the past eight years.
[Complexity manifestation] varies quite a bit, but in broader corporate settings, one of the most common patterns I've seen is that people are insulated from external complexity and have become adapted to internal complexity.
When you dig back in corporate history, you always find messy, ambiguous, serendipitous beginnings to every product or service. But this reality has usually been lost in the annals of corporate forgetfulness. At this point, the organization – itself a complex adaptive system – has evolved for careful, planned optimization. It's got a strong disposition to keep doing what's worked for it before, well, at least until that catastrophically stops working and you have a crisis.
The challenge comes when someone decides they want to create a new product or service – to provoke and then stabilize novel behaviors to form a new revenue stream. This is when you start to hear the language of innovation, disruption, and experimentation. People may genuinely want these, but the organizational disposition is process, order, efficiency, and predictability. Everything's set up for long, slow, deliberate planning cycles, and it's basically impossible to go against that.
What makes things even harder is, as I mentioned, that many employees become insulated from any complexity outside of the organization; they have become brilliantly adapted to the thrilling complexities of the corporate language games inside. This creates a strong desire for what Erika Hall rather pithily called "the illusion of deterministic innovation". And from this desire springs the never-ending hunt for that next magic bullet. A few of the usual suspects at the moment are "Design Thinking", "Agile Mindset" and "Lean Startup MVP".
In practice, there tends to be a difference between how one handles complexity on an executive level and how one handles it on the proverbial ground. Do you have any good examples of how this is handled?
If you could have your software transcribe that noise for the interview, that'd be great.
So, something I've seen across many organizations is what I call the “vision chasm”. The leadership believe they need to set a vision and mission, to define what they'd like things to be in three years; usually, this is "all the complaints and restrictions that are annoying us today have magically gone away". Meanwhile, all the people who are doing the work have a clear picture of what they're doing for the next three weeks, but no idea how anything they're doing relates to the magical vision.
And, crucially, there's rarely anything in between. That's the vision chasm. People try to fill this chasm with planning, like road mapping. A narrative around social beliefs tends to become the currency that gets work onto said roadmap – which requires a sort of shutting out of reality in order to protect those beliefs long enough to get something done.
In a manner of speaking, products can be similar to complex systems in the sense that both are different from the sum of their constituent parts. Yet feature thinking – i.e., the notion that one can break products down into a set of features – appears to be on the rise, particularly in Southern California. As someone who understands complexity and product development alike, what are your thoughts on this?
Well, you won't be surprised to find that I think features are a poor tool for thinking about product, but they're here and they seem to have stuck. Debating this with colleagues, it seems that it's mainly because features feel tangible and under our control. Compare that with customers and their behaviors – or the world outside the organization in general. Anyone who's tried has probably found that getting clarity on an organization’s desired goals and outcomes is very hard.
I went through the whole outcomes over outputs thing, but after many frustrating conversations, I eventually accepted that enforcing singular outcomes is incoherent. All people constantly have multiple goals in mind – a mixture of the explicit and implicit, the collective and individual. Some are about gain; many are about managing perceived risk. I touched on some of this in Fear-Driven Development.
And as Dave Snowden has pointed out, we scan a situation and do a first-fit pattern-match. Given what we perceive (which is very limited), we come up with a solution that feels like it should have positive outcomes across several goals without an unacceptable increase in our exposure to perceived risks. This solution can be conveniently labelled a "feature".
This is called “solutionizing”, and I used to try to fight it. These days, I recommend working with solutionizers – stand with them instead of against them and play out their solution to expose the messy set of hopes and fears that gave rise to it.
Where features become a problem is when we treat them as if they're a real thing instead of an artificial construct to help our conversations. The approach turns additive; if we can just put all the desirable features into the product bucket, then we'll win! This creates the MVP death spiral:
1. Pick a set of imaginary features that you're confident will make for a successful product.
2. Race to build them – got to get the MVP "done".
3. Realize it's taking too long.
4. As the product takes shape, start to see gaps and missing features.
5. As you get close to selling it, confidence drops. Early sales conversations tell you that there is no product-market fit; it must be because there are missing features.
6. Have many conversations about which features to prioritize.
7. Return to step 1.
I think Teresa Torres put it well: "You are not one feature away from success and you never will be." But all of this stems from believing that a product is a thing made up of features, and that building a product is about bolting the right set of features together.
Alright, we promised we would discuss Pivot Triggers. A pivot in strategy is usually a sign of someone, shall we say, not being masters of their craft; you move from relative certainty to relative uncertainty, from a space where you know what you are doing to one in which you do not – but your new competitors do. I would imagine that you have a different definition.
Yes. I use it as a bit of a catch-all for a range of decisions we might make at the point where we have more information. It boils down to three choices: grit, quit, or iterate.
Setting a Pivot Trigger simply commits us to looking at the world again and reconsidering our options when we have some more information. For example, you might say:
“We'll pivot if fewer than three customers agree to talk with us about this problem when we send ten emails."
"We'll pivot if we see less than a 3% conversion rate to the waiting list when we promote LinkedIn ads that go to a landing page."
"We'll pivot if more than 2 out of 100 customers have blocking problems when we onboard them via the new process.”
“We'll pivot if we find that more than 25% of the data needs significant clean up when we grab 100 samples from the API."
None of these by themselves might be enough to make you quit or commit, but they give you a reason to go and learn more, and a diary date to check in with what you've learned. In practice, teams typically iterate several times before getting to a quit decision.
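The commitment behind a single trigger is simple enough to capture in a few lines. Here is a minimal sketch (the function name and numbers are illustrative, not part of Tom's toolkit) for the first example above:

```python
# Sketch: evaluate a single-threshold Pivot Trigger.
# Function name and thresholds are illustrative assumptions.

def pivot_triggered(observed: int, threshold: int) -> bool:
    """True when the observed signal falls below the agreed line in the sand."""
    return observed < threshold

# "We'll pivot if fewer than three customers agree to talk with us
# about this problem when we send ten emails."
assert pivot_triggered(observed=2, threshold=3) is True
assert pivot_triggered(observed=4, threshold=3) is False
```

The point is not the code but the commitment: the threshold is written down before the emails go out, so nobody can quietly move the line afterwards.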
Let's take a simple example. Imagine a team that is aware of the risk that customers won't want their new idea. They've set a Pivot Trigger probing the world by sending an email to a handful of customers inviting them to discuss the problem they want to solve. It's common that the response to the first batch of emails is lower than anyone had hoped, so one might try changing the messaging before killing the idea. This will force the team to reframe the way they're talking about the problem or about the idea in that email and try again.
To reuse your example, say you set a threshold for the number of customers who need to confirm that the problem is real. Do those thresholds need to rise in relative terms as you scale, i.e., the more time and effort you devote, the higher the bar?
Yes, that's a way that you can do it. The important bit is that you start by doing things small, cheap and by hand – you're looking to get small signals back. When three customers speak with you, or whatever it may be, you invest a bit more in slightly bigger, slightly more expensive bets. By setting the first pivot triggers and doing the work to get the probes out, you've accidentally learned quite a lot more about what this thing is that you're working on, and how it might fit into the world. You've moved away from feature thinking too – you're now considering the value this thing delivers, the information that really goes back and forth, and the behaviors of real people in the world.
Your objectives and perceived risks for the initiative have changed; sometimes they've gotten more sophisticated; sometimes you've seen new risks or new possibilities. You can now invest more in bigger bets, as you're more informed.
So, in the case that we might have a reader who would want to try this out in practice, how would they actually go about doing it?
There are four core “chunks” that they should use.
1. Time Machine – a 40-minute team exercise for building a shared understanding of our strategy, hopes and fears. Most of our work is driven by subconsciously minimizing our personal exposure to risk. This is where you start embracing that and figure out the most effective sequence for the work instead of the most comfortable.
Take a trip to the great future
Set the scene: "We get in a time machine. When the door opens, it's <date> (whenever you'd see the results of your initiative) and it's gone better than we ever dreamed! What's it like?"
Ask people to write down what it's like on stickies.
Together, cluster similarly themed descriptions and add a green sticky to summarize each cluster.
Take a trip to the awful future
"Back in the time machine! Now when we emerge, it's <the same date> but in a parallel universe. Everything went so badly we wish we'd never started. What went wrong?"
Ask people to write down what went wrong on stickies. While people are writing, remind them to visualize being in the future looking back at the past.
Cluster these, and add orange stickies to summarize.
Line up matching "great" and "awful" stickies and fill in any gaps where there's no matching pair.
Take a silent vote on the scariest awful orange sticky. The cluster you're most afraid of tells you the signals you most need to probe for. Notice the different fears people have.
For the scariest cluster, which Signals can tell you whether you're heading for the great or awful future? If the cluster is about what customers might do, look for "Want It Enough" Signals or "Get It" Signals. If the cluster is more about what your team can achieve, look for "Worth It For Us" Signals.
2. Multiverse Map – a 10-30 minute exercise for detailing the behaviors that you need and those that you fear. Your success always depends on the behaviors of people and systems that are outside your control. Get specific about what those are and you'll be much smarter about metrics, research and experimentation.
Write down the outcome you're hoping to get from your next chunk of work. This frames the sequence of behaviors that you need to see if that chunk of work is going to succeed (e.g., people sign up through our email campaign).
Start with the trigger. What kicks off the sequence of behaviors for one person out there? Write that down on a yellow sticky (e.g., we send an email).
Moving right, add a green sticky and an orange one below it. On the green sticky, write what a specific person does in the best world (e.g., they see our email). On the orange sticky, write what a specific person does in the worst world (e.g., they don't see our email). If a given step has several possibilities for the "best" or "worst" worlds, stack up as many stickies as you need.
Keep moving to the right adding green and orange stickies to tell the whole story.
Join the stickies up with arrows. Add reasonable guesses at the percentage of people who'll end up in each world for each step. Use line thickness to visualize paths.
Do some back-of-the-napkin maths to estimate the potential outcome at the end of the sequence (e.g., out of 1,000 people who get the initial email, 10 end up placing a pre-order).
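The napkin math at the end is just multiplying through the guessed "best world" rate at each step. A quick sketch, assuming invented step rates (40% see the email, 10% click, 25% pre-order) chosen to match the example's 10-out-of-1,000 outcome:

```python
# Sketch: estimate the outcome of a behavior sequence by multiplying
# the guessed "best world" rate at each step. Rates are illustrative.

def estimate_outcome(start: int, step_rates: list[float]) -> float:
    result = float(start)
    for rate in step_rates:
        result *= rate
    return result

# 1,000 emails; guesses: 40% see it, 10% click through, 25% pre-order.
pre_orders = estimate_outcome(1000, [0.40, 0.10, 0.25])
print(round(pre_orders))  # prints 10
```

Changing any one guessed rate and re-running shows which step of the map your outcome is most sensitive to.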
3. Probes & Pivot Triggers – the time required changes depending on where you're at; it can be a 15-minute check or a multi-day initiative, but it's always part of the work you were going to need to do anyway. This is where you go out and poke the world to see what behaviors happen. Ideally, you should establish your expectations before you start so that you can be honest about the surprises. Either way, whether you're using single-threshold or multiple-threshold triggers, you need to agree up front: what signals do you need to see today to feel confident enough to invest more tomorrow? Pivot Triggers are your lines in the sand today to help you answer the hardest question tomorrow: quit, grit or iterate?
Use Time Machine to orient and Multiverse Map to clarify the behaviors you need to collect signals for. Next, design a Behavioral Probe to provoke the signals.
"Want it enough" Triggers: "We'll pivot if fewer than 1 out of 50 prospects respond to our outreach this month"; "We'll pivot if our advert gets less than a 2% click through rate”.
"Worth it for us" Triggers: "We'll pivot if over 70% of our data sample today needs clean-up before we can use it"; "We'll pivot if more than two partners cause major set-up issues this week".
Single-threshold Pivot Trigger: Set a threshold for the signals you're collecting. Instead of a distant, ambitious target, set a low bar today. What's the weakest signal that's enough?
Multiple-threshold Pivot Triggers: Set ranges for how you'll feel at different levels of signal: amazing, good, fine, concerned, critical, and so on. For example:
< 1 pre-sale out of 50 offers --> do something else.
< 6 pre-sales out of 50 offers --> revisit the offer terms.
> 20 pre-sales out of 50 offers --> get everyone on it.
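Those bands amount to a small decision table. A sketch, using the numbers above; note that the example leaves the middle range (6–20 pre-sales) unstated, so the "keep iterating" band here is an assumption:

```python
# Sketch: map a signal level to an agreed action via multiple thresholds.
# Band boundaries come from the example; "keep iterating" is an assumed
# filler for the range the example leaves unstated.

def pivot_action(pre_sales: int) -> str:
    if pre_sales < 1:
        return "do something else"
    if pre_sales < 6:
        return "revisit the offer terms"
    if pre_sales > 20:
        return "get everyone on it"
    return "keep iterating"

assert pivot_action(0) == "do something else"
assert pivot_action(3) == "revisit the offer terms"
assert pivot_action(25) == "get everyone on it"
```

Writing the table down up front is what keeps the conversation tomorrow about the signal, not about where the lines should have been.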
Estimate thresholds with back-of-the-napkin maths. To make $10,000 by selling a $10 product, you need 1,000 sales. If your audience is 5,000 people, you need a 20% conversion rate. Is that reasonable? What if your audience is 2,000 people? 800 people?
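That arithmetic is quick to sanity-check in code. A sketch using the example's numbers ($10,000 target, $10 product); anything over 100% means the audience is simply too small:

```python
# Sketch: back-of-the-napkin conversion rate needed to hit a revenue
# target, using the example's numbers ($10,000 target, $10 product).

def conversion_needed(target: float, price: float, audience: int) -> float:
    sales_needed = target / price
    return sales_needed / audience

for audience in (5000, 2000, 800):
    rate = conversion_needed(10_000, 10, audience)
    print(f"audience {audience}: {rate:.0%} conversion needed")
# audience 5000: 20% conversion needed
# audience 2000: 50% conversion needed
# audience 800: 125% conversion needed -- i.e., not reasonable
```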
Do note that this is obviously not statistically significant. No early signals can give you enough confidence to bet the farm – just enough to keep going and getting the next signals.
4. Anatomy of an Insight – this is usually anything from 30 to 60 minutes, depending on how complicated things are. The session is designed for making sense of the confusing signals the world provides. Specifically, how do you turn them into insights that change the available options?
Write down Signals you've observed. Signals are often metrics, but also observations of users and even your own experiences. You get these from probes, from analytics tools, from your senses and from observation.
The important part here is to separate Signals from Stories. Signals describe what you actually saw, heard or felt. Stories are your assumptions about what a Signal means. For example:
Signal: "Less than 1% of visitors clicked the buy button".
First Story: "We failed! Nobody wants our idea!".
Come up with at least three Stories for each Signal. Don't get trapped in one Story; open yourself to other possibilities. This is the part that most people skip – you'll notice it when you hear "metric said X, therefore we must do Y". That's why this is the layer that will change your experience the most. We want the metric to have a single meaning. We want the world to be simple.
Add Options that you're able to try next:
Kill this idea and move to a different one.
Do a "Get It" Test to clarify the idea and the button.
Test some different adverts.
Having reviewed all your Signals, Stories and Options, how do you feel? Do you feel more or less confident in your idea? Write down your new plan: do you double down, adapt it, repurpose it, kill it?
That is impressively extensive. Thank you very much for sharing.
My pleasure. You can find details in my course: a cohort-based affair, just over a week long, all about hands-on practice. But if you want to start immediately, I recommend approaching the tools backwards. Start by making sense of the signals you've already got. Then decide on some probes and set some pivot triggers. If you're struggling to do that, go to Multiverse Mapping to help you. If you don't know where to start with that, Time Machine is your friend.
You can, of course, also reach out; I am available via Twitter and Substack. But the course is obviously the way to go.
And there we have it. A massive thank you goes to Tom for generously sharing his work.
Next week, we are moving away from innovation to a rather different newsletter. Stay tuned, and have the loveliest of weekends.
Onwards and upwards,
This newsletter continues below with an additional market analysis exclusive to premium subscribers. To unlock it, an e-book, and a number of lovely perks, merely click the button. If you would rather try the free version first, click here instead.