Friends,
I hope that all is well with you and yours.
As promised, today marks not only the beginning of our look at Philip Tetlock’s superforecasting theory, but also a new editorial direction (of sorts) for Strategy in Praxis. Since I would like to explain my reasoning – I am placing a fair few of my eggs in an admittedly rather weak basket, after all – what follows will be a mishmash of explainers and actual content. Stay with me. Next week’s newsletter, when all the new wrinkles will be in place, will be significantly better structured.
New and Improved
Adding value while enhancing the old
In the world of marketing promises, “new and improved” has long been one of the great oxymorons. Strictly speaking, unless one’s definitions are generous in the extreme, a product cannot be both; a thing is either one or the other.
However, I have attempted to create an exception to the rule. Going forward, these newsletters will be formatted differently to enable a quicker overview and easier digestion. Each will begin, as always, with a word from yours truly, followed by additional content in the form of:
links to the most important or noteworthy (relevant) news stories of the previous week, including short explanations of why they matter. Examples might be:
Trump back on Meta platforms: Zuckerberg inevitably loses out in a no-win situation by unsuspending the former president’s accounts, ensuring further calamity ahead.
Tech giants flag Q1 demand decline: Microsoft’s and Texas Instruments’ earnings calls both warn of a higher-than-normal seasonal decline in early 2023, forcing investors to diagnose, once more, the already struggling sector’s health.
a link to the best piece that I have read that week (though it may have been published earlier) and why it matters. An example might be:
Many strategies fail because they are not actually strategies. In this 2017 HBR piece, which rings as true today as it did then, Freek Vermeulen addresses many of the common problems that still plague companies the world over. Rather than follow strategies that enable choices, they become governed by objectives and benchmarks in a purely top-down process. Though Vermeulen echoes many of the points that I habitually make (here strategic intent, boundaries, bottom-up experimentation), he does so with clarity, for an informative and easy-to-follow read:
Stanford professor Robert Burgelman said, “Successful firms are characterized by maintaining bottom-up internal experimentation and selection processes while simultaneously maintaining top-driven strategic intent.” This is quite a mouthful, but what Burgelman meant is that you indeed need a clear, top-down strategic direction (such as Hornby’s set of choices). But this will only be effective if, at the same time, you enable your employees to create bottom-up initiatives that fall within the boundaries set by that strategic intent.
My aim with these changes is to add value while improving the existing core. The meat on the proverbial bone will still be a discussion of a particular topic within a broader theme, as it has been for the longest time.
And on that note, since these explanations have already ensured that we will be running long, let us now move into the topic of the week.
An elitist promise with obvious strategic implications
Superforecasting improves accuracy, yet still misses the mark
As recent pandemics and recessions have proven only too well, modern forecasting often appears no better than a blindfolded chimpanzee throwing darts at a board; in previous editions, we have delved into why. But things have, in fact, improved – and Philip Tetlock’s superforecasting theory has undoubtedly played a significant role in that improvement.
At the center of his theory, as presented in the 2015 book Superforecasting: The Art and Science of Prediction, are findings from the Good Judgment Project, an endeavor launched in 2011 by Tetlock, Barbara Mellers, and Don Moore. Their founding hypothesis was simple: they believed it possible to improve forecasting accuracy not in the sense of prophecy (seeing everything perfectly), but of optometry (better, if still flawed, eyesight).
By most accounts, they also succeeded.
Rather than relying on industry-specific experts, as is commonly done, they gathered thousands of volunteer “talented amateurs”, taught them forecasting best practice (including how to avoid the cognitive biases that might flaw analysis), and then aggregated their predictions over a number of years to take advantage of the wisdom-of-crowds effect. Along the way, they kept score against the best experts.
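To make the mechanics a little more concrete, here is a minimal sketch of the two ingredients just described: scoring probabilistic forecasts and pooling a crowd by simple averaging. The scoring rule shown is the binary form of the Brier score, where 0 is perfect and an eternal 50/50 hedger earns 0.25 (Tetlock’s book uses a variant that runs from 0 to 2, but the ranking logic is the same). All numbers are invented, and the unweighted mean is only the simplest possible aggregation – the Good Judgment Project’s actual pipeline was more elaborate, with weighting and extremizing – so treat this as an illustration rather than their method:

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.
    0 is a perfect score; a constant 0.5 forecast scores 0.25."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((forecasts - outcomes) ** 2)

# Toy data: three forecasters give probabilities for five yes/no questions.
individual = np.array([
    [0.80, 0.30, 0.60, 0.90, 0.20],
    [0.70, 0.40, 0.55, 0.85, 0.10],
    [0.90, 0.20, 0.70, 0.95, 0.30],
])
outcomes = np.array([1, 0, 1, 1, 0])  # what actually happened

# Wisdom of crowds: the unweighted mean of the individual forecasts.
crowd = individual.mean(axis=0)

for i, row in enumerate(individual, start=1):
    print(f"Forecaster {i}: Brier = {brier_score(row, outcomes):.3f}")
print(f"Crowd mean:   Brier = {brier_score(crowd, outcomes):.3f}")
```

Even in this toy run, the crowd mean beats the average individual (though not the single best forecaster) – the wisdom-of-crowds effect in miniature.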
The findings were startling. Although it would be silly to suggest that Tetlock was the first to point out the flaws of assumed-to-be experts, the rigor with which he exposed them – his earlier Expert Political Judgment study had tracked 284 experts across 82,361 predictions – was brutal. Not only did he establish that the better known an expert was, the less reliable they were likely to be, but an expert’s accuracy also turned out to be inversely related to their self-confidence, and beyond a certain point even to their knowledge. Rather than being rewarded for making predictions that came true, experts were rewarded for making bold claims, and ended up no better at foreseeing the future than the average newspaper reader.
Taking its different approach, the project’s aggregation algorithm managed to outperform the experts in a number of prediction tournaments by between 35% and 72% (Tetlock now claims 50–80%). To stay true to the optometry analogy: vision was perhaps still blurry, but significantly improved.
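As a quick aside on what such percentages mean: the tournament margins are typically reported as relative reductions in Brier score. A sketch of the arithmetic, with invented scores (the actual tournament figures are not reproduced here), looks like this:

```python
# Invented numbers purely to illustrate the arithmetic of a relative
# Brier-score improvement; these are not figures from the tournaments.
expert_brier = 0.25  # hypothetical expert score (coin-flip territory)
super_brier = 0.10   # hypothetical superforecaster score

improvement = (expert_brier - super_brier) / expert_brier
print(f"Relative improvement: {improvement:.0%}")  # prints 60%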
Needless to say, this caused something of a ruckus. Bruised egos aside, if one is able to increase the accuracy of forecasts, the implications for fields as diverse as counter-terrorism, political policy, and commercial strategy are obvious.
So what did they do differently? Well, in short – this is but an introduction, after all – the Superforecasters followed ten (ish) “commandments”:
Triage. Pick your battles wisely. Rather than spend time on simple “clock-like” problems or impossible “cloud-like” problems (to paraphrase Karl Popper’s famous analogy), concentrate on the optimum space in-between where hard work pays off.
Break seemingly intractable problems into tractable sub-problems. Decompose problems into solvable and unsolvable parts.
Strike the right balance between inside and outside views. Uniqueness is a matter of degree; search for historical comparison classes even for seemingly novel events (see the short sketch after this list).
Strike the right balance between under- and overreacting to evidence. Discount pseudo-diagnostic news to which crowds overreact and spot subtly diagnostic news to which crowds underreact.
Look for the clashing causal forces at work in each problem. As in classical dialectics, when thesis meets antithesis, there should be a synthesis.
Strive to distinguish as many degrees of doubt as the problem permits but no more. The more nuances of uncertainty that you can identify, the better forecaster you will be – but be careful not to ignore standards of evidence.
Strike the right balance between under- and overconfidence, between prudence and decisiveness. Balance the need to take a stance with the need to qualify it.
Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases. Do not excuse mistakes, but own them. Conduct careful post-mortems.
Bring out the best in others and let them bring out the best in you. Master the fine arts of team management: perspective taking, precision questioning, and constructive confrontation.
Master the error-balancing bicycle. Each of the commandments requires practice; just as one cannot learn to ride a bicycle by reading a physics textbook, one cannot improve one’s forecasting skills merely by reading manuals.
Do not treat commandments as commandments. To quote Helmuth von Moltke, it is impossible to lay down binding rules because two cases will never be exactly the same.
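To make the third commandment less abstract, here is a hypothetical sketch of blending an outside-view base rate with an inside-view estimate. The linear blend and every number in it are my own illustrative assumptions – Tetlock prescribes the balancing act, not this particular formula:

```python
# Commandment 3, illustrated (all numbers and the blending rule are
# hypothetical assumptions, not taken from Tetlock's book).
base_rate = 0.30        # outside view: share of comparable product launches that succeeded
inside_view = 0.70      # inside view: estimate from the specifics of this launch
weight_on_inside = 0.4  # judgment call: how diagnostic the specifics really are

# Anchor on the comparison class, then adjust toward the case-specific view.
forecast = (1 - weight_on_inside) * base_rate + weight_on_inside * inside_view
print(f"Blended forecast: {forecast:.0%}")  # prints 46%
```

Starting from the base rate and adjusting, rather than the other way around, is what keeps the seductive inside view from running away with the forecast.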
At first glance, there is obviously much to like. In contrast to the many loudmouth blowhards who claim to know for certain what tomorrow will bring, the charismatic Tetlock promotes humility (albeit while using an intentionally elitist term like Superforecasting, but let us ignore that for the time being), critical thinking, the acknowledgment of uncertainty, and learning by doing. His approach relies on continuous, incremental improvement rather than imagined immediate leaps – a further point in its favor.
However, while demonstrably a superior approach, Superforecasting has limitations that are easy to miss. Applied in the wrong domain, or without the relevant contextual understanding, it can cause all kinds of strategic missteps. Some might even turn out to be lethal.
Next week, we will therefore dig deeper into Tetlock’s work and attempt to unearth that which so many have yet to spot.
Until then, have the loveliest of weekends.
Onwards and upwards,
JP
This newsletter continues below with additional market analyses exclusive to premium subscribers. To unlock it, an e-book, and a number of lovely perks, merely click the button. If you would rather try the free version first, click here instead.