Friends,
I hope that all is well with you and yours.
Normally when I begin these newsletters, I do so with a short update on what has happened in the life of yours truly of late, irrelevant though it may be to most. Today, however, one news item looms significantly larger than all others: the e-commerce report that I have co-authored with James Hankins is, at long last, about to be published.
On February 6, The Gravity of e-Commerce will become available via WARC’s website. The paper runs to around 50 pages in total and features not only our outlook on current market realities, but also a number of actions that e-commerce and digital DTC-focused companies might want to take as a result.
In order to ensure that every subscriber to Strategy in Praxis has the option to download our work, I will provide a link next week that grants free access over the following weekend.
I hope that you will like it as much as we do.
Now on with the show.
In the news
Peloton announced “best quarterly performance in a year” and saw its stock price jump 26% in a day - even though revenue fell 30% YoY and the net loss came out at a nothing-to-sneeze-at $335.4m.
Mark Zuckerberg declared “the year of efficiency” for Meta, which helped company shares soar 20%. This despite a 4% Q4 sales decline (though, to be fair, in the upper reaches of projections) and a projected 2023 expense growth between 1.5% and 8%. Stock is up 70% in the last three months.
The Chinese market is opening up again after three years of hard Covid restrictions. Policy makers are trying hard to get the country’s 1.4 billion people to revive the economy, but firms looking to take advantage should factor in the upcoming Taiwanese election and what its implications might be for China-US relations.
The gender pay gap widens in adland as companies look to increase operational efficiency. Given that firms first cut that which they deem least valuable, it sends all the wrong signals.
Item of the week
Marketing Week columnist Mark Ritson, never one to shy away from grand statements, declared on Monday that one could now finally prove that effective broad-reach, brand-building advertising boosts short-term sales. Given the enormous appetite that marketers still have for effectiveness, this was taken as very big news indeed. Of course, that most any positive long-term effect must first have a short-term impact has been known for literally decades… to some. When I pointed it out by posting a piece of research from 2009 (one can go back farther still) - The Total Long-Term Sales Effects of Advertising: Lessons from Single Source - it caught many by surprise. A key quote:
On average, there was a ‘carry-over effect’ from the first year's sales increase into the second year, which, in turn, carried over into the third year, but at diminishing rates. In these tests, sales increases in the initial year, second year, and third year were due to a higher impact on sales volume than market share. In the 13 tests where there were no successful sales increases in the first year of testing, reanalysis at the second and third year revealed no carry-over effects (Lodish et al., 1995). Thus, the effects of advertising can last in the long term (over a year), but must be preceded by short-term effects.
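To make the carry-over mechanism in the quote concrete, here is a minimal sketch with entirely hypothetical numbers (they are not taken from Lodish et al. or the 2009 paper): a first-year lift that persists into later years at a diminishing rate, and that disappears altogether if the first year produces nothing.

```python
# Hypothetical illustration of a carry-over effect at diminishing rates.
# All figures are invented for the example; they do not come from the research quoted above.

initial_lift = 10.0   # assumed year-one incremental sales from the ad test, in %
carry_over = 0.5      # assumed share of each year's effect that persists into the next year

effects = [initial_lift]
for _ in range(2):                      # years two and three
    effects.append(effects[-1] * carry_over)

for year, effect in enumerate(effects, start=1):
    print(f"Year {year}: +{effect:.1f}% incremental sales")
print(f"Total over three years: +{sum(effects):.1f}%")

# If initial_lift is 0, every later term is 0 as well: no long-term effect
# without a short-term one, which is the point of the quote.
```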
Back to the future
Superforecasting goes from helpful to hurtful when the outlier is ignored in favor of the norm
As discussed last week, Philip Tetlock’s claim to well-deserved fame is that the accuracy of predictions can be improved upon - not by mere accumulation of expertise (on the contrary, experts’ accuracy turned out to be inversely related to their level of knowledge), but by careful and deliberate Bayesian practice. The implications, as if they need writing, are as wide-ranging as they are bleedingly obvious.
For strategists in general, and those with a penchant for the theoretical determinism of strategic planning in particular, any ability to better see the future dramatically improves one’s work in the present. To be clear, Tetlock does not make any such promises. His point is that by assigning events explicit confidence levels (e.g., a 72% probability) and updating them as new information arrives, one can at least ascertain what is more likely to happen than not.
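For the curious, a minimal sketch of what such updating might look like in practice; the prior, the likelihoods, and the notion of “evidence” below are all assumptions made for illustration, not Tetlock’s own procedure.

```python
# A bare-bones Bayesian update of a forecast: start from a prior confidence level
# and revise it when new evidence arrives. All numbers are assumptions for illustration.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(event | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

prior = 0.72  # the forecaster's starting confidence, e.g. "72% probability"
posterior = bayes_update(
    prior,
    p_evidence_if_true=0.6,   # assumed chance of seeing the evidence if the event will happen
    p_evidence_if_false=0.3,  # assumed chance of seeing it if the event will not
)
print(f"Updated confidence: {posterior:.0%}")  # roughly 84% under these assumed numbers
```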
The problem, as we have discussed in this space before, is that complex adaptive systems such as markets and organizations do not have an established sample space; the space of possible outcomes is not infinite, but indefinite. Put in more colloquial terms, this means that accurately calculating the probability of a future event is impossible a priori. It is not only that we do not know what might happen, but that we cannot know.
As a result, the confidence levels assigned to various events by superforecasters will be (sometimes more, sometimes less) educated guesses, aiming to do no more than improve upon the proportion of accurate prognoses overall. While that is, at least on average, better than what most others can come up with, it also means plenty of low-hanging fruit will be picked to make up the numbers.
The most common prediction will then be that things will remain the same - because they typically do (an in-depth explanation as to why will be found in the new book). To reuse an example from our conversation on strategic drift, none of the highly sophisticated models used by the US government, run on the most advanced supercomputers, has been proven better at predicting GNP trends than the naïve forecast which states that what happened over the last three months will probably continue for the next three months.
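For illustration only, that naïve forecast reduces to a one-liner; the quarterly figures below are made up.

```python
# The naive forecast in its simplest form: predict that the next period
# will look like the most recent one. The quarterly figures are invented.

quarterly_growth = [0.6, 0.4, 0.5, 0.7]  # hypothetical % growth over the last four quarters

def naive_forecast(history: list[float]) -> float:
    """The forecast is simply the latest observation carried forward."""
    return history[-1]

print(f"Naive forecast for next quarter: {naive_forecast(quarterly_growth):.1f}% growth")
```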
It is not a bad rule of thumb, in the grand scheme of things. But, crucially, even a 99% probability of a future event does not guarantee that it will come to pass. And this, applied to strategy, is where superforecasting can go from helpful to hurtful.
If a firm bases its strategic decisions entirely on what is considered most likely to happen, two pitfalls immediately open up. Firstly, we become brittle to outlier events that we either did not foresee, or believed to be highly unlikely and therefore ignored. The pandemic should serve as a recent reminder (FWIW, superforecasters assigned it a 3% likelihood). Secondly, as data becomes more and more democratized, chances are that our competitors will see and do the exact same thing; best practice becomes common practice becomes average practice. Asymmetrical gains and competitive advantages come off the proverbial table.
Any strategist worth their salt would thus do well to remember that while superforecasting can assist in defining the layout of a future space, some things will be easier to see than others, and plenty will remain entirely invisible. By nature, the only data we can access is based on history. As much as history famously repeats itself, we are not where we once were. Fat-tail events will eventually force us to change course, so we had still better develop the ability to adapt.
Or, to phrase it slightly differently by building upon an original line from the We Are Not Saved newsletter (whose critique of superforecasting I highly recommend for those looking to go deeper): given that we have already survived the past, our difficulties lie in the future. It is easy to assume that being able to predict that future would go a long way towards helping us survive it, yet if we are not careful, we might mistakenly believe the rocks under our feet to go on forever and one day walk off a cliff.
Next week on Strategy in Praxis
As usual when we are wrapping up a theme, we will conclude with a practitioner’s inside interview. When it comes to forecasting, there are few better to talk to than Andrew Willshire, fresh back from his global travels. Needless to say, he will bring a no-nonsense view that, I am certain, will put everything that we have discussed into a valuable, practical context.
Until then, have the loveliest of weekends.
Onwards and upwards,
JP
This newsletter continues below with additional market analyses exclusive to premium subscribers. To unlock them, an e-book, and a number of lovely perks, merely click the button. If you would rather try the free version first, click here instead.