Friends,
I hope that all is well with you and yours.
Much as last week, there is little use ignoring the e-commerce elephant in the room. The Gravity of e-Commerce, the new report penned by yours truly and James Hankins, was released on WARC this past Monday. It gives me (and I believe I can say us) great pleasure to see how exceptionally well it has been received, particularly when what is in it is not all bunnies and candy floss.
As promised, I want to ensure that it is available to all who subscribe to Strategy in Praxis. Thus, if you click this link, you will be able to read it, completely free of charge, until Sunday.
There is also a podcast out in which we discuss the paper with David Tiltman.
Moving on.
In the news
M&A activity is starting to pick up, much as premium subscribers knew was likely to happen. While activity remains below 2022 levels (it is a gathering of pace, not a full-on sprint), the financial climate has opened up a plethora of opportunities for inorganic growth, as companies dependent on external capital struggle to stay afloat.
Bed Bath & Beyond has, somehow, found an anchor investor while the rest of us were scratching our heads. Hudson Bay Capital reportedly put in as much as $1B to “provide the runway for the turnaround”, but whether that is enough to make the company fly remains to be seen.
John Lewis has decided to review its ad strategy, with long-time agency adam&eveDDB walking away as a result. It is, I believe, an understandable decision from both parties. Reviews often come with a changing of the guard, and John Lewis has been struggling for a long time. There was also limited upside to their previous strategy. A&E, on its end, probably saw what was in the cards and decided it had better things to do than play a game it knew it was likely to lose. Either way, the end of an era.
SoftBank lost $5.9B in Q4, of which its Vision Fund accounted for $5.8B. Yoshimitsu Goto, the company CFO, told reporters it “simply wasn’t a good time to invest in startups”. While perhaps true, it does make you wonder why spending like crazy when valuations are at their peak would be the superior approach. Value investment 101 is to find undervalued assets, acquire them, and wait for the market to realize its mistake. Doing the opposite is what got SoftBank in trouble in the first place.
Item of the week
As long-time subscribers will know, I consider Stuart Kauffman to be a generational thinker. Earlier this week, I referenced one of his latest papers, Creative evolution in economics. While admittedly rather dense - I suggest skimming past the advanced mathematics unless they are of particular interest - it is a brilliant read on the adjacent possible, a concept absolutely crucial to strategy-making (which led me to formulate the MacGyver theorem). The fact that we cannot pre-state all uses of so simple a thing as a screwdriver illustrates perfectly what we have established over the last few weeks: it is impossible to predict the future in detail, so we had better build the capacity to adapt.
The uses of a screwdriver depend sensitively on who is using it, when and where, for what purpose, the state of mind of the person using it, randomness, serendipity, and so on to form an unlistable state of dependencies. There is no algorithmic way of listing the uses of a screwdriver without destroying some essential screwdriver-ness imaginable by someone in a context we cannot pre-state. When we reduce a screwdriver’s production possibilities to some algorithmic form and thus to some deterministic set of uses, we remove all the information about uses unexplored by the algorithm. If someone had been clever enough to fashion a broken screwdriver into a horseshoe nail, a kingdom would not have been lost.
Now, then.
As always when we conclude a theme, I like to do so with a practitioner’s insight interview. And so we do today. Introducing: Andrew Willshire.
For those who do not (yet) know you, could you give us a short summary of who Andrew Willshire might be?
An engineer by training and inclination, I’ve been working in media and marketing for 15 years, both as a consultant analyst and in designing and building software analytics tools. My last permanent role was as Global Director of Advanced Analytics at Maxus (WPP), and I’ve been freelancing full-time since 2017 through my company, Diametrical Ltd.
I still do econometric modelling for a number of clients, but I’ve also consulted on designing software platforms and databases, and a range of other analytics and automation projects.
I’m particularly interested in “strategic analytics”, which might be defined as how a business can use data to help deliver on its broader business strategy, rather than leaving analytics siloed within individual business units and functions.
Over the last few weeks, we have been discussing forecasts. At the moment, predictions are proven wrong on a minute-by-minute basis, and professional credibility has declined accordingly. Where do you think analysts go wrong when they try to model the future?
First point – it’s not just analysts who get it wrong, it’s also the people who ask the analysts to do things that aren’t possible. If you are determined to pay someone to build you a car out of marmalade and twigs, then you will eventually find someone who will do the job. When you discover that it isn’t much of a car, are they the only person who is to blame for the outcome? They might reasonably say they did the best they could given the requirements.
In this sense, it’s often the whole system which sits around forecasting that is at fault.
My own model of using data in organisations has four tightly linked stages – Measurement, Data, Models, Decisions. The place to start (and where people seldom do) is to examine the decision that the forecast is trying to inform. What do you actually want to know and why? What decisions are even open to you, practically and politically within an organisation? Imagine the full range of forecasts that could be presented to you, and what you would actually do differently depending on what they say. Where are the breakpoints in your resulting decision tree? If you are forecasting sales of a product and the only result that would actually trigger a change in your action is whether sales are likely to grow or decline this year, then you have come up with a much better question than simply “What will product sales be this year?”.
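The decision-first framing above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from Andrew's practice: the function, the action labels, and the numbers are all invented to show that when the only breakpoint in the decision tree is growth vs decline, any precision in the forecast beyond the sign of the growth rate changes nothing.

```python
# Hypothetical sketch of "start from the decision": enumerate the
# forecasts you could be handed and ask what each would make you do.
# All numbers and action labels are invented for illustration.

def action_for(forecast_growth_pct: float) -> str:
    """Map a forecast to the decision it would actually trigger.

    The only breakpoint in this made-up decision tree is whether
    sales grow or decline -- so that is the question worth asking.
    """
    if forecast_growth_pct >= 0.0:
        return "maintain current plan"
    return "trigger contingency plan"

# Imagine the full range of forecasts that could be presented to you.
candidate_forecasts = [-10.0, -2.0, 0.5, 3.0, 12.0]
decisions = {f: action_for(f) for f in candidate_forecasts}

# A +3% and a +12% forecast lead to the same action: precision beyond
# the sign of the growth rate buys this organisation nothing.
assert decisions[3.0] == decisions[12.0]
print(decisions)
```

The point of the exercise is not the code but the mapping: if two very different forecasts land on the same branch, the cheaper, coarser question ("grow or decline?") is the better one to commission.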
Einstein is popularly quoted, albeit likely wrongly, as having said, “If I had an hour to solve a problem I'd spend 55 minutes thinking about the problem and five minutes thinking about solutions.” So, first and foremost, give the analysts a good question to answer.
Once the question is set, the analysts will probably look at what data is available to answer the question and will try to pick a model which links the available data to the question. There’s always a danger here that the data that is readily available is greatly preferred over what it might be possible to gather if you were determined to do so. This is especially the case when so much data is effectively acquired absent-mindedly by organisations. Many organisations moving into using data science for the first time will hire an analyst, point them at a data lake and tell them to “Do something with that. Report back in six months”. This usually results in an unhappy experience for the analyst and the organisation.
So, take a step back and think about measurement. If you could know any one thing about your business, what would it be? Do you have an answer for that? If not, then how can you know what you want to know within the constraints of your business/budget?
It might not be possible to get exactly what you want, but by at least thinking about it, you can consider what the closest available proxy is, and also perhaps take steps to gathering certain data intentionally from this point forwards so that next year, you have something closer to the data that you actually want. In addition, think about how the data points that have been collected already were measured, and be alert to any biases or inaccuracies that might have influenced them.
From my experience, companies also frequently misunderstand the extent to which events are possible to model at all. Is that something that you have seen as well?
Yes, you’ve got to understand the limitations of the model that is being used. Don’t confuse the messy world of aggregate human behaviour with the tidy world of physics.
Recognise that there are two types of model – those built on fundamental laws and those which are superimposed onto the available data. For example, you can model the expected distribution of heights in a given population with a normal distribution. But that distribution is the outcome of a huge range of interacting biological phenomena, not one identified with a particular process. (And, in fact, the normal distribution is a bad model for human height because the tails are too long – there are many fewer very short/very tall people than such a distribution would suggest, even if it fits tolerably well in the centre!)
It is extremely rare for models used in business and marketing to actually replicate the underlying physical processes, unlike, e.g., modelling fluid dynamics or electrical current flow. This means that the models are much more limited in predicting outside of the range of past experience (or the “training data” in some cases). They are often entirely useless when it comes to forecasting the impact of events that have never previously been experienced. They are filled with assumptions and biases, often unexamined. One obvious example is that many of the mathematical models used to assess the risk in the sub-prime housing market before the 2008 crash contained the assumption that defaults on mortgage payments were statistically independent events. But it is pretty obvious that this is not the case – if a large factory closes down in a small town, the chances of a large number of defaults in that town rises dramatically. These events are correlated but were treated as though they were not in order to apply a specific modelling technique.
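The mortgage example above is easy to see in a quick simulation. This is a toy sketch with invented parameters (a 2% base default rate, a 5% chance of a local shock such as the factory closing, under which the default rate jumps to 30%): the average default rate barely moves, but the probability of a mass-default event in the town changes dramatically once the common driver is acknowledged.

```python
import random

# Toy simulation (all numbers invented): chance of a "mass default"
# in a town of 100 mortgages, with and without a shared local shock.

random.seed(42)
N_MORTGAGES = 100
BASE_RATE = 0.02          # 2% chance any one household defaults
SHOCK_P = 0.05            # 5% chance the local factory closes
SHOCK_RATE = 0.30         # default rate if it does
TRIALS = 20_000
THRESHOLD = 10            # "mass default" = 10+ defaults in the town

def town_defaults(correlated: bool) -> int:
    """Number of defaults in one simulated year for the whole town."""
    rate = BASE_RATE
    if correlated and random.random() < SHOCK_P:
        rate = SHOCK_RATE   # one event raises everyone's risk at once
    return sum(random.random() < rate for _ in range(N_MORTGAGES))

def mass_default_prob(correlated: bool) -> float:
    hits = sum(town_defaults(correlated) >= THRESHOLD for _ in range(TRIALS))
    return hits / TRIALS

p_indep = mass_default_prob(correlated=False)
p_corr = mass_default_prob(correlated=True)
print(f"P(mass default), independent: {p_indep:.4f}")
print(f"P(mass default), correlated:  {p_corr:.4f}")
```

Under independence the mass-default probability is close to zero; with the shared shock it sits near the shock probability itself. The independence assumption hides exactly the tail risk that mattered in 2008.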
This is a serious limitation on the power of these types of model and accordingly you should be very cautious about how they are used.
For many strategists, the immediate use of forecasts is to figure out what might be happening tomorrow. The secondary use is what to do today as a result. How do you balance playing it safe with being aggressive?
Well, as I said, you don’t have to have the forecast first to figure out what to do.
Russ Ackoff, in his book “Management f-Laws”, states that “The future is better dealt with using assumptions rather than forecasts.”
“Forecasts are about probabilities; assumptions are about possibilities. We carry a spare tyre in our cars not because we forecast that we will have a puncture on our next trip but because we assume a flat tyre is possible. […] The thermostat that controls the heating-cooling system in a building does not have to predict future weather in order to control it.”
So, if you want to “play it safe”, you create an organisation that knows how to respond to any forecast, and can swiftly adapt when the forecast inevitably turns out to be not quite right. “Being aggressive” is a management decision, not a style of forecasting. A high-risk strategy would be to assume that an improbable-looking forecast is correct, and fully commit to its implications, regardless of the potential catastrophic impact on the business if the forecast is wrong. Strategy textbooks are littered with examples of once-great companies that took an all-or-nothing punt in the wrong direction.
In marketing, there has been a lot of debate around short-term vs long-term effects of late. It would appear that one reason the short term has been on the rise is its immediacy, whereas the long term is often used as a get-out-of-jail-free card whenever a commercial effect cannot be demonstrated early on. What are your experiences?
I find it entirely improbable that an activity has a long-term effect without also having a short-term effect. In fact, I wrote a piece about just that a few years ago.
To believe that it could is to assume that a message will not influence someone who is about to make a purchase decision in the next 24 hours, but that it will influence someone making a decision a year from now, despite all the other messages that that person will be exposed to between now and then.
Econometric modelling has become a popular topic again, not least since Meta have started being vocal in their support of the method. You have more experience in the field than most. What do you consider the strengths and weaknesses of such models?
Econometric modelling is a perfect example of the type of model mentioned previously. We are not modelling the millions of individual purchase decisions, we are attempting to fit a model structure to some outcome data.
Advantages of this approach: it’s straightforward, fairly well understood, relatively low cost and can lead to quick wins in terms of making better spending decisions.
Disadvantages: you never really step in the same river twice, especially when so much is constantly changing. So while the technique is pretty good at telling you what was effective in the past, there’s no guarantee that the same approach will definitely be effective in the future, especially when the agency has changed the creative, the audience, the targeting criteria, etc., while the world has also changed in terms of economics, fashions and other trends. It’s about as far from “ceteris paribus” as it's possible to get. Also, the outcome can be heavily influenced by the modeller’s own biases and opinions, and by the choice of what is included and what is left out of the model.
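To make concrete what "fitting a model structure to some outcome data" means, here is a minimal sketch on synthetic data. Everything in it is invented: the data, the diminishing-returns shape on media, and the coefficients. The key line is the one where the modeller chooses the structure; a different, equally defensible choice would produce different "learnings" from the same data, which is exactly the bias Andrew warns about.

```python
import numpy as np

# Hypothetical illustration: synthetic weekly sales driven by a base
# level, price, and diminishing returns to media spend. All numbers
# are invented for the sketch.

rng = np.random.default_rng(0)
weeks = 104
media = rng.uniform(0, 100, weeks)   # weekly media spend (index)
price = rng.uniform(8, 12, weeks)    # average selling price
sales = 500 + 40 * np.log1p(media) - 20 * price + rng.normal(0, 15, weeks)

# The modeller *chooses* this structure:
#   sales ~ b0 + b1*log(1 + media) + b2*price
# The data does not hand it to you; it is an assumption.
X = np.column_stack([np.ones(weeks), np.log1p(media), price])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
b0, b1, b2 = coef
print(f"base={b0:.1f}, media effect={b1:.1f}, price effect={b2:.1f}")

# The fit describes the past. Nothing guarantees these coefficients
# hold once the creative, audience, or wider economy changes.
```

The recovered coefficients land near the true values here only because the synthetic world obeys the chosen structure exactly. Real markets extend no such courtesy, which is why the "no guarantee it holds in the future" caveat is not fence-sitting.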
As an approach I would recommend it to many clients because it’s probably better than any other option that they have, and almost certainly better than doing nothing and relying on gut feel. But you do have to understand the limitations, and this is where a good practitioner is invaluable. With my own clients, I am always extremely open about the degree of confidence that I have at any given time in a recommendation. Sometimes this is regarded as fence-sitting or even a reluctance to stand by my conclusions, when in fact it’s just a reflection of the inherent uncertainty in the process which should be communicated. Beware practitioners who always seem certain that they’re revealing some divine truth to you. Their models have the same uncertainty in them, they’re just not being open about it.
And there we have it. A massive thank you goes to Andrew for generously sharing his experiences.
Next week, we are moving away from forecasting to a rather different newsletter. Stay tuned, and have the loveliest of weekends.
Onwards and upwards,
JP
This newsletter continues below with additional market analyses exclusive to premium subscribers. To unlock them, an e-book, and a number of lovely perks, merely click the button. If you would rather try the free version first, click here instead.