Friends,
I hope that all is well with you and yours, and that this e-mail finds you on a boat with a shoddy connection, in the tropics, three months after I sent it.
Today, we go deep on a slightly controversial topic: is the point of strategy to solve problems? The previous newsletters on complexity science may have revealed the answer.
Also, as ever, the market vitals and the latest AI news - featuring Elaine Benes’ dance moves, the potential of Trump’s fiscal policies, market volatility on the horizon despite the Fed’s attempts to steer clear of it, a brewing public dispute, antitrust cases against Meta and Google, the true nature of AI-generated code, and Microsoft’s odd numbers.
Now accepting keynotes for 25Q1-25Q3
Every year for the last decade or so, I have created three main presentation decks. For 2025, however, I have (for the first time) added a fourth due to popular demand. They are:
What to Do When You Don’t Know What to Do: How to turn change into a competitive advantage. (Based on the new book by the same name.)
Leadership in a Time of Change: How to steer an organization through a sea of uncertainty.
Resilient Retail: How to build a profitable retail business in the modern marketplace. (Based on the 2025 follow-up to the highly praised 2022 white paper The Gravity of e-Commerce.)
Artificial Intelligence Beyond the Fantasy: How to understand the narratives, risks, opportunities, and best uses of a new technology.
If you want to book me for your event, corporate speaking slot, or workshop, merely send me an email. To make sure I am available, please do so at your earliest convenience; my availability is limited and the schedule tends to fill up fast. More information may be found here.
A couple of updates before we go-go
Those who know will know: getting a soon-to-be three-year-old to stop using her pacifier must be among the most grating experiences known to man.
Before I had children, I did not know that it was possible to love something as much as I love navy strength gin.
I am kidding. But Jesus suffering fuck.
Keynotes for 2025 are rolling in. Given the family situation, dates will be limited; if you want to bring me in, now is thus very much the time to get ahead of the competition and contact me.
My larger pieces of research for the next year are also starting to materialize. James and I will have another white paper on modern commerce, with an official reveal likely to come at Cannes as usual. There is also a potential joint venture between us and someone else (who you will know) in the pipeline. Additionally, I will likely write academic papers on adaptive strategy and my 4E model of market dynamics, respectively.
On to markets and AI:
Markets
A number of stock markets, not least the US ones, rose on the news that the orange man with the worst dance skills since the days of Elaine Benes had won the election. Some were quick to attribute this to belief in Trump’s fiscal policies, but I hold that to be highly unlikely. Rather, it was probably relief over a clear winner and a lower risk of turmoil.
As we have discussed before, the former (and soon no longer former) president is hardly the financial savior that some, for whatever reason, believe him to be. Higher inflation, larger deficits, and higher interest rates appear more likely than not, and global trade stands to take a liver punch.
It is also worth noting, given the tech bro erections over Musk’s potential role in the new government, that the US states’ antitrust cases against Alphabet and Meta began under the first Trump administration.
So, no, I would not exactly bet my house on the man bringing in a new era of growth.
The Fed decided to cut the interest rate by a quarter of a percentage point, much as everyone knew they would, and is now shifting to a “careful and considered approach”. Which obviously makes you wonder when the hell they were reckless and spontaneous, but anyway.
What it translates to is a wait-and-see stance. Some have argued that it is good that the Fed is “data-driven”, as if it ever were not, but that is also the bad news: the market will follow every tiny droplet that ripples the pond and overreact accordingly, much as it did this past spring.
Trump and (Fed chairman) Powell have also publicly sparred, which means that it is anything but guaranteed that the two will be able to work together in perfect harmony. In short, they are at odds over the role the president should or should not play in monetary policy. Unsurprisingly, Trump wants a say, but Powell has held firm that the Fed should be independent. I am, shall we say, inclined to agree.
Further, as Callum Keown noted for Barron’s, Trump’s populist plans to increase government spending while cutting taxes mean that he will ultimately need the Fed’s help through lower interest rates. However, those same plans are also likely to stoke inflationary pressures, which in turn would prompt a hawkish response with higher interest rates - and continued animosity.
You get what you vote for.
AI
Google recently got a lot of attention after the firm claimed that roughly a quarter of its new code is generated by AI (and reviewed by people), which to some may sound amazing but in reality could be anything but. As Benedict Evans pointed out, it could merely mean autocomplete, but it could also be that the AI is indeed writing the code. The problem is that even if it is, that may not be a good thing.
Using AI to generate code is not new. A couple of years back, I did a keynote at Techsylvania - one of the largest tech events in Europe. While there, I spoke to some of the other speakers, many of whom were considered deities in the relevant circles. The common refrain was that the code writing was much faster, but the bug fixing a nightmare. Instead of spending 45 minutes on the code and 15 minutes sorting out the kinks, programmers were spending 45 seconds generating the code and three hours chasing issues; the total time per task roughly tripled. When I recently checked in to see whether it remained the case, every single person I asked said yes.
In other words, just because Google is using AI to generate new code, it does not mean that it is particularly good (or even works properly), nor that any overall efficiency gains are actually realized.
Meanwhile, Microsoft has been bragging about how many companies use Copilot. However, it appears that the numbers were a bit iffy; a single employee using the service less than once a week allegedly sufficed. That is hardly what the claim implies. My own experience with Copilot is non-existent, but based on client and colleague feedback, its most consistently used feature seems to be “switch off”.
Moving on.
The problem problem
What is the point of strategy, really?
Over the last few weeks, we have recapped the most important learnings from complexity science as it applies to strategic management and demonstrated the astounding explanatory power that it holds. We have also established a number of necessary theoretical foundations - from Nobel Prize-winning theories of non-equilibrium and systemic behaviors, to principles such as self-organization, emergence, the adjacent possible, and the remarkable stability of macro patterns that cannot be explained by observation of micro actions.
Today, however, we are turning our collective gaze firmly towards the practical.
One of the most common issues in strategic management is the speed with which analysts conclude that anything that worked was the result of a good strategy, while everything that failed was the result of a bad one. Beyond the psychological (halo effects) and the logical (the informal fallacy of appealing to purity), it just is not a grown-up line of reasoning. Everyone who has actual working experience knows, probably only too well, that a great strategy may lead to a bad outcome because of factors outside of the organization’s control, just as a bad strategy may lead to a great outcome because of, well, luck.
A number of more pragmatic thinkers, such as Richard Rumelt, have therefore taken to arguing that strategy, above all else, should be a tool with which to solve problems.