Complexity & Dummies
As some may remember, this week was originally intended to feature a newsletter for paying subscribers about the Heisenberg uncertainty principle and how it applies to strategic objectives and evaluation. However, a few days ago, Matthew Daniell reached out to me on Twitter and asked if there were a “complexity theory for dummies” of sorts.
While I, as I have repeatedly emphasized, do not believe any of my subscribers to be dummies – in fact, the entire premise of this newsletter is built around the fact that you are not – I also know that complexity is a new field to most. Consequently, I decided to postpone my intended piece and instead attempt to create a baseline introduction to the topic.
The original hypothesis
The history of science in general, and perhaps physics in particular, can be said to be the history of our intellectual evolution and progress as a species. The so-called scientific method – from hypothesis to experiment to evidence to theory – is a display of our very nature: the overwhelmingly powerful need that we have to link observation with prediction.
For the last few hundred years, this endeavor can, with some generosity of spirit, be summed up by Stephen Hawking’s famous desire to explain the universe by means of a single equation. The Newtonian universe is (as I wrote in a previous newsletter about Pierre-Simon, marquis de Laplace, and his infamous demon) one of mathematics, inherently deterministic and viewed as a clockwork of sorts. With enough computational capacity, one would even be able to calculate all of the past and predict all of the future with perfect accuracy.
The impact of this original hypothesis of traditional physics is undeniable; the world that we live in is built upon its learnings, and the models and metaphors that we use are reflections of its worldview. From basic to higher education and beyond, we are taught that if we do A, then B will happen.
The same, as most are aware, is also true for strategy. Organizations are still largely constructed with presumed order in mind. We plan to control, we create structures and project milestones, we implement with an eye to the precise efficiency of the machine, and so on.
The new theory
In the post-industrial knowledge era, the limitations and disadvantages of the mechanistic view of organizations, as Mary Uhl-Bien once put it, became obvious. Complexity theory, a result of (among other things) a number of different strands of advanced mathematics in combination with evolutionary psychology, has turned out to be the answer. Just as quantum theory took Newtonian mechanics from certainty to probability, complexity scientists have been able to show that the linear causalities that traditional strategic doctrine relies on simply do not exist.
In short, the field of complexity theory can be said to refer to the study of complex adaptive systems (CAS).
Complexity is not, contrary to popular belief, a higher state of complicatedness. The etymological root of the word is the Latin word plexus, which means braided or entwined. Complexus thus translates to “braided together”.
Adaptive refers to the system’s ability to learn, change and alter its behavior.
System is, in effect, a network of connections.
In other words, complexity theory has to do with, as Murray Gell-Mann once put it, the intricate inter-connectivity of elements within a system, and between the system and its environment.
Complex adaptive systems have a few key features that make them distinctively different from ordered systems.
In CAS, there is no linear causality – the system is not causal, but dispositional (i.e., disposed to behave a certain way). Doing A can lead to all kinds of things and the size of the output may not be related to the size of the input.
Example conclusion: we can never guarantee an outcome, merely improve the odds of it happening.
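For the programmatically inclined, this nonlinearity can be seen in miniature in the logistic map, a classic toy example from chaos theory (an illustrative aside of mine, not something from the strategy literature): two inputs that differ by one part in a million end up in completely different places.

```python
# The logistic map, x_next = r * x * (1 - x), is a standard toy example
# of a nonlinear system. With r = 4 it is chaotic: the size of the
# change in output bears no fixed relation to the size of the change
# in input.

def iterate(x, r=4.0, steps=50):
    """Apply the logistic map `steps` times and return the result."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.300000)
b = iterate(0.300001)  # the input differs by only 0.000001
print(a, b)  # two different trajectories from nearly identical starts
```

Doing "almost A" is not guaranteed to produce "almost B" – which is why, in a complex system, we can at best improve the odds of an outcome.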
Unlike ordered systems, which are context-free, CAS are context-specific, which renders universal rules impossible.
Example conclusion: a strategy that worked for one organization at a given time is unlikely to work for another, or even the same organization at a different time.
The behavior of a CAS is emergent and only visible in retrospect – it comes out of self-organization, not external control, and cannot be predicted by study of individual parts.
Example conclusion: the works of people such as Michael Porter should be taken with more than a single grain of salt.
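Emergence is easiest to grasp through a toy model. Conway's Game of Life (again, an illustrative aside, not part of the strategy canon) is a grid of cells, each following one purely local rule; yet a "glider", a coherent shape that travels across the grid, emerges from those rules. No individual cell knows anything about the glider.

```python
# Conway's Game of Life: every cell obeys the same local rule (a cell is
# alive next generation if it has exactly 3 live neighbors, or 2 if it
# is already alive). Global structures emerge that no part "intends".
from collections import Counter

def step(alive):
    """One generation; `alive` is a set of (row, col) live cells."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in alive
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# The classic "glider" pattern.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four generations the glider reappears one cell down and one cell
# to the right -- a system-level behavior invisible in any single cell.
```

Studying one cell in isolation tells you nothing about gliders; only the whole, in motion, reveals them – which is the sense in which emergent behavior is visible only in retrospect.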
In CAS, due to the above, the whole is greater than the sum of its parts, and one can therefore neither take a single part and extrapolate it into the whole nor take the whole and reduce it to a single part.
Example conclusion: no company can be said to have succeeded because of any one thing (e.g., flywheel, customer obsession, a particular strategy etc.).
CAS are embedded into other CAS.
Example conclusion: teams are embedded into departments, which are embedded into organizations, which are embedded into markets and so on. Expecting everyone, every step of the way, to execute a strategy precisely as intended is setting oneself up for failure.
In CAS, the relationships between parts (agents) are more important than the individual parts themselves.
Example conclusion: strategic management is less about managing people than managing connections between people.
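A playful sketch makes the point concrete (the agents and wiring below are invented purely for illustration): take the same four people and wire them together in two different ways, and you get two systems with very different behavior. What changes is not any part, only the connections.

```python
# Same agents, different connections: which agents an idea can reach
# depends entirely on the wiring, not on the agents themselves.
from collections import deque

def reachable(graph, start):
    """Agents reachable from `start` by following connections (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

agents = ["ann", "bo", "cy", "di"]
chain = {"ann": ["bo"], "bo": ["cy"], "cy": ["di"]}  # one connected line
silos = {"ann": ["bo"], "cy": ["di"]}                # two disconnected islands

print(reachable(chain, "ann"))  # reaches all four agents
print(reachable(silos, "ann"))  # reaches only ann and bo
```

The parts are identical in both cases; the system's behavior is not – which is why managing the connections matters more than managing the nodes.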
The consequences of complex adaptive systems for traditional strategic doctrine and management discourse are as profound as those of the theory of relativity for traditional physics. They force us to rethink not merely how we work with strategy as such, but also how we diagnose problems, build teams, evaluate results and much more.
Indeed, they make us view strategy in a completely new light as we evolve from the company machine to the human organization.
And that is what this newsletter details, with plenty more to come.
Onwards and upwards,