Friends,
I hope that all is well with you and yours, and that this e-mail finds you on a boat with shoddy connection, in the tropics, three months after I sent it.
Now accepting keynote bookings for 25Q4-26Q2
The 2026 suite:
What to do when you don’t know what to do.
How to build a strategy that thrives in change. (Based on the new book by the same name.)
From last to first.
How to create peak performance in the world’s most competitive settings.
When the ground moves.
How to (actually) manage uncertainty. (Based on a new book by the same name.)
The volcano that birthed Frankenstein.
How contexts enable innovation — and how to take advantage.
Each may be tailored to fit the theme of the event. Completely conference-specific keynotes are available at a premium.
If you want to book me for your event, corporate speaking slot, or workshop, merely send me an email. To make sure I am available, please do so at your earliest convenience; my availability is limited and the schedule tends to fill up fast.
More information may be found here.
The TL;DR
None needed this week.
Personal updates before we go-go
I have now been at home with Child #2 for nearly a month, as we decided to postpone her starting kindergarten slightly. Given how close Child #1 is to her mother, it has been really cool to see how we have bonded. She is now firmly in Camp Dad (for the time being), which just makes one’s heart jump.
I try not to get overly excited about new ideas; I have been an independent consultant for long enough to know that novel concepts rarely make it into the proverbial ether regardless of how good or practically helpful they may be. But dynamic uncertainty, as much as it remains a side project of sorts for now, looks to have potential far beyond what I initially surmised.
There is also a highly promising add-on that I have discovered and begun developing which solves, at least in theory, many of the problems related to decision-making under uncertainty.
The method, which I have decided to call WeftPoints, will be revealed in due time. For one, I need to develop it much further. But I will also require a few companies with which to trial the approach once I have done so. Thus, if you have (or are working high up at) an investment firm, a B2B SaaS firm, or a tech firm, and would like to try it out (free of charge, but with the intention of writing case studies) somewhere down the line, do let me know.
To be clear, it will not (cannot) have any negative effect on your work; it will be run in parallel to what you already do as an imaginary alternative path of sorts. I will elaborate for those interested.
On a related note, I may add a talk on WeftPoints to the 2026 keynote lineup. It would be ideal for any conference on strategy or operations focused on practical work. If that sounds like it might be up your street, again, let me know.
For reasons that will become obvious further down, today’s newsletter will be slightly different from the usual fare. If you are a premium subscriber, fret not: I will strive to make up for it and then some over the next two weeks.
Moving on to markets.
The market vitals
No market vitals today due to the one-off nature of the newsletter; there will be an in-depth analysis of the state of AI and tech investments next week, and a look at the seemingly underwhelming Nobel Prize in economics.
Moving on.
On writing and AI
Evolution or entropy?
As premium subscribers will know, today’s edition was supposed to be on tools and methods for decision-making under unresolvable uncertainty. However, due to an unanticipated flood of emails on the topic of how one might use AI to write, I will pause the regular programming for a quick interlude of sorts. Normal service will be resumed next week — and there will be a whole lot to unpick.
Anyway.
The background for our little excursion is the following: earlier this week, Axios released data claiming that (new) online articles generated by AI have begun to outnumber those written by humans. Axios, in typical clickbait fashion, thought this might be a landmark moment because
apparently researchers have “long feared” that a mountain of AI-made online content could make large language models “choke on their own exhaust and collapse”, and
Europol fell victim to what I call the Elvis fallacy* and projected that 90% of online content would be generated by AI as soon as next year.
Many observers seemingly concluded that this could only mean that people had stopped writing. Nonsense, of course. As Axios itself pointed out, the research was based on a very limited dataset (presumably skewed by AI news recaps and suchlike). The vast majority of published work remains human-written. Additionally, while it stands to reason that some people do rely on AI to provide ghostwriting services, most people likely use the tools to augment their work.
Which brings us to today and the question(s) that I was asked in one form or another: how should one use AI in writing? Should it be used at all, or does that inherently diminish the quality of the work?
To be perfectly honest, I am not entirely sure why I am the person to ask; I do not consider myself an authority on the topic, nor do I see why anyone would think of me as such. But I am here to help to the best of my abilities, however limited they may be, so I will nonetheless oblige and give it the old college try.
My background as a writer is quite extensive; I penned my first professional (using the term loosely — I got paid) piece three decades ago. Artificial intelligence was a non-factor for roughly 29 of those years. The idea that I would use a tool such as ChatGPT to write for me is thus entirely preposterous. Not a single word that you are currently reading, or have ever read in this space, was written by an AI.
Having said that, I see no need to vilify those who do use it. Far from everyone is a natural writer (I might even go so far as to say that few are), but many may yet need to write, whether for work purposes or a special private occasion. Others may want to write — indeed have something very important to say — but are simply incapable of formulating their thoughts on the screen; human frailties such as dyslexia might prevent it. To dismiss their work as intellectual entropy merely because they used an emerging technology would be positively Luddite.
Furthermore, not every use of artificial intelligence in writing involves it actually writing anything. A lot of authors use it to do research (which is fair, at least so long as one verifies the sources “manually”), for instance.
The problem, I feel, is that people are increasingly using AI to do the work (or parts of the work) for them without actually trying first. And so, they never learn to write properly; they never find their voice.
When I was in law school, a fair few of the students in my year were convinced that the key to being a good lawyer was to learn the specifics of rules, regulations, cases, and verdicts. This, of course, is also the image of the lawyer that we most often see portrayed on TV shows; consider the genius Mike Ross in Suits who, thanks to his photographic memory, is able to recall every obscure ruling from the last few millennia. But if such knowledge is how you define yourself as a lawyer, you will soon be replaced by an AI that knows even more. The actual key to being a good lawyer is learning how to think, which is also what law school was about. The parts are less important than the whole.
The same applies in the present context. The key to being a good writer goes beyond the words on the page; that is mere copywriting. It is about how one thinks, puts sentences and arguments together, and adds the excesses that, strictly speaking, are not needed but make whatever you have written uniquely yours.
The second question, then, becomes rather important. How can one use AI to improve one’s writing while maintaining one’s own reasoning, style, and voice? The answer is to do pretty much the opposite of what most people do.
If we ignore those who rely entirely on AI to do the writing for them, the typical user tends to ask the tool (henceforth ChatGPT, since it is the one with which I am most familiar) to improve what they have already written by suggesting linguistic improvements: superior turns of phrase, stylistic corrections, structural robustness, and so forth. Although there is nothing wrong with that, the exercise easily slides from proofreading into changing the original author's voice.
A far superior approach is to first ask the tool to viciously attack the text, as if you had dedicated it to insulting its children. The process that I use looks like this (for those who would rather script it, a sketch in Python follows the prompt lists):
Before each round of criticism, add the following prompt:
You are a critic, not a writer.
Do not rewrite; provide commentary, not example sentences.
Quote exact phrases when citing issues.
If uncertain, mark with [?] and propose what would resolve it.
Refer to paragraphs by [P#]. If my draft is not numbered, number it first.
For generic advice, add:
Critique my draft as a professional editor.
Return exactly these sections:
One-sentence verdict
Top five strengths (quote each)
Top five risks (quote each; say why it is a problem)
Actionable fixes (pair each risk with one concrete change; no example sentences, only descriptions)
Evidence and sourcing gaps (list claims + how to verify)
Sensitivity scan (flag stereotypes/sloppy proxies; suggest neutral swaps)
Keep list (lines NOT to touch; quote)
For specific advice, add:
Critique this draft through the lens of: [strand of thinking].
First, define the lens in two lines (criteria).
Then evaluate by criteria with:
Findings (quote passages)
Impact on reader (one line each)
Minimal fix (one line each)
Close with a five-point scorecard (criteria x/5).
Do not rewrite; provide commentary, not example sentences.
Example specifics (strand + criteria) include:
Rhetoric (ethos/logos/pathos): credibility, logic, balance.
Logic and fallacies: premises, inference validity, common fallacies.
Behavioral science: friction, cognitive load, loss aversion, social proof.
Data and evidence (newsroom): claim traceability, date/denominator/definition checks.
Legal risk (defamation/privacy/IP): assertions of fact, identification, harm, sourcing.
Reader experience (UX writing): scannability, hierarchy, action clarity.
Academic/peer-review style: novelty, method transparency, limitations.
Ethical persuasion: informed choice, respect for autonomy, fair framing.
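For those who would rather script the round-trip than paste prompts by hand, here is a minimal sketch. It assumes the OpenAI Python SDK (openai>=1.0) and an API key in the environment; the model name and the critique() wrapper are my illustrative assumptions, not a prescribed setup.

```python
# A minimal sketch of scripting the critique round-trip. The OpenAI Python
# SDK (openai>=1.0), the model name, and this wrapper are all assumptions
# for illustration, not part of my actual process.
from openai import OpenAI

CRITIC_PREAMBLE = """You are a critic, not a writer.
Do not rewrite; provide commentary, not example sentences.
Quote exact phrases when citing issues.
If uncertain, mark with [?] and propose what would resolve it.
Refer to paragraphs by [P#]. If my draft is not numbered, number it first."""

GENERIC_RUBRIC = """Critique my draft as a professional editor.
Return exactly these sections:
1. One-sentence verdict
2. Top five strengths (quote each)
3. Top five risks (quote each; say why it is a problem)
4. Actionable fixes (pair each risk with one concrete change; no example sentences)
5. Evidence and sourcing gaps (list claims + how to verify)
6. Sensitivity scan (flag stereotypes/sloppy proxies; suggest neutral swaps)
7. Keep list (lines NOT to touch; quote)"""

def critique(draft: str, rubric: str = GENERIC_RUBRIC) -> str:
    """Send the draft plus a critique rubric to the model; return its commentary."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; substitute whichever model you prefer
        messages=[
            {"role": "system", "content": CRITIC_PREAMBLE},
            {"role": "user", "content": f"{rubric}\n\nDRAFT:\n{draft}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique("Your draft goes here."))
```

To run a lens-based critique instead, swap GENERIC_RUBRIC for a string built from the specific-advice template above.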
There are, of course, multiple ways of achieving the same thing. For one, you may ask the tool to question your writing by telling it to ask the ten (or however many you prefer) Socratic questions that would most improve the overall argument, or have it red-team/blue-team the claim you are making (i.e., identify the strongest opposing case using verifiable lines of argument, and indicate which points survive intact, which need revision, and what evidence would decisively resolve the dispute). Just go with whatever floats your boat.
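To make the Socratic variant concrete, a wording along the following lines (my own, and merely a starting point) does the trick:
Ask me the ten Socratic questions that would most improve the overall argument of this draft.
Do not answer them; rank them by expected impact.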
The end result will be a stronger text, but more than that, the exercise will force you to think about what you have written. This will sharpen the mind, which in turn will sharpen the pen — not just for the present, but for the future.
And, well, that was that. I hope the off-piste was neither devoid of snow nor full of rocks.
Next week, premium subscribers will go back to decision-making.
Until then, have the loveliest of weekends.
Onwards and ever upwards,
JP
*The Elvis fallacy is the assumption that extrapolations made from limited data points will grow unconstrained. It comes from the fact that, in 1977, the year when Elvis Presley met his untimely demise, there were 177 Elvis impersonators in the world. By the year 2000, there were 85,000. If one extrapolates from these two data points — using exactly the kind of formula and methodology that is standard practice — one will find that by March 3, 2031, every single person on the planet will be an Elvis impersonator. Things change. Curves even out.
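For the numerically inclined, the extrapolation itself is a few lines of arithmetic. A toy sketch follows; the assumed population figure, and hence the exact date one lands on, are mine, and different inputs yield different (equally absurd) dates:

```python
import math

# Naive exponential fit through two data points:
# 177 impersonators in 1977, 85,000 in 2000.
n0, y0 = 177, 1977
n1, y1 = 85_000, 2000
growth = (n1 / n0) ** (1 / (y1 - y0))  # implied annual growth factor, roughly 1.31x

population = 8_000_000_000  # assumed world population; vary it and the date moves
years = math.log(population / n1) / math.log(growth)
print(f"Everyone an Elvis impersonator circa {y1 + years:.0f}")
```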

Thanks for the insights and tips, JP.
I use a writing tool that allows me to create RAG knowledge bases. I use these knowledge bases to have "experts" critique my writing.
For example, I have a "Dave Snowden" knowledge base where I've cribbed dozens of Dave's YouTube videos. When I'm writing about problem-solving in complex adaptive systems, I may ask the LLM "How might Dave Snowden critique these paragraphs?" or "How might Dave Snowden clarify this concept?". Usually, I get a useful response that helps me tighten up my writing...and I increase my understanding of the topic, as well.
I've found that LLM tools generate bland writing in general but may produce custom/specific insights that help me write better.