The System — II

The Median Trap

Leo Cunningham

You have been told that AI is your competitive advantage.

Here is the problem. You bought it from the same three vendors as the organisation across the street. You are feeding your data into the same consensus engines, asking the same questions, and receiving the same statistically probable answers. You are not pulling ahead. You are paying for the privilege of running in a tighter pack toward a median centre.

This is the Median Trap. And the data confirms you are already in it.

The Architecture of Average

To understand why, you need to understand what you actually purchased.

McKinsey's 2025 global survey identifies two distinct categories of AI adopter. Shapers — organisations that customise models with proprietary data, data that sits outside the public training corpus, to generate genuinely differentiated outputs. And Takers — organisations relying on off-the-shelf public models that optimise for the most probable, consensus-based answer.

Eighty-eight percent of organisations have adopted AI. The overwhelming majority are Takers.¹

This is not a criticism of your procurement decision. It is a description of your mathematical position. A Taker architecture — one that thrives on the average of everything that has already been done — places a governor on your ability to generate anomalous outcomes. You have not bought a weapon. You have bought a constraint dressed as one. You traded Alpha for Average.

By 2026, the consequence is visible in the market. Decision overlap between competitors in AI-saturated sectors has reached 92%. Product roadmaps are converging. Strategic recommendations are indistinguishable. The consensus engine is producing consensus strategy. This is not a coincidence. It is the logical output of identical inputs.

You call it AI adoption. The market calls it Alpha Decay.

The Clinical Evidence

The most important research currently available to any senior executive is not in a vendor deck.

Dell'Acqua and colleagues conducted a field experiment on knowledge worker productivity across inside-the-frontier tasks — those the AI handles well — and outside-the-frontier tasks — those requiring anomalous, non-consensus thinking. The finding is precise and should be read carefully.

For tasks inside the frontier, AI augments performance. For tasks outside it — the genuinely anomalous, the strategically decisive, the competitively differentiated — AI use produces a nineteen percentage point drop in accuracy.²

Read that again.

The tool you have deployed to find your competitive edge actively punishes the thinking required to find it. It does not simply fail to help. It makes the outcome measurably worse. The system optimises for the probable. The anomalous answer — by definition — is not in the training data. If it were, it would not be an advantage. It would simply be the norm.

Your AI is exceptionally good at writing the itinerary. It is a clinical liability when asked to find the move nobody else has made.

The Fragmentation Beneath

There is a further complication your dashboard is not showing you.

Fifty-four percent of your workforce is already running generative AI tools your organisation has not authorised. Among Millennials and Gen Z — your next generation of senior talent — that figure rises to 62%. Meanwhile, frontline AI usage has stalled at 51% and is actually falling, while leadership usage climbs to 78%. Only 25% of frontline workers report receiving adequate guidance on how to use these tools effectively.³

Your unified AI strategy is not unified. It is a leadership-frontline fracture producing ungoverned, unindexed, unauditable outputs at scale — while the people closest to your operational reality are either ignored or going around the system entirely.

The system is eating itself.

The Tax on the Median

McKinsey's EBIT data closes the argument. Eighty-eight percent adoption. Thirty-nine percent reporting any EBIT impact at all. A forty-nine-point gap between what was promised and what appears on the bottom line.¹

This is not a gap that will close with further investment in the same architecture. It is the measurable cost of Taker thinking at scale. Every pound of CapEx allocated to consensus-engine AI is not an investment in competitive advantage. It is a tax on the median. It accelerates your organisation toward the centre at increasing cost while the anomalous winner — the one who found the exit — moves in the opposite direction.

The real signal is never in the training data. If it were, it would already belong to everyone.

The Silent Test

Before you move to the next essay, take sixty seconds.

Open whatever AI tool your organisation currently uses for strategic output. Type this prompt exactly as written:

"What is the single most unconventional competitive move available to a company in our sector right now?"

Read the answer. Then ask yourself one question: could your three nearest competitors have received this same response?

If the answer is yes — or if you are not certain it is no — you have just measured your own position in the median. The system has told you something plausible, well-structured and strategically inert. It has told everyone else the same thing.
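The comparison need not stay subjective. As a purely illustrative sketch — Python, standard library only; the two recommendation texts are invented, and lexical similarity is a crude stand-in, not the methodology behind any figure cited above — you can put a rough number on how interchangeable two strategy outputs are:

```python
from difflib import SequenceMatcher

def overlap(a: str, b: str) -> float:
    """Crude word-level similarity between two recommendation texts (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

# Hypothetical outputs from the same consensus engine, prompted by two rivals.
ours = ("Expand into adjacent verticals and bundle "
        "AI-powered analytics with the core product.")
theirs = ("Expand into adjacent verticals and bundle "
          "AI-driven analytics with our core product.")

score = overlap(ours, theirs)
print(f"decision overlap: {score:.0%}")
```

A score near zero means the outputs genuinely diverge; a score near one means you and your competitor received interchangeable advice. The measure is deliberately naive — it catches wording, not strategy — but if even the wording converges, the diagnosis above has already been made for you.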

That is not intelligence. That is consensus dressed as insight.

The anomalous answer was never in the training data. Which is precisely why you will not find it here.

Diagnostic

Ask your strategy team to produce your three most significant competitive recommendations from the last twelve months. Then ask them to identify what proprietary, outside-the-frontier data those recommendations were built on — data that your competitors could not have fed into the same system.

If they cannot answer that question cleanly, your strategy is a Taker output. Statistically indistinguishable from the organisation across the street.

That is your position in the median.

The anomalous winner is somewhere else entirely.

¹ McKinsey & Company (2025). The State of AI in 2025: Agents, Innovation, and Transformation.

² Dell'Acqua, F., et al. (2025). Navigating the Jagged Technological Frontier: Field Experimental Evidence of AI on Knowledge Work.

³ Boston Consulting Group (June 2025). Beyond AI Adoption: Capturing the Full Potential.

Written beyond the air-gap.
