January 27, 2026
Most AI transformations aren’t failing for technical reasons

Three things I read recently clicked together.
One was research from Boston Consulting Group.
Another was Anthropic’s recent overhaul of how they train Claude.
The third was McKinsey’s work on building leaders and capabilities in an AI-enabled world.
Taken together, they point to the same issue.
BCG highlight what they call the 10–20–70 rule for AI success.
- 10 percent of effort goes into algorithms
- 20 percent goes into data and technology
- 70 percent goes into people, processes, and ways of working
Most organisations do the opposite. They buy tools, run pilots, and hope the transformation follows.
The pressure is mounting. Many CEOs now say they are personally accountable for AI outcomes, and a growing number believe their role is at risk if AI does not deliver material value within the next couple of years. Yet only a small minority of organisations have moved beyond proof of concept into sustained impact.
Anthropic’s work offers an interesting parallel.
Rather than refining rules or adding guardrails, they rewrote Claude’s underlying framework. The shift was from telling the system what to do to explaining why certain principles matter. Safety and human oversight first. Ethics next. Then rules. Then usefulness.
That distinction feels important.
McKinsey’s research on leadership and capability building makes a similar point. Technology on its own changes very little. Value comes when organisations redesign decision-making, accountability, and operating models around new capabilities, rather than bolting them onto old ones.
The pattern is consistent across all three:
- Organisations deploy tools, hope for transformation, then wonder why nothing changes.
- AI systems follow rules, struggle with edge cases, and fail to generalise.
The alternative is harder, but clearer.
Redesign how work actually gets done.
Clarify where judgement sits.
Rebuild workflows, handoffs, and decision rights.
Then let the technology amplify that.
BCG describe a bank that did not just add AI to lending. They redesigned the end-to-end process around the opportunity. The value came from reorganising the work, not simply automating steps.
This also connects to something I wrote about recently on awareness and metacognition. If we do not understand how we think, decide, and work today, AI simply accelerates existing patterns.
AI is not primarily a technology deployment problem.
It is an operating model and leadership problem.
I am curious where others are seeing this. Where has AI forced a rethink of how work is organised, not just how fast it gets done?


