THE CHOICES AHEAD

How to convert capability into defensible productivity
As acknowledged in our previous chapter, the exponential improvement of AI collides with the ingrained pace of institutional change management. Linear systems do not spontaneously accelerate just because they need to keep up.
So whether an organisation successfully adopts AI - or not - depends on a series of choices. Choices that are often framed as operational or cultural, but are in fact strategic. Because they collectively decide whether AI's potential is converted into productive reality.
And these choices are being made now. In every boardroom. In every team with jurisdiction over its own work. They operate simultaneously at the level of individuals, teams and institutions. And together they determine who moves onto the high-performance branch of the K-curve that has been quietly forming underneath the AI debate.
Culture and judgements of value
The first and most consequential choice concerns what an organisation chooses to focus its people on.
As digital intelligence becomes cheaper and more abundant, the value of what is uniquely human shifts. The work that now limits progress is not active ‘doing’ - the AIs are more than capable of that. It is deciding what outputs are acceptable, defensible, complete and ready to ship. Value exists in noticing when an automated process is drifting. And in owning the outcome when something goes wrong.
At the individual level, this changes what competence looks like. Value accrues less to those who can produce outputs quickly and more to those who can - at pace - specify intent, review work critically, recognise failure when they see it and know when to escalate.
At the team level, a high rate of throughput becomes table stakes. Reliability at speed is what separates high performance from unacceptable performance.
At the organisational level, this is - therefore - a cultural inflection point. If judgement capabilities are not nurtured and encouraged to flourish alongside automation, the adoption of digital intelligence counts for little. Mistakes will be made. Exceptions will pile up. The pace and quality of the organisation's work will not improve. Without that training and cultural support, AI systems that were fast in isolation become brittle in production.
Getting this right is vital to beating the Productivity Clock. Because, it turns out, human judgement is what turns AI actions into defensible gains, scalable across the organisation.
Rebuilding workflows
The second choice is whether to rebuild workflows or simply decorate them.
Many early AI deployments succeed by automating parts of existing processes. A report is drafted faster. An analysis is generated more quickly. A decision is supported with more information. These gains are real. But they are likely to be limited.
Rebuilds take a different approach. They redraw the boundary between human and digital system deliberately. They decide what must be stable, what can change continuously, and where judgement sits. They reduce handoffs between human and digital intelligences, clarify ownership and make evaluation part of the process rather than a bolt-on to AI output.
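To make that concrete, here is a minimal sketch of one rebuilt step, in Python. Everything in it - the draft generator, the acceptance checks, the escalation path - is a hypothetical stand-in, not a prescribed design. The point is the shape: evaluation runs inside the workflow rather than being bolted on afterwards, and there is exactly one clearly owned escalation point.

    from dataclasses import dataclass

    @dataclass
    class CheckResult:
        text: str
        passed: bool
        reasons: list[str]

    def evaluate(text: str) -> CheckResult:
        # In-process checks: each rule encodes an acceptance criterion
        # the team has agreed in advance.
        reasons = []
        if len(text) < 200:
            reasons.append("too short to be complete")
        if "TODO" in text:
            reasons.append("unresolved placeholders")
        return CheckResult(text, passed=not reasons, reasons=reasons)

    def generate_draft(brief: str) -> str:
        # Stand-in for the model call; any text generator fits here.
        return f"Draft responding to: {brief}"

    def escalate_to_owner(brief: str, reasons: list[str]) -> str:
        # Stand-in for the escalation path: a review queue, a ticket, a person.
        return f"ESCALATED for human review: {', '.join(reasons)}"

    def produce_report(brief: str) -> str:
        draft = generate_draft(brief)   # the AI does the 'doing'
        result = evaluate(draft)        # evaluation inside the workflow
        if result.passed:
            return result.text
        return escalate_to_owner(brief, result.reasons)  # one owned escalation point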
For individuals, this means treating AI as a component you can redesign processes around, not just a tool you use. For teams, it means mapping where judgement is required and restructuring workflows and deployments accordingly. For organisations, it is often the difference between small incremental gains and larger ones that can be scaled safely.
Because the payoff is not just speed. It is stability. And stability is what allows gains to be carried forward into forecasts rather than staying trapped in anecdotes.
Treating AI as infrastructure
The third choice is whether AI remains a tool or becomes infrastructure.
Tools can be informal and user-led, which is appropriate in many situations. But infrastructure is necessarily accountable and governed. The difference matters because productivity gains can only be scaled reliably without undue risk - and shown to make an impact at the enterprise level - once AI enters systems of record.
And when AI becomes infrastructure, new requirements appear. Ownership must be defined. Evaluation criteria must be explicit. Audit trails become essential. Escalation paths must exist.
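A register entry can make those requirements concrete. The sketch below is illustrative only - the field names and the example service are assumptions, not a prescribed schema - but it shows the minimum an infrastructure-grade AI system would declare up front.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AIServiceRecord:
        name: str
        owner: str            # a named, accountable owner - not a team alias
        eval_suite: str       # explicit, versioned evaluation criteria
        audit_log: str        # where every input and output is recorded
        escalation_path: str  # who is called when the checks fail

    # Hypothetical example entry.
    summariser = AIServiceRecord(
        name="claims-summariser",
        owner="head-of-claims@example.com",
        eval_suite="evals/claims-summaries-v3",
        audit_log="s3://audit/claims-summariser/",
        escalation_path="claims-oncall",
    )

If any field cannot be filled in, the system is still a tool, not infrastructure.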
This is where public AI benchmarks give way to narrow, task-specific evaluations. It is also where cost considerations split the stack. Cheaper deployments may be suitable for bounded tasks. Trusted systems become mandatory where risk is material.
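A narrow evaluation can be very small. The sketch below assumes a hypothetical model callable and a handful of graded cases drawn from the actual task; the 95% bar and the routing rule are illustrative, not recommendations.

    # Graded cases from the real task - nothing broader than the job itself.
    CASES = [
        ("Refund request: item arrived damaged, within window", "approve"),
        ("Refund request: 120 days after purchase, 90-day policy", "reject"),
    ]

    def pass_rate(model) -> float:
        # Score the model on the task it will actually perform.
        hits = sum(1 for prompt, expected in CASES if model(prompt) == expected)
        return hits / len(CASES)

    def choose_model(cheap, trusted, risk_is_material: bool):
        # Cheaper deployments may clear the bar for bounded tasks;
        # trusted systems stay mandatory where risk is material.
        if not risk_is_material and pass_rate(cheap) >= 0.95:
            return cheap
        return trusted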
In other words, AI needs to be part of infrastructure to show up in budgets, procurement and operating metrics - the places where belief is weighed. If AI isn't part of infrastructure by the time the Productivity Clock chimes, the value of the technology will be difficult to prove.
Disposable software and two speeds
The fourth choice - which also decides whether AI is successfully adopted - concerns how organisations respond to the falling cost of building software.
Vibe coding has made it economically rational to create small, specific tools for short-lived needs. This is a genuine shift. It allows individuals to remove friction quickly and teams to experiment without waiting for central capacity.
The risk is proliferation. Unbounded tools create shadow systems that cannot be audited or maintained. The opportunity lies in separating these from infrastructure. Organisations that succeed here accept a two-speed model. Some tools are intentionally disposable and tightly bounded. Others are hardened, monitored and governed.
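What does 'intentionally disposable' look like in practice? Something like the sketch below - a hypothetical one-off script for a single meeting, read-only, with its expiry written at the top. The file format and date are illustrative assumptions.

    import csv
    import sys

    # DISPOSABLE TOOL - delete after 2026-06-30 (date illustrative).
    # Reads one exported CSV and prints counts per status for one meeting.
    # Read-only, no shared state, deliberately not wired into anything else.

    def main(path: str) -> None:
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        counts: dict[str, int] = {}
        for row in rows:
            counts[row["status"]] = counts.get(row["status"], 0) + 1
        for status, n in sorted(counts.items()):
            print(f"{status}: {n}")

    if __name__ == "__main__":
        main(sys.argv[1])

The moment a tool like this is shared, scheduled or depended upon, it has crossed the boundary and belongs on the governed track.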
For individuals, this means building small tools for repeated tasks while respecting boundaries. For teams, it means agreeing what is experimental and what must be stabilised. For organisations, it means creating a clear path from prototype to governed system.
The benefit is speed without loss of control. The cost of ignoring this choice is a growing backlog of unowned artefacts that make systems harder to explain and increase risk.
Skills as a moving target
The fifth choice is how skills are understood and developed.
AI has turned capability into a moving target. The valuable skill is no longer mastery of a static tool, but the ability to recompose workflows as tools change. That ability varies widely. The gap between those who adapt continuously and those who do not widens quickly, even when access to tools is equal.
At the individual level, this favours people who cultivate judgement, taste and learning speed. At the team level, it favours groups that share artefacts, review work openly and reduce dependence on a few power users. At the organisational level, it favours structured learning tied to outcomes rather than credentials.
This shift is amplified by multi-modality. As AI moves beyond text into images, voice and system interaction, more roles are affected. Skills decay faster. Learning loops matter more.
Organisations that cannot sustain learning also cannot sustain gains.
Closing the loop
Taken together, these choices form the conversion pathway.
They determine whether local improvements harden into organisational capability, whether speed survives scrutiny, and whether intelligence becomes legible enough to justify continued investment. They also determine where value concentrates and where friction accumulates.
The Productivity Clock is the external forcing function. It does not mandate any particular outcome. It simply measures what these choices produce. By the middle of 2026, we expect the results will be visible in guidance, budgets and operating metrics.
The next part of this series turns to the limits that remain even when the right choices are made.