THE CAPABILITY'S HERE. IS ENTERPRISE READY?
- Feb 10

A little more than a month ago, we published our Productivity Clock hypothesis. It holds that, for two years, markets have been pricing GenAI on narratives. And that, as a result, 2026 is going to feature a lot of actual measurement.
If AI is found to be sufficiently productive, there will have been no bubble. But if enterprise uptake - and therefore spend - is slower than currently forecast, investors will not be able to justify current prices with the updated future cash flows of many AI-based stocks, which will lead to a significant correction in prices.
Two weeks later, David Solomon, CEO of Goldman Sachs, emerged as a supporter of this theory. He used his appearance on Prof G Markets with Scott Galloway to suggest that enterprise deployment would prove harder and slower than people expect, and that 'sometime in 2026' there could be a general realisation that getting AI into enterprises is more complex than first assumed.
But he also nailed the tension. On AI within Goldman itself, he framed the challenge as reimagining processes to create 'enormous operating efficiencies' - doing things 'with fundamentally less people' - while freeing up capacity to invest in growth.
So will the complexity win? Or might the sheer potential of the technology shine through? Here’s what we’ve learned this year so far.
The capability overhang is larger
There can be no doubt that, since December, the potential for ‘capability overhang’ in every enterprise has shifted into overdrive. The surface area of opportunity - the things human intelligence can do with digital intelligence - is greater than ever. And, it seems, continues to ramp up at an exponential rate.
The biggest wins this season were felt by software engineers.
Both Anthropic’s Claude Code and OpenAI’s Codex now understand larger codebases, are better at making complex updates, debug with fewer loops and generally behave less like talkative interns and more like junior engineers. Ones who can type at the speed of electricity.
It was enough for many commentators to declare that the 'Agentic Age' had arrived. Almost a year after it was first advertised - but still. Present and now correct. Many now view Claude Code as an inflection point as meaningful as the launch of ChatGPT in November 2022.
On the strength of a Christmas spent vibe coding - and a month continuing to build whilst back at work - use of AI is now expected to move away from the simple prompt-and-response format towards the orchestration of agents.
This may even become the primary method for interacting with computers in general. Claude has shown that once agentic AI has access to your computer, its files and tools, it can nestle down, make itself at home and get stuff done - all at the behest of the human in control. Simple commands lead to complex outputs - a well-researched and nicely presented Word document, a website, or an app you can launch in Apple's App Store. Brightbeam's David Downes built this in a few hours.
More than a coding tool
And the non-technical productivity geeks paying attention haven't been left out of this festival of AI either. The launch of Claude Cowork on Mac desktop has given non-technical users a system that gives them access to local files, can use agents and sub-agents to get work done, and will chunter happily away for hours before needing review and intervention. And given Claude Code is just a tab away in the interface, it has also given birth to a new generation of citizen developers. The real kicker? Cowork was built using Claude Code in around a week and a half.
All of which is before we mention the rapid adoption of Claude Skills by techies and laypeople alike. And then Claude Plug-ins - vast arrays of skills bundled to make vertical workflows execute more convincingly, from legal contract revision to finance, sales and many forms of marketing. All of which means Anthropic is building an army of white-collar workers inside a tool you can hire for €18/mo. The realisation that this was a step-change wiped a quarter of a billion off the stock value of software providers. In a single session, Thomson Reuters plunged over 15% and LegalZoom nearly 20%, while Salesforce is down 26% year-to-date, the Dow's second-worst performer.
Further per-seat licensing disruptions are expected, because Anthropic has built the scaffolding for autonomous agent teams that assist entire workflows - which necessarily triggers a recalculation of what per-seat SaaS licensing is actually worth. Agents can replicate much of the core functionality at zero marginal cost. The moat that protected legacy SaaS - proprietary data, switching costs and embedded workflows - suddenly looks rather shallow.
Digital Intelligence is cheaper than the proverbial peanut
Worse still for software providers, the cost of AI continues to move in the opposite direction. Frontier access is now cheaper overall, pushed down by cloud pricing pressure and falling effective compute cost per task. Whilst there are exceptions, the trendline remains clear: more work per token, more tokens per Euro.
The new models have improved reliability too - fewer hallucinated claims, better task performance, better instruction-following. They remain stubbornly imperfect, of course, which in practice keeps complex use cases human-supervised, especially where errors carry operational, legal, safety or reputational cost.
But autonomy length keeps advancing - longer chains of actions, better memory - even if enterprise adoption of these capabilities, outside the developer pool, hasn't obviously taken any major leaps. So what impact will this have on the Productivity Clock? Here are three potential scenarios:
1. The longer clock
We might see that, as capability keeps sprinting, capital keeps funding - even though enterprise adoption remains throttled by systems-of-record integration, change-control difficulties, procurement stubbornness, liability worries, audit-trail failures and plain old organisational metabolism. We could expect Cowork and plug-ins to make adoption easier, but not safe enough for regulated workflows. And yet, given the massive - and growing - capability overhang, investors may keep the faith because the tools clearly work, even if only developers and early adopters can consistently extract value. That cannot stave off the market becoming a weighing mechanism forever. But buying on the strength of future expectations and narratives might continue for a while longer.
2. Continued selective repricing
The repricing of SaaS stocks is the Clock exerting pressure sideways. Given the capability overhang - and the expectation of feature adoption - the future cash flows of other businesses now look less positive. Software spend will be replaced. This may indicate how the Clock plays out more generally. The firms that can convert AI capability into demand for models, infrastructure, tools and governance layers will be judged well-valued. The others may see their valuations fall like a stone until they can provide the evidence of adoption. In other words, the market may deflate rather than wholesale burst. This would be efficient capital allocation.
3. Investors lose faith
Given the growing capability overhang, investors decide the potential of AI is becoming unabsorbable by enterprise. Capability is rising, but if conversion continues to stall, the market's willingness to pay for tomorrow's cash flows with today's capital may evaporate.
This scenario is now easier to imagine because the tools are plainly disruptive and yet the evidence for adoption is making far fewer headlines. If the market concludes integration will lag capability for longer than expected, most valuations will reprice down to match.
Of course, this leaves an open question. What evidence of enterprise adoption have we seen in the first weeks of 2026?
We will examine this in our next chapter, as we continue to work out how the Productivity Clock will impact us all.