WHAT WILL THIS YEAR BE, REALLY?
- Jan 6
Updated: Apr 5

Never mind the noise, let's find the signal
If you’ve been taking heed of the slew of commentary over the last few weeks, you’ll be acutely aware that accelerationists believe it’s going to be the year that:
- We all become vibe coders and use agents to get our work done;
- Enterprises face the reckoning of integrating AI meaningfully - or accepting their terminal fate;
- The economy looks even more ‘K-shaped’ - those already prospering will accelerate far more quickly, whilst those struggling get left a lot further behind; and
- There’s potential for an AI backlash which reverberates economically, socially and politically.
All these predictions are worthy of debate. But they are also - in the main - tactical hops and pops. Do they really help us understand the direction of travel, the underlying metatrends that will write the sweep of history this year? What happens if we go deeper and try to grasp the lasting impact 2026 is likely to leave? How will this year be perceived in 2030? Like every good techno-economic-socio hypothesis rooted in reality, the answer to those questions needs to start in a much earlier, far simpler time.
In this instance, February 2024. When Klarna announced what sounded like the future arriving on schedule.
The Swedish fintech said its new AI customer service assistant had handled 2.3 million conversations in its first month, covering roughly two-thirds of all customer service chats. The system spoke 35 languages. Average resolution time dropped from 11 minutes to two. Executives claimed the tool was doing the work of 700 full-time human agents.
This was not framed as a lab experiment, a digital innovation programme or a proof-of-concept quietly running in parallel. It was a production-grade shift in running a critical function.
For leaders accustomed to signing off systems that touch customers, regulators or balance sheets, it was immediately clear that Klarna was not talking about potential. It was talking about replacement.
Sebastian Siemiatkowski, Klarna’s co-founder and chief executive, described the rollout as a glimpse of what modern organisations would soon become: leaner, faster and structurally re-shaped around machine execution. Klarna estimated the move would improve annual profits by $40 million. The company had already frozen hiring outside engineering roles and allowed attrition to shrink its workforce by more than 20 percent.
The logic was unambiguous. And the numbers were stark.
Klarna looked like evidence that the promised transformation of work - even in complex, regulated industries - was finally becoming operational.
But then, slowly and quietly, the narrative changed.
By the spring of 2025, Siemiatkowski began to soften his language. In interviews, he acknowledged that Klarna had ‘over-indexed’ on AI. While the bots were cheap and fast, the quality of service was slipping in ways that mattered. Customers dealing with complex disputes struggled to reach a human. Escalation paths were unclear. Edge cases accumulated.
Klarna began to re-hire customer service agents.
Not a return to the old model. Instead, the company introduced a flexible, remote-first human workforce designed to handle cases where judgement, context and accountability could not be safely automated. AI remained central, but it no longer operated alone.
What Klarna discovered was something familiar to leaders in highly governed environments: throughput should not be mistaken for productivity.
Automation can succeed on the median case and fail at the edges. Cost reduction, on its own, is an insufficient organising principle for systems that must withstand scrutiny, handle exceptions and preserve trust over time.
For organisations that operate under regulatory oversight, contractual obligation or public accountability, the nuance which emerges from complexity has to be baked into structure.
It is, to a large extent, why Brightbeam exists. Our understanding of the models and what they can safely achieve when faced with the underlying tasks inside complex sectors is the starting point of our competitive advantage. And with that in mind, here’s our surface-level read of 2026:
The signs of progress will indeed become even more unmistakable. Model capabilities will advance at a pace that would have seemed implausible two years ago. Vendors will continue to promise end-to-end automation. They won’t deliver entirely, but the cost of effective intelligence will plummet.
Alongside these signals, a deeper set of questions will emerge inside the boardrooms, operating committees and audit reviews. As well as inside the minds of investors.
Where, precisely, is the value being realised?
Which gains are durable rather than ephemeral?
What risks are being introduced at the edges?
Who is accountable when automated decisions fail?
Complete answers to these questions only surface once systems are embedded into real operations, under real constraints, with real consequences. And given that not all organisations are at that point, the discussion of digital intelligence in 2026 has been somewhat misframed.
What will likely unfold this year is neither a clean breakthrough to AI utopia nor an overwhelming backlash.
The enduring stories told of 2026 will likely hinge on the conversion problem. The problem of turning AI potential into AI advantage by making digital intelligence second nature.
Everyone agrees that the last three years have delivered an extraordinary expansion in what machines can do. And that organisations and economies have struggled to convert it into broad, defensible productivity. At least without creating new forms of fragility.
And we also know why. Digital intelligence is advancing exponentially. But the underlying technological, organisational, societal, political and economic systems it is being deployed into are not. Their rate of change and response are far more constrained by time, structure and choice.
Enterprises evolve through annual planning cycles, regulatory reviews, training programmes and organisational inertia. Labour markets adjust slowly. Governments and infrastructures are slower still. The result is a growing tension between the speed of technical possibility and the pace of institutional change.
Which is why, in 2026, we will see this tension play out, and the world it shapes will become more visible. And whilst it is a gross simplification, there is an underlying truth in the idea of the K-shaped economy. A truth which is also somewhat fractal, repeated on at least four levels. We can expect a bifurcation between those capturing disproportionate value from AI and those struggling to realise any.
It will be true of the global economy - the US and China will likely benefit as other nations stagnate.
We will see it within organisations too. Disproportionate success will come to those like Klarna who have grappled with production issues and found a way through.
Zoom in to those successful enterprises and the teams which continue to evolve and master digital intelligence will be ever-more starkly contrasted with the others who are falling behind.
Which points to the fourth level as well - to each of us as individuals. Despite access to the same tools, the gap between those of us who can direct and evaluate machine output - and those who remain focused on execution - will continue to widen. And the camp we sit in will become increasingly obvious to everyone around us.
However, want some good news? The position an entity occupies on the K is not destiny. It is an emergent property of how our existing systems are responding to abundance. And wherever a nation, enterprise, team or individual lies on the curve, the window of opportunity to improve remains. For a little while longer at least.
But 2026 may be the year in which the window of abundant and easy opportunity closes. The market appears close to the point where it stops voting for the likely outcomes of digital intelligence - and instead decides there is enough data to weigh the mid-term consequences more precisely.
Because, during H1, many large organisations will have been running serious AI initiatives for close to two years. Even though enterprises move linearly rather than exponentially, 24 months is long enough for productivity claims to move from slide decks to earnings reports.
And if those gains do not arrive, the environment will change. Investor patience will narrow. Narratives will have been tested against results and found wanting. Assets will be repriced. But if they do arrive? Then the world will continue as before. Backed by hard evidence.
Which may well be the long-term impact of 2026. It may be seen as the year in which the early impact of digital intelligence became verifiable. Or not. The year in which the question shifted from ‘what can AI do?’ to ‘what has AI changed?’ The year that shaped investment, regulation and organisational design for the rest of the third decade of the 21st century.
The chapters of this essay which follow in the coming days and weeks are written in that context.
They are not a forecast of technological milestones. They are an attempt to clarify what 2026 is testing, where the pressure points lie and why the next twelve months may provide answers to questions we’ve been asking for the last three years.
To do that, we'll need to be precise about what, exactly, is about to be measured.
And that is the role of the Productivity Clock.
