
THE ORGANISATIONAL PHENOTYPE

  • Jan 20
  • 6 min read

How cheap intelligence reveals cultural truth


Our previous chapter framed the problem of AI adoption as a set of choices. Because choices are where agency lives. And where responsibility can be assigned. And where the potential of AI either converts into real gains, or leaks away.


Even so, limits remain. Even when the choices are sensible. Or indeed optimal.


And these limits sit in places less accessible to rapid change: inside strategy, inside tooling, inside programme plans. They sit inside the organisation’s default behaviour.


In biology, the word phenotype describes what you can see - the set of observable traits of an organism. It is the result of the interaction between its genetic makeup (genotype) and the environment in which it exists. The phenotype includes physical appearance, biochemical properties, physiology and behaviours.


Organisations have a phenotype too. It shows up in many ways, including:


  • How decisions are made

  • How work is reviewed

  • How exceptions are handled

  • How ownership is assigned

  • How truth travels when it is inconvenient


Call these cultural traits if you like. Call them operating norms. Call them the organisation’s muscle memory. Whatever the label, an organisation’s phenotype will, this year more than any before, strongly influence whether digital intelligence becomes an advantage. Or an unusually expensive way of creating more things to argue about.


Why phenotype matters more once output becomes cheap


In the early stage of a technological shift, organisations can hide behind effort.


When work is expensive, slow and labour-intensive, many failures look like busyness. Ambiguities can look like complexity. And weak decisions are easy to lose or disguise.


Then cheap intelligence arrives.


And suddenly, drafting is easy. Summarising is easy. Coding is easier. Producing options is embarrassingly easy. And that changes the location of the bottleneck.


Because the moment output becomes cheap and plentiful, the scarce resource becomes the judgement to know when ‘this is the version we stand behind’.


Which leads us to conclude that the organisation’s phenotype becomes the limiting reagent. Digital intelligence is more flexible and bountiful than the other constraints we face.


This is already visible in microcosm. Previously, we described the now-familiar pattern where local productivity rises, but the signal weakens as it reaches the interfaces that matter: finance, compliance, audit and risk. The stall happens when nobody can responsibly attest to stability, ownership and defensibility at the point of commitment.


When output is scarce, those interfaces are occasional choke points. When output is abundant, they become frequent choke points.


A useful mental model here is “review debt”.


AI can generate ten outputs before lunch. You cannot review ten outputs before lunch. So the debt accumulates. Quietly. Politely. In shared folders and ticket queues and ‘I’ll circle back’ promises.


And once review debt exists, it behaves like any other debt. It pulls the future into the present and then demands interest.


Klarna’s lesson was never about chatbots


The Klarna story unfolded because the switch from human agents to chatbots sounded like a clean substitution. A critical function handled at scale. Large numbers. Fast resolution. The kind of claim that seems to settle an argument.


Then reality asserted itself in the unglamorous places: quality, escalation paths, edge cases, accountability. The company did not abandon AI. It rebuilt the human layer for exceptions and judgement.


That is phenotype, made visible. The organisation had to decide what counted as acceptable work, which cases were safe to automate and how humans re-enter the loop without chaos.


What Klarna learned is that, because AI works at speed, it increases the surface area of organisational behaviour.


It creates more outputs, more decisions, more opportunities for mistakes and more chances for errors to compound. That expansion is why capability can rise while confidence stays flat. It is also why ‘we saved time’ fails to translate into ‘we can put it in the forecast’. We can be busy dealing with the extra work while lacking confidence that the benefits are real.


AI tightens coupling, whether you like it or not


When AI sits at the edge, as a personal tool, its failures are mostly private. You notice. You fix. You move on. The blast radius is small.


When AI becomes infrastructure, its failures become systemic. A summary feeds a decision. A decision triggers an action. An action touches a customer. A customer triggers a complaint. A complaint triggers an investigation. An investigation triggers a new control. A new control slows the whole machine.


This is where older organisational literature becomes unexpectedly useful.


Charles Perrow’s ‘normal accidents’ thesis describes how interactive complexity and tight coupling make certain failures effectively unavoidable, even when people are competent. As organisations automate more and more work, coupling increases. The relevant question shifts to detection, containment and resilience.


A counterweight comes from high reliability organisation research. Weick and Sutcliffe describe ‘mindful organising’ as practices that keep complex systems resilient: preoccupation with failure, reluctance to simplify, sensitivity to operations, commitment to resilience and deference to expertise. That last one is crucial in AI-shaped work, because the right expert is often the person closest to the edge case, not the person highest in the hierarchy.


This connects directly to the phenotype list above.


If your organisation’s default move is escalation by rank, AI will increase your queue times.


If your organisation’s default move is deference to expertise, AI will increase your throughput without destroying reliability.


Same industry. Same outputs. Same models. Same tools.


Different phenotype? Different outcome.


Legibility is the price of scale


Scaling AI inside a complex, regulated organisation has a recurring obstacle that sounds bureaucratic, until you remember why it exists.


Boards, auditors, regulators and procurement functions require legibility. Traceable inputs. Stable definitions. Clear ownership. AI complicates this because its outputs can sound confident even when wrong - and its behaviour can shift as models update.


James C Scott’s work on legibility is often applied to states, but it bites hard in enterprises too. The centre wants simplified representations so it can govern from a distance. The cost is loss of local practical knowledge, the stuff that does not compress neatly into dashboards.


This produces a genuine design constraint.


Too much legibility and you create a brittle bureaucracy that kills learning. Too little legibility and AI stays trapped in local experiments, because nobody senior can sign off.


The fit organisation builds enough ‘proof machinery’ to clear gates, while keeping room for frontline discretion and a sane exception process.


That is phenotype again. It is also why our discussion of ‘AI as infrastructure’ matters so much. Infrastructure demands accountability, explicit evaluation criteria, audit trails and escalation paths. Without those, belief stops clearing investment gates.


Selection pressure enters through boring doors


When people talk about ‘AI disruption’, they often picture a dramatic front door: a mass layoff, a sweeping automation announcement, a sudden pivot.


In real organisations, selection pressure enters through the side doors. Finance selects for things that survive forecast scrutiny and budget gates. Risk and audit select for traceability, repeatability and ownership. Customers select for reliability and escalation paths that still work when everyone is tired and the system is under strain.


Meanwhile, labour markets select for teams that make judgement teachable rather than hoarded. Vendors select for workflows that map cleanly onto their product shape. Regulators and litigators select for explainability after things go wrong.


And financial markets select for narratives that can survive contact with numbers.


These forces do not care about your internal enthusiasm. They care about whether the system can be defended when it matters.


This is why ‘the organisation learns whether it can change itself’ is the quiet subtext of the whole Productivity Clock argument. Two years is often enough time for a pilot to either harden into capability, or collapse into a permanently interesting experiment. AI projects struggle when ownership, evaluation and the definition of ‘good enough’ remain unresolved at the point of production.


The hidden twist: AI makes culture measurable


Culture, of course, is famously slippery. Everyone talks about it. Few people can point to it without resorting to slogans or folklore.


Cheap intelligence changes that.


Because when output becomes abundant, the organisation leaves traces. You can see how many drafts are produced per decision. You can see how long reviews take. You can see how often exceptions occur, and whether they cluster around particular workflows.


And you can see whether escalation paths work, or whether they create a polite traffic jam. Whether ‘deference to expertise’ is real, or whether it is just a line in an onboarding deck.


In other words, you can see the phenotype. The organisation’s real habits stop hiding behind effort. They have to stand in daylight.


The bridge to conclusion


Our previous chapter argued that conversion depends on choices. That remains true.


This chapter added the next layer: choices operate inside an environment that selects for certain behaviours and punishes others.


The Productivity Clock is the forcing function that turns this from an internal debate into an external verdict. By the middle of 2026, we expect the results to show up in guidance, budgets and operating metrics.


And once that happens, something more interesting than ‘some win, some lose’ begins to emerge.


Selection will not produce a single ‘best’ organisational form. It will produce viable forms that diverge.


So in our final chapter we’ll name two organisational species we expect to become more common in a cheap-intelligence environment. And explain why their traits reinforce themselves in ways that make copying difficult.


Ultimately, we think it’s the most defensible way to describe what 2026 will do to organisations as the Productivity Clock starts ringing.

