HOW ENTERPRISES ADOPT AI: PART 1

A research-grounded approach to training, coaching and culture change
The Core Thesis
There is no question that Enterprise AI is an information problem - people need to know how to use AI. But is that sufficient? Is it also a coordination problem, a cultural issue? Does it need to be a shared experience - common sense even?
In our experience, most training programmes don't think this through, confining themselves to educating individuals about AI's capabilities.
At Brightbeam, however, we contend that this approach is insufficient. It ignores the human and collective prerequisites for 'mass' adoption.
In the bigger scheme of things, few new skills and practices ever take hold across an enterprise. Not least because, to do so, hundreds of people must independently converge on compatible behaviours. And whilst 'the business' can guide how people work, no single authority can dictate exactly what those behaviours will be. Which is perhaps why Accenture has recently made AI use a prerequisite for promotion: if you can't dictate behaviour, you can at least attach blunt incentives to it.
If, however, your corporate culture doesn't allow for such direct and bald incentive structures, how can your colleagues be encouraged to adopt AI? And even if you could enforce adoption, you may well believe that organic uptake would be healthier overall than a mandate-and-sanction approach.
For one better answer, we look towards Thomas Schelling's focal point theory - developed during Cold War nuclear strategy and validated across six decades of experimental research. It provides a rigorous framework for understanding why some things are adopted en masse - whilst others are not. It also shows us how to engineer the conditions under which adoption occurs - loading the dice in our favour without resorting to cajoling or coercion.
Our experience at Brightbeam confirms this is not only possible - but desirable and repeatable. And that once a focal point has formed, adoption becomes self-sustaining. L&D teams report no longer 'feeling like we're pushing water uphill'.
Which is satisfying for all concerned.
The Research Foundation
1. Focal Point Theory - The Original Insight
Thomas Schelling introduced focal points in The Strategy of Conflict (1960) to address a specific puzzle: how do people coordinate their actions when they can't communicate? His famous experiment asked students where they'd meet a stranger in New York City with no prior arrangement. Most converged on noon at Grand Central Terminal - not because it offered any practical advantage over a thousand other locations, but because its cultural salience made it the obvious choice for anyone reasoning about what the other person would choose.
Schelling identified two properties that make a focal point work: prominence - the option stands out from alternatives - and uniqueness - it avoids ambiguity. A focal point doesn't need to be 'best.' It needs to be shared knowledge that is obviously different from everything else.
The deeper insight, and the one most relevant to enterprise coaching, is that focal points depend on a common cultural context. Schelling observed that what counts as salient depends on the relationship between the people trying to coordinate, their common experiences and their shared frames of reference. Judith Mehta's later experimental work under controlled conditions with monetary incentives confirmed this: when participants had an incentive to coordinate, they shifted from personal associations to culturally salient choices.
This tells us something important for training design. The raw capability to use AI isn't what creates coordination. What creates coordination is a shared understanding of how AI fits within a particular organisational context - a shared insight that everyone knows and believes is true.
Sugden and Colwell's analysis further clarified that focal points operate through recursive reasoning: 'I choose X because I think you'll choose X because you think I'll choose X.' This recursive quality means focal points are self-reinforcing once established, and self-defeating when absent. There's no middle ground. Either the coordination pattern catches, or it doesn't.
Which means one business may try to adopt superior technology and fail, whilst another creates a focal point around inferior technology and succeeds.
2. Common Knowledge - The Emperor's New Clothes Problem
Research by Thomas et al connects focal points to a deeper mechanism: common knowledge. Coordination requires that everyone knows the relevant information, everyone knows that everyone knows, everyone knows that everyone knows that everyone knows - and so on, infinitely. Mere shared information isn't enough. Pinker et al demonstrated that people's sensitivity to common knowledge is ubiquitous in social life, appearing across diverse cooperative opportunities.
Michael Suk-Young Chwe, in Rational Ritual, makes this concrete: 'Successful communication sometimes is not simply a matter of whether a given message is received. It also depends on whether people are aware that other people also receive it.'
This maps directly to the enterprise training context. A CEO sending an email about AI strategy creates shared information. An all-hands meeting - where the CEO demonstrates using AI in front of the whole organisation - creates common knowledge. The difference isn't the content of the message - it's whether everyone can see that everyone else has seen it.
The Koessler research on common knowledge and interactive behaviours confirms that face-to-face communication and public forums create common knowledge in ways that private channels cannot. When people can verify that others have received, understood and acknowledged the same information, coordination becomes possible. Private Slack messages, individual training sessions and one-to-one coaching conversations don't create common knowledge. Public demonstrations, shared workshops and visible practice do.
The emperor's new clothes story - a coordination game - illustrates the fragility. Everyone privately thought the emperor was naked, but nobody could act on that knowledge because it wasn't common knowledge. The boy's exclamation changed nothing about what anyone believed. It changed everything about what everyone knew that everyone else knew. In enterprise AI adoption, the equivalent is the moment when using AI stops being something people do privately and becomes something everyone can see everyone else doing.
3. Shared Mental Models - The Cognitive Infrastructure
The shared mental models (SMM) literature, synthesised across decades by Cannon-Bowers, Salas et al, provides the cognitive science foundation. SMMs are the overlapping knowledge structures that enable team members to anticipate one another's needs and coordinate their actions without explicit communication.
Mathieu et al, testing 56 dyads in flight-combat simulation, established that both task-based and team-based shared mental models relate positively to team process and performance. Team processes fully mediated the relationship between mental model convergence and team effectiveness. This means shared understanding doesn't directly improve performance - it improves performance by improving the quality of coordination.
Stout, Cannon-Bowers, Salas, and Milanovich found that effective planning increased the SMM among team members, allowed them to use efficient communication strategies during high-workload conditions - and resulted in improved coordinated team performance. Planning doesn't just produce plans. It produces the shared cognitive architecture that makes coordination possible when plans break down.
The SMM literature draws an important distinction between (i) taskwork models - shared understanding of how the task works; and (ii) teamwork models - shared understanding of how the team works.
Both matter, but for coordination they serve different functions.
Taskwork models reduce the 'error bars' - people agree on what counts as good work.
Teamwork models reduce coordination costs - people can anticipate what their colleagues will do without asking.
For enterprise AI training, this distinction maps cleanly:
Taskwork SMM for AI means shared understanding of what AI can do, what it can't do, when to use it, what 'good' AI use looks like, and what the risks are. This is what most training programmes deliver - and it's necessary, but not sufficient.
Teamwork SMM for AI means shared understanding of who is using AI for what, how AI-generated work flows through the organisation, what the norms are around disclosure and verification, and how AI use affects everyone else's work. Almost no training programme addresses this, and without it, the Schelling point can't form.
DeChurch and Mesmer-Magnus's meta-analysis confirmed that the way shared mental models are measured moderates the observed relationship between SMMs and outcomes. Compositional measures - which assess whether individuals' models are similar in structure - show stronger effects than compilational measures. Translation: it matters more that people think about AI in similar ways than that they all know the same facts about AI.
Reynolds and Blickensderfer's work on Crew Resource Management in aviation reinforces this. When instruction shifted from purely technical skills to a combination of technical and teamwork skills, coordination improved. And where coordination is high-stakes - as in aviation, or in companies needing to adopt AI to deliver a new operating model - shared mental models aren't a 'nice to have.' They're the mechanism that prevents catastrophe.
The emerging literature on shared mental models in human-AI teams suggests that as organisations deploy AI agents alongside human workers, the shared mental model must now include a common understanding of what the AI is doing, how it reasons and where its outputs should be trusted or questioned.
This is uncharted territory for most organisations, and it creates a new class of coordination failure: people may coordinate well with each other but miscoordinate with the AI, or vice versa.
4. Team Reasoning - The 'We-Frame' Shift
Michael Bacharach's theory of team reasoning, developed through the 1990s and published posthumously in 2006, provides the mechanism that connects Schelling's focal points to actual coordination in organisations.
Standard game theory assumes individuals reason in 'I-mode': 'What do I want, and what should I do to achieve it?' Bacharach demonstrated that in coordination games, successful players shift to 'we-mode': 'What do we want, and what should I do to help achieve it?' This shift - which he called an 'agency transformation' - doesn't require altruism. It requires group identification: the individual conceives of the group as a unit of agency, acting as a single entity in pursuit of a shared objective - and considers themselves part of that entity.
Gold and Sugden (2007) showed that this shift is triggered not by rational calculation but by the salience of a social category. People don't decide to team-reason. They find themselves team-reasoning when the situation makes group membership salient. Hindriks refined this further: group identification is triggered by common knowledge of shared group membership, not by evidence of cooperative behaviour.
This has profound implications for training design. You can't instruct people to adopt a 'we-frame' for AI adoption. You can create conditions that make the 'we-frame' salient. Shared experiences, common language, visible group membership and collective identity all trigger the shift from I-mode to we-mode reasoning.
The experimental evidence from coordination games is clear: when people identify with a group, they coordinate on equilibria at rates that standard game theory can't predict. When they don't identify with the group, coordination failures abound - even when the 'right' answer is obvious to an outside observer.
For enterprise AI adoption, this means the question isn't 'Does each individual know how to use AI?' but 'Do the people in this organisation see themselves as a group that uses AI?' The first is an information problem. The second is an identity problem. Training programmes that solve the first while ignoring the second will consistently underperform.
5. Organisational Schelling Points - The Komoroske Model
Alex Komoroske (formerly of Google) built the most rigorous application of Schelling point theory to organisational dynamics. His model, using agent-based simulation, demonstrates several results with direct training implications.
The base case is grim. With four projects and eight collaborators who can't communicate, coordination probability is effectively zero. The more possible approaches to a task and the more stakeholders who must align, the less likely coordination happens by chance.
The solve? Add a scuff mark - one distinguishing characteristic - to a single option. The mark doesn't need to make that option better, just different. And yet it transforms near-certain failure into near-certain success.
Why? Because the mark creates a Schelling point - a specific, memorable, visible practice. And even if that is arbitrary, it still coordinates better than a comprehensive but ambiguous strategy.
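The base-case arithmetic is easy to check: with four options and eight collaborators choosing independently and uniformly at random, the chance that all eight land on the same option is 4 × (1/4)^8, roughly 0.006%. The sketch below is a minimal Monte Carlo illustration of that logic - not Komoroske's actual simulation - and it models the scuff mark as nothing more than a probability that each agent notices the marked option; the notice_prob parameter and function name are ours, for illustration only.

```python
import random

def coordination_rate(n_agents=8, n_options=4, notice_prob=0.0, trials=200_000):
    """Estimate the probability that all agents independently pick the same option.

    notice_prob models the 'scuff mark': the chance an agent spots the marked
    option (option 0) and picks it; otherwise they choose uniformly at random.
    notice_prob=0.0 reproduces the base case with no distinguishing mark at all.
    """
    successes = 0
    for _ in range(trials):
        choices = [
            0 if random.random() < notice_prob else random.randrange(n_options)
            for _ in range(n_agents)
        ]
        successes += len(set(choices)) == 1  # everyone converged on one option
    return successes / trials

if __name__ == "__main__":
    for p in (0.0, 0.5, 0.9, 1.0):
        print(f"notice_prob={p:.1f} -> coordination ≈ {coordination_rate(notice_prob=p):.4%}")
```

Even with these toy assumptions the shape of the result matches the claim: with no mark, coordination is effectively zero; once most people notice the mark, convergence becomes the default.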
Error bars destroy coordination.
A standout option isn't, on its own, enough to create a focal point. If there is disagreement about what counts as 'value' - different metrics, different time horizons, different risk tolerances - the error bars widen and convergence becomes impossible, standout or not.
This is a common story - when the CFO measures AI value by cost reduction and the CMO measures it by customer experience, there is no focal point for the organisation to converge on. The adoption curve slows because no one is quite sure what it's meant to achieve.
Which is why 'North Stars' only work until they collapse in disagreement or uncertainty. A compelling organisational direction acts as a Schelling point - everyone sights off it. But an unconvincing one creates a supercritical state: superficial alignment masks private scepticism, vulnerable to cascade failure from a single incident. Thus, a CEO who mandates AI adoption without genuine agreement and conviction below can only ever create a fragile equilibrium. It will collapse the moment something goes wrong.
6. Critical Mass and Tipping Points - When Coordination Becomes Self-Sustaining
The tipping point literature connects Schelling's individual-level coordination mechanism to something that operates at organisational scale. Coordination games have a property that most change management models miss: a threshold.
Below it, individual adoption can feel costly. Above it, adoption seems the obvious choice.
The distinction matters because the move from 'some people use AI' to 'the organisation uses AI' is less of a gradual continuum than it feels. It is a phase transition. And the conditions that produce it are specific.
As Farnam Street's synthesis of the research observes, it takes only a small proportion of people changing their opinions to reach a tipping point - and this is magnified if those people carry influence. The more visible and influential the early adopters, the faster the tipping point arrives.
Schelling points exhibit preferential attachment. Each new person who encounters the focal point is more likely to converge on it. That convergence makes the focal point more prominent. Which attracts the next person. Komoroske describes a dynamic where arbitrary blips bloom into broad bright beacons that coordinate the actions of the many. The metaphor is exact. Beacons are visible. Blips are not. Visibility is the mechanism.
For enterprise training, this means the first 15% of the organisation to adopt AI practices - visibly, consistently and in a coordinated way - will determine whether the remaining 85% follow. If those early adopters use AI in visibly different ways, with different tools, for different purposes, no focal point forms. The majority waits. If they converge on a recognisable pattern, the pattern becomes the thing that everyone else coordinates around.
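To show why the threshold behaves like a switch rather than a slope, here is a minimal Granovetter-style threshold-cascade sketch. It is not drawn from any of the research cited above: each person adopts once the visible share of adopters meets a personal threshold, and the threshold range and seed fractions are arbitrary assumptions chosen to illustrate the shape of the effect, not to predict where a real tipping point sits.

```python
import random

def final_adoption(seed_share, n=10_000, lo=0.15, hi=0.55, seed=0):
    """Threshold-cascade sketch: each person adopts once the share of adopters
    they can see meets their personal threshold, drawn uniformly from [lo, hi]
    (an illustrative choice, not an empirical distribution). seed_share is the
    fraction of visible early adopters planted at the start."""
    rng = random.Random(seed)
    thresholds = [rng.uniform(lo, hi) for _ in range(n)]
    adopted = [i < seed_share * n for i in range(n)]  # the visible early adopters
    while True:
        share = sum(adopted) / n
        newly = [not a and t <= share for a, t in zip(adopted, thresholds)]
        if not any(newly):
            return share
        adopted = [a or w for a, w in zip(adopted, newly)]

if __name__ == "__main__":
    for s in (0.05, 0.10, 0.15, 0.20, 0.25):
        print(f"visible early adopters {s:.0%} -> final adoption {final_adoption(s):.0%}")
```

With these toy numbers the curve sits flat until the seed of visible adopters passes a critical size, then jumps to near-total adoption. Changing the assumed threshold distribution moves that jump - which is the practical point: how visibly and coherently the early adopters behave matters at least as much as their headcount.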
Crawford, Gneezy, and Rottenstreich's experimental work adds an important caveat. Even minute payoff asymmetries can cause focal points to lose their effectiveness. If different departments have even slightly different incentives around AI adoption - and they always do - the Schelling point weakens. Training design must account for incentive misalignment, not just information gaps.
The information must be shared. The motivations must also be aligned.
Thank you for reading. Join us later in the week for Part Two: Designing training for coordination, not just competence.