HOW ENTERPRISES ADOPT AI: PART 2
- Mar 3
- 7 min read
Updated: Apr 4

Training for coordination - as well as competence
Most enterprise AI training programmes are designed to answer one question: how do we make individuals competent with AI?
The Schelling Point Framework asks a different question: how do we create the conditions under which everyone in this organisation adopts AI?
Both matter. But it's the second that determines whether adoption scales.
In Part 1 we explored the experimental framework, explaining why focal points are key to widespread adoption. Here, in Part 2, we provide a clear guide to folding those insights into training programmes.
The five conditions for a Focal Point
The research points to five conditions that must be met for an enterprise to reach its 'Schelling Point' for AI adoption. And they operate as a system, not a sequence. Our experience strongly suggests that the most effective training programmes create all five simultaneously. And, in fact, any missing condition can prevent the focal point from forming.
Principle 1: Reduce options
Research has demonstrated that the more plausible ways of using AI teams are offered, the lower the level of general adoption.
This may feel counterintuitive at first. If you're optimising for 'best,' surely more strong candidates should help? But, it turns out, coordination doesn't optimise for best. It optimises for most shared and commonly understood.
Each option, each new choice, each additional piece of cognitive weight, makes convergence of behaviours - and therefore widespread adoption - less likely.
So what does this mean for training? Stop presenting the full landscape of AI possibilities. Don't run sessions on '15 Ways AI Can Transform Your Workflow.' Don't offer a toolkit of six platforms and let people choose. The optionality feels generous. It's a coordination destroyer.
We have clients who, before working with us, had more than 3,500 'How-to-AI' videos available on their learning platforms - and adoption rates close to zero.
The remedy is prescription. Prescribe a single primary tool - not because it's objectively the best, but because shared adoption of one tool creates a focal point that shared awareness of six tools cannot. If the organisation later migrates to a better tool, the coordination pattern transfers.
Then, prescribe specific, narrow use cases. Not 'use AI for communication' but 'use Claude to draft the first version of your weekly status update.' The specificity is the prominence. But it must also be general enough that two people in different departments, doing it independently, would recognise the same practice.
And sequence, don't parallelise. One use case per training cycle, mastered and visible, before introducing the next. Parallel introduction of multiple use cases splits attention and prevents any single practice from becoming focal. Each new use case should build on the last - extending the pattern, not competing with it.
Principle 2: Build shared mental models
Shared mental models - especially those which inform how to do tasks and how to work together - are necessary for high-performance teams. Thinking about things in the same way matters more than knowing the same facts.
The distinction is important. The goal of training isn't just that everyone knows the same things about AI. It's that everyone thinks about AI in the same way within an organisation. A shared vocabulary, a shared framework for when and how to use AI, and shared expectations about what great - and unacceptable - AI-assisted work looks like.
Building this starts with vocabulary. Every training programme should establish a small set of terms that become the organisation's common language for AI use - not generic terms imported from the industry, but specific labels that carry meaning within this organisation. When a team in finance and a team in operations use the same phrase to describe the same practice, the error bars shrink. Shared language is the cheapest coordination mechanism available.
Cross-functional cohorts accelerate this. If training is delivered by department, each department develops its own mental model of AI use. These models may be individually excellent but mutually incompatible. Cross-functional cohorts force shared mental model development across organisational boundaries, which is where coordination failures most often occur.
But the literature draws an important distinction between taskwork and teamwork mental models - and training should address both explicitly. Taskwork training - what AI can do, how to prompt it, what the risks are - is what most programmes deliver. Teamwork training asks different questions: how does AI-assisted work flow between people? Who verifies AI output? How should AI-generated content be labelled? What happens when AI produces something wrong? These aren't capability questions. They're coordination questions. Most programmes skip them entirely. And without them, the Schelling point can't form.
Finally, we must remember that shared experience is the best way to build shared mental models. Instruction produces knowledge. Shared experience produces coordination. Training should include exercises where participants use AI together, see each other's approaches - and develop shared expectations through practice rather than instruction. The doing is the teaching.
Principle 3: Create common knowledge
Coordination doesn't only need shared information; it requires common knowledge. Public, face-to-face communication creates common knowledge in ways that private channels cannot. Moreover, when I know that others know - and that they know I know - coordination becomes far more likely.
So while closed, private channels create information, public channels create common knowledge. Knowing that distinction - and applying it to AI training - is vital.
The coordination-critical moments are the public ones - visible demonstrations, shared workshops, all-hands moments where everyone can see that everyone else is seeing the same thing.
Every training programme should include multiple shared moments: public events where the use of AI becomes common knowledge across the organisation. This could be a leader demonstrating their AI workflow in an all-hands, a 'show and tell' session where teams present AI-assisted work, or a challenge where everyone works on the same AI task simultaneously and can see each other's results.
The content of the event matters far less than its publicness.
Individual self-paced e-learning modules don't create common knowledge. If all training happens privately, each person learns that they can use AI but doesn't know whether their colleagues can, whether their manager approves, or whether the organisation actually expects it.
Recurring rituals are then necessary to sustain practice - weekly 'AI wins' sharing, monthly demonstrations, standing agenda items in existing meetings. The ritual itself becomes a Schelling point: 'This is where we talk about how we use AI.' Rituals are cheap. Their coordination value is utterly disproportionate.
For instance, Novartis CEO Vas Narasimhan doesn't just sponsor AI - he publicly uses AI agents for decision support and earnings preparation. His visible practice doesn't merely inform the organisation about AI capabilities. It creates common knowledge that AI use is the expected behaviour. One public demonstration by a senior leader creates more coordination value than a hundred private training sessions. Leaders shouldn't just be sponsors. They need to be common knowledge amplifiers.
Principle 4: Group identity and the we-frame
Coordination succeeds when individuals shift from I-mode: 'what should I do?' - to we-mode: 'what should we do, and what's my part?'
And we know that this shift is triggered by the salience of a social situation, not by instruction or incentive. You can't tell people to adopt a collective identity around AI use. But you can design training that makes the group's identity include: 'we all use AI like this'.
The mechanism is straightforward. Self-paced learning produces I-mode reasoning: 'I am learning to use AI.' Cohort-based training produces we-mode reasoning: 'We are becoming an AI-capable team.' The cohort becomes the group with which people identify, and the group's practices become the focal point.
Giving cohorts an identity strengthens this. Name the cohorts. Give them a shared challenge. Create visible markers of membership - a Teams channel, a shared workspace, a ritual. Each training cohort that develops a strong internal identity becomes a node of coordination that the wider organisation can converge around. Identity precedes coordination.
The framing of AI adoption matters for the same reason. 'We're becoming an organisation that works with AI' activates team reasoning. 'You should learn to use AI' activates individual reasoning. The framing determines which cognitive mode people bring to the coordination problem. Collective framing makes the Schelling point easier to see because people are looking for what we should converge on, not what I should learn.
And stories - specific stories about 'how we use AI here,' featuring named colleagues and real work products - create the cultural context that Schelling identified as essential for focal points. External case studies don't build group identity. Internal stories do. They convert abstract adoption into concrete, shared, organisational experience. Once there are enough of them, the focal point has a narrative. And narratives persist.
Principle 5: Make early adopters count
Adoption tipping points depend on the volume and visibility of early adopters - far more than the percentage of the organisation that has been trained. The social proof literature confirms that people adopt behaviours they can see peers performing successfully. Not behaviours they've been told about. Behaviours they can see.
This means the sequence in which you train people matters more than the total number you train. Training the 'right' 15% of the organisation first - visible, cross-functional, influential individuals - creates a tipping point. Training 100% in a random sequence creates no focal point, because the early adopters are scattered and invisible to each other.
The first training cohort should not be volunteers, enthusiasts or the IT department. It should be visible practitioners - people whose work is seen by many others, who span organisational boundaries and whose adoption will be noticed. One trained team lead whose AI-assisted work product is visible to 50 colleagues creates more coordination value than five trained individuals working in isolation. Visibility multiplies.
This also means density should be concentrated before breadth is pursued. Train a critical mass within a single team or function before expanding to the next. A team where eight out of ten people use AI in the same way becomes a visible node that others can observe and imitate. Spreading training across ten teams with one person per team creates no visible pattern. The distribution looks generous but it turns into a coordination failure.
Trained individuals should have platforms, not just skills. Internal showcases, shared documents showing AI-assisted work, 'AI office hours' where trained practitioners help others. The champion's job isn't to evangelise AI. It's to be visible using AI. Visibility creates the preferential attachment that pulls others toward the focal point.
And the relevant metric isn't 'percentage of organisation trained.' It's 'percentage of organisation that can see AI being used by peers in their immediate environment.' An organisation with 20% trained but concentrated in visible clusters is closer to the tipping point than one with 50% trained but evenly distributed and less visible. Coverage feels like progress. Density is progress.
What does this all mean?
Judged against the principles above, many traditional enterprise training patterns are found wanting. But embrace those principles and adoption becomes an achievable, repeatable and scalable objective. Which, given the importance of AI use, is something of a relief. It means that today's organisations, even large ones, can become AI-native in outlook and operating model.
In Part 3 we'll outline what those training programmes look like - and how our own practice reliably delivers Schelling Points to clients.






