
CONTEXT IS KING: MAPPING THE BATTLEFIELD

  • Oct 28, 2025
  • 11 min read

Updated: Apr 9

[Graphic: a map of the Kingdom of Context]

Who's Fighting, What They Control and Why It Matters


The frontier AI providers - OpenAI, Anthropic, Google - have spent more than two years in a capability arms race. Models that could barely pass secondary school exams in 2024 are now Gold Medal mathletes. But the platform war is shifting to different terrain entirely.


To serve you significantly better, AI models now need context - all the data, documents, conversations and activities that they currently can't see. Your emails. Your Slack threads and Teams messages. Your calendar. Your company's institutional memory, scattered across a dozen cloud-based apps. The goal? To integrate AI into every facet of your life. And remove every possible friction.


Which is why, via a flurry of recent announcements, Microsoft, OpenAI, Google and Anthropic have all confirmed the battle has moved decisively to context.


But which Big Tech player ends up King of Context? How will they access it? And what historical choices are creating advantage and liability?


Over the coming posts, we'll examine the major players pursuing fundamentally different strategies to own the context layer of enterprise AI. But before diving into each, you’ll be needing a map. Which is the purpose of this post.


The Context Aggregators: owning the work surface


Four players are trying to live where work actually happens. Control the surfaces people use every hour of every day and you earn consented visibility that no sidebar or plug-in can match.


Microsoft — the OS, the apps and the browser


At first sight, Microsoft’s play is now looking like an absolute doozie - and a potential knock-out blow. It has thrown almost all its troops forward and set the context flamethrowers to crispy. It has always sat under and around most of our work — and is now planning on taking full advantage.


Its enormous leverage comes from having fused its platform with AI. Windows is where inputs, files, apps and processes live. The 365 family, now including Teams, are the places where artefacts are authored, reviewed, discussed and approved. OneDrive and SharePoint carry organisational memory. Some people even use the Edge browser. And all now have Copilot running through the middle of them.


Which means Copilot is becoming the single assistant threaded through device, apps and the web. It is ever-present. The latest Windows 11 and Copilot releases push this integration from idea to implementation. Microsoft announced:


  • Deeper Memory: Explicitly designed to act as a "second brain" that remembers personal facts and past conversations.

  • Connectors: A direct play to break out of its own garden by linking Copilot to OneDrive, Outlook, and rivals like Gmail and Google Drive.

  • Copilot Groups: A feature for building collaborative context by turning any chat into a shared group thread for planning or brainstorming.


But this strength also brings exposure. An assistant with device-level sightlines and browser-level agency will only scale if consent is legible and limits are enforceable. Enterprises will look for controls that are obvious to users and binding to administrators — what Copilot can see, where that context is stored, how long it is retained, how to erase it, and which actions are allowed under which policy.


Any hint of silent capture or retention will stall adoption, especially after the backlash to overboard screen memory features. Microsoft is betting that convenience will ultimately win out over privacy fears. Which might fly privately. Perhaps not so much in the enterprise.


Plus, Google, OpenAI, Anthropic and Salesforce have their own troops on the battlefield. Microsoft’s play for renewed dominance in the age of AI will not go unchallenged. And other, highly persuasive, offerings are available.


Google — the browser and workspace, powered by a frontier model


The world’s dominant search giant is currently competing in the context space by sitting across two daily surfaces - and, unlike Microsoft, actually owning the AI that runs through them.


Chrome is the window onto most SaaS work — and enterprises less than impressed by Microsoft also deploy the only credible 365 competitor, Google Workspace. Which means The Big G also owns part of the admin plane. And Workspace stores organisational memory — mail, documents, sheets, slides, chat, calendar and meeting transcripts. Which makes Google a highly significant part of the world's computing fabric - with Gemini threaded through this stack.


Strengths compound in organisations where the suite is primary. Workspace keeps the core artefacts under one roof, Chrome sees the web flows around them, and Gemini reduces the friction between a prompt and a final outcome. The idea, draft, summary, schedule and decision all live inside the Google-run surfaces.


Google’s limits, however, are also structural. Google does not control the device, which keeps many privileged signals out of reach. Placed on the map, Google is in the secondary context aggregator slot - with the advantage of vertical reach and the discipline of first-party governance. But can Google convert its integration into daily enterprise use before Microsoft achieves lock-in for most?


OpenAI — browser-led aggregation at the web surface


With the release of its Atlas browser, OpenAI has crossed the line from pure intelligence broker to context aggregator. Atlas, if installed and used, puts ChatGPT on the surface where most work actually happens. Like Microsoft’s Edge, it gets to see what you read, type and approve. It’s built to turn borrowed context into observed context, replacing copy-paste with actual visibility over the pages and flows where we spend our days.


But OpenAI’s ambition doesn't stop at the browser. Its recent acquisition of Software Applications - makers of an OS-level Mac interface called Sky - suggests it wants to move further. It appears to have a nascent strategy to compete directly with Microsoft - and gain context from what's on your screen, in your apps and on your desktop.


And its just-launched Company Knowledge is a step in that direction. It connects ChatGPT to internal Slack, Google Drive, and GitHub repositories. And is reportedly powered by a new, specialised version of GPT-5.


Which means OpenAI’s compilation of context may grow rapidly. Properly permissioned, it will be able to pool company knowledge - and pull material from the most important platforms. GPT’s extended memory functions will then do the quiet work of persistence — carrying context from one session - and user - to the next.


Is it a poorly-positioned minnow against Microsoft? We need to consider that OpenAI has a proven ability to change the behaviour of billions of people rapidly. And its cultural pull is considerable. Microsoft will be the IT team’s preference. But which one will most users prefer?


There are, of course, hard limits baked in. OpenAI does not own the device surface or an enterprise app suite, which means fewer privileged signals than an OS owner. And enterprise adoption will live or die on admin controls.


Placed on the map, OpenAI stands alongside the aggregators, but it arrives from the opposite direction. Microsoft fuses OS, apps and browser. Google pairs a frontier model with Chrome and Workspace breadth. OpenAI inverts the stack — it starts with interface gravity, is trying to claim the web surface, stitch in other artefacts, rely on memory to keep context live and then perform its magic.


But can a browser and connector-led aggregation — even with consented memory and credible admin controls — win enough enterprise surface? And how successful could any reach down to the OS layer be?


Salesforce — the app where decisions live


Salesforce starts where most corporate decisions actually get made — in a messaging channel.


Slack captures the live texture of work that documents record later. It sees the pivots, the escalations and the trade-offs - and keeps those threads close to the records that matter in the CRM.


And there is an argument that this is the most important context of all. Whilst there is less context than that available to Microsoft or Google, it focuses on the good stuff. The real context and nuance. The flow and immediacy of what’s actually happening - not what’s been recorded weeks ago, was invisible to most and filed in the wrong place.


Plus, the available files and CRM records give Slack conversations an anchor. Workflows and huddles keep momentum without breaking context. On-platform agents answer and act in the channel, while interoperability aims to bring outside assistants to the work rather than exporting Slack’s graph.


The result is a decision trail that reads like institutional memory — not only what was decided, but why the trade was made and who signed it off. And this is what Salesforce’s play for context is based on.


But is conversational context enough to anchor enterprise memory at scale? And do Slack and Salesforce have sufficient users anyway?


The Intelligence Broker — model integrity, borrowed context


One player appears to be hoping that intelligence can still decide outcomes, even if others own the context. The strategy is to borrow the context instead - through connectors and standards - while holding a lead in reasoning.


Anthropic — quality is the only gravity we need


The creator of Claude is betting that enterprise context will stay distributed - across Windows, Chrome, Microsoft 365, Google Workspace, Salesforce, Slack and a multitude of other apps - and that the AI which works well everywhere will be more valuable than an AI tainted by being deeply embedded in one stack.


It wants to be open and portable. Present wherever the data ends up. And, by being the best, be the one chosen by users to do the jobs they need doing. Rather than the AI being enforced by the platform.


Currently, therefore, Anthropic is less concerned about launching grand features which ultimately aim to own all context. Instead, it is busily making itself more useful.


A memory feature being rolled out to all paid users, for instance, has a unique twist: it's designed to solve contextual confusion by creating distinct memory spaces. A user can now keep context for Job A separate from Job B - which can be a major pain point. Anthropic is also rolling out a Skills feature - ‘little packages of context’ the model can call on efficiently.


This focus on model quality is now very visible in how Claude shows up in other apps. Inside Slack, Claude can summarise long threads, extract decisions and write updates. It can also - through Slack’s Model Context Protocol server - reach into messages and files with consent. The results of these conversations are often more pleasing - and informative - than those with other models.


The same pattern extends across Microsoft and Google platforms - Claude is invited into Outlook and Drive through connectors. And it’s in an increasing number of other places too. Anthropic believes in bringing the model to where the work is. It views digital intelligence as a very portable commodity.


This strategy has already worked. Anthropic has arguably captured the context of millions of developers. Application requirements are debated in Slack. The code lives in GitHub, and documentation will be found in Notion. Which means no single platform sees the whole picture.


Except that Claude sits across that sprawl. It can read the thread where the requirement changed, fetch the repo to understand the module, open the ticket to confirm acceptance criteria, run tests in the terminal - and carry the intent through the steps without forcing the developer to restate context every ten minutes.


The risk is obvious. Anthropic is renting its land. It owns nothing. Access can be revoked. It even seems to be leaning into this neutral position by building features that allow users to import memories from ChatGPT and export them from Claude, actively designing against memory lock-in. And what if Claude falls behind? The ‘open and portable but ultimately everywhere’ strategy only works if model quality is meaningfully better and stays that way.


Placed on our map, Anthropic is the adept intelligence broker with a borrowed spine. Will this be sufficient to keep it in the game?


The Edge Champion — privacy as architecture


One player is well-placed to serve the contexts that matter so much they should not leave enterprise facilities.


Apple — on-device for regulated work


Apple’s position is that the most sensitive context should not touch public clouds. Processing on device - with private cloud constructs for scale and proofs of non-retention - is designed to satisfy sovereignty and audit rather than to harvest broad collaborative context. This is a strong fit for regulated and high-sensitivity estates where risk tolerance is low and governance must be architected and demonstrated, rather than promised.


The trade-off is clear — superb compliance alignment at the level of a network of air-gapped devices, but less visibility across teams where shared memory and real-time collaboration drive value. Will a network-centric design limit collaborative intelligence? Or will the experience of compliance-critical segments inspire others to move their AI inference on-device and network to share?


The Infrastructure Layer: Switzerland as Strategy


Beneath the platform battles, one player pursues neutrality as competitive advantage, betting that multi-vendor enterprise strategies create opportunity for whoever provides the infrastructure beneath competing AIs.


AWS isn't trying to win context access. Amazon Bedrock AgentCore provides the operational infrastructure - runtime, memory, identity, observability, gateway services - that AI agents need regardless of which models they run or which contexts they access. The thesis: if enterprises adopt multi-cloud, multi-model strategies to avoid vendor lock-in, whoever provides model-agnostic, framework-agnostic infrastructure becomes essential regardless of which context strategy wins.


AWS brings 32% cloud market share, regulatory compliance across virtually every industry, and global data centre presence.


Does infrastructure neutrality create sustainable advantage when competitors control context, or does lacking proprietary context relegate AWS to commodity provider as model quality converges?


The Open-Source Insurgency: Free as Disruption


Beyond the named players, an insurgency is undermining the entire proprietary value proposition. Meta's LLaMA, Mistral's models, Falcon, and dozens of increasingly capable open-source LLMs are approaching proprietary quality whilst costing nothing for inference and running entirely on-premises.


If capable models are free to download and run locally, paying $20 per million tokens for API access becomes unjustifiable except where proprietary models maintain clear advantages. For regulated industries where context legally cannot leave facilities, open source isn't a choice - it's architectural necessity.


Open source brings zero marginal cost, complete data sovereignty, total customisation freedom, and geopolitical independence. As deployment friction decreases and model capabilities increase, does the open-source insurgency commoditise intelligence, leaving only context ownership as defensible moat?


The Regulatory Wildcard: Brussels Reshapes Everything


Whilst vendors fight over architecture and business models, regulatory forces are restructuring the playing field in ways most aren't adequately addressing.


The EU AI Act, data sovereignty mandates proliferating globally, and digital sovereignty initiatives in Europe and Asia create compliance requirements that favour some architectural choices whilst prohibiting others.


The forces we'll examine: how regulation fragments unified global strategies, which architectural approaches comply with data sovereignty requirements, whether regulatory advantage translates to market position, and what happens when legal compliance becomes a selection criterion more important than technical capability.


Five Questions to Navigate Complexity


As we examine each player's strategy, keep five questions in mind:


1. What context does this player actually control? Claimed context access and actual visibility differ profoundly. We'll examine what each player sees natively versus what they must request through APIs or partnerships, because dependency on competitors for context access determines negotiating leverage and strategic vulnerability.


2. Where is the architecture fundamentally misaligned with requirements? Every approach optimises for something whilst compromising elsewhere. Cloud aggregation maximises intelligence but creates regulatory barriers. On-device processing ensures compliance but limits collaborative context. No architecture serves all use cases - we'll identify the mismatches.


3. What must users accept for this strategy to work? Platform strategies require acceptance - of privacy trade-offs, of vendor dependency, of switching costs. We'll examine what each player asks enterprises to trust, and whether those asks are realistic given organisational, regulatory, and competitive dynamics.


4. Who gets excluded by design? Architectural choices inherently exclude segments. Cloud-first strategies lose regulated industries where data cannot leave facilities. On-device approaches lose collaborative enterprises where institutional memory matters. We'll map which segments each strategy cannot serve regardless of execution quality.


5. What does the player control that competitors cannot replicate? Sustainable advantages come from assets competitors cannot easily reproduce - installed base, regulatory compliance, context access, distribution, or technical capabilities with lengthy development timelines. We'll separate defensible moats from temporary leads.


The battlefield is complex, the contestants formidable and the stakes measured in trillions. Next we use this map to traverse the terrain — starting with Microsoft.


 
 