
GOOGLE: WAGING A THREE-FRONT WAR

  • Nov 5, 2025
  • 12 min read

Updated: Apr 9


Might Microsoft Actually Be Losing?


Read our first post for an introduction to AI Context and why it matters. Our second post outlined the battlefield. And the third focused on Microsoft. Below is our analysis of Google.


TL;DR

Google's starting position in the context war is undeniably strong. Firstly, there’s the obvious stuff. It’s a top-tier aggregator of context because it owns the browser that most of us use - meaning it can suck in much of what happens online. Which is a lot.


It also licenses the only productivity suite that truly competes with Microsoft's. And that gives it access to 3 billion people’s emails, documents, spreadsheets and calendars.


However, given Redmond’s OS-level fortress - which can theoretically see every single screen and mouse click - Google’s starting position may also appear rather inferior. And yet - this would be to miss a great deal of what’s currently happening on the battlefield.


Google is a context aggregator. But it is also actively executing a three-front war, with an aggressive build-up of troops in the Infrastructure and Intelligence territories. In doing so, it is hoping to spread itself right across the map. And even if it doesn't attain total dominance in any one of them, it may ultimately control more territory than any other combatant.


Its latest quarterly results, its $155 billion cloud backlog and its 'arms dealer' relationship with Anthropic all reveal that Google is building a 'Compute Moat' – a superior refinery for enterprise context.


This troop build-up elsewhere on the map – adjacent to its position as the world’s second largest context aggregator – is somewhat hidden in plain sight. But combined with its ownership of the Gemini model, it challenges Microsoft's 'models are commodities' thesis and suggests Google's seemingly weaker position may in fact be far stronger than it appears.


//


The Longer Read 👇🏼


Obviously inferior? Probably not.


Google is undeniably a Context Aggregator. It owns the browser and the only productivity suite to truly rival Microsoft. Which, right out of the gates, gives it immense leverage in the Context Wars.


Controlling Chrome – with its over 65% global market share – makes Google the gatekeeper to the web. It is the window through which almost all modern work is done; the surface where employees interact with LLMs, SaaS systems - and even Microsoft's own web apps.


Controlling Google Workspace also gives it first-party, native access to the organisational memory – the emails, documents, calendars and chats – of over three billion users.


This starting position in the context wars is an enviable one. It's a position that any other player, bar Microsoft, would kill for. And yet, when contrasted specifically with Microsoft's all-encompassing OS-based start point, we could be forgiven for swiftly concluding that Google’s position is obviously inferior.


Microsoft's play, as we have detailed, is a deeply integrated fortress. It starts at the operating system, the foundational layer beneath everything. Windows sees every file opened, every application launched and every cross-application workflow. This provides Copilot with a rich, continuous stream of 'privileged signals' from the device itself, a level of context Google simply cannot see. Google’s context is limited to the app or the browser tab.


However, to stop there would be to miss the real strategy. Google is executing a far more subtle, three-front hybrid war.


Front 1: The Context Aggregator


On its home territory, Google's aggregation play is formidable. It is built on a series of tactical advantages designed to out-manoeuvre all others.


The one million token advantage


First, let’s consider Google’s raw architectural power. Gemini already offers a one million token context window, with a doubling to two million tokens announced and expected very soon.


This is not a trivial technical specification. Dwarfing the 400,000-token window of GPT-5 – and the 200,000-token window of Opus 4.1 – it is a core strategic advantage that makes a very different kind of aggregation play practical.


Because, when the answers to our questions about an incident are scattered across a 120-page specification document, a flurry of weekend escalations in Slack, a complaint ticket in Salesforce, and a revised budget forecast in a Microsoft Excel file stored in SharePoint, we need as much context - in other words tokens - as we can get.


So where others will fail, Gemini can ingest all those sources at once. It can deliver a complete, synthesised answer that connects why the incident happened to the impact it continues to cause - in a way that is impossible for rivals.
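A rough back-of-envelope sketch shows why window size decides this scenario. The calculation below uses the common ~4 characters-per-token heuristic for English text, and every document size is a hypothetical illustration, not real data:

```python
# Back-of-envelope token budget for the incident scenario above.
# Assumes ~4 characters per token (a common rough heuristic for
# English text); all source sizes are hypothetical illustrations.

CHARS_PER_TOKEN = 4

sources = {
    "120-page spec (~3k chars/page)": 120 * 3_000,
    "weekend Slack escalations": 600_000,
    "Salesforce complaint ticket": 20_000,
    "Excel forecast (exported as text)": 80_000,
}

def tokens(chars: int) -> int:
    return chars // CHARS_PER_TOKEN

total = sum(tokens(c) for c in sources.values())
for name, chars in sources.items():
    print(f"{name}: ~{tokens(chars):,} tokens")
print(f"Total: ~{total:,} tokens")

# A 200k-token window forces chunking or retrieval here;
# a 1M-token window ingests everything in a single pass.
print("Fits in a 200k window:", total <= 200_000)
print("Fits in a 1M window:", total <= 1_000_000)
```

Under these assumptions the combined sources overflow a 200,000-token window but sit comfortably inside one million tokens – which is exactly the gap the aggregation play exploits.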


The 'just-connect-to-everything' strategy


Early in October, Google launched Gemini Enterprise as 'the new front door for AI in the workplace'. Cloud CEO Thomas Kurian was explicit: 'An agent is only as good as its context, so Gemini Enterprise securely connects to your company's data wherever it lives'.


This is the key. Where Microsoft's strategy is to pull all your context into its fortress, Google's is to build a universal 'passkey' that can access context wherever it is. Given Google’s vast partner ecosystem, it can suck up an enormous amount of context and treat it all as just another data source.


The updated Salesforce partnership is a perfect example of such collaborative competition. By integrating Slack's search into Gemini, Google effectively gains access to the very ‘conversational context’ that Salesforce is trying to own as its primary context moat in the AI wars. But Google is not only trying to beat Slack; it is also making itself an indispensable partner - thereby neutralising the entire threat while also aggregating another piece of context in a classic frenemy sting operation.


The potential for a 'no-code' wedge


Google is certainly not the only company with a no-code tool. There are dozens of viable options - including those from Microsoft and Anthropic. Google’s rival tool won’t create a wedge. But its go-to-market strategy, combined with its full ecosystem, just might.


Microsoft's entire model is top-down and IT-led. An employee cannot start building data-connected agents in Copilot Studio; the tool must first be provisioned and licensed by IT. Anthropic's model, meanwhile, is sandboxed. An employee can build a custom bot in the Claude.ai window, but it is disconnected from the enterprise workflow.


Google is different. As per its G Suite infiltration play, the 'wedge' is a combination of:


  1. A simple-to-use application;

  2. Natively integrated into a massive, existing productivity surface - the 3-billion-user Workspace; with

  3. A proven, viral, bottom-up go-to-market strategy.


It works because a marketing manager can build an agent to automate her own report today, without asking IT. When her agent saves the team ten hours, she shares it with others. And they adopt too. The implication is that Google-based teams will be far heavier adopters of the available no-code tools, making the Google environment richer than Microsoft's – despite its initial advantages.


Front 2: The troop build-up in infrastructure


Whilst context aggregation is Google’s home turf, it might not be the most important part of its strategy.


Look at the map closely and you will see a massive build-up of Google troops in the Infrastructure territory – a move that directly challenges the 'model is a commodity' thesis on which Microsoft is forced to rely.


The Compute Moat and the $155 billion backlog


Google's record-breaking $102.3 billion quarter was driven by AI. Yet the most important figure, somewhat hidden in plain sight, was not the revenue. It was the $155 billion Google Cloud backlog.


This is not to be underestimated. It represents committed future revenue – massive, multi-year, strategic contracts. It’s a sign of deep, structural commitment from the largest companies in the world.


What makes this figure so strategically important is its growth. The backlog grew by an astonishing 46% sequentially (quarter-over-quarter).



This is backed by metrics showing revenue from products built on Google's generative AI models grew more than 200% year-over-year, with over 70% of existing cloud customers now using its AI products.


Why’s that important? Think of it this way: if context is the raw oil, then compute is the refinery. And the backlog strongly suggests Google is winning the war to be the leading enterprise context refinery. A potentially critical position.


Google is winning the multi-billion dollar, multi-year deals from global enterprises - banks, pharmaceutical companies and retailers - who need a high-performance engine to refine their own proprietary oil. These companies' most valuable context is not in Office documents. It is in their databases, their millions of customer service call logs, their genomic research data and their complex supply chain models.


Google’s backlog is a financial indicator of who these enterprises trust to build their refineries. It is a massive, multi-year bet by the market that Google's integrated stack – its Gemini models, its custom TPU hardware, and its Vertex AI platform – is the superior 'Context Engine'. While Microsoft is winning the war for employee context on its surfaces, Google appears to be winning the war to be the engine for high-value enterprise context.


The 'Arms Dealer' Deal with Anthropic


The masterstroke of Google's infrastructure strategy – and the most significant troop movement on the map – is the multi-billion dollar deal for Anthropic to use Google's custom-built TPU chips. This move is subtle, complex - and strategically brilliant.


At first glance, one might dismiss it. The AI industry is in a desperate scramble for compute and Nvidia's high-end GPUs are the primary bottleneck. A simple analysis would conclude that Anthropic, unable to secure enough Nvidia chips, simply settled for the 'least worst option' available – Google's TPUs.


But this analysis disappears when held up to the light.


It fails to recognise that Anthropic is not a startup with no options. It is a frontier AI lab with deep ties to Amazon Web Services (AWS), Anthropic's primary infrastructure provider. Anthropic already trains its models on AWS. It has full access to Nvidia hardware.


The Google-Anthropic deal was not, therefore, a decision forced by scarcity. It was a deliberate, strategic choice. You do not bet your company's entire future – your ability to train the next-generation model that keeps you competitive with OpenAI – on a 'least worst' option.


Instead, Anthropic is betting that Google's proprietary architecture is a superior engine.


Why does this make sense? Because at the scale of frontier models, the communication speed between tens of thousands of chips can become more important than the speed of a single chip. A large model must be split across a vast cluster. Google's TPU clusters were designed - from day one - as a single, massive, interconnected system via their proprietary Inter-Chip Interconnect (ICI). Anthropic is betting this specialised system offers better price-performance – more AI calculations per dollar and per watt – than a generic Nvidia cluster.


This deal is perhaps the single greatest third-party validation of Google's Compute Moat. And, in a brilliant strategic twist, it creates the 'arms dealer' dynamic.


Microsoft's entire 'model-agnostic' strategy relies on Anthropic as its 'Plan B' to OpenAI. It needs a healthy, competitive Anthropic to prove its ecosystem is open. Now, for Anthropic to build the very model that Microsoft needs, it must pay tens of billions of dollars... to Google.


In other words, Microsoft's 'Plan B' is now directly funding the capital expenditure and R&D of its 'Plan A' competitor's compute moat. For Google, this has placed Microsoft in a perfect, inescapable strategic bind.


Front 3: Intelligence completes the picture


The third front of Google's war connects the first two. The context from its aggregator surfaces (front 1) and the raw power of its compute moat (front 2) are useless without a powerful engine. Unlike Microsoft, Google's ownership of its Gemini model family is a core, non-negotiable strength, not a dependency.


This front is where Google lands its most direct blow against Microsoft's entire strategic premise. Google is betting that a fully integrated system – from custom chip to frontier model to application – will outperform a bolted-on system of partners. Not least because it can use its compute moat to supercharge its own models in a virtuous cycle that competitors cannot replicate.


Which could make this co-design a critical – potentially unassailable – advantage. Google's AI researchers and its chip designers are working together to build models that, we may assume, will run perfectly on the hardware they are specifically designed for.


We have seen examples already. With Gemini 2.5 Flash-Lite, Google did not just build a good small model. It built a model specifically to be hyper-efficient on its own TPU hardware. Which means Google can run these hyper-efficient models at a lower cost than Microsoft, which must pay an 'Nvidia tax' and an 'OpenAI tax' for every query Copilot performs.


This allows Google to embed powerful AI into its free products - like search - at a marginal cost that would bankrupt Microsoft if it tried to do the same.
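The cost logic above can be made concrete with a toy model. Every figure below is a hypothetical placeholder – neither company discloses these internal costs – and the point is the structure of the comparison, not the numbers:

```python
# Toy marginal-cost comparison per million tokens served.
# All numbers are hypothetical placeholders for illustration only;
# the real internal costs and margins are not public.

def integrated_cost(compute_cost: float) -> float:
    """Vertically integrated stack: pay only your own compute cost."""
    return compute_cost

def bolted_on_cost(compute_cost: float,
                   hw_margin: float,
                   model_margin: float) -> float:
    """Bolted-on stack: compute, plus a hardware vendor margin
    (the 'Nvidia tax'), plus a model provider margin (the 'OpenAI tax'),
    each applied multiplicatively."""
    return compute_cost * (1 + hw_margin) * (1 + model_margin)

base = 1.00  # hypothetical raw compute cost per 1M tokens ($)
google = integrated_cost(base)
microsoft = bolted_on_cost(base, hw_margin=0.75, model_margin=0.40)

print(f"Integrated stack: ${google:.2f} per 1M tokens")
print(f"Bolted-on stack:  ${microsoft:.2f} per 1M tokens")
print(f"Cost multiple:    {microsoft / google:.2f}x")
```

The design point is that the two margins stack multiplicatively: even modest vendor markups compound into a large per-query cost gap at the scale of a free product like search.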


Therefore, Google is attempting to prove that the model and the compute are – when combined – strategic, non-commodity assets. There seems to be a reasonable chance that its high-performance, vertically integrated system will win.


Strategic accelerants: The Art of Jujitsu


Finally, Google is supplementing its three-front war with sophisticated, asymmetric manoeuvres. It is not just meeting Microsoft's strength with its own – it’s also using jujitsu to flip the energy of the fight.


The Offensive Land-Grab


Google’s India deal is a classic jujitsu move. Microsoft's core strength is its heavyweight B2B relationship with the global enterprise, a fortress built over 40 years. Google is not launching a frontal assault on that fortress. Instead, it has made a B2C mass-market acquisition by giving away its premium AI Pro plan - valued at more than €350/year – to over 500 million users of Reliance Jio for 18 months.


Which tells us that:


  1. Google is prepared to deploy its capital and conquer new continents. The enormous profits from its core search monopoly are funding a user-acquisition campaign at scale.

  2. It’s prepared to pay to secure context and future lock-in. A user who gets 2TB of free cloud storage will fill it with their personal context – photos, documents, and backups. A user who spends 18 months with Gemini 2.5 Pro and NotebookLM as their primary creative and research partners will build habitual workflows. At the end of the 18 months, their digital lives are built on this platform, creating significant switching costs.

  3. And it is unafraid to be nimble and launch a direct competitive strike. The Jio deal is a counter to OpenAI's free ChatGPT Go offer in India and Perplexity's partnership with rival carrier Airtel. Google is hoping to use its overwhelming financial firepower to drown these competitors, preventing them from establishing a beachhead in the world's fastest-growing digital economy.


The Regulatory Leverage


An EU compliance strategy is Google's second jujitsu move. Here, Google – itself a frequent target of regulators – is using Microsoft's regulatory problems as an offensive weapon.


The shimmy is twofold: one public and collaborative, the other quiet and aggressive.


  1. The "Responsible Partner" Narrative: Google made a strategic choice to sign the EU AI Act Code of Practice. This is a public relations and lobbying masterpiece. It costs Google nothing – it would be forced to comply with the Act anyway – but it buys it a seat at the table and the appearance of being the responsible, collaborative adult in the room, especially while rivals like Meta publicly refuse. It also positions Google on the same side as the regulators in the inevitable, upcoming fight over AI and context. When that fight comes, Google can present itself as the good-faith partner, while painting Microsoft as the recalcitrant monopolist.

  2. The "Offensive Weapon" Precedent: This is not a passive stance. Google is an active participant in framing Microsoft as the anti-competitive 'context hoarder'. Google's lawyers and lobbyists can, and likely will, point to two critical precedents. First, the 2020 Slack complaint that resulted in the September 2025 forced unbundling of Teams. The EU ruled that Microsoft illegally "tied" Teams to its dominant Office suite. Google can now argue that Microsoft is doing the exact same thing with Copilot. The legal template has already been set. Second, Google's own 2024 antitrust complaint against Microsoft. Google is formally on record with the EU, arguing that Microsoft imposes a 400% price markup on enterprises that want to use their existing Windows Server licences on a rival cloud like Google's – helping to build a formal, documented case for the regulators that Microsoft is an anti-competitive monopolist that uses its legacy dominance to create vendor lock-in.


Conclusion: The Strategy of Optionality


What, then, is the final picture that emerges from these distinct, yet interconnected, fronts? Assessing the assembled evidence reveals a strategy that is the polar opposite of Microsoft's.


Redmond, as far as one can tell, is waging war via a single massive front. It is betting that the OS-level context fortress is the one, unassailable position that will win the war. Its entire strategy is built on the premise that this fortress is all that matters, because the models themselves – the intelligence – will inevitably become a commodity. It is an all-in, high-stakes wager that its 'home territory' will be the site of the final, decisive battle.


This is an elegant but ultimately brittle strategy. If that single premise proves false, the entire project risks failure.


Google's strategy, in contrast, is not elegant. It is a sprawling, complex, and seemingly contradictory bet on strategic optionality. It is building a company designed to win regardless of how the war evolves. It is not just a hybrid; it is a machine built to thrive in uncertainty, with three distinct paths to victory.


  • If the war is ultimately won on infrastructure – if it becomes a raw 'battle of the foundries' to see who can build the most powerful context refineries – Google is already a dominant power. Its $155 billion cloud backlog proves its enterprise engine is running at full tilt, and its 'arms dealer' relationship with Anthropic shows its compute moat is so deep that its rivals are now paying to help excavate it.

  • If the war is won on Intelligence – if Microsoft's 'commodity' thesis is proven wrong, and superior model quality is the ultimate differentiator – Google is positioned to win. Its full-stack virtuous loop - from custom TPU, to proprietary Gemini model, to the massive data-feed of search and Workspace - creates an R&D and cost-optimisation cycle that a bolted-on system of partners (Microsoft + Nvidia + OpenAI) simply cannot match.

  • If the war reverts to pure Aggregation – if all models and compute do become commoditised and the only thing that matters is the surface – Google is still a top-tier contender. Its one million-plus token window gives it a tactical edge, its 'no-code' wedge may offer a viral adoption path, and its control of Chrome keeps it in the fight.


Which is why Google's seemingly weaker position is deceptive. It may in fact be far stronger. And a lot less brittle.


Next time: We’ll be obsessing over OpenAI's position on the map.


 
 