How Google Is Competing in the AI Compute Race

Google is doubling down on its custom AI chips, Tensor Processing Units (TPUs), as the race in large language models heats up. TPUs are specialized processors Google designed to accelerate machine learning workloads. Unlike general-purpose GPUs, they are purpose-built for AI, letting Google train and serve its models efficiently at massive scale. These chips have been a quiet cornerstone of Google’s AI strategy, powering everything from its internal models to public services like Bard and the Search Generative Experience.

Nvidia’s H100 GPU is the current workhorse of AI computing, but Google’s TPU offers an alternative approach. Not every task demands a top-of-the-line H100: Google’s latest TPU generation has demonstrated comparable throughput for large models at a dramatically lower cost, with some estimates putting TPU deployments at four to ten times more cost-effective than GPU clusters for large-model training. Two of the world’s best LLMs, Gemini 3 and Anthropic’s Claude 4.5 Opus, were trained largely on TPUs, showing the platform can deliver state-of-the-art results without leaning on Nvidia’s ecosystem.
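To make the cost-efficiency claim concrete, here is a back-of-envelope sketch of how a four-to-ten-times multiplier changes the price tag of a training run. The dollar figure is a hypothetical placeholder, not a published number.

```python
# Back-of-envelope sketch: what a 4-10x cost-efficiency multiple means for
# the price of a single training run. All numbers are hypothetical
# placeholders, not published figures from Google or Nvidia.

def training_cost(gpu_cluster_cost_usd: float, tpu_efficiency_multiple: float) -> float:
    """Estimated TPU cost for a run that would cost `gpu_cluster_cost_usd` on GPUs."""
    return gpu_cluster_cost_usd / tpu_efficiency_multiple

gpu_run = 100_000_000  # hypothetical $100M frontier-scale GPU training run
for multiple in (4, 10):
    print(f"{multiple}x more cost-effective -> ${training_cost(gpu_run, multiple):,.0f}")
# 4x more cost-effective -> $25,000,000
# 10x more cost-effective -> $10,000,000
```

Even at the conservative end of that range, the gap is large enough to reshape procurement decisions at frontier-model scale.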

The strategy behind TPUs is not new. Google laid the groundwork for its AI-specific silicon as far back as 2013, when internal projections revealed the company would need to double its data center footprint to support future AI workloads. The result was the TPU, which entered production in 2016 and has evolved ever since. Until recently, TPUs mostly supported Google’s internal infrastructure and cloud customers. That is now changing. Google is beginning to sell physical TPU systems directly to outside firms, making a push to become a serious merchant silicon vendor.

On the consumer side, Google is gaining ground with its Gemini-powered search products. According to a November 2025 post on X by Similarweb, a firm that tracks web traffic, Gemini’s share of generative AI traffic has steadily increased over the past year. While OpenAI continues to dominate overall usage, Gemini is clearly expanding its presence. This growth reflects the broader rollout of Google’s Search Generative Experience, which uses Gemini models to enhance traditional search with conversational and AI-generated results. Rather than losing ground, Google is evolving its approach and leveraging its TPU-backed infrastructure to compete more directly in the AI-first search experience.

Similarweb post on X, November 13th, 2025

Meanwhile, Google’s TPU offering is gaining serious momentum in the enterprise market. Anthropic has become the marquee customer, deploying over a gigawatt of TPU-based compute to support its Claude models. Google is also courting other major AI players such as Meta, xAI, and possibly even OpenAI. Some customers are not only training on TPUs but also using them as a bargaining chip to negotiate better pricing from Nvidia; that performance-per-dollar leverage ripples through the entire hardware supply chain.

Enterprise adoption of AI tools shows a shifting landscape. Anthropic’s Claude currently accounts for around 32 percent of enterprise LLM usage, ahead of OpenAI and Google at 25 and 20 percent respectively. Google has gained significant ground, up from single digits, but the race is far from settled. The cost structure of AI software also differs from that of traditional software: hardware decisions now affect operating margins directly. That means whoever builds the most efficient infrastructure wins not just on performance, but also on business scalability.
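The margin point can be illustrated with a simple sketch: with per-query revenue held fixed, compute cost per query sets the gross margin, so cheaper silicon flows straight to the bottom line. All figures here are hypothetical, chosen only to show the shape of the relationship.

```python
# Illustrative sketch of why hardware efficiency flows directly into AI
# operating margins: per-query compute cost, not fixed development cost,
# dominates the unit economics. All figures are hypothetical.

def gross_margin(revenue_per_query: float, compute_cost_per_query: float) -> float:
    """Fraction of per-query revenue left after paying for compute."""
    return (revenue_per_query - compute_cost_per_query) / revenue_per_query

revenue = 0.01  # hypothetical $0.01 of revenue per query
for cost in (0.008, 0.002):  # e.g. pricier vs. cheaper inference hardware
    print(f"cost ${cost:.3f}/query -> gross margin {gross_margin(revenue, cost):.0%}")
# cost $0.008/query -> gross margin 20%
# cost $0.002/query -> gross margin 80%
```

Traditional software has near-zero marginal cost per user; AI services do not, which is why infrastructure efficiency has become a competitive moat rather than a back-office detail.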

In the end, the AI winner depends on where you look. OpenAI’s ChatGPT is still the name most consumers know. Google is asserting its strength in chips and infrastructure, now opening up TPUs to the broader market. Anthropic is surging in enterprise thanks to its model quality and infrastructure flexibility. Nvidia remains dominant, but the competition is heating up. As Google pushes to open-source parts of its TPU software stack and scale external sales, it is positioning itself as the strongest rival yet to Nvidia’s AI hardware leadership.

The age of multiple winners in AI has begun.