I appeared on BBC Business Today to discuss Nvidia’s position in the AI infrastructure market amid growing competition from Google, Amazon, and Microsoft. The segment focused on whether Nvidia’s claimed generational lead holds up against emerging competitors and what the proliferation of AI chips means for the broader infrastructure market.
The core narrative dominating the tech press is one of conflict: Nvidia is either winning or losing. But that binary thinking misses the point. Nvidia isn’t losing dominance; the AI infrastructure pie is simply growing large enough that multiple winners can coexist.
The $5 Trillion Question: Why Competitors Don’t Mean Displacement
The interview was timely, coming right after Nvidia hit a $5 trillion valuation. At the same time, reports emerged that Meta plans to use Google's next-generation Ironwood TPUs (its seventh-generation AI chips) in its data centers, while Amazon and Microsoft continue developing their own proprietary silicon. For many, this looks like a direct threat to Nvidia's market control.
My position on air was direct: this is not a zero-sum game. A deal between Google and Meta for specialized, next-generation chips doesn't imply Nvidia is losing current-generation dominance. It implies the AI infrastructure category is growing so aggressively that every major player wants a slice. Companies aren't choosing between Nvidia and specialized providers; they're choosing Nvidia and others for different, highly specialized workloads.
Nvidia is still dominating overall. The company claims to be a generation ahead of rivals, and for training most large foundation models, that is likely true. Artificial Analysis recently showed Nvidia's H100/B200 achieving a significant cost-per-token advantage over Google's current TPU v6e and AMD's MI300X systems. Nvidia's platform, built on CUDA and decades of GPU optimization, offers unmatched breadth and compatibility. It runs every major AI model.
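To make that metric concrete, here is a minimal sketch of how a cost-per-token comparison like the one above is typically derived: divide an accelerator's effective hourly cost by its sustained token throughput. All rental rates and throughput figures below are hypothetical placeholders I made up for illustration, not Artificial Analysis results.

```python
# Illustrative sketch of a cost-per-token comparison. All rates and
# throughput figures are hypothetical placeholders, not benchmark results.

def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Dollars per one million generated tokens for a given accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Hypothetical inputs: (rental $/hour, sustained tokens/sec on one workload).
accelerators = {
    "nvidia_b200": (10.0, 9_000.0),
    "google_tpu_v6e": (4.0, 2_500.0),
    "amd_mi300x": (5.0, 3_000.0),
}

for name, (rate, tps) in accelerators.items():
    print(f"{name}: ${cost_per_million_tokens(rate, tps):.2f} per 1M tokens")
```

The point of the exercise is that the ranking depends entirely on the workload plugged in; change the throughput column and the leader changes with it.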
The Economics of Proprietary Silicon
When the BBC anchor asked what it means that Amazon and Microsoft are developing their own chips, my answer was pure economics. For companies operating at the hyperscale of Google, Amazon, Microsoft, and Meta, the sheer volume of compute required makes building proprietary silicon a financially sound decision.
AI infrastructure is growing so fast as a category that the return on investment from developing custom chips (like Google's TPUs, Amazon's Inferentia and Trainium, or Microsoft's Maia) is massive. These companies are optimizing for their own stack, their own cloud, and their specific internal needs. This vertical integration reduces reliance on a single vendor and potentially offers better cost-per-inference or cost-per-training-hour for specific tasks.
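A back-of-the-envelope sketch of that build-vs-buy calculation, with every figure a made-up assumption purely for illustration:

```python
# Back-of-the-envelope build-vs-buy math for proprietary silicon.
# Every number is a hypothetical assumption, purely for illustration.

def breakeven_chip_count(dev_cost_usd: float,
                         merchant_unit_cost_usd: float,
                         custom_unit_cost_usd: float) -> float:
    """Deployed-chip count at which a custom-silicon program pays for itself."""
    savings_per_chip = merchant_unit_cost_usd - custom_unit_cost_usd
    return dev_cost_usd / savings_per_chip

# Assume a $2B design/tape-out/software program, a $30k effective cost per
# merchant GPU, and a $12k per-unit cost for the in-house chip.
units = breakeven_chip_count(2e9, 30_000, 12_000)
print(f"Break-even at roughly {units:,.0f} chips")  # ~111,111 chips
```

Hyperscalers deploy accelerators by the hundreds of thousands per year, which is why this math pencils out at their scale and almost nowhere else.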
Specialization vs. General Purpose
The distinction lies in the workload. Nvidia’s platform is the general-purpose workhorse. It’s what you use when you need maximum flexibility and power for massive training jobs. If you want to train a GPT-5 competitor, you are likely still using Nvidia hardware.
Google, however, is trying to carve out a niche for specific AI workloads. Meta adopting Google's next-generation chips means it found a use case where the specialized Ironwood infrastructure offered a promising solution, likely on a future cost-per-performance basis for a specialized task. That holds even though the current-generation TPU v6e still trails the H100/B200 in cost efficiency on common benchmarks.
This is not a threat to Nvidia's overall dominance; it's proof the market is maturing and segmenting. As I've noted before in the context of Google's approach to its AI products, having great technology doesn't mean you must dominate every single use case; it means you find where you fit best. Google's silicon wins because it is good enough for specific use cases while being more cost-effective at them.
The Future is Fragmentation, Not Failure
For companies building AI products, this fragmentation is a huge positive. It means more options, better competition, and eventually, lower costs. The days of Nvidia being the only viable option are ending, but that doesn’t mean Nvidia is in trouble. It means the market has grown large enough to support genuine competition and specialization.
Nvidia remains the incumbent with the strongest platform, dominating training workloads. Google, Amazon, and Microsoft are winning by being specialized and efficient in their own ecosystems. The key takeaway is that the AI infrastructure market is massive and growing exponentially. There is plenty of room for everybody to succeed.
Nvidia hitting a $5 trillion valuation while competitors gain ground isn't contradictory. It's evidence of a rising tide lifting all boats in AI infrastructure.
The rise of competitors doesn’t signal Nvidia’s failure. It signals the true scale of the AI infrastructure opportunity.