

TradingKey - Google is poised to disrupt the AI power hierarchy, armed with its Gemini 3 model and Tensor Processing Units (TPUs). This challenges the established dominance of OpenAI in large model software and Nvidia in hardware.
As November concluded, capital markets began pricing in this "new AI world" led by Google. Amid a volatile month for U.S. equities, Google's shares surged about 14%, while its TPU partner Broadcom gained 9%. In contrast, the Nasdaq index fell 1.5%, Nvidia dropped 13%, and Oracle, burdened by significant AI investments, plummeted 23%.
Google's "software-hardware integration" stands as a powerful catalyst for its re-evaluation. Its newly released Gemini 3 Pro model has for the first time claimed the top spot in global model rankings. Furthermore, the fact that this model was trained entirely on Google's self-developed TPU ASICs, rather than on Nvidia hardware, has prompted a deep re-examination of Nvidia's leadership position.
TradingKey analyst Mario Ma noted that while the market fixates on Nvidia's 75% gross margin and the scramble for its Blackwell chips, it overlooks a crucial detail: Google is the sole player in the AI race possessing a complete, independent software and hardware stack, thereby avoiding the "Nvidia tax."
Although Google was an early pioneer in large language models, it hasn't garnered the unprecedented attention enjoyed by OpenAI, or even the buzz generated by AI startups like Anthropic. Meanwhile, lingering concerns that AI technology could disrupt Google's dominant position in search haven't fully dissipated.
After three years of OpenAI's reign in large AI models, Google's Gemini 3 model is now shattering the "ChatGPT myth." Thomas Wolf, co-founder of open-source startup Hugging Face, stated that this marks a stark contrast to the world two years ago when OpenAI held a commanding lead, signaling a completely new landscape.
Marc Benioff, CEO of Salesforce, who had used ChatGPT almost daily for the past three years, revealed that after just two hours with the Gemini 3 model, he found he "couldn't go back."
"The leap is insane ... It feels like the world just changed, again."
While Google has methodically explored Mixture of Experts (MoE) architectures to steadily improve model training efficiency, OpenAI's ambitious pursuit of "AGI" faces questions about the sustainability of its expansion path, which includes rapidly launching a diverse product portfolio and forging massive, high-risk partnership agreements with multiple tech giants.
A Silicon Valley venture capitalist commented that OpenAI is becoming too fragmented, and "they can't do everything well."
In essence, Google's self-developed TPUs are specialized chips custom-built for large-scale AI tensor computation, offering superior energy efficiency and cost per unit of compute. Nvidia's GPUs, conversely, are more general-purpose parallel accelerators, boasting greater versatility, a more mature ecosystem, and compatibility with a broader range of frameworks and scenarios.
TPUs, optimized for tensor computation, can boost cost-effectiveness for inference tasks as much as fourfold. By stripping out features irrelevant to AI workloads, such as graphics rendering, the TPU v6 chip achieves a superior energy-efficiency ratio compared to Nvidia's H200 GPU. Furthermore, tight integration with Google Cloud positions Google's product portfolio as an attractive option for smaller enterprises.
Google's TPUs have already garnered support from Safe Superintelligence, Salesforce, Midjourney, and Anthropic. Moreover, Meta, one of Nvidia's largest GPU customers, is reportedly in discussions with Google to deploy TPUs in its data centers.
Analysts believe Nvidia, which currently commands 90% of the AI chip market, retains a short-term moat, as its general-purpose GPUs and Google's specialized TPUs still serve distinct use cases. However, looking long-term, when the focus on efficiency surpasses the drive for capacity expansion, Google's TPUs could emerge as a more compelling alternative to Nvidia's GPUs.